Subject: Re: [boost] [GSoC, MPL11] Community probe
From: Zach Laine (whatwasthataddress_at_[hidden])
Date: 2014-04-30 10:03:50
On Tue, Apr 29, 2014 at 4:40 PM, Louis Dionne <ldionne.2_at_[hidden]> wrote:
> Gonzalo Brito Gadeschi <g.brito <at> aia.rwth-aachen.de> writes:
>
> > In my experience, relaxed constexpr allows for beautiful metaprogramming
> > with respect to C++11, in particular when combined with Boost.Fusion, and I
> > think that it is really a game changer with respect to C++11 constexpr,
> > since it allows not only "functional" metaprogramming but also "imperative"
> > metaprogramming, significantly lowering the learning curve of
> > metaprogramming for C++ programmers.
>
> Could you please expand on the nature of metaprogramming you have been doing
> with constexpr? Are you talking about manipulating values of a literal type,
> or about "pure type" computations? See below for more on that.
>
>
I agree, but even more so when it comes to C++14, due to the availability
of generalized automatic return type deduction. This plus relaxed
constexpr has *completely* changed the way I write metaprograms.
We might be talking about something slightly different, though, as I've
found that std::tuple used in conjunction with these two new language
features means that I no longer need MPL or Fusion for most things. I
still have need for MPL's sorted data structures, though.
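To illustrate the kind of thing I mean by those two features working
together (a toy example of my own, not code from any of the libraries
discussed here): the return type below is deduced, and the body is an
ordinary loop that C++14 now permits inside constexpr:

    #include <array>
    #include <cstddef>

    template <typename T, std::size_t N>
    constexpr auto sum (std::array<T, N> const & a)
    {
        T total{};
        for (std::size_t i = 0; i < N; ++i) // loops and mutation are
            total += a[i];                  // allowed in C++14 constexpr
        return total;                       // return type deduced as T
    }

    static_assert(sum(std::array<int, 3>{{1, 2, 3}}) == 6, "");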
>
> > Have you considered the influence of relaxed constexpr (C++14) on your
> > library?
> >
> > Could it simplify the design/implementation/usage of the MPL?
>
> I did consider the impact of C++14 on the design of the library, and I
> still am. At this point, my conclusion is that we must define what we
> mean by a "template metaprogramming library".
>
> I differentiate between two main kinds of computations that can be done at
> compile-time. The first is manipulating "pure types" with metafunctions
> (e.g. type traits):
>
> using void_pointer = std::add_pointer_t<void>;
> using pointers = mpl::transform<
>     mpl::list<int, char, void, my_own_type>,
>     mpl::quote<std::add_pointer>
> >::type;
>
> The above example is completely artificial but you get the point. The second
> kind of computation is manipulating values of a literal type at compile-time.
>
> using three = mpl::plus<mpl::int_<1>, mpl::int_<2>>::type;
>
> The MPL wraps these values into types so it can treat computations on those
> as computations of the first kind, but another library could perhaps handle
> them differently (probably using constexpr).
>
> constexpr int three = plus(1, 2);
>
>
I recently decided to completely rewrite a library for linear algebra on
heterogeneous types using Clang 3.4, which is C++14 feature-complete
(modulo bugs). My library previously used lots of MPL and Boost.Fusion
code, and was largely an unreadable mess. The new version only uses MPL's
set, but no other MPL and no Fusion code, and is quite easy to understand
(at least by comparison). The original version took me months of spare
time to write, including lots of time trying to wrestle MPL and Fusion into
doing what I needed them to do. The rewrite was embarrassingly easy; it
took me about two weeks of spare time. I threw away entire files of
return-type-computing metaprograms. The overall line count is probably 1/4
what it was before. My library and its needs are probably atypical with
respect to MPL usage overall, but it is probably representative of much use
of Fusion, so keep that in mind below.
Here are the metaprogramming capabilities I needed for my Fusion-like data
structures:
1) compile-time type traits, as above
2) simple compile-time computation, as above
3) purely compile-time iteration over every element of a single list of
types
4) purely compile-time iteration over every pair of elements in two lists
of types (for zip-like operations, e.g. elementwise matrix products)
5) runtime iteration over every element of a single tuple
6) runtime iteration over every pair of elements in two tuples (again, for
zip-like operations)
For my purposes, operations performed at each iteration in 3 through 6
above may sometimes require the index of the iteration. Again, this is
probably atypical.
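For concreteness, here is roughly the shape 5 takes for me when the
per-iteration index is needed (the names are made up for illustration,
not my library's actual API):

    #include <cstddef>
    #include <initializer_list>
    #include <tuple>
    #include <utility>

    // Calls f(element, index) on each element of the tuple, at runtime.
    template <typename Tuple, typename F, std::size_t ...I>
    void for_each_impl (Tuple & t, F f, std::index_sequence<I...>)
    {
        // Pack expansion inside a braced-init-list evaluates the calls
        // in order; the trailing ", 0" gives each expression type int.
        (void)std::initializer_list<int>{(f(std::get<I>(t), I), 0)...};
    }

    template <typename ...T, typename F>
    void for_each (std::tuple<T...> & t, F f)
    {
        for_each_impl(t, f, std::index_sequence_for<T...>{});
    }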
1 is covered nicely by existing traits, and 2 is covered by ad hoc
application-specific code (I don't see how a library helps here).
There are several solutions that work for at least one of 3-6:
- Compile-time foldl(); I did mine as constexpr, simply for readability (a
sketch follows this list).
- Runtime foldl().
- Direct expansion of a template parameter pack; example:
    template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
    auto element_prod_impl (
        MatrixLHS lhs,
        MatrixRHS rhs,
        std::index_sequence<I...>
    ) {
        return std::make_tuple(
            (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
        );
    }
(This produces the actual result of multiplying two matrices
element-by-element (or at least the resulting matrix's internal tuple
storage). I'm not really doing any metaprogramming here at all, and that's
sort of the point. Any MPL successor should be as easy to use as the above
was to write, or I'll always write the above instead. A library might help
here, since I had to write similar functions to do elementwise division,
addition, etc., but if a library solution has more syntactic weight than
the function above, I won't be inclined to use it.)
- Ad hoc metafunctions and constexpr functions that iterate on type-lists.
- Ad hoc metafunctions and constexpr functions that iterate over the values
[1..N).
- Ad hoc metafunctions and constexpr functions that iterate over [1..N)
indices into a larger or smaller range of values.
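Here is the rough shape of the constexpr foldl() mentioned at the top of
that list (illustrative only; mine is more application-specific):

    // Base case: no more elements, return the accumulated value.
    template <typename F, typename T>
    constexpr T foldl (F, T acc)
    { return acc; }

    // Recursive case: combine the accumulator with the head, recurse on
    // the tail.  The return type is deduced (C++14).
    template <typename F, typename T, typename Head, typename ...Tail>
    constexpr auto foldl (F f, T acc, Head head, Tail ...tail)
    { return foldl(f, f(acc, head), tail...); }

    // In C++14 the operation has to be a constexpr function object
    // (constexpr lambdas only arrive in C++17).
    struct plus_t
    {
        template <typename A, typename B>
        constexpr auto operator() (A a, B b) const { return a + b; }
    };

    static_assert(foldl(plus_t{}, 0, 1, 2, 3) == 6, "");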
I was unable to find much in common between my individual ad hoc
implementations that I could lift up into library abstractions, or at least
not without increasing the volume of code more than it was worth to me. I
was going for simple and maintainable over abstract. Part of the lack of
commonality was that in one case, I needed indices for each iteration, in
another one I needed types, in another case I needed to accumulate a
result, in another I needed to return multiple values, etc. Finding an
abstraction that buys you more than it costs you is difficult in such
circumstances.
So, I'm full of requirements, and no answers. :) I hope this helps, if
only with scoping. I'll be in Aspen if you want to discuss it there too.
> It is easy to see how constexpr (and relaxed constexpr) can make the second
> kind of computation easier to express, since that is exactly its purpose.
> However, it is much less clear how constexpr helps us with computations of
> the first kind. And by that I really mean that using constexpr in some way
> to perform those computations might be more cumbersome and less efficient
> than good old metafunctions.
>
>
I've been using these to write less code, if only a bit less.
Instead of:

    template <typename Tuple>
    struct meta;

    template <typename ...T>
    struct meta<std::tuple<T...>>
    { using type = /*...*/; };

I've been writing:

    template <typename ...T>
    constexpr auto meta (std::tuple<T...>)
    { return /*...*/; }
...and calling it as decltype(meta(std::tuple</*...*/>{})). This both
eliminates the noise coming from having a base/specialization template pair
instead of one template, and also removes the need for a *_t template alias
and/or typename /*...*/::type.
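A concrete (made-up) instance of that pattern, just to show the shape:

    #include <tuple>
    #include <type_traits>

    // Compute the common type of a tuple's elements; no primary
    // template, no specialization, no ::type to spell out.
    template <typename ...T>
    constexpr auto common_element (std::tuple<T...>)
    { return std::common_type_t<T...>{}; }

    // "Calling" it purely at the type level:
    using result = decltype(common_element(std::tuple<int, long, char>{}));
    static_assert(std::is_same<result, long>::value, "");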
[snip]
> So a valid question that must be answered before I/we can come up with
> a "final" version of the library that can be proposed to Boost (or for
> standardization) is:
>
> "What is the purpose of a TMP library?"
>
> Once that is well defined, we won't be shooting at a moving target anymore.
> Right now, I have avoided these questions as much as possible by focusing on
> computations of the first kind. For those computations, my research so far
> shows that constexpr is unlikely to be of any help. If someone can come up
> with counter-examples or ideas that seem to refute this, _please_ let me know
> and I'll even buy you a beer in Aspen. This is _very_ important; it's central
> to my current work.
>
>
Zach