
Subject: Re: [boost] [GSoC, MPL11] Community probe
From: Zach Laine (whatwasthataddress_at_[hidden])
Date: 2014-05-07 08:57:40
On Mon, May 5, 2014 at 9:22 AM, Louis Dionne <ldionne.2_at_[hidden]> wrote:
> Zach Laine <whatwasthataddress <at> gmail.com> writes:
>
> [...]
>
> I looked at the UnitsBLAS codebase (assuming that's what you were
> referring to) to get a better understanding of your use case. It was very
> helpful in understanding at least some of the requirements for a TMP
> library; thank you for that. In what follows, I sketch out possible
> solutions to some of your issues. I'm mostly thinking out loud.
>
>
That's the one. I hope you looked at the C++14 branch though. It seems
from the comments below that you did.
>
> > Here are the metaprogramming capabilities I needed for my Fusion-like
> > data structures:
> >
> > 1) compile-time type traits, as above
> > 2) simple compile-time computation, as above
> > 3) purely compile-time iteration over every element of a single list of
> > types
> > 4) purely compile-time iteration over every pair of elements in two
> > lists of types (for zip-like operations, e.g. element-wise matrix
> > products)
> > 5) runtime iteration over every element of a single tuple
> > 6) runtime iteration over every pair of elements in two tuples (again,
> > for zip-like operations)
> >
> > For my purposes, operations performed at each iteration in 3 through 6
> > above may sometimes require the index of the iteration. Again, this is
> > probably atypical.
>
> Some kind of counting range with a zip_with constexpr function should do
> the trick. Hence you could do (pseudocode):
>
> zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)
>
> where range_from(n) produces a range from n to infinity. I have been able
> to implement zip_with, but I'm struggling to make it constexpr because I
> need a lambda somewhere in there. The range_from(n) should be quite
> feasible.
>
>
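A minimal C++14 sketch of the counting-range zip_with described above. The names are hypothetical, and the lazy, infinite range_from(n) is approximated here with an eager index sequence, since a truly infinite compile-time range needs lazier machinery; the constexpr-lambda problem Louis mentions is dodged by using a hand-written function object:

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// Hypothetical sketch: zip_with over two tuples, where the "counting
// range" is realized eagerly as an index_sequence rather than a lazy
// infinite range_from(0).
template <typename F, typename Tuple1, typename Tuple2, std::size_t ...I>
constexpr auto zip_with_impl (F f, Tuple1 const& t1, Tuple2 const& t2,
                              std::index_sequence<I...>)
{
    // Call f(index, element1, element2) at each position, so operations
    // that need the index of the iteration (point 3-6 above) can have it.
    return std::make_tuple(f(I, std::get<I>(t1), std::get<I>(t2))...);
}

template <typename F, typename Tuple1, typename Tuple2>
constexpr auto zip_with (F f, Tuple1 const& t1, Tuple2 const& t2)
{
    return zip_with_impl(
        f, t1, t2,
        std::make_index_sequence<std::tuple_size<Tuple1>::value>{});
}

// C++14 lambdas cannot be constexpr, so a function object stands in.
struct scaled_product
{
    template <typename T, typename U>
    constexpr auto operator() (std::size_t i, T a, U b) const
    { return (i + 1) * a * b; }
};
```

For example, zip_with(scaled_product{}, std::make_tuple(1, 2), std::make_tuple(3, 4)) yields a tuple whose elements are (i + 1) * lhs[i] * rhs[i].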
If your intent is that zip_with produces only a type, I don't actually have
a use for it. I can directly do the numeric computation, and the type
computation comes along for free, thanks to automatic return type deduction
and a foldl()-type approach.
>
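The foldl()-style approach mentioned above can be sketched in C++14 as follows (names are hypothetical); the point is that the numeric result is computed directly and the result type falls out of automatic return type deduction:

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// Base case: no indices left, return the accumulated state.
template <typename F, typename State, typename Tuple>
constexpr auto foldl_impl (F, State s, Tuple const&, std::index_sequence<>)
{ return s; }

// Recursive case: fold the next element into the state.
template <typename F, typename State, typename Tuple,
          std::size_t I, std::size_t ...Is>
constexpr auto foldl_impl (F f, State s, Tuple const& t,
                           std::index_sequence<I, Is...>)
{ return foldl_impl(f, f(s, std::get<I>(t)), t, std::index_sequence<Is...>{}); }

template <typename F, typename State, typename Tuple>
constexpr auto foldl (F f, State s, Tuple const& t)
{
    return foldl_impl(
        f, s, t,
        std::make_index_sequence<std::tuple_size<Tuple>::value>{});
}

// C++14 lambdas cannot be constexpr, hence a function object.
struct plus_
{
    template <typename A, typename B>
    constexpr auto operator() (A a, B b) const { return a + b; }
};
```

Here foldl(plus_{}, 0, std::make_tuple(1, 2.5, 3)) computes the sum, and the accumulated type (double) is deduced automatically at each step rather than computed by a separate metafunction.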
> > 1 is covered nicely by existing traits, and 2 is covered by ad hoc
> > application-specific code (I don't see how a library helps here).
>
> I agree.
>
>
> > There are several solutions that work for at least one of 3-6:
> >
> > - Compile-time foldl(); I did mine as constexpr, simply for readability.
> > - Runtime foldl().
> > - Direct expansion of a template parameter pack; example:
> >
> > template <typename MatrixLHS, typename MatrixRHS, std::size_t ...I>
> > auto element_prod_impl (
> >     MatrixLHS lhs,
> >     MatrixRHS rhs,
> >     std::index_sequence<I...>
> > ) {
> >     return std::make_tuple(
> >         (tuple_access::get<I>(lhs) * tuple_access::get<I>(rhs))...
> >     );
> > }
> >
> > (This produces the actual result of multiplying two matrices
> > element-by-element (or at least the resulting matrix's internal tuple
> > storage). I'm not really doing any metaprogramming here at all, and
> > that's sort of the point. Any MPL successor should be as easy to use as
> > the above was to write, or I'll always write the above instead. A
> > library might help here, since I had to write similar functions to do
> > element-wise division, addition, etc., but if a library solution has
> > more syntactic weight than the function above, I won't be inclined to
> > use it.)
>
> I think direct expansion of parameter packs would not be required in this
> case if we had a zip_with operation:
>
> zip_with(std::multiplies<>{}, lhs, rhs)
>
>
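For reference, the direct-expansion style quoted above can be made self-contained as follows, using plain std::tuple and std::get in place of the thread's matrix types and tuple_access::get (an assumption for illustration):

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// Direct expansion of a parameter pack over two tuples, as in the
// quoted element_prod_impl; no metaprogramming beyond the pack itself.
template <typename TupleLHS, typename TupleRHS, std::size_t ...I>
constexpr auto element_prod_impl (TupleLHS lhs, TupleRHS rhs,
                                  std::index_sequence<I...>)
{
    return std::make_tuple((std::get<I>(lhs) * std::get<I>(rhs))...);
}

// Convenience wrapper that supplies the index sequence.
template <typename TupleLHS, typename TupleRHS>
constexpr auto element_prod (TupleLHS lhs, TupleRHS rhs)
{
    return element_prod_impl(
        lhs, rhs,
        std::make_index_sequence<std::tuple_size<TupleLHS>::value>{});
}
```

With this, element_prod(std::make_tuple(1, 2, 3), std::make_tuple(4, 5, 6)) produces the element-wise products, and the suggested zip_with(std::multiplies<>{}, lhs, rhs) would be a generalization of the same expansion.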
Except that, as I understand it, direct expansion is cheaper at compile
time, and the make_tuple(...) expression above is arguably clearer to
maintainers than zip_with(...).
> However, there are other similar functions in your codebase that perform
> operations that are not covered by the standard function objects. For this,
> it would be _very_ useful to have constexpr lambdas.
>
>
> > - Ad hoc metafunctions and constexpr functions that iterate on
> > typelists.
> > - Ad hoc metafunctions and constexpr functions that iterate over the
> > values [1..N).
> > - Ad hoc metafunctions and constexpr functions that iterate over [1..N)
> > indices into a larger or smaller range of values.
>
> I'm sorry, but I don't understand the last one. Do you mean iteration
> through a slice [1, N) of another sequence?
>
>
Yes.
>
> > I was unable to find much in common between my individual ad hoc
> > implementations that I could lift up into library abstractions, or at
> > least not without increasing the volume of code more than it was worth
> > to me. I was going for simple and maintainable over abstract. Part of
> > the lack of commonality was that in one case, I needed indices for each
> > iteration, in another one I needed types, in another case I needed to
> > accumulate a result, in another I needed to return multiple values,
> > etc. Finding an abstraction that buys you more than it costs you is
> > difficult in such circumstances.
>
> I do think it is possible to find nice abstractions, but it will be worth
> lifting into a separate library. I also think it will require departing
> from the usual STL concepts and going into FP-world.
>
>
Sounds good, although I tried for quite a while to do just this (using an
FP style), and failed to find anything that added any clarity.
> [...]
> > > It is easy to see how constexpr (and relaxed constexpr) can make the
> > > second kind of computation easier to express, since that is exactly
> > > its purpose. However, it is much less clear how constexpr helps us
> > > with computations of the first kind. And by that I really mean that
> > > using constexpr in some way to perform those computations might be
> > > more cumbersome and less efficient than good old metafunctions.
> > >
> > >
> > I've been using these to write less code, if only a bit less.
> >
> > Instead of:
> >
> > template <typename Tuple>
> > struct meta;
> >
> > template <typename ...T>
> > struct meta<std::tuple<T...>>
> > { using type = /*...*/; };
> >
> > I've been writing:
> >
> > template <typename ...T>
> > constexpr auto meta (std::tuple<T...>)
> > { return /*...*/; }
> >
> > ...and calling it as decltype(meta(std::tuple</*...*/>{})). This both
> > eliminates the noise coming from having a base/specialization template
> > pair instead of one template, and also removes the need for a *_t
> > template alias and/or typename /*...*/::type.
>
> That's valid in most use cases, but this won't work if you want to
> manipulate incomplete types, void, and function types. Unless I'm
> mistaken, you can't instantiate a tuple holding any of those. Since a TMP
> library must clearly be able to handle the funkiest types, I don't think
> we can base a new TMP library on metafunctions with that style, unless a
> workaround is found.
>
Right. I was using expansion into a tuple as an example, but this would
work just as well:

template <typename ...T>
constexpr auto meta (some_type_sequence_template<T...>)
{ return some_type_sequence_template</*...*/>{}; }

And this can handle whatever types you like.
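A concrete sketch of that workaround, with a hypothetical type_seq wrapper: because the wrapper is an empty struct, it can be instantiated even when its element types are void, incomplete, or function types, so the function-template metafunction style still applies:

```cpp
#include <type_traits>

// Hypothetical type-sequence wrapper: it has no members, so it is
// instantiable no matter how funky the element types are.
template <typename ...T>
struct type_seq {};

struct incomplete;  // deliberately left incomplete

// Function-template "metafunction": add a pointer to every element.
template <typename ...T>
constexpr auto add_pointers (type_seq<T...>)
{ return type_seq<T*...>{}; }

// Called in the decltype(meta(...)) style from the thread; note that it
// handles void, an incomplete type, and a function type without trouble.
using result = decltype(add_pointers(type_seq<void, incomplete, int()>{}));
static_assert(
    std::is_same<result, type_seq<void*, incomplete*, int(*)()>>::value,
    "pointers added to void, incomplete, and function types");
```

The design choice here is that the sequence is only ever used as an empty tag for overload resolution, never as storage, which is what makes it immune to the std::tuple limitation discussed above.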
> I also fear this might be slower because of possibly complex overload
> resolution, but without benchmarks that's just FUD.
>
>
That's not FUD. I benchmarked Clang and GCC (albeit ~3 years ago), and
found that consistently using function templates instead of struct
templates increased compile times by about 20%. For me, the clarity and
reduction in code noise are worth the compile-time hit. YMMV.
> Like you said initially, I think your use case is representative of a
> C++14 Fusion-like library more than an MPL-like one. I'll have to
> clearly define the boundary between those before I can claim to have
> explored the whole design space for a new TMP library.
>
This is true. However, I have found TMP to be largely irrelevant in my
use case (again, a somewhat specific one), even in code that used to rely
on it. That is, I was able to simply throw away so much TMP code that I
assert that TMP in C++14 is actually relatively pedestrian stuff. The
interesting bit to me is how to create a library that handles both MPL's
old domain and Fusion's old domain as a single new library domain. I
realize this may be a bit more than you intended to bite off in one
summer, though.
Zach
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk