
Subject: Re: [boost] [GSoC, MPL11] Community probe
From: Louis Dionne (ldionne.2_at_[hidden])
Date: 2014-05-09 13:33:28


Zach Laine <whatwasthataddress <at> gmail.com> writes:

>
> On Mon, May 5, 2014 at 9:22 AM, Louis Dionne <ldionne.2 <at> gmail.com> wrote:
>
> > Zach Laine <whatwasthataddress <at> gmail.com> writes:
> >
> > [...]
> >
> > I looked at the Units-BLAS codebase (assuming that's what you were
> > referring
> > to) to get a better understanding of your use case. It was very helpful in
> > understanding at least some of the requirements for a TMP library; thank
> > you
> > for that. In what follows, I sketch out possible solutions to some of your
> > issues. I'm mostly thinking out loud.
> >
> >
> That's the one. I hope you looked at the C++14 branch though. It seems
> from the comments below that you did.

Yes, I looked at the C++14 branch.

[...]

> > Some kind of counting range with a zip_with constexpr function should do
> > the trick. Hence you could do (pseudocode):
> >
> > zip_with(your_constexpr_function, range_from(0), tuple1, ..., tupleN)
> >
> > where range_from(n) produces a range from n to infinity. I have been able
> > to implement zip_with, but I'm struggling to make it constexpr because I
> > need a lambda somewhere in there. The range_from(n) should be quite
> > feasible.
> >
> >
> If your intent is that zip_with produces only a type, I don't actually have
> a use for it. I can directly do the numeric computation, and the type
> computation comes along for free, thanks to automatic return type deduction
> and a foldl()-type approach.

No, zip_with should return a tuple. I should have shown the implementation:
https://gist.github.com/ldionne/fd460b13ef26856b1f3b
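To give a rough idea of the approach (the actual implementation is in the Gist; this is only a sketch with hypothetical helper names), the lambda problem mentioned earlier can be sidestepped with a small helper function template, so the whole thing stays constexpr in C++14:

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// Apply f to the I-th element of each tuple.
template <std::size_t I, typename F, typename ...Tuples>
constexpr auto apply_at(F f, Tuples const& ...ts) {
    return f(std::get<I>(ts)...);
}

template <typename F, std::size_t ...I, typename ...Tuples>
constexpr auto zip_with_impl(F f, std::index_sequence<I...>,
                             Tuples const& ...ts) {
    // The outer expansion is over the indices I; the inner one,
    // inside apply_at, is over the tuples. Direct pack expansion,
    // no recursion.
    return std::make_tuple(apply_at<I>(f, ts...)...);
}

// Zips n tuples of equal length with an n-ary function f,
// returning a tuple of the results.
template <typename F, typename Tuple, typename ...Tuples>
constexpr auto zip_with(F f, Tuple const& t, Tuples const& ...ts) {
    return zip_with_impl(
        f, std::make_index_sequence<std::tuple_size<Tuple>::value>{},
        t, ts...);
}
```

With this, `zip_with(std::multiplies<>{}, lhs, rhs)` produces a tuple of element-wise products.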

> > I think direct expansion of parameter packs would not be required in this
> > case if we had a zip_with operation:
> >
> > zip_with(std::multiplies<>{}, lhs, rhs)
> >
> >
> Except that, as I understand it, direct expansion is cheaper at compile
> time, and the make_tuple(...) expression above is arguably clearer to
> maintainers than zip_with(...).

It is possible to use direct expansion in the implementation of zip_with (see
the Gist). This way, you get the same compile-time performance improvement
over a naive recursive approach, but you abstract the details away from the
user. However, I have not benchmarked the zip_with above.

Regarding the clarity of the expression, I would argue the contrary:
zip_with is clearer. It is a well-known idiom in FP and it is more succinct
and general than the hand-written solution.

[...]

> > That's valid in most use cases, but this won't work if you want to
> > manipulate
> > incomplete types, void and function types. Unless I'm mistaken, you can't
> > instantiate a tuple holding any of those. Since a TMP library must clearly
> > be able to handle the funkiest types, I don't think we can base a new TMP
> > library on metafunctions with that style, unless a workaround is found.
> >
>
> Right. I was using expansion into a tuple as an example, but this would
> work just as well:
>
> template <typename ...T>
> constexpr auto meta (some_type_sequence_template<T...>)
> { return some_type_sequence_template</*...*/>{}; }
>
> And this can handle whatever types you like.

Dumb me. But there's still a problem: how would you implement e.g. front()?

    template <typename ...xs>
    struct list { };

    template <typename x, typename ...xs>
    constexpr auto front(list<x, xs...>)
    { return x{}; }

If the front type cannot be instantiated (e.g. void or an incomplete type),
this won't work, so we still need some kind of workaround. We could perhaps
wrap those problematic types in the following way:

    template <typename T>
    struct box { using type = T; };

and use them as

    using void_ = decltype(front(list<box<void>>{}))::type;

However, another problem arises in this case. How would you map a
metafunction (e.g. a type trait) over a sequence of such types?

    struct add_pointer {
        template <typename T>
        constexpr std::add_pointer_t<T> operator()(T) const {
            return std::add_pointer_t<T>{};
        }
    };

    template <typename F, typename ...xs>
    constexpr auto map(F f, list<xs...>) {
        return list<decltype(f(xs{}))...>{};
    }

    using pointers = decltype(
        map(add_pointer{}, list<box<void>, int, char>{})
    );

But then, we would have

    pointers == list<box<void>*, int*, char*>

instead of

    pointers == list<box<void*>, int*, char*>

Of course, we could specialize all the type traits for box<>, but I'm not
sure that's the best option.
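Another option (again just a sketch, not necessarily the best design) would be to leave the type traits untouched and instead make the function object box-aware, unwrapping before applying the trait and re-wrapping the result:

```cpp
#include <type_traits>

template <typename ...xs> struct list { };
template <typename T>     struct box  { using type = T; };

// Hypothetical sketch: a box-aware add_pointer function object.
// The box<T> overload unwraps, applies the trait, and re-wraps,
// so problematic types are never instantiated directly. Partial
// ordering makes the box<T> overload preferred for boxed arguments.
struct add_pointer {
    template <typename T>
    constexpr box<std::add_pointer_t<T>> operator()(box<T>) const
    { return {}; }

    template <typename T>
    constexpr std::add_pointer_t<T> operator()(T) const
    { return {}; }
};

template <typename F, typename ...xs>
constexpr auto map(F f, list<xs...>)
{ return list<decltype(f(xs{}))...>{}; }

// map(add_pointer{}, list<box<void>, int, char>{}) now yields
// list<box<void*>, int*, char*>, as desired.
```

This localizes the box-handling to the function objects instead of the traits, but it still has to be done once per operation.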

> > I also fear this might be slower because of possibly complex overload
> > resolution, but without benchmarks that's just FUD.
> >
> >
> That's not FUD. I benchmarked Clang and GCC (albeit ~3 years ago), and
> found that consistently using function templates instead of struct
> templates increases compile times by about 20%. For me, the clarity and
> reduction in code-noise are worth the compile time hit. YMMV.

Good to know. It would be possible to still use structs (and aliases) in
the implementation of core operations like foldl and foldr. That could
help mitigate the issue.
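For illustration, such a struct-based core could look like this (a minimal sketch with hypothetical names; a constexpr-function front end could then delegate to it):

```cpp
#include <type_traits>

template <typename ...xs> struct list { };

// Hypothetical sketch of a struct-template foldl over a type list.
// Because it never instantiates values, it handles void, incomplete
// and function types without any boxing.
template <template <typename, typename> class F, typename State,
          typename List>
struct foldl;

template <template <typename, typename> class F, typename State>
struct foldl<F, State, list<>> { using type = State; };

template <template <typename, typename> class F, typename State,
          typename x, typename ...xs>
struct foldl<F, State, list<x, xs...>>
    : foldl<F, typename F<State, x>::type, list<xs...>> { };

// Example folding metafunction: count the elements of the list.
template <typename State, typename x>
struct count {
    using type = std::integral_constant<int, State::value + 1>;
};
```

Note that the example list below can contain void and a function type, which the value-based style above could not handle without boxing.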

> > Like you said initially, I think your use case is representative of a C++14
> > Fusion-like library more than a MPL-like one. I'll have to clearly define
> > the boundary between those before I can claim to have explored the whole
> > design space for a new TMP library.
> >
>
> This is true. However, in my use of TMP (again, for a somewhat specific
> use case), I have found it to be largely irrelevant in code that used to
> rely on it. That is, I was able to simply throw away so much TMP code that
> I assert that TMP in C++14 is actually relatively pedestrian stuff. The
> interesting bit to me is how to create a library that handles both MPL's
> old domain and Fusion's old domain as a single new library domain. I
> realize this may be a bit more than you intended to bite off in one summer,
> though.

That is a large bite for sure. I'm not sure yet that merging the MPL and
Fusion into a single library is feasible, but I'm currently trying to figure
this out. I think the Aspen meeting will be helpful.

Regards,
Louis


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk