Subject: Re: [boost] [Hana] Informal review request
From: Louis Dionne (ldionne.2_at_[hidden])
Date: 2015-03-07 11:38:46


Vicente J. Botet Escriba <vicente.botet <at> wanadoo.fr> writes:

>
> [...]
>
> I appreciate what has been done already to improve the tutorial. Next
> follows some comments about the design.
>
> DataTypes and TypeClasses
> ----------------------
> I appreciate what has been done already. I have some trouble with the
> "datatype" use. If I understand it correctly, the type you map to a
> "type class" is not the type itself but its datatype (which can be the
> same).

That is correct. Hana also calls them generalized types, because
they are like types, but ones that don't care about the actual
representation (memory layout) of the objects. In a sense, a data
type is like a C++ concept, but one that would only be modeled by
a couple of types that would be exactly the same up to
representation differences.
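
In code, a toy illustration of the idea (not Hana's actual
machinery) might look like this: two concrete C++ types with
different memory layouts share one tag, and the algorithms are
defined once per tag rather than once per concrete type.

    // The "data type" (generalized type) is a mere tag.
    struct Tuple { };

    // Two representations of that data type:
    template <typename ...T>
    struct dense_tuple { using data_type = Tuple; /* elements stored inline */ };

    template <typename ...T>
    struct lazy_tuple { using data_type = Tuple; /* elements computed on demand */ };

    // Algorithms are then dispatched on the data type, not on the
    // concrete type, so one definition covers both representations:
    template <typename DataType>
    struct transform_impl;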

> In Haskell, type classes can be universally quantified or not. E.g. the
> "type class" Eq is not universally quantified
>
> [...]
>
> That means that the instance mapping for Eq is from types (*) and the
> instance mapping for Functor is for type constructors having one
> parameter (* -> *).
>
> [...]
>
> I don't see this nuance in your library. [...]

This is because this nuance is _deliberately_ left aside. More
generally, the design choice I made was to leave parametric
data types at the documentation level only, as explained in
the section about generalized data types[1]. As I already
said on this list during the last informal review, what I'm
making is a tradeoff between mathematical correctness and
usability of the library. While Hana provides structures that
satisfy the laws rigorously, I want users to be able to use
those structures in a quick and dirty way as a means of being
more productive when needed. For example, I would not object
to someone writing [pseudocode]:

    auto to_string = [](auto x) { /* convert x to a std::string */ };
    auto xs = hana::make_tuple(1, 2.2, "abc");
    auto ys = hana::transform(xs, to_string);

Since `1`, `2.2` and `"abc"` do not share a common data type and
the signature of `transform` is (for a Functor F)

    transform : F(T) x (T -> U) -> F(U)

, the above should normally be ill-formed. However, the library
makes such usage possible because I kept in mind that we're
working in a "dirty" heterogeneously-typed context. Bottom line:
the concepts in Hana are rigorously specified and internally
coherent, but you are free to bypass them when you need to,
because while it would always be possible to justify such
operations mathematically, having to do so every time would be
painful [2].
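
For completeness, here is roughly what the quick and dirty
example above looks like as a full program. This is a sketch:
the body of `to_string` and the headers are my own filling,
assuming the umbrella header <boost/hana.hpp> and a C++14
compiler.

    #include <boost/hana.hpp>
    #include <sstream>
    #include <string>
    namespace hana = boost::hana;

    int main() {
        // Stream each element into an ostringstream to produce a
        // std::string; this works for int, double and char const* alike.
        auto to_string = [](auto x) {
            std::ostringstream os;
            os << x;
            return os.str();
        };

        auto xs = hana::make_tuple(1, 2.2, "abc");
        auto ys = hana::transform(xs, to_string);
        // ys is now a tuple of three std::strings: "1", "2.2" and "abc"
    }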

> Compile-time error reporting
> --------------------
>
> I would expect that the library provide some kind of "TypeClass" check
> (as Eric's Range library does with Concepts) so that the compile-time
> errors are more explicit.

Doing this for all parameters of all functions is intractable
due to the heterogeneous context, but basic support could
be added easily. I'm promoting this on my todo list.

> Unnamed data types
> -----------------
> I'm missing, among others, the concrete types _either<A,B> and
> _maybe<T>, as we have _pair<A,B>. How can the user declare a structure
> having data members with these data types?

You can't, unless you know beforehand whether it's going to be
a `just` or a `nothing`, which makes declaring such a data
member _explicitly_ rarely useful. The reason is that whether a
Maybe is a `just` or a `nothing` is encoded in its type, so you
have to know which one it is in order to write the type of the
member. That's the downside. The upside is that this encoding
is what allows you to interact with heterogeneous objects,
which was the initial goal. There's an example of how this can
be used to encode SFINAE failures at [3].
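
To make "encoded in its type" concrete, here is a small sketch
(the `just`/`nothing` spellings are the library's; the
static_assert wrapper is mine):

    #include <boost/hana.hpp>
    #include <type_traits>
    namespace hana = boost::hana;

    auto a = hana::just(42);   // holds a value, and that fact is part of its type
    auto b = hana::nothing;    // a *different* type, statically known to be empty

    // Because the two differ as types, no single member declaration can
    // hold "either a just or a nothing" decided at runtime.
    static_assert(!std::is_same<decltype(a), decltype(b)>::value, "");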

> About a pure type meta programming sub-library Hana/Meta
> ----------------------------------------------------
>
> While I agree that it is good to be able to define the algorithms only
> once for types and values, I find the syntax cumbersome when the user
> wants only to work at the meta-programming level.

I think you overestimate how often you actually need to do
computations on types. The MPL accustomed us to writing
everything with raw types, but that is not the only way to
do it. The only times we actually need to manipulate raw
types are when we use <type_traits>, which, in my experience,
does not represent the largest part of metaprogramming.
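
To give an idea of what I mean, here is how a typical
"type-level" task reads when done with values instead. The
spellings below (hana::filter, hana::typeid_,
hana::traits::is_integral) are the current ones and may differ
slightly in older snapshots of the library.

    #include <boost/hana.hpp>
    #include <boost/hana/traits.hpp>
    namespace hana = boost::hana;

    int main() {
        auto xs = hana::make_tuple(1, 'c', 2.2, 3u);

        // Keep only the elements whose type is integral; no raw
        // MPL-style metafunction is needed anywhere.
        auto integrals = hana::filter(xs, [](auto const& x) {
            return hana::traits::is_integral(hana::typeid_(x));
        });
        // integrals == hana::make_tuple(1, 'c', 3u)
    }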

> I wonder if the library shouldn't contain a sublibrary Hana/Meta that
> defines in a meta namespace all the algorithms/types that work directly
> on types and integral constants.
>
> Instead of defining a hana::tuple_t, I suggest defining a
> meta::tuple that works only with types and a meta::transform that works
> only on types.
>
> This will allow one to write ([1])
>
> static_assert(
>     meta::transform<meta::tuple<int, char const, void>,
>         std::add_pointer>{} ==
>     meta::tuple<int*, char const*, void*>{}
> , "");
>
> instead of
>
> static_assert(
>     hana::transform(hana::tuple_t<int, char const, void>,
>         hana::metafunction<std::add_pointer>) ==
>     hana::tuple_t<int*, char const*, void*>
> , "");
>
> The definition of meta::transform will just do the steps 2 and 3 of the
> process you describe:
>
> 1. Wrap the types with |type<...>| so they become values
> 2. Apply whatever type transformation |F| by using |metafunction<F>|
> 3. Unwrap the result with |decltype(...)::type|
>
> [...]

First of all, the first alternative will almost surely need to
look more like:

    static_assert(
        meta::transform<meta::tuple<int, char const, void>,
            meta::quote<std::add_pointer>>{} ==
        meta::tuple<int*, char const*, void*>{}
     , "");

Notice meta::quote. Now, if you compare with

    static_assert(
        hana::transform(hana::tuple_t<int, char const, void>,
            hana::metafunction<std::add_pointer>) ==
        hana::tuple_t<int*, char const*, void*>
    , "");

, I think you have to admit the difference is tiny. Hana also
provides integration with <type_traits>, which means that you
can actually write:

    static_assert(
        hana::transform(hana::tuple_t<int, char const, void>,
            hana::traits::add_pointer) ==
        hana::tuple_t<int*, char const*, void*>
    , "");

The difference is that with Hana, you would then need to use
`decltype(...)::type` to get back the actual type. Again, my
answer is that this is only required at some thin boundaries
where you actually need the types.
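
Concretely, that thin boundary looks like this. In the sketch
below, hana::type_c is the current spelling of the `type<...>`
wrapper from step 1 of the process you quoted and may differ in
the snapshot under review; hana::traits::add_pointer is the
wrapper already shown above.

    #include <boost/hana.hpp>
    #include <boost/hana/traits.hpp>
    #include <type_traits>
    namespace hana = boost::hana;

    // Compute on values...
    constexpr auto t = hana::traits::add_pointer(hana::type_c<int const>);

    // ...and unwrap with decltype(...)::type only at the boundary where a
    // raw type is needed, e.g. to declare a variable or instantiate a template.
    using T = decltype(t)::type;   // T is int const*
    static_assert(std::is_same<T, int const*>::value, "");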

HOWEVER, there seems to be some demand for this from the
community. I'll be very honest and direct with you and
everyone else on this list: if this addition can make
people feel more at home because it resembles the MPL,
and if in turn that eases the process of making Hana
part of Boost, then I could do it. However, for my part,
I fear that it would encourage people to leverage only
the _tiny_ part of Hana that deals with type-level
metaprogramming, and hence reduce what can be done
with it to a small portion of its full power.

Also, if I could be shown examples in which the 3-step
process described above is really painful, and if those
examples turned out not to simply be non-idiomatic Hana,
I would add those facilities by the end of the day.

On a more serious note, and because this issue has to be
closed once and for all, I created a poll at [4] to resolve
whether such facilities should be added. It is not a guarantee
that I'll do it if people want it, but I'd like to have an idea
of how badly people want it.

> An additional advantage of having this Meta library is that it can be
> defined using just a C++11 compiler, as it is done in Eric's Meta library.

I understand they _could_ be implemented in C++11 only,
but concretely they would use Hana as a backend and so
they would need C++14. I would not reimplement those
operations from scratch.

Thanks for your comments and questions that never fail
to challenge me, Vicente.

Regards,
Louis

[1]: http://ldionne.github.io/hana/#tutorial-hetero

[2]: For the example I gave, it would be possible to say that
we're creating a Tuple of things that can be converted to
a `std::string`. Let's call this data type `Stringizable`. Then,
the data type of the tuple would be `Tuple(Stringizable)`, and
the signature of the `transform` we would be using is

    transform : F(Stringizable) x (Stringizable -> std::string) -> F(std::string)

Until I have a deeper formal understanding of the way this process
of "intersecting the concepts of the objects in a sequence" works,
which I'm working on as my bachelor's thesis, I'd rather keep Hana's
concepts free of higher-order data types. Even if/when I do gain
that deeper formal understanding, I would likely need language
support to make it applicable.

[3]: http://goo.gl/MgUIpl
[4]: https://github.com/ldionne/hana/issues/19

