Subject: Re: [boost] [review] Formal review period for VMD library begins today, Aug 21, and ends Sat, Aug 30
Date: 2014-08-25 23:59:31
----- Original Message -----
> From: "Edward Diener" <eldiener_at_[hidden]>
> On 8/25/2014 6:04 PM, Rodrigo Madera wrote:
> > On Mon, Aug 25, 2014 at 3:12 PM, Edward Diener <eldiener_at_[hidden]>
> > wrote:
> >> The problem of merging VMD into Boost PP is twofold.
> >> First Boost PP is still Paul Mensonides library and his philosophy of
> >> safety, which I totally respect, is part of Boost PP.
> >> Secondly, as you have already said and realize, the philosophy of VMD is
> >> that preprocessing errors can occur if the functionality in VMD is misused
> >> by a programmer. In other words VMD depends on the constraints discussed
> >> in
> >> the library to work correctly. But within those constraints it does work
> >> and adds a great deal of flexibility to macro metaprogramming and the
> >> design of macro input.
> > Could you please provide a link to this mentioned safety philosophy of
> > Boost PP?
> There is no link I can give, other than to say it is my interpretation
> of the philosophy of Boost PP from looking at code and working/talking
> privately with Paul.
> I do not believe that passing incorrect data to Boost PP functionality
> will ever produce a preprocessing error. It may produce an undefined
> result but that is then the programmer's problem misusing some
> functionality of Boost PP.
It can produce preprocessing errors. E.g. nothing stops the user from writing BOOST_PP_CAT(+, -), which should technically produce a preprocessing error.
> Another example of the "safety philosophy" of Boost PP is that Paul was
> well aware that he had written a better variadic macro to test for
> emptiness, as well as that the Boost PP library has an undocumented
> non-variadic test for emptiness used internally. He did not want the
> current non-variadic version to be documented and he made no move to add
> the variadic version to Boost PP.
My opinion WRT the above is as follows.
Unconstrained arguments to macros are sequences of preprocessing tokens and whitespace separations (hereafter called "token sequences"). That token sequence may be empty, which is no different than passing an empty vector, empty range, or empty string to a function at runtime.

A preprocessor tuple, for example, can hold arbitrary unconstrained data (aside from those tokens which would interfere with actual parameter delineation, such as commas and unbalanced parentheses--which I call "pathological input"). Therefore, the library must interpret (,,) as a ternary tuple, (,) as a binary tuple, and () as a unary tuple, and there is no machinery in place for a rogue value representing a nullary tuple.

For example, if one is processing a tuple of cv-qualifiers and popping elements off the front as it goes, that tuple might start as (const volatile, volatile, const,), which is a four-element tuple. The front is popped, and it becomes a ternary tuple (volatile, const,). The front is popped again, and it becomes a binary tuple (const,). After a final pop, it becomes a unary tuple () _not_ a nullary tuple. So, with something like MACRO(a, b,, d), the third argument is not elided; it is supplied as an empty token sequence.
One *could* supply a rogue value representing a nullary tuple (and, in fact, chaos-pp does this) and then design all of the tuple-related machinery around this, but this is not in place in the Boost pp-lib.
Even if it were logical to consider () a nullary tuple, there is no way to detect emptiness on unconstrained (but non-pathological) input. It is flat-out impossible. There are a variety of ways you can *almost* do it. If the input is constrained such that those "almost" cases are removed, only then can you do it--but then you must add constraints to (e.g.) tuple elements that you cannot add to such a general-purpose container.
With all of the above said, you can have optional arguments to a macro provided there is at least one non-optional argument. For example, if you have MACRO(a, b, c) where you want c to be optional, you change the definition to MACRO(a, ...), you detect whether __VA_ARGS__ is unary or binary (which can be detected), and branch in some way on that result. Chaos does this too in a number of places.
Once you start constraining data, you can detect lots of things. The basic problems are that (1) you cannot reasonably constrain the data held in general-purpose containers and (2) you cannot change the way arguments to a macro are delineated by the preprocessor. So, I am 100% against ever treating () as a nullary tuple. Something else like an empty token sequence representing a nullary tuple? I am okay with that provided all of the tuple-related machinery is updated to deal with this rogue value and provided it works across all supported preprocessors--which is not an easy task in some cases.
I am also not particularly fond of fractured interfaces. By that I mean that I do not like interfaces that only work on some preprocessors in a library that is targeting preprocessors rather than the standards. I especially don't like scenarios like interface A works, interface B works, but interface A + B does not work--i.e. a combinatorial nightmare (which is what VC++ produces, BTW)--which is what happens when you start exposing the low-level primitives as interfaces. Right now that means, in particular, that variadic macros and placemarkers are not supported well because preprocessors don't support them very well--especially "important" ones like VC++.

If this were not the case, and if we had good preprocessors across a broad range of compilers and in particular the important compilers, the Boost pp-lib would simply get a breaking, ground-up reimplementation built around variadic macros and placemarkers. Which, in essence, is what chaos-pp already is. So, from my point of view, Boost.Preprocessor is a relic that serves as a lowest common denominator. Otherwise, get a real preprocessor (i.e. not VC++) and use better libraries like Chaos.