From: David Abrahams (dave_at_[hidden])
Date: 2008-03-28 12:36:02
on Thu Mar 27 2008, Eric Niebler <eric-AT-boost-consulting.com> wrote:
> Dave gave a lot of good feedback. Any that I leave out here, I
> implicitly accept.
>
> David Abrahams wrote:
>> * I'm not sure that ::
>>
>> proto::terminal< std::ostream & >::type cout_ = { std::cout };
>>
>> is guaranteed to have the nice initialization properties you aim
>> for. If ``cout_`` contains a reference, it isn't a POD, and
>> therefore is not obliged to be statically initialized.
>
> That's a bummer. I suppose I could add a POD pointer wrapper that
> Proto knows to dereference on each access.
Yeah, I s'pose. Seems unlikely that any real implementation is going to
treat the reference as non-POD for the purposes of initialization, but
you never know, I guess.
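FWIW, a minimal sketch of the wrapper idea, with everything here
hypothetical (Proto would have to know to unwrap it on each access)::

    template<typename T>
    struct reference_
    {
        T *ptr_;                          // a pointer keeps this a POD
        T &get() const { return *ptr_; }  // unwrapped on each access
    };

    // Aggregate initialization, so static initialization is guaranteed,
    // roughly:
    // proto::terminal< reference_<std::ostream> >::type cout_ = {{ &std::cout }};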
>> * That page also goes on at length about static initialization but
>> doesn't really explain why it's important. Imagine the reader
>> doesn't know the difference between static and dynamic
>> initialization.
>
> There's a separate rationale for static initialization in an appendix.
> I'll add a link to it from here.
OK.
>> * What happens if your type has a generalized operator? ::
>>
>> namespace fu
>> {
>> struct zero {};
>> #if 1
>> template <class T> T operator+(T x,zero) { return x; }
>> #else
>> double operator+(double x,zero) { return x; }
>> #endif
>> }
>>
>> int main()
>> {
>> // Define a calculator context, where _1 is 45 and _2 is 50
>> calculator_context ctx( 45, 50 );
>>
>> // Create an arithmetic expression and immediately evaluate it
>> double d = proto::eval( (_2 - _1) / _2 * 100 + fu::zero(), ctx );
>>
>> // This prints "10"
>> std::cout << d << std::endl;
>> }
>>
>> Answer: a nasty error message (at least on g++). Anything we can
>> do to improve it (just wondering)?
>
>
> It's similar to what happens in e.g. a linear algebra domain where
> vector terminals want to define += that actually does work as opposed to
> build expression trees. In that case, you'll need to disable proto's
> operator overloads with a grammar. Otherwise, the operators are ambiguous.
>
> I can't think of anything better.
In this case what I really wanted was to disable the existing operator.
Think of zero as a non-lazy type like double, that you might want to use
in a lazy context.
Anyway, I was only aiming at "can we improve the error message," rather
than, "can we actually make this work," although I can imagine some
approaches to the latter.
>> * is there a reason we need ``ref_`` as opposed to using true
>> references? (just curious, but the docs don't answer this
>> question).
>
> It's not strictly necessary, and in branches/proto/v3 there's a version
> of proto that uses true references. I found it complicated the
> implementation and caused a bunch of unnecessary remove_reference<>
> instantiations.
If the existing implementation isn't too complicated, you could always
add an internal "convert T& to ref_<T>" step, just to keep ref_ out of
the users' face.
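Something along these lines, perhaps (a sketch, not Proto code)::

    template<typename T> struct as_child       { typedef T type; };
    template<typename T> struct as_child<T &>  { typedef proto::ref_<T> type; };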
>> - It's a little jarring that the semantic value of a terminal
>> node is accessed by treating it as a child rather than as
>> something known as the node's *value*. Normally in attributed
>> parse trees, terminals store associated values and don't have
>> children. In fact I think "no children" is the very
>> *definition* of a "terminal", isn't it?
>
> That's understandable. I can add proto::value() that extracts the value
> from a terminal node. Do you think I should actively prevent people from
> using child_c<0>(term)? It "just works", and making it fail to compile
> would actually add overhead (a compile-time check).
No, I don't think making it fail to compile has any value. If you don't
promise people it will work, it's effectively illegal (or at least
deprecated) anyway, and leaving the functionality in place preserves
backward compatibility.
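Just to picture it, with proto::value() being the proposed addition::

    proto::terminal<int>::type i = {42};

    int a = proto::arg(i);         // today: the terminal's "argument"
    int b = proto::child_c<0>(i);  // today: the terminal as a one-child node
    int c = proto::value(i);       // proposed: says what is meant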
>> It would also help if you clued us in about how function call
>> exprs work outside the code comment, i.e. that the first node is
>> the function object itself. It's not intuitively obvious at
>> first -- I realize it's wrong, but I expected to see fun stored
>> inside the parent Expr node, not as a child.
>
> Right, you're not the first person who got hung up on this. It's worth a
> blurb explaining why the "function" is a child along with the "arguments".
I don't even need a "why" so much as a description of the _fact_ as more
than an aside in a code example.
>> - where does ``functional::arg<>()`` come from in this example,
>> and what does it do? If it's some other library and you're not
>> going to explain it, at least please give me a pointer so I can
>> learn about it. It appears not to be from Boost.Functional.
>
> I need to document the naming idioms. proto::foo() is a function,
> proto::result_of::foo is a metafunction that calculates the return type,
> proto::functional::foo is the equivalent function object, and (where
> relevant) proto::transform::foo is the primitive transform, and
> proto::_foo is an alias for proto::transform::foo.
>
> Now that you know, do you have any suggestions for improvements?
Hmm, that's hard to keep track of. If you dropped the alias it would be
a little simpler. Maybe s/functional/function_obj/ or even
s/functional/object/...
Wait, isn't proto::foo in the above case actually an instance of
proto::functional::foo? In which case, it's not a function and you
should make that clear in the language you use.
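For the record, here's the idiom as I now understand it, sketched for a
hypothetical foo::

    namespace proto
    {
        namespace result_of    // metafunction computing the return type
        { template<typename Expr> struct foo { /*...*/ }; }

        namespace functional   // the equivalent function object
        { struct foo { /*...*/ }; }

        namespace transform    // the primitive transform
        { struct foo { /*...*/ }; }

        typedef transform::foo _foo;   // the alias

        template<typename Expr>        // the free function
        typename result_of::foo<Expr>::type foo(Expr const &expr);
    }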
>> - what's the motivation for flattening?
>
> In some DSELs, the tree structure is irrelevant. E.g., in xpressive,
> (_>>_)>>_ is the same as _>>(_>>_) and they both need to be flattened
> into a list of matchers, because the leftmost must invoke the rightmost.
The conclusion isn't obvious from the "because."
> I'll try to add some motivation to the docs.
Thanks
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_construction/tags_and_meta_functions.html
> ...
>> - What purpose do the <>s serve in the table if you're not going
>> to show the metafunction signature?
>
> Elsewhere in the docs, I've used "foo()" to emphasize that foo is a
> function, and "foo<>" to emphasize that foo is a template. Not sure if
> it's a worthwhile convention.
I think as long as you're going to put the <>s in, it would help a lot
to remind people how the thing is used, so I would add arguments.
>> - Does it use the same tag and metafunction no matter the arity
>> of a function call?
>
> Not sure what you mean. What function call?
I'm talking about the tag::function row.
>> - What namespaces are the metafunctions in?
>
> boost::proto
My point was that there's no guide at the beginning that tells me how to
interpret such names. I would prefer in most cases to see everything
written as though
namespace proto = boost::proto;
were in effect, so there would be a lot more qualification.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_construction/construction_utils.html
> ...
>> - You're using "..." notation in the ``make_expr`` synopsis. Is
>> that intended to be C++0x, or...? You could either do it with
>> subscripts in a traditional way:
>>
>> .. parsed-literal::
>>
>> A\ :sub:`0`, A\ :sub:`1`, ... A\ :sub:`n`
>>
>> or spell out how to interpret your notation in the text. A
>> reference to a C++0x paper would be enough if that's what
>> you're trying to do.
>
> It's a C++0x variadic. But maybe subscripts would be better. And I need
> to say that the number of arguments is limited by BOOST_PROTO_MAX_ARITY.
I don't care which you do as long as people have some guidance about how
to interpret it.
>> - The ``DomainOrArg`` argument to ``result_of::make_expr`` is
>> confusing. I don't see a corresponding argument to the function
>> object. I might not have been confused by this except that you
>> seem to use that argument in the very next example.
>
> The make_expr function object does have an optional Domain template
> parameter:
>
> template<typename Tag, typename Domain = default_domain>
> struct make_expr : callable
>
> What I'm not showing is an overload of the proto::make_expr() free
> function that doesn't require you to specify the domain.
Can you do anything to clear up the confusion?
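Maybe just show the two forms side by side (a sketch; MyTag and
my_domain are made up)::

    proto::functional::make_expr<MyTag, my_domain> make_in_domain;
    make_in_domain(1, 'a');           // domain supplied via the type

    proto::make_expr<MyTag>(1, 'a');  // free function, default domain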
>> - I don't know how well or badly it would work with the rest of
>> the library, but I'm thinkin' in cases like this one it might
>> be possible to save the user some redundant typing::
>>
>> // One terminal held by reference:
>> int i = 0;
>>
>> typedef
>> proto::result_of::make_expr<
>> MyTag
>> , int & // <-- Note reference here
>> , char
>> >::type
>> expr_type;
>>
>> expr_type expr = proto::make_expr<MyTag>(boost::ref(i), 'a');
>>
>> I'm thinking the result of ``proto::make_expr<...>(...)`` could hold
>> everything by reference, but the type of
>> ``proto::result_of::make_expr< ... >`` would hold things by the
>> specified type.
>
> And rely on an implicit conversion between expression types?
Yes.
> I've tried to avoid that. Figuring what is convertible to what can be
> expensive, and it's hard to know whether you are paying that cost in
> your code or not because the conversions are implicit.
I don't understand, sorry. This doesn't look like something that
requires any type checking that wouldn't happen anyway.
>> Thus you'd end up being able to drop the use
>> of ``boost::ref()`` above. If you don't like the lack of
>> correspondence between the two ``make_expr``\ s, naturally you
>> could call one of them something else.
>
> That's a possibility I've considered. In branches/proto/v3, I have
> make_expr and make_expr_ref. It's not flexible enough, though. Sometimes
> you want one child held by reference and the other by value. This
> happens in transforms when inserting nodes into a tree. The new node
> must be held by value, but existing nodes can be held by reference since
> they won't go out of scope.
OK.
>> - testing out some of the examples on this page, I notice that
>> you explicitly specify namespaces in some places but not in
>> others (e.g. ``default_domain`` is unqualified), so they don't
>> compile without some "using." Can you use the automated
>> example testing that Joel developed?
>
> I use it in some places. I'll be more consistent about it.
I find that if I don't automate testing of examples, they are pretty
much always wrong.
>> - Is all this time spent on ``make_expr`` really appropriate at
>> this early stage of the tutorial? Seems to me we *ought* to be
>> able to do a lot of more sophisticated things with the library
>> before we start into the nitty-gritty of building expressions
>> explicitly (i.e. framework details). No?
>
> You're probably right. Currently, the users' guide is neatly divided
> into 5 sections: construction, evaluation, introspection, transformation
> and extension.
That sounds like an excellent structure for a reference section :-)
> That means I have to exhaustively cover construction
> before I get to anything else -- even make_expr() which is rather
> esoteric. I suppose I should rethink the overall structure. Suggestions
> welcome.
Walk us through practical examples in increasing order of
sophistication, showing the most useful features first and the more
esoteric ones later.
>> - Is this the first time you're showing us how to build a simple
>> lazy function? That should come *much, much* earlier.
>
> Not everything can come first!
Does it sound like I'm saying that about everything? I just think lazy
functions are so fundamental and familiar an example for expression
templates that people's heads will start to spin if they don't see
that it's possible with Proto early on.
>>
>> - If this is *truly* a lazy function, why can't I evaluate it
>> like this? ::
>>
>> S s = construct_S(1,'a')();
>
> In proto, operator() doesn't apply a lazy function, it builds another
> (even lazier?) function. But I'm sure you know that. Is your issue with
> my terminology here? This is a lazy function in the sense that it looks
> like a function call, but it is deferred to later.
Yeah, it's just terminology. You might explain what you mean, since for
most people I think a lazy function looks like a function call at both
the creation and the ultimate invocation.
>> - it
>> sez:
>>
>> What is new in this case is the fourth macro argument, which
>> specifies that there is an implicit first argument to
>> ``construct()`` of type ``construct_<X>``, where ``X`` is a
>> template parameter of the function
>>
>> * what is "the function?"
>
> construct()
My point is that it's not clear; just spell it out in the text please.
>> * which template parameter? (the first I think)
>
> Yes, so you can invoke it with construct<X>(_1,_2).
Ditto.
>> * what tells the library to substitute the function's template
>> parameter there?
>
> Not sure I understand the question.
What is the underlying rule that says how this works? Does the library
substitute the function's template parameter into all first arguments,
or what?
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_evaluation/proto_eval.html
>>
>> - It finally dawns on me that the 3rd ``make_expr`` is an
>> instance of the function object type in ``functional::``.
>
> It's not, actually, and it can't be. You need to be able to invoke it as:
>
> make_expr<MyTag>(args...)
>
> So proto::make_expr is a free function, not an instance of
> functional::make_expr.
>
>> Seems very familiar now, and I should have been very familiar
>> with this idiom before I started this review but nonetheless
>> was confused, so it bears explanation.
>
> You were misled by the fact that proto::eval is an instance of
> proto::functional::eval. It shouldn't be. Better to make it a free
> function like the others. No reason why it shouldn't be find-able with
> ADL.
OK. Well, the larger point is that the idioms need to be explained up
front so people don't have to create a series of wrong hypotheses like I
did before they understand the patterns you're using.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_evaluation/contexts.html
>>
>> - This nested eval function object doesn't follow the pattern of
>> other function objects. Maybe you should tell the reader why
>> not.
>
> How does it not? It's a TR1 function object like the others.
To begin with, you haven't said a peep up to now about the fact that
you're using the TR1 function object protocol. But that's not all: all
your other function objects up till now have been polymorphic, and thus
have had a templated operator() and a nested result<...> metafunction.
I'm arguing for more handholding when the patterns change up like this.
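To spell out the pattern shift I tripped over (a generic sketch, nothing
Proto-specific)::

    // polymorphic TR1 function object, as used elsewhere in the docs:
    struct poly
    {
        template<typename Sig> struct result;

        template<typename This, typename T>
        struct result<This(T)> { typedef T type; };

        template<typename T>
        T operator()(T const &t) const { return t; }
    };

    // monomorphic function object; a nested result_type suffices:
    struct mono
    {
        typedef int result_type;
        int operator()(int i) const { return i; }
    };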
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_evaluation/canned_contexts.html
> ...
>> - That said, a really good motivating case for using matches<>
>> seems to be missing.
>
> I should talk about how it can be used to improve error messages by
> validating expressions at API boundaries.
Yep. In general I've found enable_if's impact on error messages to be
disappointing (a long list of the overloads that didn't match, with
little explanation). It usually turns out to be better to match with an
overload that generates a specific error message.
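Roughly this contrast (MyGrammar and api_entry are hypothetical)::

    // enable_if: failure is "no matching function" plus a list of
    // rejected overloads
    template<typename Expr>
    typename boost::enable_if< proto::matches<Expr, MyGrammar> >::type
    api_entry(Expr const &expr);

    // matching overload + assertion: failure names the grammar directly
    template<typename Expr>
    void api_entry_checked(Expr const &expr)
    {
        BOOST_MPL_ASSERT((proto::matches<Expr, MyGrammar>));
        // ...
    }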
>> - These are really vague questions because I sense difficult
>> territory but I'd need some help to make them more concrete.
>> I'm wondering how these grammars would handle algebraic
>> structures like Rings and vector spaces, where the important
>> things are a relation between the parts. For example, IIRC
>> integers are a ring in two ways: with identity 0 over +, and
>> identity 1 over \*.
>
> I'm not seeing the relationship with Proto grammars.
I'll have to think more about how to describe it, or if it really matters.
>> A vector space consists of a matrix type, a
>> vector type, and a scalar type, all of which need to be
>> compatible.
>
> Compatible how? Value type? Dimension?
Yes and yes.
> You are wondering if a grammar can be used to discover these
> incompatibilities? That's an interesting question ... I don't have an
> answer for you right now.
Yeah, vague questions that I'd like to try to answer eventually by
applying the library in that domain.
>> - It might be possible to use function overloading to speed up
>> compile-time evaluation of ``matches<>``, if you're not already
>> doing things that way. See the techniques used in
>> ``mpl::set``.
>
> IIRC, my first approach to matches<> was to use overload sets. It didn't
> work out, but I don't remember why. I'll dig into it again when I have
> some time.
Good question, because it tends to make a really dramatic difference in
compile times.
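For reference, the mpl::set trick boiled down (sketch)::

    // Each element contributes an overload of "lookup"; membership is
    // decided by overload resolution, not by recursive instantiation.
    struct empty_set { static char lookup(...); };         // sizeof 1: miss

    template<typename Head, typename Tail>
    struct set_node : Tail
    {
        using Tail::lookup;
        static char (&lookup(Head *))[2];                  // sizeof 2: hit
    };

    template<typename Set, typename T>
    struct contains
    {
        static bool const value =
            sizeof(Set::lookup(static_cast<T *>(0))) == 2;
    };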
>> - "When given a grammar like this, Proto will deconstruct the
>> grammar and the terminal it is being matched against and see if
>> it can match all the constituents."
>>
>> Do you really mean "deconstruct?" It's not obvious to the
>> reader what you mean by that part of the sentence. Would it be
>> better to just cut that whole sentence?
>
> It uses the MPL lambda technique of using template-template parameters
> to rip apart template types. Deconstruct seemed as good a word as any,
> but maybe I don't need to talk about it at that level here.
Again, I'd stick to "what" and cover "how" elsewhere, if at all.
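That is, the "what" is just partial specialization over a
template-template parameter::

    template<typename T>
    struct deconstruct;              // primary: not a match

    template<template<typename> class F, typename A0>
    struct deconstruct< F<A0> >      // rips apart F<A0>
    {
        typedef A0 arg0;
    };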
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_introspection/if_and_not.html
> ...
>> - is ``is_same`` another proto facility, or is it a boost type trait?
>
> boost::is_same, the type trait.
This is why I favor explicit qualification in such docs.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_introspection/defining_dsel_grammars.html
> ...
>> - I would be *very* interested to see what Proto would look like
>> under ConceptGCC -- nobody really seems to understand the
>> interaction of DSELs and metaprogramming with concepts,
>> AFAICT. Have you thought about it?
>
> I haven't. You're the second person to ask me that. The first was Gary
> Powell, and it's a topic that has him deeply concerned. He feels there
> may be some inherent incompatibilities between the necessarily(?)
> unconstrained system of templates that libraries like Lambda seem to
> require and the constrained templates that people are likely to write
> in the future. It's an open question. I don't have the concept-foo yet
> to answer it myself.
To start with, there's one very basic thing we won't be able to do: name
template specializations that cannot be instantiated, like
vector<foo(bar)>
just for DSEL purposes.
>> - You need to do some work to convince me this is even simpler
>> than the EBNF form. There's certainly more of it!
>
> Simpler in the sense that you need not encode precedence and
> associativity into your grammar, the way you need to with EBNF.
Can you make that more obvious in the text?
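Something like this side-by-side would make the point (calculator
sketch)::

    // EBNF must layer rules to encode precedence and associativity:
    //   expression ::= term   ( ('+' | '-') term   )*
    //   term       ::= factor ( ('*' | '/') factor )*
    //   factor     ::= INT | '(' expression ')'

    // A Proto grammar matches trees the compiler has already built, so
    // precedence and associativity come for free:
    struct Calc
      : proto::or_<
            proto::terminal< proto::_ >
          , proto::plus< Calc, Calc >
          , proto::minus< Calc, Calc >
          , proto::multiplies< Calc, Calc >
          , proto::divides< Calc, Calc >
        >
    {};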
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation.html
> ...
>
>> - Let me guess at the real point behind transformations when you
>> have ``eval()``. I think it's that ``eval()`` only lets you
>> operate on nodes independently of their siblings and ancestors.
>> A transformation lets you gather all the context in the
>> expression and use it at one time. Right? If so, say that
>> explicitly. If not, please do clarify.
>
> Yes, that's the major reason.
>
>
>> - "It says to create an object of type ``terminal<long>::type`` and
>> initialize it with the result of the ``_arg`` transform. ``_arg`` is a
>> transform defined by Proto which essentially calls ``proto::arg()``
>> on the current expression."
>>
>> ...which is a terminal, and in this world, terminals have a
>> child, which is known as an "argument." Now what does
>> ``proto::arg()`` do? Ah, yes, in this case it extracts the
>> value associated with the terminal node. If I keep all those
>> translations in mind, it begins to make sense.
>
> You got it.
>
>> - ``when< grammar, transform >`` seems like it would be better
>> named ``replace< grammar, transform >`` or something.
>
> Why do you say that?
My understanding was that tree transformations might match parts of the
tree and replace them with transformed versions, while leaving other
subtrees untouched.
I'm not sure I believe that there's a better name than "when," but this
shows you how I was thinking about what you wrote, anyway.
>> - A grammar decorated with transforms is a function object that takes three
>> parameters:
>>
>> * expr -- the Proto expression to transform
>> * state -- the initial state of the transformation
>> * visitor -- any optional mutable state information
>>
>> This is really confusing. What's the difference between state
>> and visitor? The descriptions make them sound like the same
>> thing.
>
>
> State is an accumulation variable, for use primarily by the fold family
> of transforms.
Right, that's as I expected. Well, in this case it isn't a variable, but
I know what you mean.
> In flight, it is the current state of the transformation
> so far (e.g., a partially constructed fusion::list). The type of the
> state object usually changes during transformation.
>
> Visitor is just a blob of mutable data, whatever you want. The type of
> the visitor usually doesn't change during the transformation. None of
> proto's built in transforms touch it in any way --- it is passed through
> unchanged.
Oh, oh... please don't call that a visitor then! It's just auxiliary
data; you might call it "data" or "aux". When I see "visitor" I
automatically think of an object with associated operations that are
applied at each traversed element of a structure.
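It would also help to anchor the three parameters with a concrete call
(a sketch; MyGrammar, expr, and the state/aux types are arbitrary)::

    SomeState initial;    // e.g. a partially built fusion::list
    int aux = 0;          // whatever mutable data you need

    // a grammar with transforms is a function object over all three:
    MyGrammar()(expr, initial, aux);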
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/example__calculator_arity_transform.html
>>
>> - "Our job will be to write a transform that calculates the arity
>> of any calculator expression."
>>
>> I think you need to explain why that's a realistic example.
>
> OK. It's actually essential in the Lambda example, where the nullary
> lambda<>::operator() needs to state its return type, but it would be a
> compile error to try to compute it on a non-nullary lambda expression.
>
> An easier-to-grok motivation would be to be able to catch errors such
> as applying too many or too few arguments to a calculator expression.
> The number of arguments must match the expression's arity, all of which
> is knowable at compile time.
Either explanation would be fine, although the 2nd is better (is-nullary
would do for the first); just add it to the document.
>> - I think ::
>>
>> when< unary_expr< _, CalcArity >, CalcArity(_arg) >
>>
>> should be spelled ::
>>
>> when< unary_expr< _, CalcArity >, CalcArity(_arg(_)) >
>
> CalcArity(_arg) and CalcArity(_arg(_)) are synonyms today. Do you feel
> that the (_) should be required? (_) is optional because _arg is a
> so-called primitive transform, for which expr, state, and visitor can be
> implicit. It's not just syntactic sugar -- it's less work for Proto.
In that case, no, I don't feel it should be required. However, I think
this library is hard enough to grok that consistency of notation should
be a primary goal in the user guide, so I would use the more consistent
spelling. Can you get most of the efficiency back with a single
template specialization on <_arg(_)> ? That's often the case in such
situations.
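That is, something like (sketch, using the unqualified names from the
docs)::

    // general case: evaluate the argument transforms, then call
    template<typename Fun> struct apply_transform { /*...*/ };

    // one specialization forwards _arg(_) to the cheap primitive path:
    template<> struct apply_transform< _arg(_) >
      : apply_transform< _arg >
    {};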
>> or if you accept my naming scheme,
>>
>> when< unary_expr< _, CalcArity >, CalcArity(_child(_)) >
>
> Sure, _child works.
>
>> - This seems to imply that if you have two different transforms
>> for the same grammar, you end up essentially repeating the
>> syntax part... right?
>
> Yes. I've toyed with ways to non-intrusively decorate a grammar with
> transforms, but they all end up being syntactically heavier than just
> repeating the grammar.
I guess one possibility is to create a metafunction that essentially
builds the chosen transform into a parameterized grammar. I'm not
attached to solving this "problem," though ;-)
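For instance (sketch)::

    // the syntax is written once; the transforms are parameters:
    template<typename TermT, typename UnaryT>
    struct CalcSyntax
      : proto::or_<
            proto::when< proto::terminal< proto::_ >, TermT >
          , proto::when<
                proto::unary_expr< proto::_, CalcSyntax<TermT, UnaryT> >
              , UnaryT
            >
        >
    {};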
>> - I think you're missing a good example of why you'd do two
>> different evaluations on the same expr. A classic might be
>> regular expression evaluation and symbolic differentiation.
>> E.g., ::
>>
>> x = eval(_1 * _1, normal(7)); // 49
>> y = eval(_1 * _1, differentiate(7)); // 14
>
> Symbolic differentiation?
d(x*x)/dx = 2x
(sorry, when I said "regular expression evaluation" I just meant "normal
expression evaluation"). The idea here is that given an algebraic
expression, you can create an evaluator that computes the Nth-order
derivative of that expression for any N, including zero. IIRC this sort
of thing is actually useful in situations where you want to use the
Newton-Raphson method to find the zeroes of a function.
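A sketch of what the differentiating evaluator might do at a product
node (not Proto API verbatim; derivative_ctx and normal_ctx are
hypothetical)::

    struct derivative_ctx
    {
        normal_ctx values;   // evaluates subexpressions' plain values

        template<typename Expr, typename Tag = typename Expr::proto_tag>
        struct eval;         // terminal, plus, etc. cases elided

        // product rule at a multiplies node: d(a*b) = d(a)*b + a*d(b)
        template<typename Expr>
        struct eval<Expr, proto::tag::multiplies>
        {
            typedef double result_type;

            double operator()(Expr &expr, derivative_ctx &ctx) const
            {
                double a  = proto::eval(proto::left(expr),  ctx.values);
                double b  = proto::eval(proto::right(expr), ctx.values);
                double da = proto::eval(proto::left(expr),  ctx);
                double db = proto::eval(proto::right(expr), ctx);
                return da * b + a * db;  // _1 * _1 at 7: 1*7 + 7*1 = 14
            }
        };
    };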
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms.html
> ...
>> - "It can be used without needing to explicitly specify any
>> arguments to the transform." Huh? What kind of arguments?
>
> This is why _arg is a synonym for _arg(_). In this case, "_" is an
> argument to the transform.
Okay. I tend to think of placeholders as not being actual arguments.
> Because _arg is "callable", you can leave off
> _ and also _state and _visitor. They are implicit.
The point being that the language confused me. Do what you will with
that fact; clarifying 50% of the things I complain about may make the
other 50% understandable to me.
>> - "These are the building blocks from which you can compose
>> larger transforms using function types." *Which* are the
>> building blocks? "These" needs an antecedent, and in this case
>> it's really not clear to me what it should be.
>
> Primitive transforms. Composite transforms are built using function
> types from primitive transforms. That's where it starts. If a transform
> is DNA, the primitive transforms are the base pairs.
Add an antecedent to "these," then. "These primitive transforms..."
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/arg_c_and_friends.html
>>
>> - "``_arg``, ``_left``, and ``_right``": shouldn't you include
>> ``arg_c`` here?
>>
>> - huh, these things are all in ``boost::proto::transform``.
>> Didn't I see them used without qualification earlier?
>
> proto::_left is a typedef for proto::transform::left. I need to be way
> more explicit about that.
Or something :-)
>> Hmmm... ::
>>
>> transform::right::result<void(Expr, State, Visitor)>::type
>>
>> I think I'd understand this better as:
>>
>> boost::result_of<transform::right(Expr,State,Visitor)>::type
My point here is that it makes the connection with result_of explicit
and leaves out implementation details (e.g. does right have a nested
result_type or a nested result template?)
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/if.html
>>
>> - The example at the bottom of the page is::
>>
>> struct ByValOrRef
>> : when<
>> terminal<_>
>> , if_<
>> mpl::less_equal<
>> mpl::sizeof_<_arg>
>> , mpl::size_t<4>
>> >()
>> , _make_terminal(_arg)
>> , _make_terminal(_ref(_arg))
>> >
>> >
>> {};
>>
>> 1. you should probably be checking has_trivial_destructor
>> before you decide to store by value. A clone_ptr might
>> easily be 4 bytes long.
>
> IMO, that would complicate the example without illustrating anything
> additional about proto::if_.
OK
>
>
>> 2. Can we replace ``_make_terminal(_arg)`` above with
>> ``_expr``? If not, why not?
>
> No, _expr is analogous to lambda::_1 ... whatever the first argument is,
> return that. (_state is analogous to lambda::_2 and _visitor is
> analogous to lambda::_3 --
Oooooh! Please, a statement up front about that!
> they return the state and visitor
> parameters). So _expr(_arg) would be the same as _arg. It wouldn't
> create a new expression node, which is what _make_terminal(_arg) does.
>
> _make_terminal is a typedef for functional::make_expr<tag::terminal>. I
> don't think I say that anywhere, though. :-/
There's a lot of that going around ;-)
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/and_or_not.html
> ...
>> - Does ``and_< T0,T1, ... Tn >`` *really* only apply ``T``\ *n*?
>
> Yes. It can't apply all the transforms. The result of applying the first
> transform might have a type that the second transform can't make sense
> of. Transforms don't chain that way.
They could be chained by and_, could they not?
> The default transform for and_ is admittedly a bit arbitrary and not
> terribly useful. But you can always override it with proto::when
I'm lost. Why override it when you don't have to use the arbitrary and
not-terribly-useful thing in the first place?
>> - wow, you totally lost me on this page. I can't understand why
>> the stated behaviors of these transforms make sense
>> (e.g. correspond to their names), and I can't understand why
>> the usage of ``and_`` in ``UnwrapReference`` is an example of a
>> transform and not a grammar element. The outer ``or_`` is a
>> grammar element, right? When ``or_`` is used as a grammar
>> element, aren't its arguments considered grammar elements also?
>
>
> and_, or_ and not_ are both grammar elements and transforms.
Yes, I'm aware of that duality. My understanding is that how they are
treated depends on context. My point is that in the UnwrapReference
example, and_ is treated as a grammar element and not as a transform.
That's doubly true because given its sub-elements, even the default
transform associated with and_ has no interesting effects: whether it
returned the result of the first, the last, or chained the transforms,
you'd get the same answer.
> Every grammar element has a default transform. The default transform
> of or_ should make sense ... apply the transform associated with
> whichever subgrammar matched. and_ we've just discussed. not_ just
> returns the expression unmodified because there's nothing else it can
> do. If an expression matches not_<G>, all we know is that it doesn't
> match G. And if it doesn't match G, we can't apply G's transform,
> because G won't know what to do with it.
>
> Then why bother with default transforms at all, you ask?
I wasn't asking; they seem obviously useful, and I don't see why the
above would lead me to ask that.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/call.html
> ...
>> - This business of handling transforms specially when they can
>> accept 3 arguments is hard to understand. Aside from the fact
>> that special cases tend to make things harder to use, it's not
>> clear what it's really doing or why. I guess you're saying
>> that state and visitor are implicitly passed through when they
>> can be?
>
> Yes, and the expression, too.
It would be nice to have a logical overview up front that describes the
transformation as a uniform process of operating on these three values
at every matched node in the tree, and that you have these three
specially-named placeholders that pick up those values... unless their
values are overridden by supplying a round lambda expression.
>> I can understand why you'd want something like that,
>> but let's look at this more closely:
>>
>> "For callable transforms that take 0, 1, or 2 arguments,
>> special handling is done to see if the transform actually
>> expects 3 arguments..."
>>
>> Do you really mean, "for callable transforms that are *passed*
>> 0, 1, or 2 arguments...?" Or maybe it's something more
>> complicated, like "for callable transforms that *are written as
>> though* they take 0, 1, or 2 arguments...?"
>
> Yes, the latter.
OK, please clarify the language. My point is that I had to work all
this stuff out for myself by writing about it.
> The following transform are synonyms:
>
> _arg
> _arg()
> _arg(_)
> _arg(_,_state)
> _arg(_,_state,_visitor)
>
> That is true not just for _arg but for any primitive transform. And for
> completeness (or just to make it more confusing?) you can use _expr
> instead of _ and it means the same thing here.
>
> As I say above, using _arg instead of _arg(_,_state,_visitor) is more
> than just sugar. It's less work for Proto.
Great; the picture is becoming clearer. Let's get that clarity into the
documentation.
>> - Again the use of ``proto::callable`` without a prior
>> explanation... oh! there it is, finally, in the footnote of the
>> example! If you check for "callable with 3 args" using
>> metaprogramming tricks anyway, why not do the same for
>> "callable" in general?
>
> Not sure I understand the questions. Getting a little bleary-eyed myself.
The point is that, given everything you've written so far, at this point
I wonder why I have to specialize is_callable (or derive a class from
proto::callable) when you have (and use) a way to detect callability.
>> - So let me see if I got this right. The naming convention is:
>>
>> ``foobar_tag``
>> a tag type used in grammars... hmm, really a "node
>> type identifier?"
>>
>> ``foobar_``
>> The Proto expression object that builds a node of the above
>> type
>>
>> ``foobar``
>> a non-lazy function object that performs the foobar
>> action
>>
>> ``FooBar``
>> a corresponding grammar element and/or tree transform
>>
>> Whether I got it right or not, it would be helpful to see this
>> spelled out somewhere, much earlier.
>
>
> I wasn't trying to establish any kind of naming convention in the
> make_pair example. None of this code is part of proto. It's just an
> example.
I didn't think you were trying to _establish_ a naming convention at
this point, but I assumed that you were continuing to follow a
consistent naming convention that you'd been using all along. Was I
wrong?
>> - Hmm, "function" looks like the wrong name for an operator,
>> because it doesn't describe an operation. "Call" would be more
>> to-the-point, but that one's already taken. Worth a little
>> thought.
>
> Someone else said something similar, I forget who. The suggestion was
> "apply". I'm not opposed.
I am ;-)
On reflection, once we're in DSEL territory these things can become
syntactic notation divorced from their usual semantics, so maybe I'm
changing my mind. A very consistent approach to naming these things
would help reduce mental drag, though.
> Or the "call" transform can be renamed
> "apply", and "function" can be "call".
That might be better.
> I don't care.
me neither ;-)
>> - Translating the example into terms I understand::
>>
>> make_pair(_arg(_arg1), _arg(_arg2))
>>
>> becomes ::
>>
>> make_pair(_value(_left(_)), _value(_right(_)))
>>
>> which looks a bit better to me.
>
>
> typedef _arg _value;
>
> You're good to go! :-)
I realize that; I'm suggesting the 2nd way makes an easier-to-grasp
presentation.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/make.html
> ...
>> - In fact, this section of the user guide is getting very
>> "reference-manual-ish." Maybe I am expecting too much, but I'd
>> like to see ``make<>`` and ``call<>`` treated together. And
>> I'm not sure we need to see a synopsis for each of these
>> transforms, since they all follow the same pattern.
>
> OK, others have noted that the entire Expression Transformation section
> needs a complete rewrite to be more approachable. I'll apply some elbow
> grease.
Thanks; I know this isn't easy.
>> I'm a bit confused about the purpose of the MPL-lambda-ish
>> "check to see if there's a nested ``::type`` here" step. Could
>> you explain why you're doing that?
>
> It's needed here for the same reason that it is needed in MPL
> lambdas.
Oh, but it *isn't* needed there; it's merely a convenience.
> Consider a transform such as:
>
> fusion::single_view< add_const<_> >( _ )
>
> Do you see now? Gotta look for the nested ::type in add_const *after*
> the placeholder (er, nested transform) has been evaluated.
Sure. What I mean is that the option to have no nested ::type is merely
convenient, and not even all that reliable in most cases (how do you
know your vector<>'s implementor didn't add a member ::type?).
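For the record, the detection is the stock SFINAE member-type test,
which fires on any member ::type, intended or not::

    template<typename T>
    struct has_nested_type
    {
        template<typename U> static char (&test(typename U::type *))[2];
        template<typename U> static char test(...);

        static bool const value = sizeof(test<T>(0)) == 2;
    };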
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/bind.html
>>
>> I notice you're using ``void`` here:
>>
>> .. parsed-literal::
>>
>> make<Object>::result<\ **void**\ (Expr, State, Visitor)>::type
>>
>> It's my understanding of the ``result_of`` protocol that you
>> can't just leave out the function object type, because the
>> result of the function might depend on whether the function
>> object itself is const, non-const, an lvalue, or an rvalue.
>> Not sure what to do here; you don't want to spell the whole
>> thing out, clearly.
>
> Right. It's not just an issue in the documentation ... I use void in
> the code, too.
I understand that, but since you defined these templates, you're free to
do so... Any user-defined function objects will presumably have their
return types deduced by boost::result_of, and if you passed void there,
well, it wouldn't know which function object to check the return type
of!
My point is that if you use void like this in the documentation, you
need to describe how your components deal with it. On the other hand,
if you just used boost::result_of in the documentation, I think maybe
you'd skirt the whole issue.
>> Maybe all you need to do to make this okay is provide a blanket
>> statement that all Proto function objects ignore these details
>> of the function object type used with ``result_of``.
>
> OK. I have a rationale section about the liberties I take with the
> ResultOf protocol. I'll discuss the void thing there too, and add a link
> here.
First pls consider whether it's necessary in light of my statement
above.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/when.html
> ...
>> - Why the assumption of callability in this one place?
>
> Compile-time performance.
One word: "footnote." ;-)
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/pass_through.html
>>
>> The example doesn't even name pass_through directly. Can we do
>> better?
>
> I've never used pass_through explicitly, but I use it all the time
> implicitly, as in this example.
Then maybe it doesn't deserve so much space in the user guide.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/is_callable.html
> ...
>> - You may not want to pay this price, but can you
>> can disassemble the template specialization and see if any of
>> the arguments are transforms, and if not, assume it's callable?
>> That would at least handle the ``times2<int>`` case.
>
> See http://lists.boost.org/Archives/boost/2008/03/134450.php for why
> this doesn't work.
I'll take your word for it.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_extension/inhibiting_overloads.html
> ...
>> - It's not clear to me why you need all this fancy footwork to
>> define a special meaning for ``operator[]``. Isn't that what
>> contexts are for?
>
> So for:
>
> ( v2 + v3 )[ 2 ];
>
> ... you're saying to let proto build the expression tree representing
> the array access, and evaluate it lazily with a context. Seems reasonable.
Does that undermine the whole example?
What was the example actually doing in lieu of that? Are the tools it
uses still useful in other cases if you can do the job in this case with
a context?
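For concreteness, the context version you describe might look something
like this (a sketch; elementwise_ctx is a hypothetical context that
evaluates vector terminals at a fixed index)::

    struct subscript_ctx
      : proto::callable_context< subscript_ctx const >
    {
        typedef double result_type;

        // intercept (expr)[index] nodes:
        template<typename Vec, typename Idx>
        double operator()(proto::tag::subscript,
                          Vec const &vec, Idx const &idx) const
        {
            elementwise_ctx at(proto::arg(idx)); // index terminal's value
            return proto::eval(vec, at);         // evaluate lhs at [i]
        }
    };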
>> - Could you explain why this::
>>
>> typedef typename proto::terminal< std::vector<T> >::type expr_type;
>>
>> lazy_vector( std::size_t size = 0, T const & value = T() )
>> : lazy_vector_expr<expr_type>( expr_type::make( std::vector<T>( size, value ) ) )
>> {}
>>
>> couldn't be written as::
>>
>> lazy_vector( std::size_t size = 0, T const & value = T() )
>> : lazy_vector_expr<expr_type>( std::vector<T>( size, value ) )
>> {}
>
>
> Because lazy_vector_expr<expr_type>'s constructor is expecting an
> expr_type object, not a std::vector<> object. There is no conversion
> between the two. In other words, a T is not implicitly convertible to a
> terminal<T>::type. And if terminal<T>::type had a converting
> constructor, it couldn't be statically initialized.
Ah, thank you very much. Such a note would be very helpful in the text.
>> - But I have to say, that use of the grammar to restrict the
>> allowed operators is *way cool*. I just think it should have
>> been shown *way earlier* ;-).
>
> I can't show *all* the good bits first.
I actually disagree, for some reasonable definition of "first."
> Then there'd be no reason to read further! :-)
It sounds a bit like you're saying it's your goal in the user guide to
eventually teach the reader everything about the library, whether he wants
to learn the details or not. I don't think you should be disappointed
if she stops after she's learned to use the library powerfully but long
before she learns many of the details she won't need.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_extension/expression_generators.html
>>
>> - "After Proto has calculated a new expression type, it checks
>> the domains of the children expressions. They must match."
>>
>> Does this impair DSEL interoperability?
>
> It does. Thanks for bringing that issue up; I had forgotten about it.
> It's an open design question. What is the domain of an expression such
> as (A + B) where A and B are in different domains? There needs to be an
> arbitration mechanism, but Proto doesn't have one yet. I don't know what
> that would look like.
This is important, because interoperability is one of the most
powerful arguments for DSELs. I think there are a few cases, at least,
where you can make some default decisions. For example A(B) and A[B]
both ought to be handled in A's domain in most cases.
Other more symmetric expressions probably need some explicit wrappers:
A + in_domain<A>(B)
or something.
>> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/examples/rgb.html
>>
>> - This does not appear to follow your naming conventions.
>> - It would be really nice to see some comparative analysis of
>> this versus the PETE version. Likewise for the Vec3 example.
>
> I might be able to link to the PETE version, but if I distribute it with
> Proto, there's the license issue.
IANAL, but I think a PETE usage example, especially, can be reproduced as
"fair use."
> And I really don't want to have to explain PETE in Proto's docs.
I was thinking more along the lines of "_this_ roughly corresponds to
_that_ but look how much easier _this other thing_ is in Proto."
--
Dave Abrahams
Boost Consulting
http://boost-consulting.com