From: Eric Niebler (eric_at_[hidden])
Date: 2008-03-27 20:12:35
Dave gave a lot of good feedback. Any that I leave out here, I
implicitly accept.
David Abrahams wrote:
> * I'm not sure that ::
>
> proto::terminal< std::ostream & >::type cout_ = { std::cout };
>
> is guaranteed to have the nice initialization properties you aim
> for. If ``cout_`` contains a reference, it isn't a POD, and
> therefore is not obliged to be statically initialized.
That's a bummer. I suppose I could add a POD pointer wrapper that Proto
knows to dereference on each access.
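Something like this untested sketch is what I have in mind (the wrapper
name is made up, and Proto would have to be taught to dereference it):

  // Sketch only: a POD wrapper that holds a pointer rather than a
  // reference, so the enclosing terminal stays an aggregate and remains
  // eligible for static initialization.
  template<typename T>
  struct pod_ref
  {
      T *ptr;                         // just a pointer; still a POD
      T &get() const { return *ptr; } // dereferenced on each access
  };

  // hypothetical usage:
  // proto::terminal< pod_ref<std::ostream> >::type cout_ = {{ &std::cout }};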
> expr< tag::terminal, args0< placeholder1 >, 0 >
...
>
> - Next, there's a great deal left unexplained in the name
> ``args0``. What does "args" mean? What does the "0" mean?
This is part of the larger terminals-have-no-children issue, which you
bring up a few times. I'll address it below...
> * That page also goes on at length about static initialization but
> doesn't really explain why it's important. Imagine the reader
> doesn't know the difference between static and dynamic
> initialization.
There's a separate rationale for static initialization in an appendix.
I'll add a link to it from here.
> * What happens if your type has a generalized operator? ::
>
> namespace fu
> {
> struct zero {};
> #if 1
> template <class T> T operator+(T x,zero) { return x; }
> #else
> double operator+(double x,zero) { return x; }
> #endif
> }
>
> int main()
> {
> // Define a calculator context, where _1 is 45 and _2 is 50
> calculator_context ctx( 45, 50 );
>
> // Create an arithmetic expression and immediately evaluate it
> double d = proto::eval( (_2 - _1) / _2 * 100 + fu::zero(), ctx );
>
> // This prints "10"
> std::cout << d << std::endl;
> }
>
> Answer: a nasty error message (at least on g++). Anything we can
> do to improve it (just wondering)?
It's similar to what happens in, e.g., a linear algebra domain, where
vector terminals want to define a += that actually does work, as opposed
to building expression trees. In that case, you need to disable Proto's
operator overloads with a grammar; otherwise, the operators are ambiguous.
I can't think of anything better.
> * is there a reason we need ``ref_`` as opposed to using true
> references? (just curious, but the docs don't answer this
> question).
It's not strictly necessary, and in branches/proto/v3 there's a version
of proto that uses true references. I found it complicated the
implementation and caused a bunch of unnecessary remove_reference<>
instantiations. ref_<> forwards much of the expr<> interface, so that,
for instance, I can access E::proto_tag regardless of whether E is an
expr<> or a ref_<expr<> >.
> - ``arg_c`` is confusingly named. I realize that it is in some
> sense a way to get at the Nth argument to an operation, but
> it's really an operation on an Expr tree and thus I would
> rather see it called ``child_c``. Likewise ``arg()`` ought to
> be ``child()``. That would certainly be more consistent with
> ``left()`` and ``right()``.
OK.
> - It's a little jarring that the semantic value of a terminal
> node is accessed by treating it as a child rather than as
> something known as the node's *value*. Normally in attributed
> parse trees, terminals store associated values and don't have
> children. In fact I think "no children" is the very
> *definition* of a "terminal", isn't it?
That's understandable. I can add proto::value() that extracts the value
from a terminal node. Do you think I should actively prevent people from
using child_c<0>(term)? It "just works", and making it fail to compile
would actually add overhead (a compile-time check).
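Roughly what I mean, using today's spelling (value() would just be a
clearer-named synonym for the terminal case):

  proto::terminal< int >::type i = {42};
  int x = proto::arg( i );      // today: extracts the 42 stored in the terminal
  // int y = proto::value( i ); // proposed spelling for terminals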
> It would also help if you clued us in about how
> function call exprs work outside the code comment, i.e. that
> the first node is the function object itself. It's not
> intuitively obvious at first -- I realize it's wrong, but I
> expected to see fun stored inside the parent Expr node, but not
> as a child.
Right, you're not the first person who got hung up on this. It's worth a
blurb explaining why the "function" is a child along with the "arguments".
> - where does ``functional::arg<>()`` come from in this example,
> and what does it do? If it's some other library and you're not
> going to explain it, at least please give me a pointer so I can
> learn about it. It appears not to be from Boost.Functional.
I need to document the naming idioms. proto::foo() is a function,
proto::result_of::foo is a metafunction that calculates the return type,
proto::functional::foo is the equivalent function object, and (where
relevant) proto::transform::foo is the primitive transform, and
proto::_foo is an alias for proto::transform::foo.
Now that you know, do you have any suggestions for improvements?
> - what's the motivation for flattening?
In some DSELs, the tree structure is irrelevant. E.g., in xpressive,
(_>>_)>>_ is the same as _>>(_>>_) and they both need to be flattened
into a list of matchers, because the leftmost must invoke the rightmost.
I'll try to add some motivation to the docs.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_construction/tags_and_meta_functions.html
...
> - What purpose do the <>s serve in the table if you're not going
> to show the metafunction signature?
Elsewhere in the docs, I've used "foo()" to emphasize that foo is a
function, and "foo<>" to emphasize that foo is a template. Not sure if
it's a worthwhile convention.
> - Does it use the same tag and metafunction no matter the arity
> of a function call?
Not sure what you mean. What function call?
> - What namespaces are the metafunctions in?
boost::proto
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_construction/construction_utils.html
...
> - You're using "..." notation in the ``make_expr`` synopsis. Is
> that intended to be C++0x, or...? You could either do it with
> subscripts in a traditional way:
>
> .. parsed-literal::
>
> A\ :sub:`0`, A\ :sub:`1`, ... A\ :sub:`n`
>
> or spell out how to interpret your notation in the text. A
> reference to a C++0x paper would be enough if that's what
> you're trying to do.
It's a C++0x variadic. But maybe subscripts would be better. And I need
to say that the number of arguments is limited by BOOST_PROTO_MAX_ARITY.
> - The ``DomainOrArg`` argument to ``result_of::make_expr`` is
> confusing. I don't see a correponding argument to the function
> object. I might not have been confused by this except that you
> seem to use that argument in the very next example.
The make_expr function object does have an optional Domain template
parameter:
template<typename Tag, typename Domain = default_domain>
struct make_expr : callable
What I'm not showing is an overload of the proto::make_expr() free
function that doesn't require you to specify the domain.
> - I don't know how well or badly it would work with the rest of
> the library, but I'm thinkin' in cases like this one it might
> be possible to save the user some redundant typing::
>
> // One terminal held by reference:
> int i = 0;
>
> typedef
> proto::result_of::make_expr<
> MyTag
> , int & // <-- Note reference here
> , char
> >::type
> expr_type;
>
> expr_type expr = proto::make_expr<MyTag>(boost::ref(i), 'a');
>
> I'm thinking the result of ``proto::make_expr<...>(...)`` could hold
> everything by reference, but the type of
> ``proto::result_of::make_expr< ... >`` would hold things by the
> specified type.
And rely on an implicit conversion between expression types? I've tried
to avoid that. Figuring out what is convertible to what can be expensive,
and it's hard to know whether you are paying that cost in your code or
not because the conversions are implicit.
> Thus you'd end up being able to drop the use
> of ``boost::ref()`` above. If you don't like the lack of
> correspondence between the two ``make_expr``\ s, naturally you
> could call one of them something else.
That's a possibility I've considered. In branches/proto/v3, I have
make_expr and make_expr_ref. It's not flexible enough, though. Sometimes
you want one child held by reference and the other by value. This
happens in transforms when inserting nodes into a tree. The new node
must be held by value, but existing nodes can be held by reference since
they won't go out of scope.
> - testing out some of the examples on this page, I notice that
> you explicitly specify namespaces in some places but not in
> others (e.g. ``default_domain`` is unqualified), so they don't
> compile without some "using." Can you use the automated
> example testing that Joel developed?
I use it in some places. I'll be more consistent about it.
> - Ooh, I really hate ``posit``. It's a word that has an English
> meaning, yet you're using it as some kind of abbreviation,
> which sounds like a positional iterator and sent me scurrying
> back to the table on the previous page to see what it meant.
> How 'bout ``unary_plus``?
Welcome to the bike shed discussion! :-) We've covered this one, and
yes, I'll change it to unary_plus.
> - Is all this time spent on ``make_expr`` really appropriate at
> this early stage of the tutorial? Seems to me we *ought* to be
> able to do a lot of more sophisticated things with the library
> before we start into the nitty-gritty of building expressions
> explicitly (i.e. framework details). No?
You're probably right. Currently, the users' guide is neatly divided
into 5 sections: construction, evaluation, introspection, transformation
and extension. That means I have to exhaustively cover construction
before I get to anything else -- even make_expr() which is rather
esoteric. I suppose I should rethink the overall structure. Suggestions
welcome.
> - How do I compile this example? At namespace scope it seems to
> choke g++ with::
>
> /tmp/tst.cpp:74: error: variable expr_type expr has initializer but incomplete type
>
> I think that's because ``MyExpr`` is incomplete. Adding an
> empty body doesn't help, though.
Yeah, that's pretty hand-wavy. Will fix.
> - it
> sez:
>
> "You would like to define a foo() factory function that
> itself was a template"
>
> Wow, I'm totally lost on the motivation here. Why would I like
> that? Why would I define such a tag in the first place?
I'll improve the motivation.
> - Is this the first time you're showing us how to build a simple
> lazy function? That should come *much, much* earlier.
Not everything can come first!
>
> - If this is *truly* a lazy function, why can't I evaluate it
> like this? ::
>
> S s = construct_S(1,'a')();
In proto, operator() doesn't apply a lazy function, it builds another
(even lazier?) function. But I'm sure you know that. Is your issue with
my terminology here? This is a lazy function in the sense that it looks
like a function call, but it is deferred to later.
> - it
> sez:
>
> What is new in this case is the fourth macro argument, which
> specifies that there is an implicit first argument to
> ``construct()`` of type ``construct_<X>``, where ``X`` is a
> template parameter of the function
>
> * what is "the function?"
construct()
> * which template parameter? (the first I think)
Yes, so you can invoke it with construct<X>(_1,_2).
> * what tells the library to substitute the function's template
> parameter there?
Not sure I understand the question.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_evaluation/proto_eval.html
>
> - It finally dawns on me that the 3rd ``make_expr`` is an
> instance of the function object type in ``functional::``.
It's not, actually, and it can't be. You need to be able to invoke it as:
make_expr<MyTag>(args...)
So proto::make_expr is a free function, not an instance of
functional::make_expr.
> Seems very familiar now, and I should have been very familiar
> with this idiom before I started this review but nonetheless
> was confused, so it bears explanation.
You were misled by the fact that proto::eval is an instance of
proto::functional::eval. It shouldn't be. Better to make it a free
function like the others. No reason why it shouldn't be find-able with ADL.
> - Having just discovered that, it's confusing to encounter an
> example that shows ``eval`` as a function template.
I'll fix it.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_evaluation/contexts.html
>
> - This nested eval function object doesn't follow the pattern of
> other function objects. Maybe you should tell the reader why
> not.
How does it not? It's a TR1 function object like the others.
> - why do you switch back and forth between using ``::proto_tag``
> and ``tag_of<...>``?
Variety is the spice of life! Seriously, I'll be consistent.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_evaluation/canned_contexts.html
...
> - Under what circumstances would you *not* use callable_context?
> Is there a good reason to tell users about the form that uses
> specializations of nested class templates at all? Maybe it
> should be later or just in the reference?
Sometimes you want the whole expression instead of just the
constituents. That's the general idea, but I can't remember a specific
example offhand.
> - "Using function overloading and metaprogramming tricks,
> ``callable_context<>`` can detect at compile-time whether such a
> function exists or not. If so, that function is called. If not,
> the current expression is passed to the fall-back evaluation
> context to be processed. "
>
> It surprises me that you need metaprogramming tricks for that,
> but perhaps I'm missing something.
If you know a simpler way, I'm all ears.
> - "We've seen the template ``terminal<>`` before, but here we're
> using it without accessing the nested ``::type``" needs to link
> to the earlier reference, because I'm now wondering what the
> nested ::type is used for.
OK. The nested terminal<T>::type is an expression. terminal<T> is a grammar.
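A small sketch of the distinction (assuming the usual Proto headers and
namespace alias):

  #include <boost/mpl/assert.hpp>

  // terminal<int>::type is an expression type; terminal<int> itself
  // (no nested ::type) is the grammar that matches that expression
  BOOST_MPL_ASSERT(( proto::matches< proto::terminal<int>::type
                                   , proto::terminal<int> > ));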
> - The definition of recursive grammars here is just too cool! I
> love it!
Thanks
> Does it have a heavy impact on compile times when
> used with enable_if?
I've worked hard to make matches<> efficient. It's pretty reasonable,
actually.
> - That said, a really good motivating case for using matches<>
> seems to be missing.
I should talk about how it can be used to improve error messages by
validating expressions at API boundaries.
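Something along these lines, where CalculatorGrammar stands in for
whatever grammar the API expects (sketch only):

  template<typename Expr>
  double evaluate( Expr const &expr )
  {
      // one short, targeted error here beats a page of instantiation
      // backtrace from deep inside the evaluation machinery
      BOOST_MPL_ASSERT(( proto::matches< Expr, CalculatorGrammar > ));
      calculator_context ctx( 45, 50 );
      return proto::eval( expr, ctx );
  }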
> - I presume this ``or_<...>`` is different from the MPL one?
Yes. The alternatives are not Boolean metafunctions. They are grammars.
> - These are really vague questions because I sense difficult
> territory but I'd need some help to make them more concrete.
> I'm wondering how these grammars would handle algebraic
> structures like Rings and vector spaces, where the important
> things are a relation between the parts. For example, IIRC
> integers are a ring in two ways: with identity 0 over +, and
> identity 1 over \*.
I'm not seeing the relationship with Proto grammars.
> A vector space consists of a matrix type, a
> vector type, and a scalar type, all of which need to be
> compatible.
Compatible how? Value type? Dimension? You are wondering if a grammar
can be used to discover these incompatibilities? That's an interesting
question ... I don't have an answer for you right now.
> - It might be possible to use function overloading to speed up
> compile-time evaluation of ``matches<>``, if you're not already
> doing things that way. See the techniques used in
> ``mpl::set``.
IIRC, my first approach to matches<> was to use overload sets. It didn't
work out, but I don't remember why. I'll dig into it again when I have
some time.
> - "When given a grammar like this, Proto will deconstruct the
> grammar and the terminal it is being matched against and see if
> it can match all the constituents."
>
> Do you really mean "deconstruct?" It's not obvious to the
> reader what you mean by that part of the sentence. Would it be
> better to just cut that whole sentence?
It uses the MPL lambda technique of using template-template parameters
to rip apart template types. Deconstruct seemed as good a word as any,
but maybe I don't need to talk about it at that level here.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_introspection/if_and_not.html
...
> - is ``is_same`` another proto facility, or is it a boost type trait?
boost::is_same, the type trait.
>
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_introspection/matching_vararg_expressions.html
>
> - "Up to some predefined maximum" -- Isn't this
> ``BOOST_PROTO_MAX_ARITY`` as described repeatedly elsewhere?
>
> - "Degenerate": pertaining to a limiting case of a mathematical
> system that is more symmetrical or simpler in form than the
> general case
> (http://dictionary.reference.com/browse/degenerate). Is that
> what you mean?
Yes. That grammar is the degenerate case because it matches every
expression type.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_introspection/defining_dsel_grammars.html
...
> - I would be *very* interested to see what Proto would look like
> under ConceptGCC -- nobody really seems to understand the
> interaction of DSELs and metaprogramming with concepts,
> AFAICT. Have you thought about it?
I haven't. You're the second person to ask me that. The first was Gary
Powell, and it's a topic that has him deeply concerned. He feels there
may be some inherent incompatibilities between the necessarily(?)
unconstrained system of templates that libraries like Lambda seem to
require and the constrained templates that people are likely to write in
the future. It's an open question. I don't have the concept-foo yet to
answer it myself.
> - You need to do some work to convince me this is even simpler
> than the EBNF form. There's certainly more of it!
Simpler in the sense that you need not encode precedence and
associativity into your grammar, the way you need to with EBNF.
> - Let me guess at the real point behind transformations when you
> have ``eval()``. I think it's that ``eval()`` only lets you
> operate on nodes independently of their siblings and ancestors.
> A transformation lets you gather all the context in the
> expression and use it at one time. Right? If so, say that
> explicitly. If not, please do clarify.
Yes, that's the major reason.
> - "It says to create an object of type ``terminal<long>::type`` and
> initialize it with the result of the ``_arg`` transform. ``_arg`` is a
> transform defined by Proto which essentially calls ``proto::arg()``
> on the current expression."
>
>
> which is a terminal, and in this world, terminals have a
> child, which is known as an "argument." Now what does
> ``proto::arg()`` do? Ah, yes, in this case it extracts the
> value associated with the terminal node. If I keep all those
> translations in mind, it begins to make sense.
You got it.
> - ``when< grammar, transform >`` seems like it would be better
> named ``replace< grammar, transform >`` or something.
Why do you say that?
> - A grammar decorated with transforms is a function object that takes three
> parameters:
>
> * expr -- the Proto expression to transform
> * state -- the initial state of the transformation
> * visitor -- any optional mutable state information
>
> This is really confusing. What's the difference between state
> and visitor? The descriptions make them sound like the same
> thing.
State is an accumulation variable, for use primarily by the fold family
of transforms. In flight, it is the current state of the transformation
so far (e.g., a partially constructed fusion::list). The type of the
state object usually changes during transformation.
Visitor is just a blob of mutable data, whatever you want. The type of
the visitor usually doesn't change during the transformation. None of
proto's built-in transforms touch it in any way -- it is passed through
unchanged.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/example__calculator_arity_transform.html
>
> - "Our job will be to write a transform that calculates the arity
> of any calculator expression."
>
> I think you need to explain why that's a realistic example.
OK. It's actually essential in the Lambda example, where the nullary
lambda<>::operator() needs to state its return type, but it would be a
compile error to try to compute it on a non-nullary lambda expression.
An easier-to-grok motivation would be the ability to catch errors such
as applying too many or too few arguments to a calculator expression.
The number of arguments must match the expression's arity, all of which
is knowable at compile time.
> - I think ::
>
> when< unary_expr< _, CalcArity >, CalcArity(_arg) >
>
> should be spelled ::
>
> when< unary_expr< _, CalcArity >, CalcArity(_arg(_)) >
CalcArity(_arg) and CalcArity(_arg(_)) are synonyms today. Do you feel
that the (_) should be required? (_) is optional because _arg is a
so-called primitive transform, for which expr, state, and visitor can be
implicit. It's not just syntactic sugar -- it's less work for Proto.
> or if you accept my naming scheme,
>
> when< unary_expr< _, CalcArity >, CalcArity(_child(_)) >
Sure, _child works.
> - This seems to imply that if you have two different transforms
> for the same grammar, you end up essentially repeating the
> syntax part... right?
Yes. I've toyed with ways to non-intrusively decorate a grammar with
transforms, but they all end up being syntactically heavier than just
repeating the grammar.
> - I think you're missing a good example of why you'd do two
> different evaluations on the same expr. A classic might be
> regular expression evaluation and symbolic differentiation.
> E.g., ::
>
> x = eval(_1 * _1, normal(7)); // 49
> y = eval(_1 * _1, differentiate(7)); // 14
Symbolic differentiation?
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms.html
...
> - "It can be used without needing to explicitly specify any
> arguments to the transform." Huh? What kind of arguments?
This is why _arg is a synonym for _arg(_). In this case, "_" is an
argument to the transform. Because _arg is "callable", you can leave off
_ and also _state and _visitor. They are implicit.
> - "These are the building blocks from which you can compose
> larger transforms using function types." *Which* are the
> building blocks? "These" needs an antecedent, and in this case
> it's really not clear to me what it should be.
Primitive transforms. Composite transforms are built using function
types from primitive transforms. That's where it starts. If a transform
is DNA, the primitive transforms are the base pairs.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/arg_c_and_friends.html
>
> - "``_arg``, ``_left``, and ``_right``": shouldn't you include
> ``arg_c`` here?
>
> - huh, these things are all in ``boost::proto::transform``.
> Didn't I see them used without qualification earlier?
proto::_left is a typedef for proto::transform::left. I need to be way
more explicit about that.
> Hmmm... ::
>
> transform::right::result<void(Expr, State, Visitor)>::type
>
> I think I'd understand this better as:
>
> boost::result_of<transform::right(Expr,State,Visitor)>::type
>
> Now that I've begun to parse it, I guess I think the whole
> table should be restructured with three columns "Expression",
> "Returns", and "Type" and only three rows that are just the
> runtime expressions. Would something like that work?
Sure, that's probably better.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/if.html
>
> - The example at the bottom of the page is::
>
> struct ByValOrRef
> : when<
> terminal<_>
> , if_<
> mpl::less_equal<
> mpl::sizeof_<_arg>
> , mpl::size_t<4>
> >()
> , _make_terminal(_arg)
> , _make_terminal(_ref(_arg))
> >
> >
> {};
>
> 1. you should probably be checking has_trivial_destructor
> before you decide to store by value. A clone_ptr might
> easily be 4 bytes long.
IMO, that would complicate the example without illustrating anything
additional about proto::if_.
> 2. Can we replace ``_make_terminal(_arg)`` above with
> ``_expr``? If not, why not?
No, _expr is analogous to lambda::_1 ... whatever the first argument is,
return that. (_state is analogous to lambda::_2 and _visitor is
analogous to lambda::_3 -- they return the state and visitor
parameters). So _expr(_arg) would be the same as _arg. It wouldn't
create a new expression node, which is what _make_terminal(_arg) does.
_make_terminal is a typedef for functional::make_expr<tag::terminal>. I
don't think I say that anywhere, though. :-/
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/and_or_not.html
...
> - Does ``and_< T0,T1, ... Tn >`` *really* only apply ``T``\ *n*?
Yes. It can't apply all the transforms. The result of applying the first
transform might have a type that the second transform can't make sense
of. Transforms don't chain that way.
The default transform for and_ is admittedly a bit arbitrary and not
terribly useful, but you can always override it with proto::when<>.
> - wow, you totally lost me on this page. I can't understand why
> the stated behaviors of these transforms make sense
> (e.g. correspond to their names), and I can't understand why
> the usage of ``and_`` in ``UnwrapReference`` is an example of a
> transform and not a grammar element. The outer ``or_`` is a
> grammar element, right? When ``or_`` is used as a grammar
> element, aren't its arguments considered grammar elements also?
and_, or_ and not_ are both grammar elements and transforms. Every
grammar element has a default transform. The default transform of or_
should make sense ... apply the transform associated with whichever
subgrammar matched. and_ we've just discussed. not_ just returns the
expression unmodified because there's nothing else it can do. If an
expression matches not_<G>, all we know is that it doesn't match G. And
if it doesn't match G, we can't apply G's transform, because G won't
know what to do with it.
Then why bother with default transforms at all, you ask? Consider:
struct Promote
: or_<
when<terminal<int>, terminal<long>::type(_arg)>
, terminal<_>
, nary_expr<_, vararg<Promote> >
>
{};
This promotes all int terminals to long. I only had to explicitly
specify one transform: the one I care about. For the others, I take the
default which does the right thing. All grammar elements have default
transforms to support this kind of use case.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/call.html
...
> - This business of handling transforms specially when they can
> accept 3 arguments is hard to understand. Aside from the fact
> that special cases tend to make things harder to use, it's not
> clear what it's really doing or why. I guess you're saying
> that state and visitor are implicitly passed through when they
> can be?
Yes, and the expression, too.
> I can understand why you'd want something like that,
> but let's look at this more closely:
>
> "For callable transforms that take 0, 1, or 2 arguments,
> special handling is done to see if the transform actually
> expects 3 arguments..."
>
> Do you really mean, "for callable transforms that are *passed*
> 0, 1, or 2 arguments...?" Or maybe it's something more
> complicated, like "for callable transforms that *are written as
> though* they take 0, 1, or 2 arguments...?"
Yes, the latter. The following transforms are synonyms:
_arg
_arg()
_arg(_)
_arg(_,_state)
_arg(_,_state,_visitor)
That is true not just for _arg but for any primitive transform. And for
completeness (or just to make it more confusing?) you can use _expr
instead of _ and it means the same thing here.
As I say above, using _arg instead of _arg(_,_state,_visitor) is more
than just sugar. It's less work for Proto.
> - Again the use of ``proto::callable`` without a prior
> explanation... oh! there it is, finally, in the footnote of the
> example! If you check for "callable with 3 args" using
> metaprogramming tricks anyway, why not do the same for
> "callable" in general?
Not sure I understand the questions. Getting a little bleary-eyed myself.
> - Aside: this library is making me think it's time for the 2nd
> edition of C++TMP ;-). There's a lot of new territory to cover.
Ha!
> - So let me see if I got this right. The naming convention is:
>
> ``foobar_tag``
> a tag type used in grammars... hmm, really a "node
> type identifier?"
>
> ``foobar_``
> The Proto expression object that builds a node of the above
> type
>
> ``foobar``
> a non-lazy function object that performs the foobar
> action
>
> ``FooBar``
> a corresponding grammar element and/or tree transform
>
> Whether I got it right or not, it would be helpful to see this
> spelled out somewhere, much earlier.
I wasn't trying to establish any kind of naming convention in the
make_pair example. None of this code is part of proto. It's just an example.
> - Hmm, "function" looks like the wrong name for an operator,
> because it doesn't describe an operation. "Call" would be more
> to-the-point, but that one's already taken. Worth a little
> thought.
Someone else said something similar, I forget who. The suggestion was
"apply". I'm not opposed. Or the "call" transform can be renamed
"apply", and "function" can be "call". I don't care.
> - Translating the example into terms I understand::
>
> make_pair(_arg(_arg1), _arg(_arg2))
>
> becomes ::
>
> make_pair(_value(_left(_)), _value(_right(_)))
>
> which looks a bit better to me.
typedef _arg _value;
You're good to go! :-)
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/make.html
...
> - In fact, this section of the user guide is getting very
> "reference-manual-ish." Maybe I am expecting too much, but I'd
> like to see ``make<>`` and ``call<>`` treated together. And
> I'm not sure we need to see a synopsis for each of these
> transforms, since they all follow the same pattern.
OK, others have noted that the entire Expression Transformation section
needs a complete rewrite to be more approachable. I'll apply some elbow
grease.
> - I'm a little lost as to what you're trying to tell me with this
> pseudocode. OK, going back to the beginning: you're
> telling me how to figure out the result type of applying a
> transformation of the form
>
> class-template<Args0...> ( Args1... )
>
> and it says that for each transform x in Args0, we replace x
> with the result of applying x to the expression being
> transformed. This is just what we do for Args1, but the types
> in Args1 don't affect the final type of the outer transform,
> whereas those in Args0 do. Did I get that right?
Yes, you got it.
> So why not just say something like that?
Sure.
> I'm a bit confused about the purpose of the MPL-lambda-ish
> "check to see if there's a nested ``::type`` here" step. Could
> you explain why you're doing that?
It's needed here for the same reason that it is needed in MPL lambdas.
Consider a transform such as:
fusion::single_view< add_const<_> >( _ )
Do you see now? Gotta look for the nested ::type in add_const *after*
the placeholder (er, nested transform) has been evaluated.
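The analogous MPL case, for comparison (this compiles as-is):

  #include <boost/mpl/apply.hpp>
  #include <boost/mpl/placeholders.hpp>
  #include <boost/type_traits/add_const.hpp>
  #include <boost/type_traits/is_same.hpp>
  #include <boost/static_assert.hpp>

  // The placeholder is substituted first, giving add_const<int>, and only
  // then is the nested ::type looked up, yielding int const.
  typedef boost::mpl::apply<
      boost::add_const< boost::mpl::_1 >, int
  >::type result;
  BOOST_STATIC_ASSERT(( boost::is_same< result, int const >::value ));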
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/bind.html
>
> I notice you're using ``void`` here:
>
> .. parsed-literal::
>
> make<Object>::result<\ **void**\ (Expr, State, Visitor)>::type
>
> It's my understanding of the ``result_of`` protocol that you
> can't just leave out the function object type, because the
> result of the function might depend on whether the function
> object itself is const, non-const, an lvalue, or an rvalue.
> Not sure what to do here; you don't want to spell the whole
> thing out, clearly.
Right. It's not just an issue in the documentation ... I use void in the
code, too.
> Maybe all you need to do to make this okay is provide a blanket
> statement that all Proto function objects ignore these details
> of the function object type used with ``result_of``.
OK. I have a rationale section about the liberties I take with the
ResultOf protocol. I'll discuss the void thing there too, and add a link
here.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/when.html
...
> - Why the assumption of callability in this one place?
Compile-time performance.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/canned_transforms/pass_through.html
>
> The example doesn't even name pass_through directly. Can we do
> better?
I've never used pass_through explicitly, but I use it all the time
implicitly, as in this example.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_transformation/is_callable.html
...
> - You may not want to pay this price, but can you
> can disassemble the template specialization and see if any of
> the arguments are transforms, and if not, assume it's callable?
> That would at least handle the ``times2<int>`` case.
See http://lists.boost.org/Archives/boost/2008/03/134450.php for why
this doesn't work.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_extension/inhibiting_overloads.html
...
> - It's not clear to me why you need all this fancy footwork to
> define a special meaning for ``operator[]``. Isn't that what
> contexts are for?
So for:
( v2 + v3 )[ 2 ];
... you're saying to let proto build the expression tree representing
the array access, and evaluate it lazily with a context. Seems reasonable.
> - Could you explain why this::
>
> typedef typename proto::terminal< std::vector<T> >::type expr_type;
>
> lazy_vector( std::size_t size = 0, T const & value = T() )
> : lazy_vector_expr<expr_type>( expr_type::make( std::vector<T>( size, value ) ) )
> {}
>
> couldn't be written as::
>
> lazy_vector( std::size_t size = 0, T const & value = T() )
> : lazy_vector_expr<expr_type>( std::vector<T>( size, value ) )
> {}
Because lazy_vector_expr<expr_type>'s constructor is expecting an
expr_type object, not a std::vector<> object. There is no conversion
between the two. In other words, a T is not implicitly convertible to a
terminal<T>::type. If terminal<T>::type had a converting constructor, it
couldn't be statically initialized.
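To make that concrete (sketch):

  // terminal<T>::type is an aggregate: initialize it with braces or via
  // its static make() function; there is no implicit conversion from T.
  typedef proto::terminal< std::vector<int> >::type vec_expr;
  vec_expr e1 = { std::vector<int>( 3, 0 ) };                // aggregate init
  vec_expr e2 = vec_expr::make( std::vector<int>( 3, 0 ) );  // equivalent
  // vec_expr e3 = std::vector<int>( 3, 0 );                 // error: no conversion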
> - But I have to say, that use of the grammar to restrict the
> allowed operators is *way cool*. I just think it should have
> been shown *way earlier* ;-).
I can't show *all* the good bits first. Then there'd be no reason to
read further! :-)
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_extension/expression_generators.html
>
> - "After Proto has calculated a new expression type, it checks
> the domains of the children expressions. They must match."
>
> Does this impair DSEL interoperability?
It does. Thanks for bringing that issue up; I had forgotten about it.
It's an open design question. What is the domain of an expression such
as (A + B) where A and B are in different domains? There needs to be an
arbitration mechanism, but Proto doesn't have one yet. I don't know what
that would look like.
> * http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/examples/rgb.html
>
> - This does not appear to follow your naming conventions.
> - It would be really nice to see some comparative analysis of
> this versus the PETE version. Likewise for the Vec3 example.
I might be able to link to the PETE version, but if I distribute it with
Proto, there's the license issue. And I really don't want to have to
explain PETE in Proto's docs.
--
Eric Niebler
Boost Consulting
www.boost-consulting.com