Define "Domain Specific Embedded Language," or offer a reference to a definition.
Suggest "protofied" rather than "proto-ified"
I'm not sure that
proto::terminal< std::ostream & >::type cout_ = { std::cout };
is guaranteed to have the nice initialization properties you aim for. If cout_ contains a reference, it isn't a POD, and therefore is not obliged to be statically initialized.
The docs don't tell me which headers to include; at least, not the intro docs.
consider whether to recommend the use of STLFilt
Try to resist "here's the complicated way, now let me show you how to do it the simple way."
The intro could give some concrete usage examples of DSLs you could build with proto, for the benefit of those who "don't get it." A lambda and an xpressive example might do the trick.
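Even something as small as this would orient the reader. I'm borrowing Boost.Lambda's syntax here just to show the kind of DSL I mean (an xpressive-style regex such as +_w >> '@' >> +_w would be the other obvious candidate); this is my own illustration, not something from the Proto docs:

#include <algorithm>
#include <iostream>
#include <vector>
#include <boost/lambda/lambda.hpp>

int main()
{
    using boost::lambda::_1;
    std::vector<int> v(3, 7);

    // "std::cout << _1 << '\n'" is not evaluated here; it builds a small
    // expression object that is invoked once per element -- exactly the
    // sort of thing one builds with Proto.
    std::for_each( v.begin(), v.end(), std::cout << _1 << '\n' );
}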
The docs show the type
expr< tag::terminal, args0< placeholder1 >, 0 >
and say that
"The second template parameter is a list of children types. Terminals will always have only one type in the type list."
That page also goes on at length about static initialization but doesn't really explain why it's important. Imagine the reader doesn't know the difference between static and dynamic initialization.
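A two-line illustration would probably do it; something like this (my example, nothing to do with Proto):

int f() { return 42; }  // imagine this is defined in another translation unit

int a = 10;    // static initialization: the value is in place before any
               // code runs, so 'a' is safe to use from the constructor of
               // any other global object.

int b = f();   // dynamic initialization: runs at startup, in an unspecified
               // order relative to dynamic initializers in other translation
               // units -- the classic initialization-order problem.

int main() {}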
"Once we have some Proto terminals, expressions involving those terminals build expression trees for us, as if by magic. It's not magic; Proto defines overloads for each of C++'s overloadable operators to make it happen"
I usually don't go for a sentence started with a conjunction, but in this case, the 2nd sentence should begin with "but."
Footnote [2] actually leaves the exceptions unexplained. Finding the right language is hard, but "that's just how C++ works... them's the breaks" is pretty opaque.
What happens if your type has a generalized operator?
namespace fu
{
    struct zero {};

#if 1
    template <class T> T operator+(T x, zero) { return x; }
#else
    double operator+(double x, zero) { return x; }
#endif
}

int main()
{
    // Define a calculator context, where _1 is 45 and _2 is 50
    calculator_context ctx( 45, 50 );

    // Create an arithmetic expression and immediately evaluate it
    double d = proto::eval( (_2 - _1) / _2 * 100 + fu::zero(), ctx );

    // This prints "10"
    std::cout << d << std::endl;
}
Answer: a nasty error message (at least on g++). Anything we can do to improve it (just wondering)?
is there a reason we need ref_ as opposed to using true references? (just curious, but the docs don't answer this question).
Can't quite parse the description of BOOST_PROTO_DEFINE_VARARG_FUNCTION_TEMPLATE at the top. Please clarify.
You're using "..." notation in the make_expr synopsis. Is that intended to be C++0x, or...? You could either do it with subscripts in a traditional way:
A0, A1, ... An
or spell out how to interpret your notation in the text. A reference to a C++0x paper would be enough if that's what you're trying to do.
At some point before now, a short section to describe the idiom of naming a metafunction in result_of:: the same as its corresponding function object in functional:: would be good. Probably a sidebar the very first time you introduce result_of would be best.
It would also be good to note that the nested result template satisfies the protocol of boost/tr1::result_of<...>. An in-code comment might be enough.
The use of implementation-defined in this example is not correct, if you take the C++ standardese meaning. That would mean that the library specification doesn't tell you what type it is, but any specific implementation of the library is required to tell the user in documentation what type it is. I'd write unspecified and add commentary describing the actual requirements on said type.
The DomainOrArg argument to result_of::make_expr is confusing. I don't see a corresponding argument to the function object. I might not have been confused by this except that you seem to use that argument in the very next example.
I don't know how well or badly it would work with the rest of the library, but I'm thinkin' in cases like this one it might be possible to save the user some redundant typing:
// One terminal held by reference:
int i = 0;

typedef proto::result_of::make_expr<
    MyTag
  , int & // <-- Note reference here
  , char
>::type expr_type;

expr_type expr = proto::make_expr<MyTag>(boost::ref(i), 'a');
I'm thinking the result of proto::make_expr<...>(...) could hold everything by reference, but the type of proto::result_of::make_expr< ... > would hold things by the specified type. Thus you'd end up being able to drop the use of boost::ref() above. If you don't like the lack of correspondence between the two make_exprs, naturally you could call one of them something else.
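Concretely, under that proposal the example above would shrink to something like this (hypothetical, not how the library behaves today):

// One terminal held by reference, but without boost::ref() at the call site:
int i = 0;

typedef proto::result_of::make_expr<
    MyTag
  , int & // the reference still appears here, in the type computation
  , char
>::type expr_type;

expr_type expr = proto::make_expr<MyTag>(i, 'a'); // would hold i by reference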
testing out some of the examples on this page, I notice that you explicitly specify namespaces in some places but not in others (e.g. default_domain is unqualified), so they don't compile without some "using." Can you use the automated example testing that Joel developed?
Ooh, I really hate posit. It's a word that has an English meaning, yet you're using it as some kind of abbreviation, which sounds like a positional iterator and sent me scurrying back to the table on the previous page to see what it meant. How 'bout unary_plus?
Furthermore, I think you should use unary negate here, just to keep things out of the realm of "why on earth would I care about doing that?"
Is all this time spent on make_expr really appropriate at this early stage of the tutorial? Seems to me we ought to be able to do a lot of more sophisticated things with the library before we start into the nitty-gritty of building expressions explicitly (i.e. framework details). No?
it sez:
The application of unary operator+ on the last line is equivalent to the by-ref invocation of make_expr() because Proto's operator overloads always build trees by holding nodes by reference.
s/nodes/nonterminal nodes/ ?
it sez:
If you specify a domain when invoking make_expr(), then make_expr() will use that domain's generator...
"generator?" This is the first I've heard of it. Can't you start with something like "each domain has a generator that...?" The use of proto::domain and proto::generator are kinda out-of-nowhere here, too.
How do I compile this example? At namespace scope it seems to choke g++ with:
/tmp/tst.cpp:74: error: variable ‘expr_type expr’ has initializer but incomplete type
I think that's because MyExpr is incomplete. Adding an empty body doesn't help, though.
Building Expression Trees With unpack_expr(): Frankly, and I think this goes for make_expr too, the synopsis at this point presents a lot of information we don't care about. For example, the whole nested result<...> template business makes no obvious difference in these examples.
The unpack_expr example would compile if you had a declaration for Tag, so I suggest using MyTag there and trusting people to mentally substitute the previous declaration into the example.
it sez:
As with make_expr(), unpack_expr() has a corresponding metafunction in the proto::result_of namespace for calculating its return type, as well as a callable function object form in the proto::functional namespace.
Well, now I'm getting the impression that there are three make_exprs: one a function template, one a function object, and one a metafunction. Is that right? If so, you really gotta describe this idiom on its own, earlier.
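For instance, a short side-by-side like this would have cleared it up for me (my own sketch, assuming the catch-all proto.hpp header and a by-value terminal example, so the details may differ from the real docs):

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

struct MyTag {};

int main()
{
    // 1. The metafunction: computes the expression type at compile time.
    typedef proto::result_of::make_expr<MyTag, int, char>::type expr_type;

    // 2. The function template: the convenient way to build one at runtime.
    expr_type e1 = proto::make_expr<MyTag>(1, 'a');

    // 3. The function object: the same operation, packaged for higher-order
    //    use (passing to algorithms, transforms, etc.).
    proto::functional::make_expr<MyTag> make_it;
    expr_type e2 = make_it(1, 'a');

    (void)e1; (void)e2;
}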
You already told us that expression nodes are fusion sequences in http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/expression_construction/left_right_arg.html
The first example in Generating Custom Expression Factory Functions is confusing on a couple of levels:
You mention proto::as_expr by way of explanation here, but I don't think you've covered it yet!
This is the first instance I've seen of actually constructing a functional::make_expr<...> object, and it is a bit surprising having only seen proto::make_expr used up 'till now.
it sez:
Such named "operators" are very important for domain-specific embedded languages.
but it doesn't explain why.
it sez:
imagine if you have a custom tag type foo_tag<> that is a template
which is kinda nonsensical. A type is not a template.
it sez:
"You would like to define a foo() factory function that itself was a template"
Wow, I'm totally lost on the motivation here. Why would I like that? Why would I define such a tag in the first place?
s/parammeter/parameter/
How does the 2nd half of this sentence follow from the first?
In the above, TheFunctionToCall might be an ordinary function object, so let's define a construct_<> function object that constructs an object.
The relationship is lost on me.
Is this the first time you're showing us how to build a simple lazy function? That should come much, much earlier.
If this is truly a lazy function, why can't I evaluate it like this?
S s = construct_S(1,'a')();
your definition of construct() could use some guideposts. Let the reader know that the result type computation corresponds exactly to the make_expr invocation in the body. You should also note that this handles exactly 2 arguments so that you can motivate the upcoming macro better.
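To make the request concrete, here's roughly the shape I have in mind, with the guideposts written in as comments (my reconstruction, so the details may not match the docs exactly):

#include <boost/proto/proto.hpp>
#include <boost/ref.hpp>
namespace proto = boost::proto;

// The ordinary (non-lazy) function object that actually constructs a T:
template<typename T>
struct construct_
{
    typedef T result_type;

    template<typename A0, typename A1>
    T operator()(A0 const &a0, A1 const &a1) const
    {
        return T(a0, a1);
    }
};

// The lazy factory. Guidepost: the result_of::make_expr<> type computation
// below corresponds line-for-line to the make_expr() call in the body.
template<typename T, typename A0, typename A1>
typename proto::result_of::make_expr<
    proto::tag::function
  , construct_<T>
  , A0 const &
  , A1 const &
>::type const
construct(A0 const &a0, A1 const &a1)
{
    return proto::make_expr<proto::tag::function>(
        construct_<T>()
      , boost::ref(a0)
      , boost::ref(a1)
    );
}
// Guidepost: this overload handles exactly two arguments; every other arity
// needs another overload, which is what motivates the upcoming macro.

struct S
{
    S(int, char) {}
};

int main()
{
    proto::default_context ctx;

    // construct<S>(1, 'a') is lazy: it only builds an expression tree.
    // Evaluating that tree against a context is what finally makes an S.
    S s = proto::eval( construct<S>(1, 'a'), ctx );
    (void)s;
}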
"boiler plate" is actually one word
it sez:
What is new in this case is the fourth macro argument, which specifies that there is an implicit first argument to construct() of type construct_<X>, where X is a template parameter of the function
Long section! Can we break it down into multiple HTML pages?
literal<...> is used without any prior introduction (I think).
"Using function overloading and metaprogramming tricks, callable_context<> can detect at compile-time whether such a function exists or not. If so, that function is called. If not, the current expression is passed to the fall-back evaluation context to be processed. "
It surprises me that you need metaprogramming tricks for that, but perhaps I'm missing something.
Somewhere earlier you should have said that a context will always have access (usually through its members) to the data "against which" an expression is evaluated, and arguments to a context constructor are typically a way of supplying that data.
"We've seen the template terminal<> before, but here we're using it without accessing the nested ::type" needs to link to the earlier reference, because I'm now wondering what the nested ::type is used for.
The definition of recursive grammars here is just too cool! I love it! Does it have a heavy impact on compile times when used with enable_if?
That said, a really good motivating case for using matches<> seems to be missing.
I presume this or_<...> is different from the MPL one?
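For the record, here's the sort of thing I have in mind for a motivating recursive-grammar/matches<> example; my own toy code, assuming the catch-all header, and note that or_ here is proto::or_ (alternation of grammars), not mpl::or_:

#include <boost/proto/proto.hpp>
#include <boost/mpl/assert.hpp>
namespace proto = boost::proto;

// A toy recursive grammar: int terminals combined with + and *.
struct Calc
  : proto::or_<
        proto::terminal< int >
      , proto::plus< Calc, Calc >
      , proto::multiplies< Calc, Calc >
    >
{};

int main()
{
    // matches<> is the compile-time predicate that checks an expression
    // type against the grammar:
    BOOST_MPL_ASSERT(( proto::matches< proto::terminal<int>::type, Calc > ));
}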
"This may look a little odd at first. We seem to be defining the Input and Output types in terms of themselves."
If you're going to say that here, you had better say it earlier when you introduce CRTP with callable_context<>.
These are really vague questions because I sense difficult territory but I'd need some help to make them more concrete. I'm wondering how these grammars would handle algebraic structures like rings and vector spaces, where the important thing is the relation between the parts. For example, IIRC the integers form a ring with two identities: 0 for + and 1 for *. A vector space consists of a matrix type, a vector type, and a scalar type, all of which need to be compatible. Have you thought about these issues much?
It might be possible to use function overloading to speed up compile-time evaluation of matches<>, if you're not already doing things that way. See the techniques used in mpl::set.
s/simiple/simple/
"When given a grammar like this, Proto will deconstruct the grammar and the terminal it is being matched against and see if it can match all the constituents."
Do you really mean "deconstruct?" It's not obvious to the reader what you mean by that part of the sentence. Would it be better to just cut that whole sentence?
s/automata/automaton/
Let me guess at the real point behind transformations when you have eval(). I think it's that eval() only lets you operate on nodes independently of their siblings and ancestors. A transformation lets you gather all the context in the expression and use it at one time. Right? If so, say that explicitly. If not, please do clarify.
"The transform above might look a little strange at first. It appears to be constructing a temporary object in place. In fact, it is a function type."
Instead of "the transform above," say "terminal<long>::type(_arg)." Also, what's really odd about it is that it appears to be a non-constant expression in a place where only compile-time constants and types can appear.
"It says to create an object of type terminal<long>::type and initialize it with the result of the _arg transform. _arg is a transform defined by Proto which essentially calls proto::arg() on the current expression."
…which is a terminal, and in this world, terminals have a child, which is known as an "argument." Now what does proto::arg() do? Ah, yes, in this case it extracts the value associated with the terminal node. If I keep all those translations in mind, it begins to make sense.
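A two-liner in the docs would nail that down; e.g. (my example, using the proto::arg() spelling from the version under review; newer spellings may differ):

#include <boost/proto/proto.hpp>
namespace proto = boost::proto;

int main()
{
    // A terminal's one "argument" is the value it stores:
    proto::terminal< int >::type i = {42};
    int v = proto::arg(i);   // extracts the stored 42
    (void)v;
}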
when< grammar, transform > seems like it would be better named replace< grammar, transform > or something.
A grammar decorated with transforms is a function object that takes three parameters:
This is really confusing. What's the difference between state and visitor? The descriptions make them sound like the same thing.
// ... and apply Grammar's transform:
result_type result = Grammar()(expr, state, visitor);
Might be clearer as:
// ... and apply Grammar's transform:
Grammar g;
result_type result = g(expr, state, visitor);
"Our job will be to write a transform that calculates the arity of any calculator expression."
I think you need to explain why that's a realistic example.
I think
when< unary_expr< _, CalcArity >, CalcArity(_arg) >
should be spelled
when< unary_expr< _, CalcArity >, CalcArity(_arg(_)) >
or if you accept my naming scheme,
when< unary_expr< _, CalcArity >, CalcArity(_child(_)) >
This seems to imply that if you have two different transforms for the same grammar, you end up essentially repeating the syntax part... right?
I think you're missing a good example of why you'd do two different evaluations on the same expr. A classic might be ordinary numeric evaluation versus symbolic differentiation of the same expression. E.g.,
x = eval(_1 * _1, normal(7));        // 49
y = eval(_1 * _1, differentiate(7)); // 14
"...resulting in mpl::max< mpl::int_<X>, mpl::int_<Y> >." Whoa, where did X and Y come from?! I guess on reflection I'd have been less surprised if you were using the <replaceable> tag.
lit(...) is used without prior introduction I think.
"_arg, _left, and _right": shouldn't you include arg_c here?
huh, these things are all in boost::proto::transform. Didn't I see them used without qualification earlier?
The table is a little wide; I suggest inserting some line breaks in the first column.
The table is missing a legend that says things like "expr is an expression node of type Expr." I suggest always using names that are not homonyms, so people can discuss the table in English without ambiguity, e.g. "x is an expression node of type Expr." The same goes for most, if not all, other tables in the document.
I think I need a "notes" column for this table; I'm getting lost in here. What is it trying to tell me?
Hmmm...
transform::right::result<void(Expr, State, Visitor)>::type
I think I'd understand this better as:
boost::result_of<transform::right(Expr,State,Visitor)>::type
Now that I've begun to parse it, I guess I think the whole table should be restructured with three columns "Expression", "Returns", and "Type" and only three rows that are just the runtime expressions. Would something like that work?
I think
// Matches an integer terminal and extracts the int.
struct Int : when< terminal<int>, _arg > {};
should be spelled
struct Int : when< terminal<int>, _value(_) > {};
I'm really confused in here. I see expr, expr_, and _expr on this page. Is that really intentional?
"compile-time Boolean. If it is true..."
would it be more accurate to say, "MPL integral constant expression. If nonzero...?"
"If it is true, then the first transform is applied. The second is applied otherwise."
s/second/third/, s/first/second/
The lower left corner of the table appears to have a larger font size than the rest.
The example at the bottom of the page is:
struct ByValOrRef
  : when<
        terminal<_>
      , if_<
            mpl::less_equal<
                mpl::sizeof_<_arg>
              , mpl::size_t<4>
            >()
          , _make_terminal(_arg)
          , _make_terminal(_ref(_arg))
        >
    >
{};
What is a "callable transform?" Has that term been defined? Remember that it's a brain-stretching exercise to even enter this territory. I think you really need to take the reader by the hand and lead him through everything step-by-step.
Fix the "TODO LINK" on this page.
I'll have to study this page harder. I'm getting bleary-eyed here.
OK, back at it the next morning: "When you use a callable transform as in when< posit<_>, Callable(_arg) >...." Here it looks like you could get some mileage out of <replaceable> again. I guess Callable is not necessarily supposed to be a library component, but any callable transform.
Ah, the first mention of Boost.ResultOf here! The use of that protocol is so central to this library that I think you should have mentioned it earlier.
s/meta-programming/metaprogramming/
This business of handling transforms specially when they can accept 3 arguments is hard to understand. Aside from the fact that special cases tend to make things harder to use, it's not clear what it's really doing or why. I guess you're saying that state and visitor are implicitly passed through when they can be? I can understand why you'd want something like that, but let's look at this more closely:
"For callable transforms that take 0, 1, or 2 arguments, special handling is done to see if the transform actually expects 3 arguments..."
Do you really mean, "for callable transforms that are passed 0, 1, or 2 arguments...?" Or maybe it's something more complicated, like "for callable transforms that are written as though they take 0, 1, or 2 arguments...?"
So you've shown us an example of what it looks like with one argument. What is the behavior with 0 or 2 arguments? I can think of several possibilities, but rather than list them I think I'll let you tell me.
Again the use of proto::callable without a prior explanation... oh! there it is, finally, in the footnote of the example! If you check for "callable with 3 args" using metaprogramming tricks anyway, why not do the same for "callable" in general?
Aside: this library is making me think it's time for the 2nd edition of C++TMP ;-). There's a lot of new territory to cover.
So let me see if I got this right. The naming convention is:
a tag type used in grammars... hmm, really a "node type identifier?"
The Proto expression object that builds a node of the above type
a non-lazy function object that performs the foobar action
a corresponding grammar element and/or tree transform
Whether I got it right or not, it would be helpful to see this spelled out somewhere, much earlier.
Hmm, "function" looks like the wrong name for an operator, because it doesn't describe an operation. "Call" would be more to-the-point, but that one's already taken. Worth a little thought.
Translating the example into terms I understand:
make_pair(_arg(_arg1), _arg(_arg2))
becomes
make_pair(_value(_left(_)), _value(_right(_)))
which looks a bit better to me.
So make<> is what gets used under-the-covers in lieu of call<>, when the return type of a transform spelled as a function type is not callable? I think you should say something to indicate that make<> and call<> are symmetric.
In fact, this section of the user guide is getting very "reference-manual-ish." Maybe I am expecting too much, but I'd like to see make<> and call<> treated together. And I'm not sure we need to see a synopsis for each of these transforms, since they all follow the same pattern.
"The make<> transform checks to see if the resulting object type is a template."
What "resulting object type?" Resulting from what?
Hum, I'm getting confused about naming conventions again. Here you have make and make_, but AFAICT, they don't seem to fit the pattern I outlined above.
"The make<> transform checks to see if the resulting object type is a template"
Types are never templates. I'd say "a class template specialization" or "an instance of a class template."
"the result type is make_<Object<X0,X1,...>, Expr, State, Visitor>::type which..."
Insert a comma before "which."
"...which evaluates this procedure recursively"
Which procedure? Oh, this bullet list.
I'm a little lost as to what you're trying to tell me with this pseudocode. OK, going back to the beginning: you're telling me how to figure out the result type of applying a transformation of the form
class-template<Args0...> ( Args1... )
and it says that for each transform x in Args0, we replace x with the result of applying x to the expression being transformed. This is just what we do for Args1, but the types in Args1 don't affect the final type of the outer transform, whereas those in Args0 do. Did I get that right?
So why not just say something like that? It looks to me like this procedure is way too formal for a user guide, but not quite rigorous enough for the reference. My suggestion is to tighten it up, removing commentary and parenthetical notes, and put it in the reference, replacing it with an English description here.
I'm a bit confused about the purpose of the MPL-lambda-ish "check to see if there's a nested ::type here" step. Could you explain why you're doing that?
The aggregate initialization stuff is cute :)
I notice you're using void here:
make<Object>::result<void(Expr, State, Visitor)>::type

It's my understanding of the result_of protocol that you can't just leave out the function object type, because the result of the function might depend on whether the function object itself is const, non-const, an lvalue, or an rvalue. Not sure what to do here; you don't want to spell the whole thing out, clearly.
Maybe all you need to do to make this okay is provide a blanket statement that all Proto function objects ignore these details of the function object type used with result_of.
The example doesn't even name pass_through directly. Can we do better?
I'd really like to see proto::arg_c at least declared in this example, since it is used.
I was wondering when we'd get to this capability! It's so fundamental that I think you should introduce it much earlier in the guide.
"A wrapper type like calculator<> that inherits from extends<>" is ambiguous (is it the calculator-ness that's important?). I suggest,
A wrapper type derived from extends< X, … > behaves just like X, with any additional…
I'm finding the lazy_subscript_context comments confusing:
// Here is an evaluation context that indexes into an algebraic
// expression, and combines the result.
Combines the result with what? Also, I think you should drop the comma.
// Use default_eval for all the operations ...
...
// ... except for terminals, which we index with our subscript
all the operations except for terminals? IIUC, terminals are never operations.
is this the first mention of default_eval? I think it needs to be explained.
It's not clear to me why you need all this fancy footwork to define a special meaning for operator[]. Isn't that what contexts are for?
Could you explain why this:
typedef typename proto::terminal< std::vector<T> >::type expr_type;

lazy_vector( std::size_t size = 0, T const & value = T() )
  : lazy_vector_expr<expr_type>( expr_type::make( std::vector<T>( size, value ) ) )
{}
couldn't be written as:
lazy_vector( std::size_t size = 0, T const & value = T() )
  : lazy_vector_expr<expr_type>( std::vector<T>( size, value ) )
{}
? I'm obviously missing some important concepts here, and I think this example needs to come with a lot more hand-holding.
But I have to say, that use of the grammar to restrict the allowed operators is way cool. I just think it should have been shown way earlier ;-).
"After Proto has calculated a new expression type, it checks the domains of the children expressions. They must match."
Does this impair DSEL interoperability?
http://boost-sandbox.sourceforge.net/libs/proto/doc/html/boost_proto/users_guide/examples/rgb.html