From: Markus Werle (numerical.simulation_at_[hidden])
Date: 2008-03-17 17:16:13
Eric Niebler wrote:
> Markus Werle wrote:
>> Eric Niebler wrote:
>>> proto::X is a function,
>>> proto::result_of::X is a metafunction that computes the result type of
>>> proto::X, proto::functional::X is the function object equivalent of X,
>>> and proto::transform::X is the primitive transform equivalent of X.
>> I guess that is why I get lost all the time: I read the code in the docs
>> and cannot guess from the name whether it is a typename, a function or
>> something else, and which namespace it belongs to.
> The docs should use qualification where it could be unclear what a name
> refers to. I can make a pass through the docs to double-check, but if
> you can point to the places where a lack of qualification led you into
> trouble, I'll fix them.
I am not yet sure why I do not understand transforms.
Please give me some more time on this.
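For the record, let me restate the convention as I now read it, using
left as the example. This is only a sketch; I hope I have the header
path and exact spellings right, corrections welcome:

    #include <boost/xpressive/proto/proto.hpp>
    namespace proto = boost::proto;

    template<typename Expr>
    void demo(Expr const &e)
    {
        // proto::left: the function -- returns the left child of e
        proto::left(e);

        // proto::result_of::left: the metafunction -- computes the
        // type that proto::left(e) returns
        typedef typename proto::result_of::left<Expr>::type left_type;

        // proto::functional::left: the function object equivalent
        proto::functional::left()(e);

        // proto::transform::left: the primitive transform equivalent,
        // for use inside grammars and transforms
        typedef proto::transform::left left_transform;
    }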
>> Just a daydream: What happens if it is this way:
>> proto::X is a function, proto::result_of::X_r is a metafunction that
>> computes the result type of proto::X, proto::functional::X_f is the
>> function object equivalent of X, and proto::transform::X_t is the
>> primitive transform equivalent of X.
>> Do we lose anything? Do we run into trouble?
> Well, it would be inconsistent with established practice within Boost.
> Fusion, for one, uses a similar naming scheme.
> Spirit-1 used a naming scheme like the one you suggest (trailing "_p"
> means parser, trailing "_a" means action, etc.). Spirit-2 is dropping
> this in favor of namespaces, like Fusion and Proto. I believe that was
> in response to feedback from Dave Abrahams. Perhaps Dave or Joel could
> chime in on this issue.
I think this is a very important argument. So this is probably
really only a documentation issue, which brings me back to
something I stated some time ago: knowing why something in Boost
is done the way it is done is worth even more than the mere
existence of the library. A document "The Design and Evolution of
boost::XXX" would boost C++ development even more than using the library
itself. This is why the Boost rules say the documentation must contain a
design rationale (wink, wink).
Summary: you keep your names and I will get used to them.
Put this part of the library into the "accepted" state, unless someone
finds a way to choke the whole thing.
> Re: the _argN transforms ...
>> I assumed they were generated by a macro up to BOOST_PROTO_MAX_ARITY ...
>> I was preaching against that (nonexistent?) macro.
>> Now I took a look at proto_fwd.hpp and found they are made by hand
>> and stop at 9. So now I really have a problem:
>> what happens if I need an arity of 120?
> Ha! I imagine you'll need longer than the heat death of the universe to
> compile your program,
Can you remember 1998, when it took the whole night to compile
stuff that runs through in 5 minutes today?
I remember generating fractals with fractint on a 386 DX 40
with a math coprocessor (boost!).
One week, one image, my little son switching off the power supply
on Saturday just before a very nice one had finished.
Today bootstrapping gcc 5 times a day is nothing ...
So I personally already have plans to push proto to its limits.
I am just waiting for whatever Intel or IBM have on their back burner.
This is why I am a little concerned about the deeply nested
namespaces, since names will become very long, but that is probably
also a minor problem. Compilers evolve at the same speed.
> but ...
> ... you're absolutely right, for the _argN transforms, N should go up to
> at least BOOST_PROTO_MAX_ARITY. I'll need to add a little PP magic.
Now I am the one to blame for a clean section of the
code getting its MACRO! ;-)
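Purely for illustration -- a hypothetical sketch, not Proto's actual
source, and the macro name is made up -- I imagine the PP magic looking
roughly like this:

    #include <boost/preprocessor/cat.hpp>
    #include <boost/preprocessor/repetition/repeat.hpp>

    // Generate  typedef _arg_c<0> _arg0;  typedef _arg_c<1> _arg1;  ...
    // up to the configured arity, so the _argN aliases never stop
    // short of BOOST_PROTO_MAX_ARITY.
    #define PROTO_DEFINE_ARGN(z, n, data) \
        typedef _arg_c< n > BOOST_PP_CAT(_arg, n);

    BOOST_PP_REPEAT(BOOST_PROTO_MAX_ARITY, PROTO_DEFINE_ARGN, ~)
    #undef PROTO_DEFINE_ARGN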
>> struct MakePair
>>   : when<
>>         function<terminal<make_pair_tag>, terminal<_>, terminal<_> >
>>       , make_pair(
>>             _arg_c<0>(_arg_c<1>)
>>           , _arg_c<0>(_arg_c<2>)
>>         )
>>     >
>> {};
>> is OK for me, but I understand your intention.
>> Those transforms kill me, but I need them.
> I don't want my code to hurt or kill anybody. What about transforms is
> killing you? Is it something fixable, in your opinion? Or just the steep
> learning curve?
The steep learning curve only. I am one of those who appreciate
lovely step-by-step explanations like your FOREACH paper.
That is how all things should be explained ... yes, I am asking
too much here.
I see that transforms will help to implement simplify algorithms,
so I really need them. What kills me is knowing it is there but
getting no grip on it.
>> The advantage of those numbers is: proto is a typelist tool.
>> If you see the index you do not mix it up.
>> With _arg I always have this moment where I translate it back to
>> _arg_c<0>, etc. At least during my approach to the docs, those
>> aliases did not help me (but I do not fear nested templates).
> I see your POV, but I see it a different way ... Proto provides a
> domain-specific language for processing C++ expression trees.
What I really like in proto is the expr<optag, typelist> architecture.
No tree at the front door.
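To spell out what I mean, a rough sketch (the header path and the exact
names of the argument-list types are from memory and may not match the
current snapshot):

    #include <boost/xpressive/proto/proto.hpp>
    namespace proto = boost::proto;

    int main()
    {
        proto::terminal<int>::type a = {1}, b = {2};

        // The type of (a + b) is nothing but nested expr<>
        // instantiations, roughly:
        //
        //   expr< tag::plus
        //       , list2< expr< tag::terminal, term<int> > &
        //              , expr< tag::terminal, term<int> > & > >
        //
        // i.e. an operator tag plus a typelist of child expressions,
        // with no opaque tree class in sight.
        a + b;
        return 0;
    }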
> Within that domain, unary and binary expressions are the norm,
> so I provide abstractions for dealing with them in a more natural
> (IMHO) way.
OTOH I was just about to request that binary_expr be banned from the docs.
For you it is probably nice to have the shortcut, but again this
is a shortcut from the good old times when expressions had a maximum
arity of two.
IMHO the basic expr<> typedef should be provided everywhere
in order to ease mental access to the library.
I found myself forcing compiler error messages just to obtain the
underlying expr<> types.
Please rework "Expression Construction Utilities" such that you simply
add the expr<> version where it is missing, e.g. at
// expr_type is the same as this type:
PUT ANOTHER TYPEDEF AND AN ASSERT HERE, OR BETTER: REPLACE IT.
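Something along these lines next to each convenient typedef would
already help. This is only a sketch of what I am asking for; the header
path and the term<> spelling are from memory and may differ:

    #include <boost/xpressive/proto/proto.hpp>
    #include <boost/static_assert.hpp>
    #include <boost/type_traits/is_same.hpp>
    namespace proto = boost::proto;

    // the convenient spelling ...
    typedef proto::terminal<int>::type expr_type;

    // ... and the raw expr<> it stands for, stated explicitly:
    typedef proto::expr<proto::tag::terminal, proto::term<int>, 0> raw_type;

    // "expr_type is the same as this type" -- and the docs could prove it:
    BOOST_STATIC_ASSERT((boost::is_same<expr_type, raw_type>::value));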
> I'd be interested in other people's opinions.
>> If those _argN scale up to N=300 or more, I will not vote against them.
>> Otherwise, if 9 is the maximum, I have a problem with that solution.
I think this is the beginning of a beautiful friendship :-)