Subject: Re: [proto] Thoughts on traversing proto expressions and reusing grammar
From: Eric Niebler (eric_at_[hidden])
Date: 2010-10-13 01:10:15


On 10/4/2010 11:51 PM, Thomas Heller wrote:
> Eric Niebler wrote:
>
>> On Mon, Oct 4, 2010 at 12:43 PM, Thomas Heller
>> <thom.heller-gM/Ye1E23mwN+BqQ9rBEUg_at_[hidden]>wrote:
> <snip>
>>>
>>>>
>>>> I'll also point out that this solution is FAR more verbose than the
>>>> original which duplicated part of the grammar. I also played with such
>>>> visitors, but every solution I came up with suffered from this same
>>>> verbosity problem.
>>>
>>> Ok, the verbosity is a problem, agreed. I invented this because of
>>> phoenix, actually. As a use case I wrote a small prototype with a
>>> constant folder:
>>>
>>> http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp
>>>
>>
>> Neat! You're getting very good at Proto. :-)
>
> Thanks :)
> Let me comment on your request:
>
> On Tuesday 05 October 2010 03:15:27 Eric Niebler wrote:
>> I'm looking at this code now, and perhaps my IQ has dropped lately
>> (possible), but I can't for the life of me figure out what I'm looking
>> at. I'm missing the big picture. Can you describe the architecture of
>> this design at a high level? Also, what the customization points are and
>> how they are supposed to be used? I'm a bit lost. :-(
>
> First, let me emphasize that I am trying to explain this code:
> http://github.com/sithhell/boosties/blob/master/proto/libs/proto/test/phoenix_test.cpp
>
> Ok, I feared this would happen; forgive me for the sparse comments in the
> code, and let me start with my considerations for this new prototype.
>
> During the last discussions it became clear that the current design wasn't
> as good as it seemed to be; it suffered from some serious limitations. The
> main limitation was that data and algorithm weren't clearly separated,
> meaning that every phoenix expression intrinsically carried its
> behavior/"how to evaluate this expression".

Correct. IIRC, in the old scheme, the tags were actually function
objects that implemented the default "phoenix" evaluation behavior
associated with the tag. That didn't preclude other Proto algorithms
being written that ignored the behavior associated with the tags and
just treated them as tags. But it was not as stylistically clean.
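
Roughly the flavor, from memory (the names here are made up, not the actual
phoenix3 code):

    // The tag itself implements the default evaluation behavior...
    struct plus_tag
    {
        int operator()(int a, int b) const { return a + b; }  // default evaluation
    };
    // ...but a Proto algorithm that ignores the behavior can still
    // dispatch on plus_tag purely as a type.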

> This was the main motivation behind this new design; the other motivation
> was to simplify certain other customization points.
> One of the main requirements I set for myself was that the major part of
> phoenix3, which is already written and works, should not be subject to too
> much change.

Right.

> Ok, first things first. After some input from Eric it became clear that
> phoenix expressions might just be regular proto expressions, wrapped around
> the phoenix actor carrying a custom tag.

Not sure I understand this. From the code it looks like it's the other
way around: phoenix expressions wrap proto expressions (e.g. the actor
wrapper).
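
That is, the shape I see is roughly this (a minimal sketch, not the real
actor):

    template <typename Expr>
    struct actor
    {
        Expr proto_expr_;  // the wrapped proto expression tree
        // operator() would hand proto_expr_ to the phoenix evaluator;
        // omitted here.
    };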

> This tag should indicate that it
> really is a custom phoenix expression, and the phoenix evaluation scheme
> should be able to customize the evaluation based on these tags.

OK.

> Let me remind you that most phoenix expressions can be handled by proto's
> default transform, meaning that we want to reuse that wherever possible, and
> just tackle the phoenix parts like argument placeholders, control flow
> statements and such.
> Sidenote: it also became clear that phoenix::value and phoenix::reference
> can just be plain proto terminals.

Right. But it's not clear to me from looking at your code how the
evaluation of reference-wrapped terminals is accomplished. Indeed,
evaluating "cref(2)(0)" returns a reference_wrapper<int const>, not an
int. And in thinking about it, this seems to throw a bit of a wrench in
your design, because to get special handling (as you would need for
reference_wrapper), your scheme requires unary expressions with special
tags, not plain terminals.
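
For comparison, this is how I would expect reference-wrapped terminals to be
handled in plain Proto: an ordinary when<> override on the terminal itself,
no special unary node needed. A sketch only; unwrap_ref and eval are names I
am making up here.

    #include <boost/proto/proto.hpp>
    #include <boost/ref.hpp>
    #include <boost/type_traits/remove_reference.hpp>
    namespace proto = boost::proto;

    // Turns a reference_wrapper<T> into a T&.
    struct unwrap_ref : proto::callable
    {
        template <typename Sig> struct result;

        template <typename This, typename T>
        struct result<This(T)>
        {
            // T is (a reference to) reference_wrapper<X>; yield X&.
            typedef typename boost::remove_reference<T>::type::type &type;
        };

        template <typename T>
        T &operator()(boost::reference_wrapper<T> const &r) const
        {
            return r.get();
        }
    };

    // Evaluator: special-case reference_wrapper terminals, fall back to
    // Proto's default transform for everything else.
    struct eval
      : proto::or_<
            proto::when<
                proto::terminal<boost::reference_wrapper<proto::_> >
              , unwrap_ref(proto::_value)
            >
          , proto::otherwise< proto::_default<eval> >
        >
    {};
    // With this in place, evaluating a cref(...) terminal yields the
    // underlying int const &, not the reference_wrapper itself.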

> Having said that, just having "plain" evaluation of phoenix expressions
> seemed to me a waste of what could become possible with the power of
> proto. I want to do more with phoenix expressions. Let me remind you that
> phoenix is "C++ in C++", and with that I want to be able to write some cool
> algorithms transforming these proto expressions, introspect these proto
> expressions, and actively influence the way these phoenix expressions get
> evaluated/optimized/whatever. One application of these custom evaluations
> that came to my mind was constant folding, so I implemented it on top of my
> new prototype. The possibilities are endless: a proper design will enable
> such things as multistage programming: imagine an evaluator which does not
> compute the result, but translates a phoenix expression to a string which
> can be compiled by an OpenCL/CUDA/shader compiler. Another thing might be
> automatic parallelization of phoenix expressions (of course, we are far away
> from that; we would need a proper graph library for it). Nevertheless, these
> were some thoughts I had in mind.

All good goals, but IIUC nothing about the older design precluded that.
Phoenix expressions were still Proto expressions, and users could write
Proto algorithms to manipulate them (so long as the intermediate form
was sensible and well-documented).

> This is the very big picture.
>
> Let me continue to explain the customization points I have in this design:
>
> First things first, it is very easy to add new expression types by
> specifying:
> 1) The new tag of the expression.
> 2) How to create this new expression, and thus build up the
> expression template tree.
> 3) How to hook into the evaluation mechanism.
> 4) How to write other evaluators which influence only your newly created
> tag-based expression, or all the other already existing tags.
>
> Let me guide you through this process in detail by explaining what has been
> done for the placeholder "extension" to proto (I reference the line numbers
> of my prototype).

Nit: placeholders make for an interesting exploration of the design
space, but placing them outside the core seems futile to me. I've
discussed this before: the placeholders are special, the core needs to
know about them (nullary actor::operator() must calculate the arity of
the Phoenix expression, which depends on the placeholders). Given that
the core must know about the placeholders, pretending they are a layer
on top of an extensible core is really a sham. But an interesting sham. :-)
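
Concretely, this is the kind of computation I mean. It is essentially the
calculator-arity example from the Proto docs, with made-up placeholder types
standing in for the phoenix ones:

    #include <boost/proto/proto.hpp>
    #include <boost/mpl/int.hpp>
    #include <boost/mpl/min_max.hpp>
    namespace proto = boost::proto;
    namespace mpl = boost::mpl;

    struct placeholder1 {};  // hypothetical stand-ins for _1 and _2
    struct placeholder2 {};

    // Computes the arity of an expression as an mpl::int_.
    struct CalcArity
      : proto::or_<
            proto::when< proto::terminal<placeholder1>, mpl::int_<1>() >
          , proto::when< proto::terminal<placeholder2>, mpl::int_<2>() >
          , proto::when< proto::terminal<proto::_>,     mpl::int_<0>() >
          , proto::when< proto::unary_expr<proto::_, CalcArity>
                       , CalcArity(proto::_child) >
          , proto::when< proto::binary_expr<proto::_, CalcArity, CalcArity>
                       , mpl::max< CalcArity(proto::_left)
                                 , CalcArity(proto::_right) >() >
        >
    {};
    // Applied to an expression containing placeholder2, this yields
    // mpl::int_<2>; the nullary operator() needs exactly this information.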

> 1) Define the tag tag::argument: line 307
> 2) Specify how to create this expression: lines 309 to 315
> First, define a valid proto expression (line 309) through phoenix_expr,
> which was modeled on archetypes like proto::plus. What it does is
> create a valid proto grammar and transform which can be reused in
> proto grammars and transforms, just like proto::plus.

I don't yet see the purpose of having phoenix_expr be a grammar and a
transform. Looks like the transform associated with phoenix_expr is just
the identity transform; it just returns the expression passed in. Is
this useful?

> Second, we create some constant expressions which are to be used as
> placeholders

Looks like in this design, _1 is actually a unary expression, not a
terminal. Once we get over trying to make the placeholders an extension
and move them back into the core, I think _1 can go back to being just a
terminal, as it was before. This seems less surprising to me.
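
That is, something along these lines (argument<N> is an illustrative name,
not what's in the prototype):

    #include <boost/proto/proto.hpp>
    namespace proto = boost::proto;

    template <int N> struct argument {};

    // Plain terminals; an evaluator can dispatch on the argument<N> in
    // the terminal's value, with no extra unary wrapper node.
    proto::terminal< argument<0> >::type const _1 = {{}};
    proto::terminal< argument<1> >::type const _2 = {{}};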

> 3) Hook into the evaluation mechanism: lines 321 to 361
> Note: I created an unpack transform which is boilerplate code for
> extracting the children of the current node. What it does is call
> the proto::callable passed as the first template parameter, applying
> an optional transform to the children prior to passing them to the
> callable; additionally, it is able to forward the data and state to
> the transform.

I had a really hard time grokking unpack. In Proto, function types are
used to represent function invocation and object construction. The type
"X(A,B,C)" means either call "X" with three arguments, or construct "X"
with three arguments. In "unpack< X(A, B, C) >" X does *not* get called
with three arguments. It gets called like "X( A(_0), A(_1), A(_2), ...B,
C )", where _0, _1 are the child nodes of the current expression. This
will confuse anyone familiar with Proto. I'd like to replace this with
something a little more transparent.
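
For comparison, this is how I read a function type in a vanilla Proto
transform: X(A, B) means "invoke X with the results of transforms A and B",
nothing more. A self-contained sketch (the names are mine):

    #include <boost/proto/proto.hpp>
    namespace proto = boost::proto;

    struct add_ints : proto::callable
    {
        typedef int result_type;
        int operator()(int a, int b) const { return a + b; }
    };

    // add_ints(A, B): call add_ints with the results of transforms A and B.
    struct eval_plus
      : proto::when<
            proto::plus< proto::terminal<int>, proto::terminal<int> >
          , add_ints(proto::_value(proto::_left), proto::_value(proto::_right))
        >
    {};

    int main()
    {
        proto::terminal<int>::type i = {1}, j = {2};
        int r = eval_plus()(i + j);  // add_ints is called with 1 and 2
        return r == 3 ? 0 : 1;
    }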

> In lines 320 to 323 we define the specialization of our generic_evaluator
> which dispatches the call to argument_eval through the unwrap transform.
> The remaining part (lines 325 to 361) does not differ too much from the
> current design.
<snip>

This is as far as I made it today. I haven't yet grokked the
generic_evaluator or the visitor. I also can't see where the grammar for
valid Phoenix expressions is defined in this design. How can I check
whether an expression is a Phoenix lambda?
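
In plain Proto terms, what I'd like to be able to write is roughly this
(phoenix_grammar here is just a stand-in name; I couldn't find the real thing
in the prototype):

    #include <boost/proto/proto.hpp>
    #include <boost/mpl/assert.hpp>
    namespace proto = boost::proto;

    // Stand-in for whatever the top-level phoenix grammar turns out to be.
    struct phoenix_grammar : proto::_ {};

    template <typename Expr>
    void check_is_phoenix_lambda(Expr const &)
    {
        // Compile-time check that Expr is a valid phoenix expression.
        BOOST_MPL_ASSERT((proto::matches<Expr, phoenix_grammar>));
    }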

I'm back on the case tomorrow.

-- 
Eric Niebler
BoostPro Computing
http://www.boostpro.com
