Subject: Re: [proto] [phoenix] not playing nice with other libs
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2011-05-04 07:25:23
On Wed, May 4, 2011 at 10:58 AM, Eric Niebler <eric_at_[hidden]> wrote:
> (cross-posting to the Proto list and cc'ing Hartmut.)
> On 5/2/2011 6:18 PM, Thomas Heller wrote:
>> On Mon, May 2, 2011 at 12:54 PM, Eric Niebler <eric.niebler_at_[hidden]> wrote:
<snip>
>>> Phoenix is changing the following fundamental constants:
>>>
>>>   BOOST_PROTO_MAX_ARITY
>>>   BOOST_MPL_LIMIT_METAFUNCTION_ARITY
>>>   BOOST_PROTO_MAX_LOGICAL_ARITY
>>>   BOOST_RESULT_OF_NUM_ARGS
>>>
>>> IMO, Phoenix shouldn't be touching these. It should work as best it can
>>> with the default values. Users who are so inclined can change them.
>>
>> Eric,
>> This problem is well known. As of now I have no clue how to fix it properly.
>>
>> Let me sketch why I changed these constants:
>> 1) Phoenix V2 has a composite limit of 10:
>>    This is equivalent to the number of child expressions an expression
>>    can hold. It is controlled by BOOST_PROTO_MAX_ARITY, which determines
>>    the number of template arguments for proto::expr and proto::basic_expr.
>> 2) Boost.Bind can take up to 10 parameters in the call to boost::bind
>
> It's still not clear to me why you're changing
> BOOST_MPL_LIMIT_METAFUNCTION_ARITY and BOOST_PROTO_MAX_LOGICAL_ARITY.
I don't remember the exact reasons anymore ... I just checked the Proto
code again, and it seems there have been some changes regarding these
macros. At the time I wrote the code for these macro redefinitions, it
was necessary to make Phoenix compile.
>> The default BOOST_PROTO_MAX_ARITY is 5.
>
> I see. So this is inherently a limitation in Proto. I set Proto's max
> arity to 5 because more than that causes compile time issues. That's
> because there are N*M proto::expr::operator() overloads, where N is
> Proto's max arity and M is Proto's max function call arity. However:
>
> - IIRC, Phoenix doesn't use proto::expr. It uses proto::basic_expr, a
> lighter weight expression container that has no member operator overloads.
Correct. But we also need the arity in proto::call, proto::or_ and maybe
some others.
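For illustration, here is a minimal toy grammar (ours, not Phoenix code)
showing where that limit surfaces: the number of alternatives a
proto::or_ can take is capped by BOOST_PROTO_MAX_LOGICAL_ARITY,
independently of how many children an expression may have.

    #include <boost/proto/proto.hpp>
    namespace proto = boost::proto;

    struct calc_grammar
      : proto::or_<
            proto::terminal<proto::_>
          , proto::plus<calc_grammar, calc_grammar>
          , proto::minus<calc_grammar, calc_grammar>
            // adding more alternatives than the configured logical
            // arity allows would fail to compile
        >
    {};

    int main() {}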
> - Compile time could be improved by pre-preprocessing, like MPL. That's
> something I've been meaning to do anyway.
Yes, we (Hartmut and I) have been saying that for quite some time now.
> - The max function-call arity can already be set separately from the max
> number of child expressions.
>
> - The compile-time problem is a temporary one. Once more compilers have
> support for variadic templates, all the operator() overloads can be
> replaced with just one variadic one. Which should be done anyway.
Right.
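To make that concrete, here is a standalone sketch (names are ours, not
Proto's actual code) of how one variadic member template would replace
the N*M generated overloads on a compiler with C++0x support:

    #include <iostream>

    struct expr
    {
        // one variadic overload instead of N*M preprocessor-generated ones
        template <typename... A>
        int operator()(A const&...) const
        {
            return sizeof...(A);  // e.g. report the call arity
        }
    };

    int main()
    {
        expr e;
        std::cout << e(1, 2.0, "three") << '\n';  // prints 3
    }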
> The solution then is in some combination of (a) allowing basic_expr to
> have a greater number of child expressions than expr, (b) bumping the
> max arity while leaving the max function call arity alone, (c)
> pre-preprocessing, (d) adding a variadic operator() for compilers that
> support it, and (e) just living with worse compile times until compilers
> catch up with C++0x.
>
> Not sure where the sweet spot is, but I'm pretty sure there is some way
> we can get Proto to support 10 child expressions for Phoenix's usage
> scenario. It'll take some work on my end though. Help would be appreciated.
Yes, I was thinking of possible solutions:
1) Splitting the expression in half, something like this:

proto::basic_expr<
    tag
  , proto::basic_expr<
        sub_tag
      , Child0, ..., Child(BOOST_PROTO_MAX_ARITY - 1)
    >
  , proto::basic_expr<
        sub_tag
      , Child(BOOST_PROTO_MAX_ARITY), ..., Child(2 * BOOST_PROTO_MAX_ARITY - 1)
    >
>

This would only need some additional work on the Phoenix side.
Not sure if it's actually worth it ... or even working.
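A hypothetical concrete rendering of that sketch, assuming
BOOST_PROTO_MAX_ARITY stays at 5 (tag_, sub_tag_ and the child type are
made up for illustration):

    #include <boost/proto/proto.hpp>
    namespace proto = boost::proto;

    struct tag_ {};      // illustrative parent tag
    struct sub_tag_ {};  // illustrative tag for the two halves

    typedef proto::terminal<int>::type child;

    // one half holds the first five children, the other the last five
    typedef proto::basic_expr<
        sub_tag_
      , proto::list5<child, child, child, child, child>
    > half;

    // the 10-child expression, seen by Proto as a 2-child node
    typedef proto::basic_expr<tag_, proto::list2<half, half> > ten_children;

    int main() {}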
2) Have some kind of completely variadic proto expression. Not by using
variadic templates, but by creating the list of children as some kind of
cons list. This might require a quite substantial change in Proto; I
haven't fully investigated that option.
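A very rough sketch of what such a cons-list representation could look
like (our own names, not a Proto API):

    struct nil {};

    // compile-time list of children; its length is not bounded by any
    // preprocessor-generated arity
    template <typename Head, typename Tail = nil>
    struct cons
    {
        Head head;
        Tail tail;
    };

    // hypothetical expression node: arity == length of the cons list
    template <typename Tag, typename Children>
    struct cons_expr
    {
        Children children;
    };

    // e.g. a node with three int children:
    // cons_expr<some_tag, cons<int, cons<int, cons<int> > > >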
>> The BOOST_RESULT_OF_NUM_ARGS constant needed to be changed because I
>> needed to provide 11 arguments in a "call" to boost::result_of. But I
>> guess a workaround can be found in this specific case.
>
> What workaround did you have in mind?
Calling F::template result<...> directly, basically reimplementing
result_of with Phoenix's own limits.
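A minimal sketch of that idea (our names, not Phoenix code), shown here
for three arguments: bypass boost::result_of and use the nested result<>
protocol directly, so the argument count is not capped by
BOOST_RESULT_OF_NUM_ARGS.

    template <typename Sig>
    struct direct_result;

    template <typename F, typename A0, typename A1, typename A2>
    struct direct_result<F(A0, A1, A2)>
    {
        // same as boost::result_of<F(A0, A1, A2)>::type, minus the
        // preprocessor-generated argument limit
        typedef typename F::template result<F(A0, A1, A2)>::type type;
    };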
>> I wonder what qualifies as "User". Phoenix is certainly a user of mpl,
>> result_of and proto. Spirit is a user of proto and phoenix. Spirit needs an arity of 7 (IIRC).
>
> By "user" I meant "end-user" ... a user of Boost. You have to consider
> that someone may want to use Phoenix and MPL and Numeric and ... all in
> the same translation unit. We shouldn't make that hard. This
> proliferation of interdependent constants is a maintenance nightmare.
I agree. I don't think there really is a general solution to that.
There have been reports by Michael Caisse of a macro-definition nightmare
while using MSM together with Spirit. If I remember the details correctly,
MSM changes the Proto constants as well. This problem is not really
Phoenix specific!
> I tend to agree with Jeff Hellrung who said that Phoenix should make do
> with the defaults and document any backwards incompatibilities and how
> to fix them. But we should make every effort such that the defaults Just
> Work.
I agree. One simple solution is to add a big fat warning to the docs
saying to include phoenix/core/limits.hpp before anything else (see the
sketch below). However, that will not solve your reported problem.
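In user code, that warning would amount to something like this
(illustrative include order, not a definitive recipe):

    // pull in Phoenix's limits first, so its constants are in force
    // when Proto, MPL and result_of get included
    #include <boost/phoenix/core/limits.hpp>
    #include <boost/phoenix/phoenix.hpp>
    // any other Boost libraries come after and see the adjusted limits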
>> Anybody got any ideas?
>>
>> One idea that comes to my mind is having a phoenix::proto_expr,
>> which is a proto::basic_expr, basically. Not sure if that would work though
>
> I don't like special-casing for Phoenix. Other libraries have the same
> problem.
>
> Hartmut, you have done some work on a Wave-based tool to help with the
> pre-preprocessing grunt-work, is that right?
Yes. Hartmut implemented partial preprocessing for Phoenix using Wave.
For an example of how to use it, see this file:
http://svn.boost.org/svn/boost/trunk/boost/phoenix/object/detail/new.hpp
To preprocess phoenix call:
wave -o- -DPHOENIX_LIMIT=10 libs/phoenix/preprocess/preprocess_phoenix.cpp
You need to have a file called wave.cfg in your current directory.
An example configuration can be found at:
http://svn.boost.org/svn/boost/trunk/libs/phoenix/preprocess/wave.cfg
The reason we are holding this solution back is that we think there should
be some more generic process for invoking the Wave preprocessing, probably
through bjam. The critical point is how to locate the system include files.