Boost :

From: Jaakko Jarvi (jajarvi_at_[hidden])
Date: 2002-03-21 17:00:37

Just a few points.

First to correct something about Phoenix docs:
even though FC++-like polymorphic functors were not originally part of LL,
they are part of the submitted version, see section 5.3.3 in the
documentation, which explains the use of sig templates.
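The idea behind the sig templates can be sketched with a plain functor. This is a simplified, illustrative protocol, not BLL's exact one (BLL passes a tuple of the argument types to sig); the names `twice` and `apply1` are mine:

```cpp
#include <cassert>

// A polymorphic functor publishes its return type through a nested
// "sig" template, so a lambda library can deduce the result type of
// a call without language support for it. Simplified sketch only.
struct twice {
    template <class T> struct sig { typedef T type; };

    template <class T>
    T operator()(const T& x) const { return x + x; }
};

// A generic caller can name the result type without knowing the functor:
template <class F, class T>
typename F::template sig<T>::type apply1(const F& f, const T& x) {
    return f(x);
}
```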

> 1) Regarding the strict/weak arity question, it seems to me that the
> weak arity should be default, following the reasoning from Phoenix.

The reasoning from Phoenix used examples from parsers and semantic actions.
I think that is a rare case compared with just 'everyday' code
passing lambda functors to STL algorithms, where I consider the stricter
arity safer. But this has been discussed quite a lot already; it is easy
to make either one the default and provide some syntax for getting the other.
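The distinction can be sketched with a plain functor (illustrative; the names are mine, not BLL's):

```cpp
#include <cassert>

// A weak-arity functor silently accepts and discards surplus
// arguments; under strict arity the two-argument call below would
// simply not compile. Sketch only, not BLL code.
struct first_arg_weak {
    template <class T>
    T operator()(const T& a) const { return a; }

    // weak arity: an overload that ignores the extra argument
    template <class T, class U>
    T operator()(const T& a, const U&) const { return a; }
};
```

Under weak arity, a unary lambda functor handed to a binary context quietly ignores the second argument; under strict arity that mismatch is a compile error, which is what makes the strict form feel safer for everyday STL code.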

> 3) In particular, I would like a more detailed analysis of the relevance
> of scoping in the lambda expressions as FACT! (and Phoenix?) provides.
> As I understood Jörg Striegnitz, LL does not have scope. It seems to me
> that this is a requirement for lambda expressions if they are to be
> comparable to lambda expressions from functional programming. I don't
> see how lambda expressions can cater for recursive expressions without
> introducing the concept of scoping.

The first paragraph of the docs says:

The primary motivation for the BLL is to provide flexible and convenient 
means to define unnamed function objects for STL algorithms...
for_each(a.begin(), a.end(), std::cout << _1 << ' ');
The essence of BLL is letting you define small unnamed function objects, 
such as the one above, directly on the call site of an STL algorithm. 
So the goal is not to define a functional language within C++,
and change the way of programming C++ entirely. There are languages
far better suited for functional programming.
(See FC++ for an approach closer to this goal.)
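For comparison, the quoted one-liner is shorthand for roughly the following hand-written function object (a sketch; the struct name `print_elem` is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <ostream>
#include <sstream>
#include <vector>

// Roughly what the unnamed lambda functor in the quoted for_each call
// saves you from writing by hand.
struct print_elem {
    std::ostream& os;
    explicit print_elem(std::ostream& s) : os(s) {}
    void operator()(int x) const { os << x << ' '; }
};
```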
> 4) Additionally, a performance comparison with competitors would also
> be helpful.
Here's a quick test with gcc 3.0.4.
L2 is the lambda expression _1 * _1        in LL,
P2 is correspondingly       arg1 * arg1    in Phoenix,
L3 is _1 * _1 * _1, and so on.
Each is run 10000 times in a transform loop; the elements come from one
vector and go into another.
expr. size    LL (L)     Phoenix (P)
2             103.736    189.631
3             105.155    205.464
4             109.578    229.871
5             117.301    250.3
6             128.86     289.332
7             147.288    336.765
8             161.872    372.081
9             196.295    404.196
10            191.952    435.336
11            220.988    467.225
12            252.292    495.447
13            280.202    522.825
14            307.275    555.651
15            331.81     581.173
20            487.784    727.167
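The timed loop was presumably of the following shape (a sketch; a hand-written functor stands in for the LL expression _1 * _1 and the Phoenix expression arg1 * arg1):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the kind of benchmark loop described above: transform one
// vector into another through x * x. The functor is a stand-in for
// the measured lambda expressions, not the actual benchmark code.
struct square {
    int operator()(int x) const { return x * x; }
};

// one benchmark iteration
inline void run_once(const std::vector<int>& in, std::vector<int>& out) {
    std::transform(in.begin(), in.end(), out.begin(), square());
}
```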
The performance difference probably stems from the fact that Phoenix
constructs a tuple out of the arguments, which g++ cannot
optimize away. LL does not store the arguments to lambda functors in
any intermediate objects, but rather passes them unchanged to
the underlying functions.
KCC can optimize the tuples away.
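The two strategies can be contrasted in a sketch (illustrative only, not the libraries' actual internals):

```cpp
#include <cassert>
#include <utility>

// Pass-through forwards the arguments straight to the wrapped
// function; the tuple-style call first copies them into an
// intermediate object that the optimizer must then eliminate.
struct multiply {
    int operator()(int a, int b) const { return a * b; }
};

template <class F>
int call_direct(F f, int a, int b) {
    return f(a, b);                     // LL style: no intermediate storage
}

template <class F>
int call_via_pair(F f, int a, int b) {
    std::pair<int, int> args(a, b);     // Phoenix style: intermediate tuple
    return f(args.first, args.second);
}
```

Both calls compute the same result; the question measured above is whether the compiler can make the second one as cheap as the first.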
> 5) The feature-set of LL seems to be a mix of "hey, we can do this too",
> and features that are more useful and practical in the daily work.
Currying of lambda functors may be like that; the motivation behind the
other features is that we try to provide counterparts for all C++
constructs: all operators, all function call types, casts, sizeof and
typeid, all control structures, and exception handling.
> 7) Practically, if FACT! and Phoenix are portable to VC6, while LL is
> not, this is worth considering.
LL does compile with the current alpha version of MSVC. (There will of
course be VC6 users around for quite a while.)
Cheers, Jaakko

Boost list run by bdawes at, gregod at, cpdaniel at, john at