From: Andy Little (andy_at_[hidden])
Date: 2006-09-09 05:42:01
"Joel de Guzman" <joel_at_[hidden]> wrote in message
> Andy Little wrote:
>> "David Abrahams" <dave_at_[hidden]> wrote in message
>>> "Andy Little" <andy_at_[hidden]> writes:
>>>> So all in all I reckon Boost.Fusion is quite cool :-). Of course it is
>>>> not as good performance wise,
>>> I don't know why you say "of course." Just as STL iteration can be
>>> faster than a hand-coded loop, in MPL we did several things that can
>>> make it quite a bit faster to use the high-level abstractions than to
>>> do the naive hand-coded version. The same thing could be true of Fusion.
>> Looking at the assembler output from my quan::fusion::dot_product function
>> optimised in VC8, it looks like the optimisation is near perfect FWIW.
>> I love fusion !
> You must've missed this: Dan wrote me an email a while back. He
> says: "Interestingly using fusion::fold to do maths on boost::arrays,
> I'm finding that with vc8.0 fusion significantly outperforms the
> standard library equivalent code, presumably as it has more
> information available at compile time, and with inlining it
> effectively unrolls the entire loops."
> I asked Dan to add his tests to libs/fusion/example to showcase
> this favorable "phenomena" :-).
I can't find that, but I am probably looking in the wrong place. Do you mean
BTW, by using a tuple rather than an array, you can use types representing zero
and one, IOW zero<T> and one<T>:
template <typename TL, typename TR>
zero<typeof(TL() * TR())>
operator *(TL, zero<TR>)
{
    return zero<typeof(TL() * TR())>();
}
It should be relatively simple for the compiler to optimise such calcs away.
Very useful for matrix calcs.
(originally suggested by Geoffrey Irving).
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk