From: Matthew Hurd (matt_at_[hidden])
Date: 2004-03-03 08:26:29


I've been looking at the very cute named_params library in the sandbox and
am getting around to trying some things with it.

I noticed something interesting with respect to vc7.1 optimization:

BOOST_NAMED_PARAMS_FUN(double, power, 0, 2, power_keywords)
{
        // Pull the arguments out into locals first, then call pow.
        double b = p[base | 10];
        double e = p[exponent | 1];
        return pow(b, e);
}

double pow_wrap(double b, double e)
{
        return pow(b, e);
}

run at the same speed, around 25 nanoseconds on my machine, when called with
variable arguments, e.g. now = pow_wrap(t.elapsed(), t.elapsed() / (rand() *
t.elapsed())), to defeat variable elimination and constant propagation.
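
Roughly, the measurement loop looks like this (a minimal sketch, not the
actual looper harness, assuming boost::timer and an iteration count supplied
by the caller):

#include <cstdlib>
#include <boost/timer.hpp>

double pow_wrap(double b, double e); // the wrapper under test

double run_once(long iterations)
{
        boost::timer t;
        double now = 0.0;
        for (long i = 0; i < iterations; ++i)
                now += pow_wrap(t.elapsed(), t.elapsed() / (std::rand() * t.elapsed()));
        return now; // keep the result so the calls cannot be optimized away
}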

But interestingly,

BOOST_NAMED_PARAMS_FUN(double, power, 0, 2, power_keywords)
{
        // Same function, but indexing p directly inside the call to pow.
        return pow(p[base | 10], p[exponent | 1]);
}

is more than five times slower at 144 nanoseconds.

That is the opposite of what I would have expected... Anyway, I found it
interesting :-)

The take-home message for me is that it is possible to use the named_params
library with no abstraction overhead at all (zero, zippo, not a bit), even
for a very simple function wrapper.

I'm deeply impressed. Well done, Dave and Daniel.

It would be nice if the macro could somehow encapsulate the keyword
definitions, so that boilerplate like this could be eliminated:
    struct base_t;
    struct exponent_t;

    namespace
    {
        boost::keyword<base_t> base;
        boost::keyword<exponent_t> exponent;
    }

    struct power_keywords : boost::keywords<base_t, exponent_t> {};

The preprocessor trickery needed to do this is beyond me, I'm afraid.
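
Just to sketch the idea (this is hypothetical, not anything that exists in
the library), a fixed-arity macro along these lines would cover the
two-keyword case; presumably Boost.Preprocessor could generalize the
keyword count:

#define DECLARE_NAMED_PARAMS_KEYWORDS_2(keywords_name, k1, k2)    \
    struct k1##_t;                                                \
    struct k2##_t;                                                \
    namespace                                                     \
    {                                                             \
        boost::keyword<k1##_t> k1;                                \
        boost::keyword<k2##_t> k2;                                \
    }                                                             \
    struct keywords_name : boost::keywords<k1##_t, k2##_t> {};

/* Equivalent to the hand-written declarations above: */
DECLARE_NAMED_PARAMS_KEYWORDS_2(power_keywords, base, exponent)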

I am also interested in being able to iterate over the parameters,
extracting keyword types, argument types and argument values. Convenient
string names would be good too, but I can always use typeid on the keyword
types.

Why would I want to do this? I would like to use this approach as a way of
inserting an intermediary function. Specifically, I would like to call
f<direct>(x,y) and have the direct implementation called, or call
f<marshal, destination>(x,y) and have an intermediary serialize the
parameters into a block and send it off over a transport, where something
along the lines of f<marshal, source>(x,y) would accept the block in some
infrastructure elsewhere. f<queue_for_processing_in_a_thread_pool>(x,y)
fits this model too. A rough sketch of the dispatch side follows.
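
To make that concrete, here is a rough sketch of the dispatch side (entirely
hypothetical, and simplified to a single policy argument: the direct and
marshal tags, power_impl and send_block are names I have made up, and the
serialization is just a placeholder):

#include <cmath>
#include <sstream>
#include <string>

struct direct {};   // call the real implementation in-process
struct marshal {};  // serialize the arguments and hand them to a transport

double power_impl(double b, double e) { return std::pow(b, e); }

void send_block(const std::string& block); // stand-in for the transport layer

template <class Policy> double power(double x, double y);

template <> double power<direct>(double x, double y)
{
        return power_impl(x, y); // the direct representation
}

template <> double power<marshal>(double x, double y)
{
        std::ostringstream block;
        block << x << ' ' << y;   // trivial textual serialization
        send_block(block.str());  // ship it off; the receiving side reverses this
        return 0.0;
}

// power<direct>(2.0, 10.0)    is called directly;
// power<marshal>(2.0, 10.0)   is serialized and handed to an intermediary.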

Any thoughts?

Regards,

Matt Hurd.
__________________________________________

First case results:
--------------------------------------------------------------------------------
                        looper invasive timing estimate
                               simple_named_params
--------------------------------------------------------------------------------
median time = 24.88038277512962 nanoseconds
90% range size = 9.926167350636332e-015 nanoseconds
widest range size (max - min) = 10.8665071770335 microseconds
minimum time = 24.88038277511962 nanoseconds
maximum time = 10.89138755980862 microseconds
50% range = (24.88038277512962 nanoseconds, 24.88038277512962 nanoseconds)
50% range size = 0 nanoseconds
--------------------------------------------------------------------------------
                        looper invasive timing estimate
                                   simple_wrap
--------------------------------------------------------------------------------
median time = 24.88038277512962 nanoseconds
90% range size = 9.926167350636332e-015 nanoseconds
widest range size (max - min) = 10.51913875598087 microseconds
minimum time = 24.88038277511962 nanoseconds
maximum time = 10.54401913875599 microseconds
50% range = (24.88038277512962 nanoseconds, 24.88038277512962 nanoseconds)
50% range size = 0 nanoseconds

Second case results:
--------------------------------------------------------------------------------
                        looper invasive timing estimate
                               simple_named_params
--------------------------------------------------------------------------------
median time = 144.4976076555124 nanoseconds
90% range size = 0 nanoseconds
widest range size (max - min) = 1.#INF seconds
minimum time = 144.4976076555077 nanoseconds
maximum time = 1.#INF seconds
50% range = (144.4976076555124 nanoseconds, 144.4976076555124 nanoseconds)
50% range size = 0 nanoseconds
--------------------------------------------------------------------------------
                        looper invasive timing estimate
                                   simple_wrap
--------------------------------------------------------------------------------
median time = 24.88038277512962 nanoseconds
90% range size = 3.308722450212111e-015 nanoseconds
widest range size (max - min) = 1.#INF seconds
minimum time = 24.88038277512295 nanoseconds
maximum time = 1.#INF seconds
50% range = (24.88038277512962 nanoseconds, 24.88038277512962 nanoseconds)
50% range size = 0 nanoseconds
