From: Hamish Mackenzie (hamish_at_[hidden])
Date: 2002-07-28 20:18:46
On Sun, 2002-07-28 at 19:50, David Abrahams wrote:
> From: "Andrei Alexandrescu" <andrewalex_at_[hidden]>
>
> > An argument that was aired is that various compile-time structures would
> > lead to different compile times. Consequently, the argument goes, the
> > developer would select the structure that leads to the least memory
> occupied
> > and the fastest compile time. However, experience shows that various
> > compilers prefer various data structures, depending on how the compilers
> > implement templates. So if one is to write portable code, there is no
> clear
> > choice of a structure over another.
>
> Hi Andrei,
>
> Just a few comments:
>
> 1. Which experience are you talking about? I ask because previous
> measurements of the compilation speed of MPL algorithms on its structures
> were misleading. The slowness people were seeing had mostly to do with the
> particular way the preprocessor library was used in the implementation (and
> a few other factors, I think -- Aleksey can say for sure). In any case,
> those issues have been addressed in the implementation under review.
I have not been keeping up with Boost lately, so does this apply to the
tests that I did? Have you repeated them with the new version? If so,
what were the results?
Before I got distracted I had a go at writing a hybrid PP/template fold
algorithm which used the BOOST_PP_FOLD_* macros to unroll the recursion.
The idea is that you can define your function as a macro and use it to
create the final algorithm template class by passing it to the
PP/template hybrid fold macro. Doing it this way also allows the
algorithm instantiated by the macro to be the one best optimised for the
compiler in use. It could also include specialisations for different
container types and iterator types.
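To make the idea concrete, here is a minimal sketch of the pure-template fold that such a macro would generate in unrolled form. The names (`nil`, `cons`, `fold`, `count_chars`) are my own stand-ins for illustration, not MPL's:

```cpp
// A minimal cons-style typelist (hypothetical names, not MPL's).
struct nil {};
template< typename Head, typename Tail > struct cons {};

// Plain recursive left fold: State = Op( State, Head ), element by element.
// A hybrid PP/template fold would produce the same result, but with the
// recursion unrolled at preprocessing time.
template< typename List, typename State,
          template< typename, typename > class Op >
struct fold;

template< typename State, template< typename, typename > class Op >
struct fold< nil, State, Op >
{
    typedef State type;  // empty list: the state passes through unchanged
};

template< typename Head, typename Tail, typename State,
          template< typename, typename > class Op >
struct fold< cons< Head, Tail >, State, Op >
{
    typedef typename fold< Tail,
                           typename Op< State, Head >::type,
                           Op >::type type;
};

// Example state and op: count the elements of size 1 (i.e. char-sized).
template< unsigned long N >
struct count { static const unsigned long value = N; };

template< typename State, typename T >
struct count_chars
{
    typedef count< State::value + ( sizeof( T ) == 1 ? 1 : 0 ) > type;
};

// Usage:
//   fold< cons< char, cons< int, cons< char, nil > > >,
//         count< 0 >, count_chars >::type   is   count< 2 >
```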
I don't like having to use the PP in this way, as it complicates writing
algorithms. count_if, for instance, ends up as two parts.
First, a function defined as a macro, which itself expands to a template
that does the work:
#define BOOST_MPL_COUNT_IF_OP( Depth, State, Value ) \
    BOOST_MPL_PP_TEMPLATE( ( \
        integral_c< unsigned long, State::value + \
            ( apply< Predicate, Value >::type::value ) > \
    ) ) \
    /**/
Second, a macro invocation which creates the template class for the
algorithm:
BOOST_MPL_FOLD_FORWARD_S_T(
    count_if,        // Name of class to create
    1, (Predicate),  // Template arg count and arg names
    BOOST_MPL_PP_TEMPLATE( (
        integral_c< unsigned long, 0 >  // Initial value
    ) ),
    BOOST_MPL_COUNT_IF_OP )  // Function
The _S in BOOST_MPL_FOLD_FORWARD_S_T indicates that an initial state is
to be provided as a macro argument (otherwise it will be added as the
first template argument).
The _T indicates that typename should be used on the result of the
function when a type is required. This is needed because some functions
will need it and some won't (e.g. x<y> won't, but x<y>::type will).
typename can't go in the function macro as it is only needed in some
cases.
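The rule the _T flag works around is the usual dependent-name rule, which a small standalone example can show (the `identity` and `uses_nested` names here are my own, for illustration only):

```cpp
template< typename T >
struct identity
{
    typedef T type;
};

template< typename T >
struct uses_nested
{
    // typename required here: identity< T >::type is a dependent name,
    // and the compiler can't know it names a type until instantiation.
    typedef typename identity< T >::type dependent_result;

    // No typename needed here: identity< T > is itself a template-id,
    // which is always a type.
    typedef identity< T > plain_result;
};
```

This is why a single function macro can't hard-code `typename`: whether it is needed depends on whether the macro's expansion ends in a nested `::type` or a plain template-id.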
BOOST_MPL_PP_TEMPLATE is there to allow commas in macro parameters. If
it is slow on a particular compiler then it will be a pain, but it seems
to be nice and fast on GCC. Here is how it works:
namespace aux
{
    template< typename X >
    struct pp_template;

    template< typename T >
    struct pp_template< void (*)( T ) >
    {
        typedef T type;
    };
}

template< typename T >
struct require_typename
{
    typedef T type;
};

#define BOOST_MPL_PP_TEMPLATE( T ) \
    aux::pp_template< void (*)T >::type \
    /**/
The name sucks a bit though. Any thoughts?
I have implemented BOOST_MPL_FOLD_* macros for a recursive list type,
and I'll tidy up the code a bit and post it this week. If the new
version of MPL has sorted the compile-time performance problem, it might
be better if I delete it instead :-).
Hamish
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk