
From: Paul Mensonides (pmenso57_at_[hidden])
Date: 2002-08-12 00:47:08


----- Original Message -----
From: "Terje Slettebø" <tslettebo_at_[hidden]>

> I've already shown you, and you've pointed it out yourself, that
> implementing large vectors tends to be difficult, as they are fixed size in
> the source. This means you either have to use the preprocessor (if
> possible), or manually create large sources with the maximum fixed size you
> want. This could bloat MPL enormously. How large should the maximum be?
> 100? 500? 1000? 10,000? No matter what number you choose, someone might want
> a bigger one. Using large vector sources like this, if the size isn't
> needed, may slow the compilation down considerably.

Yes, but only if the vector implementation is using many default arguments that
represent "not_a_type."

> Lists, on the other hand, have no fixed size in the source, and need no help
> from the preprocessor.

Fixed size in the source, no. However, they do have a fixed size in an
implementation--possibly dependent on a compiler switch.
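
To illustrate the contrast, a cons-style list is one recursive template
rather than one template per size (again, the names here are illustrative,
not MPL's actual spelling):

    struct nil {}; // end-of-list marker

    // One template covers any length: each node holds a head type and the
    // rest of the list, so the size lives in the nesting depth, not in
    // the source.
    template<class Head, class Tail = nil>
    struct cons {
        typedef Head head;
        typedef Tail tail;
    };

    typedef cons<int, cons<char, cons<long> > > l; // three elements

Walking such a list recurses once per element, which is why its effective
maximum size is set by the compiler rather than by the library.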

> To address your efficiency argument: If you manually duplicate (or feed the
> output of the preprocessor to a file), then you may end up with a large
> library, and if you use a large vector where one isn't needed, it may become
> inefficient, because of all those unused elements that are carried around
> and compiled. You may alleviate the latter somewhat by having different-sized
> vectors included at compilation time, depending on what is used (which is
> already done in MPL, selecting between 10-50 elements, in 10-element steps).
> However, the scale of this means that you may get very large sources, and no
> matter how large you make them, they may not be enough.

Agreed. But as I've said many times, if you use a list with that many elements,
you are either going to run into instantiation depth limits or massive
efficiency drains.
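
To make that concrete, here is a rough sketch of a size metafunction over a
cons-style list; every element costs one nested instantiation, so the usable
length is bounded by the compiler's instantiation depth (names are again
illustrative):

    struct nil {};
    template<class Head, class Tail = nil> struct cons {};

    template<class List> struct size;

    template<> struct size<nil> {
        static const int value = 0;
    };

    // One instantiation per node: size< cons<int, cons<char> > >::value
    // is 2, but a 10,000-element list would need roughly 10,000 nested
    // instantiations, far beyond typical instantiation depth limits.
    template<class Head, class Tail> struct size< cons<Head, Tail> > {
        static const int value = 1 + size<Tail>::value;
    };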

> If, on the other hand, you use the preprocessor, to generate it on the fly,
> that brings its own overhead, and limitations on size.

The preprocessor library will soon be able to manipulate numbers up to
9,999,999,999. I see your point, though: currently the preprocessor library
can generate the necessary specializations up to 128 with no massive speed
problems. Of course, generating 10,000-element specializations would still
take time (and a lot of it!). :)
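
For a flavor of that generation step, here is a minimal sketch using the
preprocessor library's BOOST_PP_ENUM_PARAMS (the cap of 3 and the names are
illustrative; real headers iterate up to a configured maximum):

    #include <boost/preprocessor/repetition/enum_params.hpp>

    // BOOST_PP_ENUM_PARAMS(3, class T) expands to:
    //     class T0, class T1, class T2
    // so raising the maximum means changing one number and regenerating,
    // instead of hand-writing ever larger headers.
    template< BOOST_PP_ENUM_PARAMS(3, class T) >
    struct vector3 {
        // ...
    };

    typedef vector3<int, char, long> v; // uses the generated parameter list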

> Bottom line, you can't have your cake and eat it too. In MPL, vectors and
> lists have different performance characteristics, one way or the other.

Yes, they do. Summary so far: vectors are always faster than lists, and both
become unrealistic at large sizes. One has a library implementation limit and
the other has a compiler implementation limit. You are implying that there is a
tradeoff with cons-style lists, because they could theoretically be any size,
versus the speed of a vector, which has a fixed maximum size. That tradeoff
exists only in theory, not in practice, and the speed advantage of vectors is
negligible at small sizes.

Paul Mensonides

