Boost :
From: Matthias Troyer (troyer_at_[hidden])
Date: 2005-10-10 04:00:06
On Oct 10, 2005, at 3:39 AM, Martin Slater wrote:
>
> Robert Ramey wrote:
>
>> I only took a very quick look at the diff file. I have a couple of
>> questions:
>>
>> It looks like that for certain types (C++ arrays, vector<int>, etc.)
>> we want to use binary_save/load to leverage the fact that in certain
>> situations we can assume that storage is contiguous.
>>
>> Note that there is an example in the package - demo_fast_archive -
>> which does exactly this for C++ arrays. It could easily be extended
>> to cover any other desired types. I believe that using this as a
>> basis would achieve all you desire and more with a much smaller
>> investment of effort. Also it would not require changing the
>> serialization library in any way.
>
>
> If you check the post I made last week, I did just this for
> std::vectors of POD types; this went from 9.5 seconds for a
> 50 000 000 element vector of int to ~0.5 seconds. A very worthwhile
> speedup for a lot of common use cases.
>
Indeed, this 20x speedup fits well with my observations of a 5-100x
speedup depending on compiler optimization settings, archive types,
etc.
Matthias
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk