
Boost mailing list:
From: Matthias Troyer (troyer_at_[hidden])
Date: 2005-10-09 12:09:16
On Oct 9, 2005, at 6:44 PM, Robert Ramey wrote:
> I only took a very quick look at the diff file. I have a couple of
> questions:
>
> It looks like for certain types (C++ arrays, vector<int>, etc.) we
> want to use binary_save/load to leverage the fact that, in certain
> situations, we can assume that storage is contiguous.
Exactly.
>
> Note that there is an example in the package, demo_fast_archive,
> which does exactly this for C++ arrays. It could easily be extended
> to cover any other desired types. I believe that using this as a
> basis would achieve all you desire and more, with a much smaller
> investment of effort. It also would not require changing the
> serialization library in any way.
This would lead to code duplication, since we would need to overload
the serialization of:
array
vector
multi_array
valarray
Blitz array
ublas dense vectors
ublas dense matrices
MTL vectors
MTL matrices
...
not only for the demo_fast_archive, but for every archive that needs
such an optimization. Archives that immediately come to mind are:
binary archives (as in your example)
all possible portable binary archives
MPI serialization
PVM serialization
all possible polymorphic archives
....
Thus we have the problem that M types of data structures can profit
from fast array serialization in N types of archives. Instead of
providing M*N overloads in the serialization library, I propose to
introduce just one traits class, and to implement just M overloads for
the serialization and N implementations of save_array/load_array.
Your example is just M=1 (array) and N=1 (binary archive). If I
understand you correctly, what you propose needs M*N overloads. With
minor extensions to the serialization library, the same result can be
achieved with a coding effort of M+N.
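The M+N dispatch described above can be sketched roughly as follows. This is a minimal illustration, not the actual library code: the trait name has_fast_array_serialization, the toy archive classes, and the member function save_array are all stand-ins chosen here for exposition, and the example uses modern C++ (if constexpr) rather than the metaprogramming idioms of 2005. Each archive that can write contiguous memory in one shot specializes the trait and provides save_array (N implementations); each container gets a single serialize overload that dispatches on the trait (M overloads).

```cpp
#include <cstddef>
#include <type_traits>
#include <vector>

// Hypothetical trait: does archive A support fast (contiguous) array I/O?
// Defaults to false; fast archives specialize it.
template <class Archive>
struct has_fast_array_serialization : std::false_type {};

// A toy binary output archive that stores raw bytes and offers save_array.
struct binary_oarchive {
    std::vector<unsigned char> bytes;

    template <class T>
    void save(const T& t) {
        const unsigned char* p = reinterpret_cast<const unsigned char*>(&t);
        bytes.insert(bytes.end(), p, p + sizeof(T));
    }

    // One contiguous write for n elements -- the "fast" path.
    template <class T>
    void save_array(const T* p, std::size_t n) {
        const unsigned char* b = reinterpret_cast<const unsigned char*>(p);
        bytes.insert(bytes.end(), b, b + n * sizeof(T));
    }
};
template <>
struct has_fast_array_serialization<binary_oarchive> : std::true_type {};

// A toy archive without fast array support; it counts element-wise saves.
struct slow_oarchive {
    std::size_t element_saves = 0;
    template <class T>
    void save(const T&) { ++element_saves; }
};

// One serialize overload per container type (M of them in total),
// dispatching on the trait instead of on the concrete archive.
template <class Archive, class T>
void serialize(Archive& ar, const std::vector<T>& v) {
    if constexpr (has_fast_array_serialization<Archive>::value &&
                  std::is_trivially_copyable_v<T>) {
        ar.save_array(v.data(), v.size());   // single contiguous write
    } else {
        for (const T& t : v) ar.save(t);     // element-by-element fallback
    }
}
```

Adding a new archive means specializing the trait and implementing save_array once; adding a new container means writing one serialize overload; no combination of the two ever needs its own code.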
Matthias
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk