On 26 June 2012 09:09, Robert Ramey <ramey@rrsd.com> wrote:
>> Also, have you noticed that it loads a binary array of integers one
>> integer at a time? I wonder if there is a way we could enable
>> BOOST_SERIALIZATION_USE_ARRAY_OPTIMIZATION(eos::portable_iarchive)
>> It would mean implementing ar.save_array() for the eos archive.
> Note that this optimization exploits the fact that an array of
> integers can be handled as a block if no translation has to be done.
> The portable binary archives use a variable length for integers and
> these have to be handled one by one.
>
> I'm afraid you're out of luck here.
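For context, this is roughly what a variable-length integer encoding
looks like (a generic LEB128-style sketch, not necessarily eos's
actual format). The encoded size depends on each value, which is what
rules out a single block copy:

#include <cstddef>
#include <cstdint>

// Generic LEB128-style varint: small values take fewer bytes, so the
// encoded size of each element differs and a block memcpy is impossible.
std::size_t encode_varint(std::uint64_t v, unsigned char* out)
{
    std::size_t n = 0;
    do {
        unsigned char byte = static_cast<unsigned char>(v & 0x7f);
        v >>= 7;
        if (v) byte |= 0x80;   // continuation bit: more bytes follow
        out[n++] = byte;
    } while (v);
    return n;                  // 1..10 bytes for a uint64_t
}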
There is plenty of potential here. All the values in an array are of
the same type, so you could encode whatever extra info is required up
front (i.e. original # of bytes, endianness, # of elements) and then
write the block of data all at once, as in the sketch below. If it's
not possible to write it all at once (e.g. it needs to be transformed
first), then it could be transformed in memory before being written
out.
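A minimal sketch of what I have in mind, assuming the archive exposes
a raw write(const char*, size_t) primitive (the name is illustrative,
not eos's actual API):

#include <cstddef>
#include <cstdint>

// Write a fixed-width array as one prefixed block: a one-time header
// instead of per-element bookkeeping.
template <class Archive, class T>
void save_array_block(Archive& ar, const T* data, std::size_t count)
{
    const std::uint8_t  elem_size  = sizeof(T); // original # of bytes
    const std::uint8_t  big_endian = 0;         // 0 = little-endian writer
    const std::uint64_t n          = count;     // # of elements

    ar.write(reinterpret_cast<const char*>(&elem_size),  1);
    ar.write(reinterpret_cast<const char*>(&big_endian), 1);
    ar.write(reinterpret_cast<const char*>(&n), sizeof(n));

    // No per-element translation needed, so the payload goes out as a
    // single block.
    ar.write(reinterpret_cast<const char*>(data), count * sizeof(T));
}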
Alternatively, the translation could still happen element by element,
but it doesn't need to go through the whole boost::serialization ADL
call stack (e.g. it currently goes through the NVP calls for every
element in the array). Same idea as above, except it's done with many
write() calls rather than by writing one block of memory all at once.
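Something like this, where to_portable() stands in for whatever
per-element re-encoding the archive actually needs (again just a
sketch; both names are hypothetical):

#include <cstddef>

// Per-element translation, but without the NVP/ADL machinery on each
// element. to_portable() is a placeholder for the archive's transform
// (byte swap, variable-length re-encode, ...).
template <class Archive, class T>
void save_array_by_element(Archive& ar, const T* data, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i) {
        T v = to_portable(data[i]);  // transform one element in memory
        ar.write(reinterpret_cast<const char*>(&v), sizeof(v));
    }
}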
My current use case is that I'm writing blocks of uint64_t or double
or float, and reading them back in. It needs to work on Intel x64 and
Intel x86 (i.e. 32-bit). AFAIK my arrays don't need any
transformations, since I'm using fixed-size integers/floats and the
types should be compatible across those platforms (I won't be using
this on SPARC etc.).
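Concretely, it's the sort of thing below (an illustrative stand-in for
my real types, not actual code from my project):

#include <cstdint>
#include <vector>
#include <boost/serialization/nvp.hpp>
#include <boost/serialization/vector.hpp>

// Every field is a fixed-width type, identical on x86 and x64.
struct Block
{
    std::vector<std::uint64_t> ids;
    std::vector<double>        samples;

    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/)
    {
        // Today this goes through NVP handling for every element.
        ar & BOOST_SERIALIZATION_NVP(ids);
        ar & BOOST_SERIALIZATION_NVP(samples);
    }
};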
The ONLY reason I need to use the portable binary archive is that the
usual binary archive's header is incompatible between 32-bit and
64-bit. That, and potentially the version-number integers, the
number-of-elements-in-an-array integers, and other such overhead.
Your thoughts?