From: Robert Ramey (ramey_at_[hidden])
Date: 2005-11-13 12:52:56
>> ii) another option would be to implement differing serializations
>> depending upon the archive type. So that we might have
>>
>> template<class T>
>> void save(fast_oarchive_impl &ar, const std::vector<T> &t,
>>           const unsigned int /*version*/){
>>     // if T is a fundamental type or ....
>>     ar << t.size();
>>     ar.save_binary(&t[0], t.size() * sizeof(T));
>> }
>>
>> This would basically be a much simpler substitute for the
>> "fast_archive_trait" proposed by the submission.
>
> Now we are back to an NxM problem.
Nope. Remember the class hierarchy:

basic_archive
  basic_oarchive
    common_oarchive
      basic_binary_oarchive
        binary_oarchive
          fast_oarchive_impl
            MPI_oarchive
            XDR_oarchive
Since the above overload takes a fast_oarchive_impl, it will be invoked
for all classes derived from it (subject to the C++ lookup rules), so it
only has to be written once. It can also be hidden by another overload
which takes an archive farther down the tree; see the sketch after this
paragraph. None of the alternatives proposed require any more functions
to be written than the original proposal does.
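To make the lookup point concrete, here is a self-contained sketch (the
class names mirror the hierarchy above; the empty bodies and the printing
are invented for illustration). The overload written once against
fast_oarchive_impl serves every derived archive, while an exact-match
overload farther down the tree is preferred for that archive alone:

#include <iostream>
#include <vector>

struct fast_oarchive_impl {};
struct MPI_oarchive : fast_oarchive_impl {};
struct XDR_oarchive : fast_oarchive_impl {};

// written once against the base: covers every derived archive
template<class T>
void save(fast_oarchive_impl &, const std::vector<T> &, const unsigned int) {
    std::cout << "fast_oarchive_impl overload\n";
}

// an exact match beats a derived-to-base conversion, so this
// hides the generic overload for MPI_oarchive only
template<class T>
void save(MPI_oarchive &, const std::vector<T> &, const unsigned int) {
    std::cout << "MPI_oarchive overload\n";
}

int main() {
    std::vector<int> v(3);
    XDR_oarchive xdr;
    MPI_oarchive mpi;
    save(xdr, v, 0);  // picks the fast_oarchive_impl overload
    save(mpi, v, 0);  // picks the MPI_oarchive overload
    return 0;
}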
> But the real issue is that for many array, vector or matrix types
> this approach is not feasible, since serialization there needs to be
> intrusive. Thus, I cannot just reimplement it inside the archive, but
> the library author of these classes needs to implement serialization.
It may be a real issue - some data types just don't expose enough
information to permit themselves to be saved and restored. But
this is not at all related to the implementation of a save/load binary
optimization.
Robert Ramey