Boost :
From: Matthias Troyer (troyer_at_[hidden])
Date: 2005-11-27 04:59:43
Hi Robert,
Thanks for posting your proposal!
There is a close similarity between your proposal and Dave's. Dave's
classes array::oarchive and array::iarchive are archive adaptors,
just like the one you are proposing, so we all understand what you
mean by an archive adaptor. If you take a closer look at Dave's
proposal you will surely see that he built on your idea of using
archive adaptors.
Aside from naming differences and other minor things, the main
difference between your proposal and Dave's is the choice of
customization point used by the authors of serialization functions
for new array-like classes (such as std::valarray). How can they
profit from the optimized saving of contiguous arrays of some data
types?
In your proposal these authors should provide an overload of

template<class Base, class T>
void override(boost::archive::bitwise_oarchive_adaptor<Base> &,
              T const &);

In this function they have to re-implement the serialization of the
specific class. Dave, on the other hand, proposes that these authors
call a function save_array, which by default will just do a simple
loop (as in the current library), but dispatch to an optimized
function when available.
Let me state clearly that both these approaches can coexist and there
is no conflict. Dave's proposal uses a wrapper just as the one you
use to override the default serialization provided by your library,
but in addition provides a save_array and load_array function that
can be used with any archive (and without modifications to your
library).
Let me take std::valarray as an example of what would have to be
implemented by the author of std::valarray serialization.
In your scheme that would be:
----------------------------------------------------
// the default serialize function
template<class Base, class T>
void save(
    boost::archive::bitwise_oarchive_adaptor<Base> &ar,
    const std::valarray<T> & t
){
    const unsigned int count = t.size();
    ar << count;
    for (unsigned int i = 0; i < count; ++i)
        ar << make_nvp("item", t[i]);
}
// the optimized overload of the override function
template<class Base, class T>
void override(
    boost::archive::bitwise_oarchive_adaptor<Base> &ar,
    const std::valarray<T> & t,
    boost::mpl::true_
){
    const unsigned int count = t.size();
    ar << count;
    ar.save_binary(get_data(t), count * sizeof(T));
}
// the dispatch to either the optimized or the default version of
// override
template<class Base, class T>
void override(
    boost::archive::bitwise_oarchive_adaptor<Base> &ar,
    const std::valarray<T> & t
){
    override(ar, t,
        typename boost::serialization::is_bitwise_serializable<T>::type());
}
----------------------------------------------------
Contrast this with Dave's proposal, where only a *single* function
needs to be written to get both unoptimized and optimized
serialization.
----------------------------------------------------
template<class Base, class T>
void save(
    boost::archive::bitwise_oarchive_adaptor<Base> &ar,
    const std::valarray<T> & t
){
    const unsigned int count = t.size();
    ar << count;
    save_array(ar, get_data(t), count);
}
----------------------------------------------------
Not only is this much shorter, it is also simpler, less error-prone
and easier to maintain than the default serialization function in
your suggestion, since the for-loop over the elements of the
std::valarray is omitted. And please keep in mind that if the archive
does not provide an optimized version of save_array, the code that is
executed will be *exactly* the for-loop of the other example. The
simplicity of Dave's proposal is no accident: I know that he spent
many hours thinking about the problem to come up with an elegant and
simple solution.
Let me stress again that both options can coexist, i.e. we can write
an array_adaptor that provides both your override() mechanism and a
save_array() function. There is no conflict at all between the two
proposals. It will then be up to the authors of serialization
functions for new classes to choose which mechanism they prefer. For
me the choice is obvious, but your mileage may vary.
Matthias
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk