
From: Robert Ramey (ramey_at_[hidden])
Date: 2006-09-18 18:30:04

Matthias Troyer wrote:
> On Sep 18, 2006, at 8:50 PM, Robert Ramey wrote:

> Thus, there is no std::vector, std::valarray, ... overload in any of
> the archives - not in the binary archive nor anywhere else.

Well, we're not seeing the same thing.

template<class Archive>
class basic_binary_oarchive :
    public array::oarchive<Archive>

template <class Archive>
class oarchive
 : public archive::detail::common_oarchive<Archive>
  typedef archive::detail::common_oarchive<Archive> Base;

  oarchive(unsigned int flags)
   : archive::detail::common_oarchive<Archive>(flags)

  // save_override for std::vector and serialization::array dispatches to
  // save_optimized with an additional argument.
  // If that argument is of type mpl::true_, an optimized serialization is
  // used. If it is mpl::false_, we just forward to the default
  // serialization in the base class.

  //the default version dispatches to the base class
  template<class T>
  void save_optimized(T const &t, unsigned int version, mpl::false_)
    Base::save_override(t, version);

  // the optimized implementation for vector uses serialization::array
  template<class ValueType, class Allocator>
  void save_optimized(
    const std::vector<ValueType, Allocator> &t, unsigned int, mpl::true_)
    const serialization::collection_size_type count(t.size());
    * this->This() << BOOST_SERIALIZATION_NVP(count);
    * this->This() << serialization::make_array(&t[0], count);

  // the optimized implementation for serialization::array uses save_array
  template<class ValueType>
  void save_optimized(
    const serialization::array<ValueType> &t, unsigned int version, mpl::true_)
    save_array(t, version);
  // to save a vector:
  // if the value type is trivially constructable or an optimized array save
  // then we can use the optimized version

  template<class ValueType, class Allocator>
  void save_override(std::vector<ValueType,Allocator> const &x, unsigned int version)
    typedef typename mpl::apply1<
        BOOST_DEDUCED_TYPENAME Archive::use_array_optimization
      , BOOST_DEDUCED_TYPENAME remove_const<ValueType>::type
    >::type use_optimized;
    save_optimized(x,version,use_optimized() );

  // dispatch saving of arrays to the optimized version where supported
  template<class ValueType>
  void save_override(serialization::array<ValueType> const& x, unsigned int version)
    typedef typename mpl::apply1<
        BOOST_DEDUCED_TYPENAME Archive::use_array_optimization
      , BOOST_DEDUCED_TYPENAME remove_const<ValueType>::type
    >::type use_optimized;
    save_optimized(x, version, use_optimized() );

  // Save everything else in the usual way, forwarding on to the
  // base class
  template<class T>
  void save_override(T const& x, unsigned BOOST_PFTO int version)
    Base::save_override(x, static_cast<unsigned int>(version));

} } } // end namespace boost::archive::array

which just moves some of the implementation out of binary_oarchive
into oarchive. What attracts my attention is:

  // dispatch saving of arrays to the optimized version where supported
  template<class ValueType>
  void save_override(serialization::array<ValueType> const& x, unsigned int version)

Now these suggest to me that the next person who wants to handle his
wrapper specially for certain archives will then want/have to reach back
into binary_oarchive and/or oarchive to add special handling for HIS
wrapper. That is what I mean by not scalable.


  template<class ValueType, class Allocator>
  void save_override(std::vector<ValueType,Allocator> const &x, unsigned int version)

And then there is the special handling for std::vector - which suggests that
if one wants to add special handling for multi_array he'll have to do
something else.
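To be clear about what is being discussed: the tag dispatch described in the
code's comments is easy enough to sketch on its own. The following is a toy
analogue in later-standard C++ - toy_oarchive, save_one, save_vector and
tracked_int are invented names, and std::is_trivially_copyable stands in for
the archive's optimization trait; it is not the Boost code:

```cpp
#include <cassert>
#include <type_traits>
#include <vector>

// Toy archive (invented names - not the Boost code). The vector overload
// asks a trait whether the element type is safe to write as one contiguous
// block, and tag-dispatches to the matching implementation.
struct toy_oarchive {
    std::vector<unsigned char> buffer; // where "saved" bytes land
    int element_saves = 0;             // counts trips through the slow path

    // default path: save one element at a time (a bitwise copy stands in
    // here for whatever the general serialization of T would do)
    template<class T>
    void save_one(const T& t) {
        const unsigned char* p = reinterpret_cast<const unsigned char*>(&t);
        buffer.insert(buffer.end(), p, p + sizeof(T));
        ++element_saves;
    }

    // optimized path, selected when the tag is std::true_type:
    // a single contiguous write of the whole vector
    template<class T>
    void save_vector(const std::vector<T>& v, std::true_type) {
        const unsigned char* p =
            reinterpret_cast<const unsigned char*>(v.data());
        buffer.insert(buffer.end(), p, p + v.size() * sizeof(T));
    }

    // default path, selected when the tag is std::false_type
    template<class T>
    void save_vector(const std::vector<T>& v, std::false_type) {
        for (const T& t : v) save_one(t);
    }

    // the single overload users see; the trait picks the path at compile time
    template<class T>
    void operator<<(const std::vector<T>& v) {
        save_vector(v, std::is_trivially_copyable<T>{});
    }
};

// a non-trivially-copyable element type, to exercise the default path
struct tracked_int {
    int value;
    tracked_int(int v = 0) : value(v) {}
    tracked_int(const tracked_int& o) : value(o.value) {} // user-provided copy
};
```

The point of the toy is only the shape of the dispatch: one public overload,
two implementation paths, chosen by a trait at compile time.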

I realise that my nvp wrapper is similar to the above, and perhaps that has
led to some confusion. When faced with the problem I said to myself - "self -
I know you want to maintain decoupling between types and archives - but
this is a special case - live with it." And I listened to myself. But now I
realize that was a mistake. We only have two types of XML archives, and they
could have been handled by putting the NVP handling in the most derived
class - even though it was some repetition. Oh well.

Finally - the usage of inheritance above strikes me as confusing,
misleading, and unnecessary. This may or may not be strictly an
aesthetic issue. The only really public/protected functions are the
overload for the array wrapper and the default forwarder. These
entry point functions are only called explicitly by the derived class.
Basically, the base class is not being used to express an "IS-A"
relationship but rather an "implemented in terms of" relationship,
as described in Scott Meyers' Effective C++, item 41. So I would
have expected something in binary_oarchive like

template<class T>
void save_override(array<T> & t, const unsigned int version){
    // if the array is optimizable
    save_optimized_array(t, version); // most of oarchive goes in here
    // else
    // forward to default handling
}

This would have to be added to every archive which supports the
optimization, but since we now have to explicitly forward twice anyhow,
I don't see a huge difference.
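To make that per-archive alternative concrete, a hypothetical sketch with
invented names (toy_binary_oarchive, array_wrapper), using C++17's
if constexpr in place of the trait dispatch:

```cpp
#include <cassert>
#include <cstddef>
#include <type_traits>

// Invented stand-in for serialization::array<T>
template<class T>
struct array_wrapper {
    const T* address;
    std::size_t count;
};

// a non-trivially-copyable type, to exercise the default branch
struct non_trivial {
    non_trivial() {}
    non_trivial(const non_trivial&) {} // user-provided copy
};

// The archive itself owns the save_override for the array wrapper and
// decides locally - no dispatching base class in between.
struct toy_binary_oarchive {
    std::size_t optimized_bytes = 0; // bytes the fast path would write
    std::size_t default_saves = 0;   // elements sent through default handling

    template<class T>
    void save_override(const array_wrapper<T>& a) {
        if constexpr (std::is_trivially_copyable_v<T>) {
            // optimized: one block write (only its size is recorded here)
            optimized_bytes += a.count * sizeof(T);
        } else {
            // else: forward each element to the default handling
            default_saves += a.count;
        }
    }
};
```

Each archive repeats these few lines, but nothing outside the archive needs
to know about the wrapper.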

> What you
> seem to propose, both above and in the longer text I cut, is to
> instead re-implement the optimized serialization for all these N
> classes in the M different archive types that can use it (we have M=4
> now with the binary, packed MPI, MPI datatype, and skeleton archives,
> and soon we'll do M+=2 by adding XDR and HDF5 archives.). Besides
> leading to an M*N problem, which the array wrapper was designed to
> solve, this leads to intrusion problems into all classes that need to
> be serialized (including multi_array and all others), which is not
> feasible as we discussed last year.

I am aware of the desire to avoid the M*N problem - that should be
pretty clear from the design of the serialization library itself, which
takes great pains to permit any serializable type to be saved to/loaded
from any archive. The problem is that when I look at the code, it's
not clear that this problem is being addressed.

And it's not even clear that there is an M*N problem here:

binary archives benefit from optimization of types which are
"binary serializable";

mpi archives benefit from optimization of types for which there
exists a predefined MPI primitive.


There may or may not be overlap here. But it seems to me
that we're trying too hard to factor where truly common
factors don't exist.

So I'm looking at:

binary, packed - 3 archives * 3 types (valarray, vector, C++ native arrays)
- 9 overloads, and one special trait - is_binary_serializable

mpi - 1 archive * 3 types - 3 overloads,
and one special trait - is_mpi_optimizable

and a couple more - but not a HUGE amount.

And I don't think this intrudes into the other collection classes. They are
already going to be wrapped in array - so they are done. The only issue is
what is the best way to specify the table of archive/array-type pairs which
benefit from optimized serialization.
One way is to sprinkle special code for array in different places. Another
is to just bite the bullet and make the table for every archive.
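Such a table can be written down directly. A hypothetical sketch with
invented names - each archive family publishes a trait, and the
archive/type pairs that benefit are just its specializations
(std::is_trivially_copyable stands in for is_binary_serializable here):

```cpp
#include <cassert>
#include <type_traits>

struct binary_oarchive_tag {}; // stands in for the binary/packed archives
struct mpi_oarchive_tag {};    // stands in for the MPI archives

// default: no archive/type pair gets the optimization
template<class Archive, class T>
struct use_array_optimization : std::false_type {};

// binary archives: anything "binary serializable" - trivially copyable
// is used as the stand-in criterion in this sketch
template<class T>
struct use_array_optimization<binary_oarchive_tag, T>
    : std::is_trivially_copyable<T> {};

// MPI archives: only types with a predefined MPI primitive; double is
// shown as a single illustrative row of the table
template<>
struct use_array_optimization<mpi_oarchive_tag, double> : std::true_type {};
```

Adding a new archive/type pair is then one specialization, with no need to
touch a dispatching base class.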

> That's a separate discussion which we seem to be repeating every few
> months now. It seems to me from today's discussion that there is a
> confusion now about the use of the array wrapper, which we use in
> just the way you originally proposed.

Hmmm - when I proposed it I had in mind that it would be used
differently - as I outlined above. This is not anyone's fault, it just
didn't occur to me that it would occur to anyone to use it differently.

And truth is, I've really just wanted to keep enhancements/extensions
to the library orthogonal to the original concepts. I thought the
array wrapper suggestion would accomplish that, so once it was
"accepted" I got back to my paying job.

Robert Ramey
