From: Matthias Troyer (troyer_at_[hidden])
Date: 2006-02-13 13:41:12
Dear Robert, dear all,
Let me try to stop the explosive growth of this thread by summarizing
the problems, and let me argue why I think Robert's strong typedef
proposal is the best solution.
The first problem with the current state is that it does not allow
for more than 4G elements in a collection to be serialized. Another
serious problem is that it does not allow an archive to treat a
collection size differently from the integral type used to represent
it. That feature is useful for portable binary archives, and is
absolutely essential for serialization using MPI archives. For MPI
archives we need to treat size types differently than integers (I
don't want to go into details here since that will only distract from
the discussion).
The need to distinguish size types from integers in some archives
rules out choosing any other integer type to represent the sizes.
Furthermore, there will never be a consensus as to which integral
type is best. If I want to store a 4G+ collection, I will vote for a
64 bit integer type, while if I want to serialize millions of short
containers, I would hate to waste the memory needed for 64 bit size
types.
Fortunately there is an elegant solution: use a "strong typedef" to
distinguish container sizes from an unsigned int (or std::size_t),
and let the archive decide how to represent it, just as Robert suggests:
On Feb 12, 2006, at 10:42 PM, Robert Ramey wrote:
>
> Even if a strong type is used, it is neither necessary nor
> desirable to add it to every archive.
>
> The procedures would be:
>
> create a header boost/collection_size.hpp which would contain
> something like
>
> namespace boost {
> BOOST_STRONG_TYPE(collection_size_t, std::size_t)
>
> // now we have a collection size type
> BOOST_CLASS_IMPLEMENTATION_LEVEL(collection_size_t, object)
> // no versioning for efficiency reasons
> }
This will work with all existing archives, and the serialize function:
>
> template<class Archive>
> void serialize(Archive &ar, collection_size_t &t, const unsigned int version){
>     ar & t; // if it's converted automatically to size_t
>     // or
>     ar & static_cast<std::size_t &>(t); // if not converted automatically
> }
is actually not needed in my experience. I have implemented this
solution and it passes all regression tests.
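For reference, here is roughly what such a header could look like using
macros Boost already ships: BOOST_STRONG_TYPEDEF from
boost/strong_typedef.hpp (note it takes the underlying type first) and the
serialization trait macros from level.hpp and tracking.hpp. The file name,
namespace and trait choices below are only my sketch of one possible
implementation, not necessarily what would go into the library:

  // sketch of a possible boost/collection_size.hpp
  #include <cstddef>
  #include <boost/strong_typedef.hpp>
  #include <boost/serialization/level.hpp>
  #include <boost/serialization/tracking.hpp>

  namespace boost {

  // a distinct type that still converts to and from std::size_t
  BOOST_STRONG_TYPEDEF(std::size_t, collection_size_t)

  } // namespace boost

  // serialize it without class information or versioning, and without
  // object tracking, so that it costs no more than a plain integer
  BOOST_CLASS_IMPLEMENTATION(boost::collection_size_t,
                             boost::serialization::object_serializable)
  BOOST_CLASS_TRACKING(boost::collection_size_t,
                       boost::serialization::track_never)

Whether these particular trait settings are the right ones is open to
discussion; the essential part is only that collection_size_t is a
distinct type that archives can dispatch on.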
With this solution, existing archives will continue to work, and any
programmers who want or need to serialize size types differently from
std::size_t can overload the serialization of collection_size_t in
their archive. Thus everybody's wishes can be granted, and I think we
should go for it, as we had already discussed last November.
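To make that last point concrete, here is the kind of special treatment an
archive could give to collection_size_t. The toy_oarchive class, its
operator& overloads and the variable-length encoding below are purely
illustrative, not the real archive interface; a real archive would do this
through its save_override/load_override hooks or an overloaded serialize,
but the mechanism is the same: ordinary types take the usual path, while
sizes get their own, more compact and portable representation.

  // Toy illustration only -- not the real Boost archive interface.
  #include <cstddef>
  #include <vector>
  #include <boost/cstdint.hpp>
  #include <boost/strong_typedef.hpp>

  BOOST_STRONG_TYPEDEF(std::size_t, collection_size_t)

  class toy_oarchive {
  public:
      // generic case: store the raw bytes of a (trivially copyable) value
      template<class T>
      toy_oarchive & operator&(const T & t) {
          const unsigned char * p = reinterpret_cast<const unsigned char *>(&t);
          buffer_.insert(buffer_.end(), p, p + sizeof(T));
          return *this;
      }
      // special case: a collection size is written as a variable-length
      // integer, so small containers cost one byte while 4G+ sizes still fit
      toy_oarchive & operator&(const collection_size_t & s) {
          boost::uint64_t v = s;   // the strong typedef converts to size_t
          do {
              unsigned char byte = static_cast<unsigned char>(v & 0x7f);
              v >>= 7;
              if (v != 0)
                  byte |= 0x80;    // high bit set: more bytes follow
              buffer_.push_back(byte);
          } while (v != 0);
          return *this;
      }
  private:
      std::vector<unsigned char> buffer_;
  };

Small containers then pay a single byte for their size, while collections
with more than 4G elements still round-trip correctly.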
Matthias