Subject: Re: [boost] [1.44] Beta progress?
From: Robert Ramey (ramey_at_[hidden])
Date: 2010-07-26 20:06:37

Matthias Troyer wrote:
> On 26 Jul 2010, at 13:22, David Abrahams wrote:
> In 1.44 Robert has changed the implementation of the "strong
> typedef", greatly reducing the concepts the type models. To cope with
> that Robert had to rewrite parts of Boost.Serialization, but the
> changes also led to the breaking of Boost.MPI and most likely also
> other archives based on Boost.Serialization. As far as I can see
> Robert removed the default constructor since he did not need it
> anymore after changing his archives - but he did not realize that
> there was other code that might get broken.

I've been thinking about this some more, and now I remember
a little more about the history and rationale for why things
are the way they are.

When I factored out the ?_primitive class, I had in mind
that this would be the "C primitive" layer, which would
include serialization for the C++ primitive types. The
?_archive layer would handle other types, either with
special handling or by passing them on to the "primitive"
class. I ended up with two primitive classes, text and
binary: text used streams - binary just saved/loaded
the raw bytes. At the time you posed the question
of why not use int_least16_t, etc. in the primitive class.
I wasn't sold on the idea, as it broke my original
concept, but it did make me wonder if maybe
it wasn't a better idea. In fact, I think the original
C made a big mistake in defining primitives (int,
etc.) whose sizes vary from machine to machine.
I think we would have been better off using int16, etc.
as the primitives and typedefing int for each machine.

So when I made the internal serializable types
for archives, I made sure that they would
all be convertible to int, unsigned int, etc. so
that text_primitive could handle them without
having to list them one by one and tie them to
some specific sort of integer. I got all this for
free using STRONG_TYPEDEF.

When a few of the types changed and I had
to make them more complicated, I had some
problems, and made these types "less" than
integers. This let me trap misuse of these
numbers, which don't have all the features that
integers do - e.g., it makes no sense to add
version numbers. Since I included conversions to
C++ primitives, all my archives worked. The
only changes I had to make were fixing inadvertent
usage of these types as integers, which before
had generated warnings and now were
generating errors.

I never thought of other archives. I guess
I just presumed that they would use either
text_primitive or binary_primitive, or that
they wouldn't be any more of a problem than my
archive classes were. I knew little of MPI,
but I did know it derived from binary_archive,
so it never occurred to me that there would
be a problem. I just assumed it worked just
as binary_archive does - just send the bytes.

Looking a little more, it seems that it sends
the data as MPI types, so you have to convert
each kind of integer. I'm still not getting why
implicit conversion operators don't do

class_id_type -> int16_t -> int -> send as mpi_integer

or something like that. In other words, if this
works for text archives, why doesn't it work for
mpi archives?

The "new" types do convert to integers or
typedefs of integers (note: NOT STRONG_TYPEDEF),
so I'm surprised this comes up - even in MPI.

Anyway - the underlying concept for STRONG_TYPEDEF
is "convertible to the underlying type," which I thought
I implemented in the "hand rolled" implementations.
So I'm not sure what should be added to the
hand-rolled implementations to address this.

Note that I'm not declining to do anything - I'm
just not sure what the best thing to do is.

Robert Ramey
