From: Kim Barrett (kab_at_[hidden])
Date: 2006-02-11 17:16:49


At 11:52 AM -0800 2/11/06, Robert Ramey wrote:
>David Abrahams wrote:
>> IMO the size_type change should be considered a bugfix, as it was not
>> possible to portably serialize collections
>
>of over 4 G objects

Strictly speaking, collections of any size. And changing the type of the
count from "unsigned int" to std::size_t would actually be worse in a
practical sense: the representation size (how many bits of count data will
appear in the archive) must be the same on all platforms, and std::size_t
varies across common platforms more than unsigned int does.

sizeof(unsigned int) is commonly 4 on 32-bit platforms
sizeof(unsigned int) is (I think) commonly 4 on 64-bit platforms
sizeof(std::size_t) is commonly 4 on 32-bit platforms
sizeof(std::size_t) is commonly 8 on 64-bit platforms

(I'm knowingly and intentionally ignoring DSPs and the like in the above.)
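
For illustration, here is a trivial stand-alone program (mine, nothing to do
with the library) that reports these sizes for whatever platform it is built
on:

#include <iostream>
#include <cstddef>

int main()
{
    // Typically prints 4/4 on common 32-bit targets and 4/8 on common
    // 64-bit targets (both LP64 and LLP64).
    std::cout << "sizeof(unsigned int) = " << sizeof(unsigned int) << '\n'
              << "sizeof(std::size_t)  = " << sizeof(std::size_t) << '\n';
    return 0;
}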

The count should be some type that has a fixed (up to byte-order issues)
representation, e.g. uint64_t.
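
For example (just a sketch, with helper names I've made up), the idea is to
widen the in-memory count to a fixed 64-bit type on save and narrow it with
a range check on load:

#include <boost/cstdint.hpp>
#include <cstddef>
#include <limits>
#include <stdexcept>

// Commit to a fixed 64-bit wire representation for the count,
// independent of the platform's size_t.
inline boost::uint64_t to_wire_count(std::size_t n)
{
    return static_cast<boost::uint64_t>(n);  // widening, never lossy
}

inline std::size_t from_wire_count(boost::uint64_t n)
{
    // A 32-bit platform cannot represent a count this large; a real
    // implementation would signal an archive error rather than throw
    // a bare std::overflow_error.
    if (n > static_cast<boost::uint64_t>(
                std::numeric_limits<std::size_t>::max()))
        throw std::overflow_error("serialized collection count too large");
    return static_cast<std::size_t>(n);
}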

Note that a portable archive doesn't help here, because the choice of a
portable (or not) representation type for the count is made within the
serialization routine. So unless the serialization routine can query the
archive about what type it should use for the count (ick!), the
serialization routine must use a type with a consistent cross-platform
representation. The archive can then deal with byte-order issues.
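
To make that concrete, a collection save routine under this scheme might
look roughly like the following (Archive stands for any archive type; the
function name is mine, not the library's actual internals):

#include <boost/cstdint.hpp>
#include <vector>

// The serialization routine fixes the count's representation; the
// archive then only has to deal with byte order.
template<class Archive, class T>
void save_collection(Archive & ar, const std::vector<T> & v)
{
    const boost::uint64_t count = v.size();  // same width on every platform
    ar << count;
    for (typename std::vector<T>::const_iterator it = v.begin();
         it != v.end(); ++it)
    {
        ar << *it;
    }
}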

Without this, cross-platform(*) archive portability of collections is
pretty hopeless, even if the library user is really careful to pedantically
use portable types everywhere (such as std::vector<int16_t> or the like).
(*) At least for current commodity platforms; some DSPs and the like add
enough additional restrictions that a user of the serialization library
might quite reasonably decide they are outside the scope of portability
that said user cares about.

