From: Matthias Troyer (troyer_at_[hidden])
Date: 2006-02-11 18:21:55
On Feb 11, 2006, at 2:16 PM, Kim Barrett wrote:
> At 11:52 AM -0800 2/11/06, Robert Ramey wrote:
>> David Abrahams wrote:
>>> IMO the size_type change should be considered a bugfix, as it was not
>>> possible to portably serialize collections
>>
>> of over 4 G objects
>
> Strictly speaking, of any size. And changing the type of the count from
> "unsigned int" to std::size_t would actually be worse, in a practical
> sense. The representation size (how many bits of data will appear in
> the archive) must be the same on all platforms.
>
> sizeof(unsigned int) is commonly 4 on 32-bit platforms
> sizeof(unsigned int) is (I think) commonly 4 on 64-bit platforms
No, it can be 4 or 8, depending on the platform.
> sizeof(std::size_t) is commonly 4 on 32-bit platforms
> sizeof(std::size_t) is commonly 8 on 64-bit platforms
>
> (I'm knowingly and intentionally ignoring DSPs and the like in the
> above.)
>
> The count should be some type that has a fixed (up to byte-order
> issues) representation, i.e. something like uint64_t.
Or one can leave it up to the archive to decide how to store the
count. That's why I proposed introducing a collection_size_type
object that stores the count. Any archive can then decide for itself
how to serialize it, whether as unsigned int, uint64_t, or whatever
you like.
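A hypothetical sketch of that proposal (names and signatures are
illustrative, not the actual Boost.Serialization implementation):
wrap the count in a distinct type, then let each archive overload
serialization for that one type:

    #include <cstddef>
    #include <cstdint>

    // Sketch: a distinct type for collection counts.
    class collection_size_type {
    public:
        explicit collection_size_type(std::size_t count = 0)
            : count_(count) {}
        operator std::size_t() const { return count_; }
    private:
        std::size_t count_;
    };

    // An archive wanting a portable representation could store the
    // count as a fixed-width uint64_t; another archive might choose
    // unsigned int or a variable-length encoding instead.
    template <class Archive>
    void save(Archive& ar, const collection_size_type& s,
              const unsigned int /*version*/) {
        const std::uint64_t portable = static_cast<std::size_t>(s);
        ar << portable;
    }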
Matthias