From: David Abrahams (dave_at_[hidden])
Date: 2006-02-11 23:33:57
"Peter Dimov" <pdimov_at_[hidden]> writes:
> David Abrahams wrote:
>> "Peter Dimov" <pdimov_at_[hidden]> writes:
>>
>>> The status quo is that the size of the container is consistently
>>> written or read as an unsigned int, is it not?
>>
>> I think so, though I could be mistaken.
>>
>>> Consider the simplistic example:
>>>
>>> void f( unsigned int ); // #1
>>> void f( unsigned long ); // #2
>>>
>>> void g( std::vector<int> & v )
>>> {
>>>     unsigned int n1 = v.size();
>>>     f( n1 ); // #1
>>>
>>>     size_t n2 = v.size();
>>>     f( n2 ); // ???
>>> }
>>
>> Sure. So how does this relate to serialization?
>
> Consider an archive where unsigned int and unsigned long have different
> internal representations. When a size_t value is written on platform A,
> where size_t == unsigned int, then platform B, where size_t == unsigned
> long, won't be able to read the file.
Sure, but I don't see what that has to do with the ambiguity in
overload resolution you're pointing at above.
I don't think anyone is suggesting that we use size_t; int has the
same problem, after all. I thought Matthias was using a
variable-length representation, but on inspection it looks like he's
just using a "strong typedef" around std::size_t, which should work
adequately for the purposes we're discussing.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk