Boost Users :
From: Zeljko Vrba (zvrba_at_[hidden])
Date: 2008-08-18 10:41:41
On Mon, Aug 18, 2008 at 12:06:08PM +0200, Lang Stefan wrote:
>
> A) you intend to use your ID to later reference some address or offset
> (which is the same) in memory. If that is your intention, then you
> should stick to the size type (i.e. std::size_t). Problem solved.
>
Yes, it is used as an index into a vector. And that's what I did
(typedef vector<..>::size_type pin_id).
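Roughly like this -- Pin and the names are just placeholders for the example,
not the real code:

#include <string>
#include <vector>

// Placeholder record; the real Pin type is of course more involved.
struct Pin {
    std::string name;
};

// pin_id is whatever the container itself uses for indexing, so it is
// guaranteed to be able to address every element the vector can hold.
typedef std::vector<Pin>::size_type pin_id;

int main() {
    std::vector<Pin> pins;
    pins.push_back(Pin());

    pin_id id = pins.size() - 1;   // the index doubles as the pin's ID
    pins[id].name = "VCC";         // and is later used to look the pin up
    return 0;
}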
>
> cases, and as I understand it he *was* referring to size types. The
> problem with size types is that they do need to be able to point to
> every location within the addressable memory space. Unfortunately this
>
Hm, do they? Correct me if I'm wrong, but I don't think that C++ mandates a flat
address space -- it just requires that each individual object has a contiguous
memory representation. And there can be a discrepancy between the two, e.g.
80286 protected mode: the largest segment size is 64 KB, yet the largest amount
of addressable memory is 16 MB (or even in the range of TB if one allocates all
LDTs and plays with swapping). And, oh yes, pointers were 48-bit :-)
So size_type should be able to represent the size of the largest *single*
representable object. Why use it for, e.g., the number of elements in a vector?
>
> means they will need every single bit of the biggest data entity a CPU
> can handle at any one time. If size types were signed, they would need
> one extra bit for the sign, effectively doubling the amount of memory
> such a type would take up in memory. Unfortunately 'difference types'
> technically should be both signed and able to address every legal
> address in memory - which means one bit more than the current size
> types! However, see below....
>
The "problem" could be solved by using largest _signed_ type both for size_type
and difference_type. and 286 is not the only architecture where flat address
space does not exist -- even today, a similar situation exists on mainframes
(IIRC, there's no straightforward way to convert between increasing integers
and increasing memory addresses).
If you replace "every legal address in memory" with "every legal index", then I
agree with your analysis. But long almost[*] serves this purpose.
So even trying to define integer types that are able to span the whole memory
space is doomed to failure. From a practical perspective:
- on a 32-bit machine, it's unrealistic to expect to be able to hold a vector
  of more than 2^31 elements (unless all you have is a single std::vector<char>)
- on a 64-bit machine, 2^63 is a huge number, so signedness does not matter
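To illustrate what I'm advocating, here is a rough sketch (signed_size is my
own helper name, nothing standard): a signed element count sidesteps the usual
unsigned-wraparound surprises without losing anything in practice.

#include <cstddef>
#include <iostream>
#include <vector>

// Return the element count as a signed type. ptrdiff_t is typically the
// difference_type of std::vector, and a vector with more than PTRDIFF_MAX
// elements is unrealistic anyway (see the points above).
template <typename Container>
std::ptrdiff_t signed_size(const Container& c) {
    return static_cast<std::ptrdiff_t>(c.size());
}

int main() {
    std::vector<int> v(10, 1);

    // With an unsigned size_type, "i <= v.size() - 2" wraps around for a
    // vector of 0 or 1 elements and the condition is silently wrong. With a
    // signed count the comparison is against a negative number and the loop
    // body simply never runs.
    for (std::ptrdiff_t i = 0; i <= signed_size(v) - 2; ++i) {
        v[i] += v[i + 1];
    }

    std::cout << v[0] << '\n';   // prints 2
    return 0;
}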
[*] A similar border case already exists in the C standard library: printf()
returns an int, but it's legitimate to write a string with more than 2^15
characters even on platforms where sizeof(int) == 2. What should the return
value of printf() be then?
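In code, the border case looks roughly like this (harmless where int is 32
bits, but on a 16-bit-int platform the correct count simply isn't
representable):

#include <cstdio>
#include <string>

int main() {
    // 40000 characters: more than INT_MAX on a platform where sizeof(int) == 2.
    std::string s(40000, 'x');

    // On the usual 32-bit-int platforms this returns 40001 (the string plus
    // the newline). Where INT_MAX is 32767 the count cannot be represented,
    // so the return value tells you nothing useful.
    int written = std::printf("%s\n", s.c_str());
    std::fprintf(stderr, "printf returned %d\n", written);
    return 0;
}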
So I'd advocate a signed representation for both size_type and difference_type.
>
> So these are my 5 cents, sorry for the lengthy post.
>
Oh, thanks for the feedback.