
From: dizzy (dizzy_at_[hidden])
Date: 2008-08-18 05:34:42


On Sunday 17 August 2008 22:16:29 Andrea Denzler wrote:
> I may add that C/C++ have different integer sizes on different platforms
> adding even more confusion.

How do you suggest C++ should offer integer types that are native to the
platform, then?

> I understand that a basic int has the size of
> the processor register, but when I handle and store data values I want to
> know its size.

So you have sizeof().

> When I want a 16 bit integer then I want to use a 16 bit
> integer because I don't waste space with a 32 bit or have data structures
> of different sizes.

I don't think bits matter much for storage, because there is no system
interface I am aware of that works with bits: POSIX file I/O, socket I/O,
etc. all work with bytes, the native platform byte. And sizeof() tells you
the size in bytes, so you have all you need to know how much space it takes
to store that type.

> Even worse the size of the wchar_t for i18n.

sizeof(wchar_t) works as well.
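
For instance (just a quick sketch; the exact numbers depend on the
platform and compiler):

#include <iostream>

int main()
{
    // sizeof() reports storage size in bytes (units of char), which
    // is what file and socket I/O care about anyway
    std::cout << "int:     " << sizeof(int)     << '\n';
    std::cout << "long:    " << sizeof(long)    << '\n';
    std::cout << "wchar_t: " << sizeof(wchar_t) << '\n';
}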

> A signed/unsigned compare should always generate a warning, but I just
> found out it doesn't if you use constant values like -1 < 2U. Funny.
> signed int a=-1;
> unsigned int b=2U;
> bool result = a < b; // as usual I get the signed/unsigned warning
>
> Technically if you compare/add/sub two signed/unsigned values of the same
> byte size then you can get an overflow, because signed values don't exist
> in the unsigned realm and there are twice as many unsigned values.

Not necessarily true (that an unsigned type uses the corresponding signed
type's sign bit as an extra value bit). There can be unsigned types whose
range of values is the same as the signed type's (except, of course, the
negative values).
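
To make the constant case concrete (a small sketch; whether a warning
appears depends on the compiler and its flags):

#include <iostream>

int main()
{
    // The usual arithmetic conversions turn -1 into unsigned int,
    // i.e. into UINT_MAX, so the comparison is false
    std::cout << std::boolalpha << (-1 < 2U) << '\n';  // prints false

    signed int a = -1;
    unsigned int b = 2U;
    std::cout << (a < b) << '\n';  // same conversion, also false
}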

> That's why a class (or new standard integer types) handling those
> confusions is really welcome.

For what, for the issues Zeljko described or for the fixed integer sizes you
mentioned? For the former it's technically impossible to have native, fast
integer types that are also checked without runtime cost. For the latter, of
course you can: you can even make classes for fixed-size integer types (I
have something like this in my serialization framework, and the code takes
fast code paths if it detects at compile time that the current platform
matches the fixed integer sizes the user asked for).
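
Not my actual framework code, but the compile-time selection looks
roughly like this (a hypothetical sketch, names made up, endianness
ignored for brevity):

#include <cstring>

// Exact == true means the native type already has the size the user
// asked for, so a raw copy suffices.
template<bool Exact>
struct store16_impl;

template<>
struct store16_impl<true> {            // fast path: native size matches
    static void store(unsigned char* out, unsigned short v)
    { std::memcpy(out, &v, 2); }
};

template<>
struct store16_impl<false> {           // portable path: pack byte by byte
    static void store(unsigned char* out, unsigned short v)
    {
        out[0] = static_cast<unsigned char>(v & 0xFF);
        out[1] = static_cast<unsigned char>((v >> 8) & 0xFF);
    }
};

inline void store16(unsigned char* out, unsigned short v)
{
    // the branch is resolved at compile time from sizeof()
    store16_impl<sizeof(unsigned short) == 2>::store(out, v);
}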

> Until now I rely on cross-platform integer
> sizes (uint16, uint32, uint64, etc) and compiler warnings. I think compiler
> warnings are important because you always know there is something to care
> about. An overflow assert at runtime can happen or not happen.

Of course, there is no doubt compile-time checks are better than runtime
ones (that's the main reason I like C++ over C: it gives me more
compile-time semantics to express invariants, using familiar syntax).

> So ideally we should handle at compile time explicitly all incompatible
> integral types (signed/unsigned of same size) and have at runtime (at least
> in debug mode) asserts for any kind of integer overflow (through casting
> from signed to/from unsigned and basic operations like add, sub, mul, div,
> etc).

I think his library idea does exactly this. Still, that doesn't mean the C++
native integrals are flawed :)
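
A debug-mode overflow assert can be as small as this (a minimal sketch
for unsigned int only; a real library would template it over all the
integral types and operations):

#include <cassert>
#include <climits>

// For unsigned types overflow wraps, so a + b overflowed exactly when
// the mathematical sum exceeds UINT_MAX, i.e. when a > UINT_MAX - b.
inline unsigned int checked_add(unsigned int a, unsigned int b)
{
    assert(a <= UINT_MAX - b && "unsigned overflow in checked_add");
    return a + b;
}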

-- 
Dizzy
			"Linux is obsolete" -- AST
