Boost Users:
From: Zeljko Vrba (zvrba_at_[hidden])
Date: 2008-08-16 13:32:59
Integer types in C and C++ are a mess. For example, I have made a library
where a task is identified by a unique unsigned integer. The extra information
about the tasks is stored in a std::vector. When a new task is created, I use
the size() method to get the next id, assign it to the task, and then
push_back the (pointer to) task structure into the vector. Now, the task
structure also has an "unsigned int id" field. In 64-bit mode,
sizeof(unsigned) == 4 while sizeof(std::vector::size_type) == 8, so
I get warnings about type truncation, and obviously, I don't like them. But
I like explicit casts and turning off warnings even less. No, I don't want to
tie together size_type and task id type (unsigned int). One reason is
"aesthetic", another reason is that I don't want the task id type to be larger
than necessary (heck, even a 16-bit type would have been enough), because the
task IDs will be copied verbatim into another std::vector<unsigned> for further
processing (edge lists of a graph). Doubling the size of the integer type would
have bad effects on CPU caches, and I don't want to do it.
What to do? Encapsulate it in a "get_next_id()" function? Have a custom size()
function/macro that just casts the result of vector::size() and returns it?
==
Another example: an external library defines its interfaces with signed integer
types, while I work with unsigned types (why? to avoid even more warnings when
comparing task IDs with vector::size() result, as in assert(task->id <
tasks.size()), which are abundant in my code). Again, some warnings are
unavoidable.
What to do to have "clean" code?
==
Does anyone know about an integer class that lets the user define the number of
bits used for storage, and the lower and upper allowed bounds of the
range? Like: template<int BITS, long low, long high> class Integer;
BITS would be allowed to take only the width of one of the existing
integer types (e.g. 8 for char, 16 for short, etc.), and the class would be
constrained to hold values in range [low, high] (inclusive).
All operations (+, -, *, /, <<, >>, <, >) would be compile-time checked to make sense. For
example, it would be valid to add Integer<32, 0, 8> with Integer<8, 0, 8> and
store into Integer<8, 0, 16> (or Integer<8, 0, 17>) result, but NOT into
Integer<8, 0, 8> result. The operations would also be checked at run-time to
not overflow.
Address-of operator would convert to signed or unsigned underlying integer
type, depending on the lower bound.
Mixing operations with native integer types would be allowed, provided ranges
are respected[*].
And, of course, the library should have a "production mode" where the class
would turn off all checks.
Does anybody know about such a library? Is a unified numeric tower too much to
ask for, at least in the form of a library? How do you deal with these issues?
[*] If there's no such library in existence, I might be willing to think about a
set of "sane" axioms and checks for automatic conversions. And I have "Hacker's
Delight" at home, so I can peek into it and get formulas for deriving bounds of
arithmetic operations.
And yes, is this even easily implementable within the C++ template framework? One
has to deal with integer overflow. What happens in
template<long A, long B> struct blah { static const long T = A + B; };
when A+B overflows? Compiler error? Undefined result? Did C++0x do anything
to clean up the integer types mess?
This is a proposal for extension to the C language:
http://www.cert.org/archive/pdf/07tn027.pdf
Anything similar available for C++?
</rant>
Or should I just listen to the devil on my shoulder and turn off the
appropriate warnings?
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net