Subject: Re: [boost] safe integer library -- the scope
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-12-11 19:18:18
>> what other parts?
> 1. safe_unsigned_range -- while I understand the idea behind it, I don't
> think I would be inclined to use it.
> And also, I believe it addresses a
> different problem, and could be handled by another library, like:
I don't remember seeing this before. Clearly there is some overlap.
> 2. safe<unsigned int> -- this looks suspicious. Unlike signed int, unsigned
> int does not overflow.
It does overflow. The only difference is that it doesn't result in
undefined behavior. The behavior is still arithmetically incorrect though.
> It is meant to represent the modulo arithmetic with
> well defined result for any input values.
Hmmm - I didn't believe that. But I checked and found that
std::numeric_limits<unsigned int>::is_modulo is set to true.
I'm going to argue that many programmers use unsigned as a number which
can only be positive and do not explicitly consider the consequences of
overflow. In order to make a drop-in replacement which is safe, we need this.
BTW - the utility of using a built-in unsigned as a modulo integer with a
specific number of bits is pretty small. If I want a variable to hold the
minutes in an hour or the eggs in a dozen, it doesn't help me. I suspect
that unsigned is used as a modulo integer in only a few very odd
cases - and of course one wouldn't apply "safe"
in those cases.
> If I do not want modulo
> arithmetic, I would rather go with safe<int> than safe<unsigned>.
You might not have a choice. If you're re-working some existing program
which uses the unsigned range, you'll need this.
In the context of this library the safe_range ... are important for a
very special reason. The bounds are carried around with type of
expression results. So if I write
safe<int> a, x, b, y;
y = a * x + b;
runtime checking will generally have to be performed. But if I happen
to know that my variables are limited to a certain range:
safe_integer_range<-100, 100> a, x, b, y;
y = a * x + b;
Then it can be known at compile time that y can never overflow so no
runtime checking is required. Here we've achieved the holy grail:
a) guaranteed correct arithmetic result
b) no runtime overhead.
c) no exception code emitted.
d) no special code - we just write algebraic expressions
This is the true motivation for safe_..._range.
> 3. The docs mention a possible extension for safe<float>. I do not know
> what that would mean. In the case of safe<int> I know what to expect: you
> trap when I try to build value greater than INT_MAX or smaller than
> INT_MIN: nothing else could go wrong. But in case of float: do you trap
> when I want to store:
> safe<float> ans = safe<float>(1.0) / safe<float>(3.0);
> ?? 1/3 does not have an accurate representation in type float. But if you
> trap that, you will be throwing exceptions all the time. So is it only
> about overflow?
The whole question of what a safe<float> means is being explored.
Clearly there is a case where one would want to handle divide by zero
without crashing the program. The basic situation is where an operation
results in a NaN. Most current implementations don't trap and just
soldier on propagating the NaN. I've never felt comfortable with this -
it's not arithmetic any more. I don't want to say much about this
because it's a deep subject and I know enough about it to know that I
don't want to say anything about it.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk