Subject: Re: [boost] [safe_numerics] questioning the basic idea
From: John Maddock (boost.regex_at_[hidden])
Date: 2014-11-18 12:52:59
> One good usage example I can think of is this. After a while of trying to
> chase a bug I came up with a hypothesis that my int could be overflowing.
> I temporarily replace it with safe<int> and put a break point in function
> overflow() to trap it and support my hypothesis. I would probably use a
> configurable typedef then:
> #ifndef NDEBUG
> typedef safe<int> int_t;
> #else
> typedef int int_t;
> #endif
> But is this the intent?
> But perhaps it is just my narrow perspective. Can you give me a real-life
> example where substituting safe<int> for int has merit and is not
> controversial? I do not mean the code, just a story.
This is all a very good question, which I don't have a good answer to,
but I'll add some comments anyway ;)
One thing I've been asked from time to time is to extend support for
boost::math::factorial or boost::math::binomial_coefficient to integer
types - and it always gets the same response: "are you serious?".
With Boost.Multiprecision one of the first support requests was for
integer exponentiation and I reluctantly added it (as well as its
modular version) because I know there are situations where it's really
needed, even though it is clearly dangerous as hell.
Now on to safe numerics: perhaps many folks don't realise this, but
boost::multiprecision::cpp_int has always supported a "safe mode" where
all operations are checked for overflow etc. What's more, you can use
this to create checked 32-bit ints right now if you really want to
(it's a sledgehammer #include solution to the problem, though). And
yes, I have found bugs in number-theoretic type coding problems by using
those types (mostly this is the algorithms within the multiprecision lib
including the modular-exponentiation mentioned above).
However there is going to be a noticeable performance hit if you really
do use this with 32-bit integers. But not for extended precision
integers - in fact I doubt very much you will be able to detect whether
checking is turned on or not for those types - because the check is a
fundamental part of the addition/subtraction/multiplication code anyway
- you simply check at the end of the operation whether there is an
unused carry. It's utterly trivial compared to everything else going on.
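To see why the check is nearly free, here is a minimal sketch (my own illustration, not the cpp_int code) of fixed-precision addition over 32-bit limbs - the overflow test is just "is there a carry left over at the end?":

```cpp
#include <cstdint>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical fixed-precision bignum addition over 32-bit limbs.
// Assumes both operands have the same number of limbs, least
// significant limb first.
std::vector<std::uint32_t> checked_add(const std::vector<std::uint32_t>& a,
                                       const std::vector<std::uint32_t>& b) {
    std::vector<std::uint32_t> r(a.size());
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        std::uint64_t s = std::uint64_t(a[i]) + b[i] + carry;
        r[i]  = std::uint32_t(s);  // low 32 bits become the result limb
        carry = s >> 32;           // high bit propagates to the next limb
    }
    if (carry)  // the trivial end-of-operation check
        throw std::overflow_error("fixed-precision addition overflowed");
    return r;
}
```

One branch on a value already in a register, after a loop that did all the real work - which is why you can't measure the difference for these types.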
So... I think yes, if you are writing a number theoretic algorithm then
routine testing with a checked integer type is downright essential.
However, for multiprecision types it has to be implemented as part of
the number type's own arithmetic algorithms, not as an external add-on
which would be so utterly expensive as to be useless (all those
multi-precision divides would kill you). Which is to say the proposed
library would be quite useless for multiprecision types.
None of which really answers your question. I guess if your pacemaker
or your aeroplane uses integer arithmetic for critical control systems,
then I rather hope that some form of defensive programming is in use.
Whether this is the correct method, or whether some form of hardware
support would be more effective is another issue.
And my "favourite" integer bug: why, subtracting (or, heaven forbid,
negating) unsigned integers, of course!