
Subject: Re: [boost] a safe integer library
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-12-10 13:47:16

On 12/10/15 9:30 AM, Paul A. Bristow wrote:
> Detecting and handling overflow (and underflow) is certainly a big missing item in
> C/C++.
> But I'm not sure that your proposal is radical enough.

This is the first time I remember being accused of this.

> I'm sure that your solution will work (though I haven't been able to study it in detail yet),

I much appreciate your vote of confidence.

> but if you allow definition of a safe minimum and maximum then I fear that you are paying the big
> cost in speed

Nope, you've got it backwards. Consider the following example. You
have a variable stored in an int8_t. You can absolutely know that if
you square this variable, it can never produce an arithmetically
incorrect result - even in C++. So there is zero overhead in proving
that your program can never fail. One of the main features of the
library's implementation is that it carries range information around
with the type. When a type is used in an expression, compile-time range
arithmetic is used to determine whether or not it's necessary to do any
runtime checking. So such runtime checking is performed only when it is
actually necessary.

> that comes from not using the built-in carry bit provided by all the
> processors that we care about.

In cases where runtime checking is necessary, the library implements
it in a portable way. But there's no reason one couldn't conditionally
specialize the functions to implement this functionality in a manner
which takes advantage of any special hardware facilities.

> So I fear it is premature until we have solved the problem of detecting overflow and underflow
> efficiently.

Just the opposite. Now is the time to use C++ to make the problem
smaller and create a "drop in" interface so that any future "solutions"
to the problem can be available without having to recode our programs.
That is, we want to augment/enhance C++ through libraries such as this
to decouple our programs from specific machine features while
maintaining the ability to gain maximal runtime efficiency.

So the library has several aspects:

a) creating types which carry their ranges around with them through
expressions.

b) using operator overloading to make usage of the library as easy as
replacing your integer types with corresponding safe integer types.

c) implementing runtime code to handle error-generating expressions in
an efficient manner.

For c) I've provided a portable solution which is pretty efficient. But
I'm guessing you're correct that one could do better by giving up on
portability.

> Lawrence Crowl, Overflow-Detecting and Double-Wide Arithmetic Operations

I looked at these. The above discussion should make it clear I'm
addressing something else here. The problem for me is not creating
efficient runtime checking. The problem is to create an interface which
can incorporate such code while maintaining the ability to transparently
code numeric algorithms. It's about decoupling the arithmetic from the
hardware in a way that preserves maximum efficiency.

> This seems something that will allow your solution to be efficient, at least for built-in integral
> types?

So if you're interested in contributing:

a) Consider re-implementing your own version of "checked" operations
which exploit features of this or that hardware.

b) Create a couple of performance tests so we can measure what the
actual performance hit is. The above should make it clear that this will
require a non-trivial amount of effort.

> (And not everyone wants to throw exceptions - even if perhaps many think that they are mad?)

Note that the library specifies error behavior via a policy.

Take care to make a distinction between the proposal to the C++
standards committee (which is less ambitious, aimed at wimpier
programmers) and the proposal for Boost (which is for gluttons for
punishment such as ourselves).

Robert Ramey

Boost list run by bdawes at, gregod at, cpdaniel at, john at