Subject: Re: [boost] [integer] Type-safe and bounded integers with compile-time checking
From: Leif Linderstam (leif.ls_at_[hidden])
Date: 2011-09-05 15:27:26
Mathias Gaunard wrote 2011-09-05 16:57:
> Ok, but that would very quickly lead to very big ranges.
> What happens when that range goes beyond the range supported by the
> underlying int type?
The range of the result is computed, and that range can be used to select a
proper underlying type for the result. So although the two ranges A: [0,
200] and B: [1, 100] both fit into an unsigned char, the result of A+B has
the range [1, 300] and must therefore be put into a short or an int. In the
general case it is even a bit more complicated than that: to prevent
overflow during the computation itself, the library must also choose a
suitable internal type for the intermediate result.
Agreed, with multiplication we soon need more than the widest fundamental
type, but in most such cases you would run into overflow with ordinary
integers as well; the difference is that you might not be aware of the risk.
My first go at this was to issue a compile-time error if the result range
cannot be represented by any fundamental int type. My latest thoughts,
though, are that the library should instead allow for integers of arbitrary
size, which then of course touches on the current work of Christopher
Kormanyos. In the original post I said that the range type should accept
types as bounds instead of integer literals; this is the reason. With types
as bounds we can specify really big ranges.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk