
Boost : 
From: Gabriel Dos Reis (gdr_at_[hidden])
Date: 2002-09-05 04:38:54
Sylvain Pion <pion_at_[hidden]> writes:
 On Thu, Sep 05, 2002 at 01:55:39AM +0200, Gabriel Dos Reis wrote:
 > Sylvain Pion <pion_at_[hidden]> writes:
 >
 >  Hence all the compilers you have tested might define this to be the
 >  rounding to nearest, but we can't theoretically rely on it :(
 >
 > Then, probably you might want to consult numeric_limits<>::round_style.

 Thanks for pointing this out, I had missed it.
 But the fact remains that I have to do something in the case it's
 round_indeterminate, so I fear it doesn't bring me much...
That case represents 1/3 of the possibilities.
The covered cases represent 2/3, which is already much more than 1/3.
 >  Not a big requirement, you'll have to admit.
 >
 > Really?

 Well, as a starting point, it's completely reasonable. I've never touched a
 machine where int/float/double didn't match these requirements.
Then you've been lucky ;)
 Did you ?
Yes.
As a starting point, consider the VAX machines. Then move on to IBM
systems. Then consider Tru64 UNIX.
 But anyway, I completely agree that it's not complete, and I would like to
 find a scheme to improve this. You may have some helpful comments on this.
Well, my feeling is that requiring IEEE754 is too strong, for no
practical benefit for the thing you want to accomplish. You might
want to consider that C++ tends to support LIA-1 (and not IEEE754,
though note that the latter is not excluded). It is my belief that any
useful numerical component for C++ should try to work within LIA-1
assumptions and not require more unless there are good reasons to do
so. In this specific case, I don't see any reason to require more
than LIA-1.
 To recall, what I want is the sharpest interval bounds (for float, double,
 long double) on PI (other constants might follow, but this one is needed for
 the trigonometric functions).

 As mentioned above, the standard doesn't impose any particular rounding style,
 so you can't assume a particular one,
Certainly, however the standard does list the possibilities. And
those are given by integral constant expressions. So the picture is
not that dark.
 so a decimal FP constant might be rounded
 to either of the two closest representable binary FP values enclosing it.
 But when it's exactly representable, the standard guarantees that you get the
 right value. So I plan to exploit this (well, that's what my original code did
 as well somehow), and the fact that any binary FP value fits exactly in a
 decimal FP value.
Well, a minor nit: there is no requirement that the FP system has
radix 2; that is an IEEE754 assumption that should be gotten rid
of. The radix may be 16, as with IBM formats. It may be 10.
Even more, it makes perfect sense to use greater precision to perform
certain computations (with long double), and there you have no IEEE754
rules to back up your assumptions: no assurance that you have a
hidden bit (x86 with the Intel extended format has no hidden bit; SPARC
has a hidden bit).
[...]
 Now the question is : what do we do when numeric_limits<float>::digits is
 different ?
The question may be reformulated as follows: in the case digits ==
24, how did you get the above value?
 Is there a cleaner way to do that ?
Figure out how a good approximate value can be computed as a
function of digits, and other parameters. IMO, that is much more
scalable and robust.
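[Editorial sketch of what "a function of digits" might look like; `pi_bounds` is hypothetical, and it assumes radix 2 and that long double carries strictly more significand digits than T (true for float and double on common platforms, not for long double itself).]

```cpp
#include <cmath>
#include <limits>

// Derive an enclosure of pi for type T from numeric_limits<T>::digits.
// pi lies in [2,4), so pi * 2^(d-2) lies in [2^(d-1), 2^d): truncating
// it to an integer keeps exactly d significant bits.
template <class T>
void pi_bounds(T& low, T& high) {
    const long double pi = 3.14159265358979323846264338327950288L;
    const int d = std::numeric_limits<T>::digits;        // significand bits
    const long double m = std::floor(std::ldexp(pi, d - 2));
    low  = static_cast<T>(std::ldexp(m,        -(d - 2)));  // m     / 2^(d-2)
    high = static_cast<T>(std::ldexp(m + 1.0L, -(d - 2)));  // (m+1) / 2^(d-2)
}
```

For digits == 24 this reproduces the pair 13176794/2^22 and 13176795/2^22; every step (scaling by a power of two, truncation, conversion of an exactly representable value) is exact, so no rounding-style assumption is needed beyond radix 2.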
 Gaby
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk