
From: Matthias Troyer (troyer_at_[hidden])
Date: 2005-09-25 13:53:30


On 9/25/05, Andy Little <andy_at_[hidden]> wrote:
>
> There is never-ending discussion on comp.lang.c++.mod regarding
> differences between runtime floats on various platforms. IMO this is
> an ideal opportunity to create a platform-independent floating point
> type, IOW one with an exact length of mantissa and exponent specified
> and with a consistent policy on rounding etc. I think this is how it
> is done in Java, though the only link I can find is:
> http://www.concentric.net/~Ttwang/tech/javafloat.htm
> Other question is ... What sort of rounding do you use?
>
>
> For multiplication and division operations:
> If the 53rd bit is 1, add 1 to the 52nd bit.
> For addition/subtraction:
> Ignore extra bits, cut off after the 52nd bit.
>
> This mimics the behaviour of the VC compilers.
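
(For concreteness, the quoted policy could be sketched roughly as
follows. This is only an illustration, not actual VC behaviour; it
assumes the full-width intermediate significand is already available
in a 64-bit integer, normalised so its leading 1 sits in bit 63, and
the helper name is purely made up:

#include <cstdint>

// Keep the top 53 bits (implicit bit + 52 fraction bits), drop the rest.
std::uint64_t narrow_to_double_significand(std::uint64_t sig64,
                                           bool round_nearest)
{
    const int discarded = 64 - 53;             // 11 low bits are dropped
    std::uint64_t kept = sig64 >> discarded;   // top 53 bits
    if (round_nearest) {
        // multiplication/division path: if the first discarded bit
        // (the quote's "53rd bit") is 1, add 1 to the last kept bit
        std::uint64_t round_bit = (sig64 >> (discarded - 1)) & 1u;
        kept += round_bit;  // a carry out would require renormalisation,
                            // which this sketch omits
    }
    // addition/subtraction path: round_nearest == false, plain truncation
    return kept;
}

)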

Note that not even this will help you, if you allow for
optimizations. Just consider the evaluation of the following two
expressions:

x=b*c;
y=a*b*c;

Using common subexpression elimination, the compiler might rewrite
this as

x=b*c;
y=a*x;

The second line is now equivalent to a*(b*c), while before it was
(a*b)*c, and hence the numbers could differ in the last bit(s),
because of different rounding. Note that in the past this has
actually prevented the Java compiler from performing such
optimizations, because such roundoff discrepancies were not allowed!
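
To see how easily this happens, a small standalone test along these
lines (random sampling is just one convenient way to hit such a
triple) will usually find a discrepancy within a handful of tries:

#include <cstdio>
#include <cstdlib>

// Multiplication of doubles is not associative, so (a*b)*c and
// a*(b*c) can differ in the last bit.  Search random triples and
// report the first mismatch found.
int main()
{
    std::srand(42);
    for (int i = 0; i < 1000000; ++i) {
        double a = std::rand() / double(RAND_MAX);
        double b = std::rand() / double(RAND_MAX);
        double c = std::rand() / double(RAND_MAX);
        double left  = (a * b) * c;   // evaluation order before CSE
        double right = a * (b * c);   // evaluation order after CSE (y = a*x)
        if (left != right) {
            std::printf("a = %.17g, b = %.17g, c = %.17g\n", a, b, c);
            std::printf("(a*b)*c = %.17g\n", left);
            std::printf("a*(b*c) = %.17g\n", right);
            return 0;
        }
    }
    std::printf("no discrepancy found in this sample\n");
    return 0;
}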

I am aware that at compile time you might simply ignore such
optimizations, but the result could still differ from the runtime
result because of possible runtime optimizations. Since perfect
agreement with runtime values will therefore never be possible unless
optimization is turned off completely, one should either not put too
much weight on equivalence with runtime computation, or forget about
compile-time computations altogether.

Matthias

