
From: John Maddock (john_at_[hidden])
Date: 2005-12-06 13:39:48


> Then for epsilon 2^-105 and 2^-106 are below
> outAsHex(one + numeric_limits<quad_float>::epsilon());
> // 1.00000000000000000000000000000002  1 == 0x3ff0000000000000  2.4651903288156619e-032 == 0x3960000000000000  (2^-105)
> // 1.00000000000000000000000000000001  1 == 0x3ff0000000000000  1.2325951644078309e-032 == 0x3950000000000000  (2^-106)
>
> I note that the 2^-106 value is the one I would naively expect - a
> single least significant bit different.

Strange!
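
For reference, here's roughly what I assume your outAsHex does, applied
to plain doubles (print_as_hex below is my own guess at it, not your
code); the bit patterns line up with the values you quoted:

#include <cmath>
#include <cstdio>
#include <cstring>
#include <boost/cstdint.hpp>

// Print a double's value next to its raw IEEE-754 bit pattern - my guess
// at what each half of the quad_float output above is showing.
void print_as_hex(double d)
{
   boost::uint64_t bits;
   std::memcpy(&bits, &d, sizeof bits);   // view the bits without aliasing tricks
   std::printf("%.17g == 0x%016llx\n", d, (unsigned long long)bits);
}

int main()
{
   print_as_hex(1.0);                    // 1 == 0x3ff0000000000000
   print_as_hex(std::ldexp(1.0, -105));  // ... == 0x3960000000000000  (2^-105)
   print_as_hex(std::ldexp(1.0, -106));  // ... == 0x3950000000000000  (2^-106)
   return 0;
}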

The docs do make it clear that this type is rather strange:

"This is a rather wierd representation; although it gives one
essentially twice the precision of an ordinary double, it is
not really the equivalent of quadratic precision (despite the name).
For example, the number 1 + 2^{-200} can be represented exactly as
a quad_float. Also, there is no real notion of "machine precision".

Note that overflow/underflow for quad_floats does not follow any
particularly useful rules, even if the underlying floating point
arithmetic is IEEE compliant. Generally, when an overflow/underflow
occurs, the resulting value is unpredictable, although typically when
overflow occurs in computing a value x, the result is non-finite
(i.e., IsFinite(&x) == 0). Note, however, that some care is taken to
ensure that the ZZ to quad_float conversion routine produces a
non-finite value upon overflow."
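
Their first point is easy to see if you remember that a quad_float is just
a pair of doubles whose value is the sum of the two halves. A rough sketch
with plain doubles (no NTL required):

#include <cmath>
#include <cstdio>

int main()
{
   // A quad_float is a (hi, lo) pair of doubles whose value is hi + lo.
   // Each half carries its own exponent, so this pair represents 1 + 2^-200
   // exactly - something no fixed 106-bit significand could do.
   double hi = 1.0;
   double lo = std::ldexp(1.0, -200);   // 2^-200, a perfectly ordinary double
   std::printf("hi = %g, lo = %g\n", hi, lo);
   return 0;
}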

Ah, now hold on: when you print out your results, even if there is a 1 in
the last binary digit *that doesn't mean there will be a 1 in the last
hexadecimal or decimal digit*. Think about it: unless the number of binary
bits fits into a whole number of hex digits, the last binary 1 may land in
the 1, 2, 4 or 8 position of the last hex digit.
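
For example, with an ordinary double (nothing quad_float-specific) and
printf's hexfloat output:

#include <cmath>
#include <cstdio>

int main()
{
   // 1 + 2^-50: the least significant set bit of the fraction is bit 50 of 52,
   // which sits in the "4" position of the final hex digit of the significand.
   std::printf("%a\n", 1.0 + std::ldexp(1.0, -50));   // 0x1.0000000000004p+0
   // 1 + 2^-52 (one ulp of 1.0) does land in the "1" position:
   std::printf("%a\n", 1.0 + std::ldexp(1.0, -52));   // 0x1.0000000000001p+0
   return 0;
}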

I actually tried to write a short program to calculate epsilon with this
type, but quad_float exhibits some very strange behaviour: for example, if
you keep adding a single bit to one, each time one position further to the
right (halving the increment each time), then the high-part goes up to 2
and the low-part becomes negative! It's actually correct if you think of
the number as the sum of its parts (which is less than 2), but it means
that it does not behave as an N-bit number in any way, shape or form. As
the number increases in magnitude, the low-part actually becomes less
negative (smaller in magnitude) as the number gets ever closer to the
high-part in value.
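
Here's a sketch that reproduces the effect with plain doubles, emulating
the high/low split by hand with Knuth's error-free two_sum (so you don't
need NTL to see it):

#include <cstdio>

// Knuth's TwoSum: hi gets the rounded sum of a and b, lo the exact
// rounding error, so hi + lo == a + b exactly.
void two_sum(double a, double b, double& hi, double& lo)
{
   hi = a + b;
   double bb = hi - a;
   lo = (a - (hi - bb)) + (b - bb);
}

int main()
{
   // Start at 1 and keep adding the next bit to the right:
   // 1 + 1/2 + 1/4 + ... so the exact sum creeps up towards 2.
   double hi = 1.0, lo = 0.0, inc = 0.5;
   for (int i = 1; i <= 60; ++i)
   {
      double err;
      two_sum(hi, inc, hi, err);   // hi absorbs what it can...
      lo += err;                   // ...the leftover lands in the low part
      inc /= 2;
   }
   // Once the sum needs more than 53 bits, hi rounds up to 2 and lo goes
   // negative; each further bit added makes lo less negative.
   std::printf("hi = %g, lo = %g\n", hi, lo);   // hi = 2, lo = -2^-60
   return 0;
}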

Double weird.

I'm not sure how you can meaningfully reason about such a beast, to be
honest.

John.

