From: Ian McCulloch (ianmcc_at_[hidden])
Date: 2005-11-13 17:11:18
John Maddock wrote:
>> Related to this, there is a very interesting article on comparing IEEE
>> floats at
>> http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
>>
>> This makes use of the property that IEEE floating point formats are
>> lexicographically ordered when reinterpreted as an appropriate length
>> signed integer. It appears you can use this to define a closeness
>> measure based on how many units in the last place the two numbers
>> differ by (equivalently, if you enumerated all possible floats and
>> counted how far
>> away one number is from the other in the enumeration). I only just
>> came across it now so I haven't tried playing with it yet, but it
>> looks like it would make a useful closeness measure.
>
> I must admit I've been looking for a way to measure errors in ULPs.
> However, I have my doubts about this: are integer types and floating
> point types always of the same endianness? I suspect not, but can't be
> 100% sure.
> There are also problems with long doubles to consider: padding bits on
> some implementations, and strange "a long double is actually a pair of
> doubles" on Darwin.
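For concreteness, the core of the article's trick is something like the
following. This is only a sketch, assuming 32-bit IEEE floats, a 32-bit
unsigned integer of the same endianness, and ignoring NaNs and infinities;
the names are mine, not the article's:

    #include <boost/cstdint.hpp>
    #include <cstring>

    // Remap a float's bit pattern so the resulting unsigned integer
    // increases monotonically with the float value.  Raw bit patterns of
    // negative floats sort in reverse order, so flip them; positive
    // floats get the sign bit set so they sort above all negatives.
    boost::uint32_t ordered_key(float f)
    {
        boost::uint32_t u;
        std::memcpy(&u, &f, sizeof u);   // reinterpret the bits
        return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
    }

    // Two floats are "close" if at most max_ulps representable floats
    // lie between them.  Caveat: -0.0 and +0.0 come out 1 apart here,
    // and NaNs/infinities are not treated specially.
    bool almost_equal_ulps(float a, float b, boost::uint32_t max_ulps)
    {
        boost::uint32_t ka = ordered_key(a), kb = ordered_key(b);
        boost::uint32_t diff = (ka > kb) ? ka - kb : kb - ka;
        return diff <= max_ulps;
    }
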
It is frustrating that the article shows it is quite easy to do a
ULP-based comparison given the right hardware (even doing some bit-banging
on, e.g., Darwin would not be so hard), but a portable version to use as a
fallback seems really hard. Maybe it is possible using frexp(), ldexp(),
etc. The problem is that if you don't assume IEEE arithmetic, there isn't
much you can guarantee about floating point behaviour. Not to mention
handling FLT_RADIX != 2 ... scalbn() might be useful there, but that is
C99 only...
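A very rough portable sketch along those lines, assuming only what the
standard guarantees for frexp()/ldexp() and that FLT_RADIX == 2 (so that
std::numeric_limits<double>::digits counts bits), might look like this;
the name and approach are my guess, not anything from the article:

    #include <cmath>
    #include <limits>

    // Estimate how many units in the last place separate a and b, using
    // only frexp()/ldexp().  One ULP at binary exponent e is about
    // 2^(e - digits), so divide the difference by that.  Zeros and
    // denormals would need extra care; this is only an approximation.
    double approx_ulp_distance(double a, double b)
    {
        if (a == b) return 0.0;
        int ea, eb;
        std::frexp(a, &ea);           // a == m * 2^ea, 0.5 <= |m| < 1
        std::frexp(b, &eb);
        int e = (ea > eb) ? ea : eb;  // scale to the larger exponent
        double one_ulp = std::ldexp(1.0,
            e - std::numeric_limits<double>::digits);
        return std::fabs(a - b) / one_ulp;
    }
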
There is also the problem of platforms that are not quite IEEE. For
example, IIRC Alpha by default doesn't generate subnormals.
Cheers,
Ian