
From: Guillaume Melquiond (guillaume.melquiond_at_[hidden])
Date: 2003-12-12 08:32:51


Le ven 12/12/2003 à 13:21, Dan W. a écrit :
> > > #include <assert.h>
> > > #include <math.h>
> > > #include <stdio.h>
> > >
> > > int main() {
> > > int a = (1 << 24) - 1;
> > > volatile float b = a;
> > > volatile float c = b + 0.5f;
> > > volatile float d = floorf(c);
> > > int e = d;
> > > assert((double)a == (double)b);
> > > printf("%d %d\n", a, e);
> > > return 0;
> > > }
> > >
> > > The integer is the biggest odd integer representable as a
> > > single-precision floating-point number. So the conversion from float
> > > to int should not modify the value (since it already is an integer).
> > > However, as you can see, the final value is different from the first
> > > value. The same will happen with any odd integer between 2^23 and
> > > 2^24 (and their negative counterparts). So it's not just a few cases;
> > > unfortunately there are a lot of them.
> > >
> > Hmmm...
> > OK, I see the problem....
> > Seems difficult to deal with without compromising efficiency for the safe
> > cases (<2^23).
>
>
> As a probable future user of numerical headers, I'd like to beg for simplicity.
> The problems being discussed are not new, and any programmer that has the
> dimmest clue about numerical representations knows to expect nothing but
> bad news from floats.. ;-)
>
> Please do whatever the hardware supports natively, and don't slow our
> computations for the sake of a few newbies that will initialize a float to
> 2^30 and expect to get a different number when adding one...

It seems you didn't really understand. It doesn't happen with big or
small integers; it happens with integers in the middle, between 2^23 and
2^24. With the current implementation, even when a floating-point value
is an integer in the correct target range, you cannot be sure at all that
the integer you get after the cast is the same as the one you started
with.
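
To make the range concrete, here is a minimal sketch of my own (not the
library's code) that walks over every odd integer between 2^23 and 2^24.
It assumes the default IEEE 754 round-to-nearest-even mode and a 32-bit
int. Each such n is exactly representable as a float, yet n + 0.5f rounds
up to n + 1, so any conversion that rounds by adding 0.5 and flooring
returns the wrong integer for all of them:

#include <cassert>
#include <cmath>
#include <cstdio>

int main() {
    long bad = 0;
    for (int n = (1 << 23) + 1; n < (1 << 24); n += 2) {
        volatile float f = static_cast<float>(n); // exact: n fits in 24 bits
        assert(static_cast<int>(f) == n);         // plain truncation is fine
        volatile float c = f + 0.5f;              // halfway case, rounds up to n + 1
        if (static_cast<int>(std::floor(c)) != n)
            ++bad;
    }
    std::printf("%ld odd integers are changed by the +0.5/floor scheme\n", bad);
    return 0;
}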

When the value is exactly representable in both types, the user expects
it to be exactly convertible from one type to the other, doesn't she?
Moreover, if the user had simply written "static_cast<int>(b)" rather
than "numeric_cast<int>(b)", the result would have been correct.
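
For comparison, a rough sketch of the difference; round_half_up below is
a hypothetical helper that imitates the rounding behaviour under
discussion, not Boost's actual source:

#include <cassert>
#include <cmath>

// Hypothetical helper: round by adding 0.5 and flooring, forcing the
// intermediate sum into single precision as in the example above.
int round_half_up(float x) {
    volatile float y = x + 0.5f;
    return static_cast<int>(std::floor(y));
}

int main() {
    int a = (1 << 24) - 1;                    // largest odd integer exact in float
    volatile float b = static_cast<float>(a); // exact conversion
    assert(static_cast<int>(b) == a);         // plain truncation preserves it
    assert(round_half_up(b) == a + 1);        // the +0.5 scheme changes it
    return 0;
}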

If you are interested in what "the hardware supports natively" rather
than in a fully specified library, why would you even bother to use this
numerical cast library? You don't seem to realize that the current
implementation doesn't respect its own specification.

> True, I understand, floor() is a function that bears a specification. But
> where the specification can't be met, it should be fixed, not the
> computation. People who know anything about floats and doubles will use
> them well below the range of their mantissa's bits as int, anyways.

Please note that "floor" is absolutely not at fault here! In the
example, c is already a floating-point integer, so floor works just fine
and you can assert that (d == c).
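
A tiny self-contained check of that claim; the constant is just the value
that b + 0.5f rounds to in the example above:

#include <cassert>
#include <cmath>

int main() {
    volatile float c = 16777216.0f;   // 2^24, already an integral float
    volatile float d = std::floor(c); // floor cannot change it
    assert(d == c);                   // the damage happened earlier, in b + 0.5f
    return 0;
}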

> You already offer rational numbers, and multiple precision numbers?, so
> those who need better than standard IEEE representations can use the better
> ones.

Why use something else when IEEE floating-point numbers are enough? If
you are sure (suppose you have even formally proved it) that a standard
floating-point type provides enough precision and range for your needs,
why should you use a type that is much slower, not necessarily standard,
and possibly full of bugs?

Regards,

Guillaume

