
Boost Users:

From: tirath (research-lists_at_[hidden])
Date: 2006-10-20 04:30:07


On 19/10/2006, at 4:40 PM, Guillaume Melquiond wrote:

> On Thursday 19 October 2006 at 01:03 +0800, tirath wrote:
>> Hi all,
>
>> The query...
>> Before I started using the library, I had this assumption: let A be a
>> float with an exact representation (i.e. width=0). If we cast A to a
>> double, the resulting double will have a non-zero width -
>> specifically the width will be equal to the extra precision afforded
>> to doubles vs floats.
>
> If you have a set of numbers that is exactly representable as a
> singleton interval<float>, then this set of numbers is also
> representable as a singleton interval<double>. There is no reason
> to use
> a bigger interval than necessary to represent this set of numbers.

Thanks for your response, Guillaume. I think my explanation was a bit
vague. Take two, with a more concrete example...

In general, a 32-bit floating point representation of a real number
carries less precision than a 64-bit floating point representation of
the same number. Therefore when "promoting" a float to a double, say:
interval<double> x;
interval<float> y;
...
x = y;

shouldn't we (in the spirit of error propagation) acknowledge that
the data assigned to x came from a representation of inferior
precision? A range too small to be expressed in single precision may
be large enough to be expressed in double precision, since the single
precision ulp is ~1E-7 whereas the double precision ulp is ~1E-16.
Consider the following...

interval<double> x;
interval<float> y;
interval<double> z;
x = 1.00000000001;
// the following loop forces error accumulation
for (int i = 0; i < MAX_ITER; i++) x *= x;
print_width(x);
y = x; // double -> float
print_width(y);
z = y; // float -> double
print_width(z);

with MAX_ITER=30, the error accumulated on x is pretty large, and
consequently the output is:
(variable: {lower} {upper} {width})
x: 1.0108 1.0108 2.40992e-07
y: 1.0108 1.0108 2.38419e-07
z: 1.0108 1.0108 2.40992e-07

with MAX_ITER=5, the error accumulated on x is small, and consequently:
x: 1 1 6.88338e-15
y: 1 1 0
z: 1 1 0

I think this behaviour is a bit misleading because it seems to imply
that assigning the double into the float and then into another double
actually zeros the interval width! The current situation implies that
one can sanitise double precision results by casting them to float
and back; this won't remove large intervals, but it will remove small
ones, and with iterative algorithms this can be very significant.

Conversely, shouldn't there be a precision cost associated with
"promoting" a float to a double, since the source data is of inferior
precision compared to a double? Garbage in, garbage out, no?

Best regards,
Tirath Ramdas


Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net