
From: Paul A Bristow (pbristow_at_[hidden])
Date: 2006-04-04 07:19:15


If you are using >> to convert decimal digit strings to floating-point and
expect to get **exactly** the right result, read on.

There was some discussion in this thread some weeks ago, and agreement that
there was a problem with serialization of floating-point values (and with
lexical_cast).

Although the change is only 1 bit, if you repeatedly read back and
re-serialize floating-point values, they drift by 1 bit each time.

I've now found a few (quite a few, actually) moments to look into this.

The basic problem is a failure to 'round-trip' (loopback):

        float f = some_value; // should work for ALL values, both float and double.
        std::stringstream s;  // or files.
        s.precision(9);       // max_digits10: 9 decimal digits for 32-bit float,
                              // 17 for 64-bit double.
        s.str("");            // erase previous contents - see note below on why.
        s << f;               // Output to string.
        float rf;
        s >> rf;              // Read back into float.
        assert(f == rf);      // Check we get back **exactly** the same value.
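
(max_digits10 is the C++11 name; on compilers without it, the same value can
be computed from std::numeric_limits. A minimal sketch, assuming an IEEE
binary representation - the function name is mine, not a standard one:)

        #include <limits>

        // Smallest number of decimal digits that guarantees a round trip for T:
        // 9 for IEEE 32-bit float, 17 for IEEE 64-bit double.
        // C++11 provides this directly as std::numeric_limits<T>::max_digits10.
        template <typename T>
        int max_digits10()
        {
            return 2 + std::numeric_limits<T>::digits * 3010 / 10000; // 3010/10000 ~ log10(2)
        }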

With MSVC, the problem is with s >> rf: for some values, the value read back
is wrong by a single least significant bit (it is one bit greater).

The ***Good News*** is that, unlike VS 7.1, where I found that 1/3 of float
values are read back 1 bit wrong, VS 8.0 works correctly in release mode for
ALL 32-bit float values.

(Digression - because of the memory leak in stringstream in VS 8.0 (it is
disgraceful that we haven't had an SP1 for this), the naïve test runs out of
real and virtual memory after half an hour if you use a brute-force loop that
re-creates the stringstream for each value. So it is necessary (and quicker)
to create the stringstream just once and erase the string contents before
each test. I used my own nextafterf to test all 2130706431 float values and
it took 70:53 - must get my new dual-core machine going ;-).)
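
Roughly the shape of that brute-force loop - a sketch only, not the exact
test harness, and it assumes nextafterf is available (older MSVC spells it
_nextafterf in <float.h>):

        #include <cfloat>    // FLT_MIN, FLT_MAX
        #include <cmath>     // nextafterf
        #include <sstream>

        int main()
        {
            std::stringstream s;   // created once, outside the loop
            s.precision(9);        // max_digits10 for float
            unsigned long failures = 0;

            for (float f = FLT_MIN; f < FLT_MAX; f = nextafterf(f, FLT_MAX))
            {
                s.clear();         // reset eof/fail flags from the previous read
                s.str("");         // erase contents; no new stringstream, so no leak
                s << f;
                float rf;
                s >> rf;
                if (f != rf)
                    ++failures;
            }
            return failures == 0 ? 0 : 1;
        }

Clearing the stream's error flags as well as its contents matters: after the
previous extraction reaches end-of-stream, eofbit stays set and every later
insertion would silently fail.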

The ***Bad News*** is that, as shown by John Maddock, for double there is a
bizarre small range of values where every third value of the significand is
read back one bit wrong. Murphy's law applies - it is a fairly popular range.

Of course, testing all the double values would take longer than some of us
are likely to be above ground to be interested in the result ;-)

So I created vaguely random double values, using 5 15-bit rand() calls to
fill all the bits and then excluding NaNs and infinities.

(Unlike the more expertly random John Maddock, I decided it was best to keep
it simple enough to submit as a bug report to MS, rather than use any of the
fancy Boost random generators - which in any case seem to have bits that
never get twiddled - not my idea of random - but then I am not a statistician
or mathematician.)
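
Something like the following is the shape of that generator - a sketch under
my own naming (random_double is not the name in the actual test), using only
rand() and memcpy:

        #include <cstdlib>   // std::rand
        #include <cstring>   // std::memcpy

        // Build a 64-bit pattern from five 15-bit rand() calls (5 * 15 = 75 bits,
        // more than enough to fill all 64) and reinterpret it as a double.
        // Patterns whose exponent field is all ones (NaNs and infinities) are rejected.
        double random_double()
        {
            for (;;)
            {
                unsigned long long bits = 0;
                for (int i = 0; i < 5; ++i)
                    bits = (bits << 15) | (std::rand() & 0x7FFF);

                if (((bits >> 52) & 0x7FF) == 0x7FF)
                    continue; // NaN or infinity - try again

                double d;
                std::memcpy(&d, &bits, sizeof d);
                return d;
            }
        }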

For example:

Written : 0.00019879711946838022 == 3f2a0e8640d90401
Readback : 0.00019879711946838024 == 3f2a0e8640d90402 << note 1 bit greater.
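
Output in that form can be produced by dumping the raw bits alongside the
decimal value; a minimal sketch (show is a hypothetical helper, not part of
the actual test):

        #include <cstring>
        #include <iomanip>
        #include <iostream>

        // Print a double as 17 significant decimal digits and as its raw
        // 64-bit pattern in hex, e.g. 0.00019879711946838022 == 3f2a0e8640d90401
        void show(double d)
        {
            unsigned long long bits;
            std::memcpy(&bits, &d, sizeof bits);
            std::cout << std::setprecision(17) << d << " == "
                      << std::hex << std::setw(16) << std::setfill('0') << bits
                      << std::dec << '\n';
        }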

Overall, the test failed for 77 out of 100000 double values, a fraction of
about 0.0008.

The range of 'wrong' reads is roughly shown by

wrong min 0.00013372562138477771 == 3f2187165749cbef
wrong max 0.0038160481887855135 == 3f6f42d545772497

From some runs, I suspect the 'bad' range is more like 0.0001 to 0.005.

All have an exponent in the range 3f2 to 3f6.

And if you use nextafter to test successive double values in this range,
every third value is read back 'wrong'.
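
A sketch of that scan, stepping through successive doubles near one of the
known-bad values (the starting value is taken from the quoted report below,
the loop length and function name are my own arbitrary choices, and it
assumes nextafter is available - MSVC spells it _nextafter):

        #include <cmath>     // nextafter
        #include <sstream>

        int bad_reads_near(double start, int count)
        {
            std::stringstream s;
            s.precision(17);            // max_digits10 for double
            int failures = 0;
            double d = start;
            for (int i = 0; i < count; ++i, d = nextafter(d, 1.0))
            {
                s.clear();
                s.str("");
                s << d;
                double rd;
                s >> rd;
                if (d != rd)
                    ++failures;
            }
            return failures;            // roughly count / 3 in the affected range
        }

        // e.g. bad_reads_near(0.0019075645054089487, 999);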

I think we really can claim this is 'a bug, not a feature' (the MS response
to my complaint about the 7.1 floats), and I will submit it soon. With the
info above, it should be possible to find the obscure mistake.

I suspect this problem exists in many previous MS versions. I doubt that even
Dinkumware applies an extensive random double-value test like this - it takes
some time to run.

If anyone wants to test other compilers, please mail me and I will dump my
crude test in the vault.

Paul

-- 
Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204
mailto: pbristow_at_[hidden]  http://www.hetp.u-net.com/index.html
http://www.hetp.u-net.com/Paul%20A%20Bristow%20info.html
| -----Original Message-----
| From: boost-bounces_at_[hidden] 
| [mailto:boost-bounces_at_[hidden]] On Behalf Of Paul Giaccone
| Sent: 14 March 2006 17:39
| To: boost_at_[hidden]
| Subject: [boost] [serialization] 
| Serialisation/deserialisation of floating-point values
| 
| I'm having problems with deserialising floating-point (double) values 
| that are written to an XML file.  I'm reading the values back in and 
| comparing them to what I saved to ensure that my file has 
| been written 
| correctly.  However, some of the values differ in about the 
| seventeenth 
| significant figure (or thereabouts).
| 
| I thought Boost serialization used some numerical limit to make sure 
| that values are serialised exactly to full precision, so what is 
| happening here?
| 
| Example:
| Value in original object, written to file: 0.0019075645054089487
| Value actually stored in file (by examination of XML file): 
| 0.0019075645054089487 [identical to value written to file]
| Value after deserialisation: 0.0019075645054089489
| 
| It looks like there is a difference in the least-significant bit, as 
| examining the memory for these two values gives:
| 
| Original value: b4 83 9b ca e7 40 5f 3f
| Deserialised value: b5 83 9b ca e7 40 5f 3f
| 
| (where the least-significant byte is on the left)
| 
| Note the difference in the first bytes.
| 
| I'm using Boost 1.33.1 with Visual Studio 7.1.3088 in debug mode.
| 
| Paul
