
From: Paul A Bristow (pbristow_at_[hidden])
Date: 2006-03-14 14:05:58

You are right to expect this, but apparently the Standard does not REQUIRE it.

The formula for the number of decimal digits required, derived from Kahan's paper, is:

 max_decimal_digits = 2 + significand_digits * 3010/10000

For example:
#define FLT_MAXDIG10  (2 + (FLT_MANT_DIG * 3010)/10000)
#define DBL_MAXDIG10  (2 + (DBL_MANT_DIG * 3010)/10000)
#define LDBL_MAXDIG10 (2 + (LDBL_MANT_DIG * 3010)/10000)

which yield the following values on a typical IEEE 754 implementation:
9 for float, 17 for double, and for long double either 17 (where long
double == double, as on MSVC) or 21 (where it has a 64-bit significand).
For C++, using numeric_limits, it is convenient instead to use the
following formula, which can be calculated at compile time:

2 + std::numeric_limits<double>::digits * 3010/10000;

HOWEVER, during my tests of the VS 2005 beta, float did not read back in
correctly (for about 1/3 of values, off by 1 bit!), and when I queried
this, Microsoft claimed it was 'by design'. Mysteriously, in the VS 2005
final _release_, float and double (and thus long double == double) all
work as expected for a 'quality product'.

As far as I recollect, VS 7.1 also worked correctly for all FP types, so I
would surmise that the number of digits used for serialisation is
insufficient. For double, it should be 17. There do appear to be 17
digits in your file, so I am slightly puzzled.

The serialization code sets the stream precision with:

os << std::setprecision(std::numeric_limits<double>::digits10 + 2);

and I suggest that this should be:

os << std::setprecision(2 + std::numeric_limits<double>::digits * 3010/10000);


Paul A Bristow
Prizet Farmhouse, Kendal, Cumbria UK LA8 8AB
Phone and SMS text +44 1539 561830, Mobile and SMS text +44 7714 330204
mailto: pbristow_at_[hidden]
| -----Original Message-----
| From: boost-bounces_at_[hidden] 
| [mailto:boost-bounces_at_[hidden]] On Behalf Of Paul Giaccone
| Sent: 14 March 2006 17:39
| To: boost_at_[hidden]
| Subject: [boost] [serialization] Serialisation/deserialisation of
| floating-point values
| I'm having problems with deserialising floating-point (double) values
| that are written to an XML file. I'm reading the values back in and
| comparing them to what I saved to ensure that my file has been written
| correctly. However, some of the values differ in about the seventeenth
| significant figure (or thereabouts).
| I thought Boost serialization used some numerical limit to make sure 
| that values are serialised exactly to full precision, so what is 
| happening here?
| Example:
| Value in original object, written to file: 0.0019075645054089487
| Value actually stored in file (by examination of XML file): 
| 0.0019075645054089487 [identical to value written to file]
| Value after deserialisation: 0.0019075645054089489
| It looks like there is a difference in the least-significant bit, as 
| examining the memory for these two values gives:
| Original value:     b4 83 9b ca e7 40 5f 3f
| Deserialised value: b5 83 9b ca e7 40 5f 3f
| (where the least-significant byte is on the left)
| Note the difference in the first bytes.
| I'm using Boost 1.33.1 with Visual Studio 7.1.3088 in debug mode.
| Paul
