Boost Users:
From: Robert Ramey (ramey_at_[hidden])
Date: 2008-08-30 23:18:17
David Abrahams wrote:
> I wonder if it really works so well when the word size of the machines
> differs, or even when the word size is 32 bits on both ends. It's
> likely they're both using IEEE 754, so if long double has more than 32
> bits of mantissa, your method will be needlessly lossy. I think long
> double commonly has 96 or 128 bits total, so you'd lose significant
> precision. The HPC community has had to solve this problem numerous
> times. These are people that care about the accuracy of their
> floating point numbers. Why one would begin anywhere other than with
> the formats the HPC people have already developed is beyond me.
The current implementation uses a variable-length format
where only the significant bits are stored. If it turns out that a number
stored in the archive cannot be represented on the machine
reading the archive, an exception is thrown. This would occur
where a 64-bit machine stored a value > 2^32 and a 32-bit
machine tried to load it.
This method has one great advantage. It automatically
converts between integer types (int, long, or whatever) when
the size of the integer varies between machines. It also eliminates
redundant data (leading 0's) and never loses precision.
If it can't do something the user wants to do - it punts.
It's up to the library user to decide how to handle these
special situations.
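
To make the idea concrete, here is a minimal sketch - not the actual
library code; the names and the exception type are illustrative, and
only the unsigned case is shown - of storing just the significant
bytes of an integer and punting with an exception on load when the
value doesn't fit the destination type:

    #include <cstdint>
    #include <limits>
    #include <stdexcept>
    #include <vector>

    // save: strip leading zero bytes, emit a one-byte count followed by the data
    std::vector<unsigned char> save_integer(std::uint64_t x) {
        unsigned char bytes[8];
        unsigned char count = 0;
        while (x != 0) {                      // keep only the significant bytes
            bytes[count++] = static_cast<unsigned char>(x & 0xff);
            x >>= 8;
        }
        std::vector<unsigned char> out;
        out.push_back(count);
        out.insert(out.end(), bytes, bytes + count);
        return out;
    }

    // load: rebuild the value and throw if it cannot be represented
    // in the destination type T on the reading machine
    template<class T>
    T load_integer(const std::vector<unsigned char>& in) {
        unsigned char count = in.at(0);
        std::uint64_t x = 0;
        for (unsigned char i = 0; i < count; ++i)
            x |= static_cast<std::uint64_t>(in.at(1 + i)) << (8 * i);
        if (x > static_cast<std::uint64_t>(std::numeric_limits<T>::max()))
            throw std::overflow_error("archived value too large for this type");
        return static_cast<T>(x);
    }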
I believe leveraging this by converting floats to a pair
of integers and serializing them would simplify the
job and result in a truly portable (as opposed to 99% portable)
archive.
BTW - I was wrong about the two library functions
mentioned above. They do return the exponent of the
normalized value - but they return the mantissa as a float
rather than an integer - damn!
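
That said, the floating mantissa can still be turned into an exact
integer by scaling it with 2^DBL_MANT_DIG. A minimal sketch of the
float-to-pair-of-integers conversion - illustrative only, assuming
IEEE 754 doubles and ignoring NaN/infinity:

    #include <cfloat>
    #include <cmath>
    #include <cstdint>
    #include <utility>

    // decompose: x == mantissa * 2^exponent, with an exactly integral mantissa
    std::pair<std::int64_t, int> to_integer_pair(double x) {
        int exp;
        double frac = std::frexp(x, &exp);    // x == frac * 2^exp, 0.5 <= |frac| < 1
        std::int64_t mant = static_cast<std::int64_t>(
            std::ldexp(frac, DBL_MANT_DIG));  // exact: a 53-bit mantissa fits in int64_t
        return { mant, exp - DBL_MANT_DIG };
    }

    // recompose: an exact round trip, since no bits were discarded
    double from_integer_pair(std::int64_t mant, int exp) {
        return std::ldexp(static_cast<double>(mant), exp);
    }

The two integers can then go through the same portable integer format
sketched above.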
Robert Ramey