From: Geoffrey Irving (irving_at_[hidden])
Date: 2006-05-03 11:57:09
On Wed, May 03, 2006 at 10:37:04AM +0100, John Maddock wrote:
> > In practice, nothing can be assumed to be exactly rounded if it goes
> > through a decent set of optimizations, since the compiler gets to
> > choose when values go in and out of 80 bit registers.
> Ah, that's a whole other issue: having an extended 80-bit double really
> screws things up because you can get double-rounding of a result pushing it
> off by one bit. Machines that don't have that data type, or if you force
> the Intel FPU into 64-bit mode, don't have that problem I believe.
A whole other issue? The addendum to Goldberg seems to think it's pretty
much the same issue.
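For concreteness, the double-rounding effect is easy to see in decimal
arithmetic. A sketch using Python's decimal module, where rounding
wide-then-narrow stands in for the round-to-80-bit-register followed by
store-to-64-bit-double sequence:

```python
from decimal import Decimal

x = Decimal("1.2349")

# Single rounding: straight to two decimal places.
single = x.quantize(Decimal("0.01"))     # 1.23

# Double rounding: first to three places (the "wide" format),
# then to two (the "narrow" one) -- as when a result sits in an
# 80-bit register before being stored to a 64-bit double.
step = x.quantize(Decimal("0.001"))      # 1.235
double = step.quantize(Decimal("0.01"))  # 1.24 (half-even tie-break)

assert single != double  # off by one unit in the last place
```

The same one-ulp discrepancy can occur in binary, which is why forcing the
x87 into 64-bit rounding mode (or using SSE arithmetic) matters.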
> I'm not entirely sure, but I believe that the AMD64 model effectively
> deprecates the old x87 80-bit registers in favour of 64-bit SIMD registers, so
> again the problem goes away there.
Yes. That's nice, but not portable.
> > Actually I have not read it in its entirety, though I will do that
> > shortly.
> Good luck, if a tough read in places, but very useful.
> > I have skimmed it and read similar things in the past. However, I
> > couldn't find any mention of determinism in that document. There are
> > plenty of discussions of non-portability, but the determinism
> > question is separate. Specifically, I've been assuming the following:
> > If I have a function that accesses no global source of
> > nondeterminism (e.g., other global variables, threads, etc.), and
> > I compile it once into a separate translation unit from whatever
> > calls it (to avoid inlining or other interprocedural weirdness), and
> > call it twice on the same machine at different times with exactly
> > the same bits as input, I will get the same result.
> > I also usually assume that the compiler is deterministic given the
> > same set of optimization flags on the same machine with the same
> > environment.
> Yep, but the IEEE standard is much stronger than that: you will get exactly
> the same result from the same input on different machines and/or
> architectures. In practice certain optimisations can mess things up, as can
> the 80-bit double rounding problem, but we're remarkably close to that
> result even now. Of course this assumes you don't make any std lib calls,
> since the quality of implementation of exp/pow etc can vary quite a bit.
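The repeatability assumed above (same bits in, same bits out, on one
machine and build) can be checked directly. A minimal sketch in Python;
note that math.exp delegates to the platform libm, so while repeated
calls on one machine agree bit for bit, two different machines need not:

```python
import math
import struct

def bits(x):
    # Raw IEEE-754 bit pattern of a double.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def f(x):
    # A chain of operations, including a libm call (math.exp).
    return math.exp(x) * 0.1 + math.sqrt(x + 1.0) - x / 3.0

a = f(0.1234567890123456)
b = f(0.1234567890123456)
assert bits(a) == bits(b)  # bit-identical on repeated calls
```

Comparing the raw bit patterns, rather than using ==, also catches cases
where two distinct encodings compare equal (e.g. +0.0 and -0.0).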
Yes, but "remarkably close as long as you never call exp" is irrelevant.
The initial point was that exact serialization was unimportant because math
operations are nondeterministic. This is false: in practice, floating-point
computations, even those that call exp, are deterministic on all current
implementations. If you do a trillion operations, dump the results to disk,
and do a trillion more, you can repeat the last trillion by reading them
back from disk...if you have exact serialization (or you wrote binary).
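Exact decimal serialization of a double is cheap to get: 17 significant
digits always round-trip exactly, as does a hexadecimal float
representation, while fewer digits can lose the last bit. A sketch in
Python:

```python
x = 1.0 / 3.0

# 15 significant digits: the round trip loses the last bit.
lossy = float("%.15g" % x)

# 17 significant digits round-trip any double exactly,
# as does the hexadecimal representation.
exact17 = float("%.17g" % x)
exact_hex = float.fromhex(x.hex())

assert lossy != x
assert exact17 == x
assert exact_hex == x
```

Writing the raw 8 bytes is equivalent, but the decimal and hex forms stay
human-readable and endian-independent.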
Sorry if I confused the issue by seeming to not understand floating point.
I was really just trying to humbly correct the nondeterminism post. I'll
be more careful with such statements from now on.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk