From: Guillaume Melquiond (guillaume.melquiond_at_[hidden])
Date: 2008-04-15 08:44:59
On Saturday, 12 April 2008 at 21:51 -0700, Steven Watanabe wrote:
> The simplest implementation is to create a unique integer id
> for every new variable constructed. The
> format of a variable in the following discussion is
> (value, [(id1, alternate_value1), (id2, alternate_value2)])
> where the alternate_values are the values that we would get
> if the value were increased by their uncertainties.
This approach to reliable computing is called "affine arithmetic". You
may also want to look at other approaches called "Taylor arithmetic" (at
first-order) or "slope arithmetic", as they are quite close.
Obviously, since they are all first-order arithmetics, they all suffer
from second-order terms. Yours is no exception. More precisely, what
should one do when computing (x + ex) * (y + ey)? You decided to simply
discard the cross term ex*ey and hope it does not matter. There are two
other solutions.
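To make the discussion concrete, here is a minimal sketch (my own code and
naming, not Steven's implementation) of the quoted representation — a value
plus a map from noise-symbol ids to first-order deviations — with a product
that drops the ex*ey cross term exactly as described:

```python
# Affine form sketch: (value, {noise_id: first_order_deviation}).
# Hypothetical illustration; ids and structure are assumptions.

def mul(x, y):
    """Product of two affine forms, discarding the ex*ey cross term."""
    xv, xd = x
    yv, yd = y
    devs = {}
    for i, e in xd.items():
        devs[i] = devs.get(i, 0.0) + yv * e   # the y * ex contribution
    for i, e in yd.items():
        devs[i] = devs.get(i, 0.0) + xv * e   # the x * ey contribution
    # The ex*ey cross terms are silently dropped here -- this is the
    # second-order information the alternatives below try to recover.
    return (xv * yv, devs)

x = (2.0, {1: 0.1})   # x = 2, uncertainty 0.1 from noise source 1
y = (3.0, {2: 0.2})   # y = 3, uncertainty 0.2 from noise source 2
print(mul(x, y))      # value 6.0, deviations roughly {1: 0.3, 2: 0.4}
```

The dropped contribution here is bounded by 0.1 * 0.2 = 0.02, which is
exactly the quantity the two solutions below account for.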
First, create a new variable z that always corresponds to x*y. That way,
you associate ex*ey to z and no information is lost. The drawback is
that you get a combinatorial explosion in the number of variables.
Second, keep one extra value that accounts for all the higher-order
terms. The usual approach is to rely on an interval there. That way, you
only have one additional term. The drawback is that you can no longer
detect dependencies between second-order terms, which tend to matter in
iterative processes. The interval may be stored as lower/upper bounds,
as center/radius values, or as a magnitude only (since the center is
usually close to zero).
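A sketch of this second solution (again my own illustrative code, using the
magnitude-only storage since the center is close to zero): each form carries
one extra nonnegative `err` field that soaks up every dropped term instead of
losing it.

```python
# Affine form with wastebin: (value, {noise_id: deviation}, err),
# where err is a magnitude bounding all higher-order terms.
# Hypothetical sketch, not a library API.

def mul_err(x, y):
    """Product that dumps second-order terms into the err magnitude."""
    xv, xd, xe = x
    yv, yd, ye = y
    devs = {}
    for i, e in xd.items():
        devs[i] = devs.get(i, 0.0) + yv * e
    for i, e in yd.items():
        devs[i] = devs.get(i, 0.0) + xv * e
    # Total first-order radii of each operand, err included.
    rx = sum(abs(e) for e in xd.values()) + xe
    ry = sum(abs(e) for e in yd.values()) + ye
    # |x|*err_y + |y|*err_x covers the linear-times-err products;
    # rx*ry bounds every dropped cross term, ex*ey included.
    err = abs(xv) * ye + abs(yv) * xe + rx * ry
    return (xv * yv, devs, err)

print(mul_err((2.0, {1: 0.1}, 0.0), (3.0, {2: 0.2}, 0.0)))
```

Only one extra number per form, but once a term lands in `err` its sign and
its dependency on the original noise sources are gone, which is the loss
described above.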
A middle ground between these two solutions is to keep the second-order
terms as new variables (so only a quadratic number of them) while using
an interval for dumping all the higher-order terms. A
commonly-encountered variant is to add a single new term for each
operation, so that first-order error compensations are properly handled.
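The one-fresh-term-per-operation variant can be sketched as follows (again
illustrative code of mine; the id counter is an assumption): the bound on the
dropped cross terms becomes the deviation of a brand-new noise symbol, so
later operations can still cancel against it at first order.

```python
import itertools

# Fresh-id generator; ids assumed disjoint from the input variables' ids.
_fresh = itertools.count(1000)

def mul_fresh(x, y):
    """Product attaching the second-order bound to a fresh noise symbol."""
    xv, xd = x
    yv, yd = y
    devs = {}
    for i, e in xd.items():
        devs[i] = devs.get(i, 0.0) + yv * e
    for i, e in yd.items():
        devs[i] = devs.get(i, 0.0) + xv * e
    # Bound the discarded cross terms and give that bound its own
    # independent symbol instead of merging it into a shared interval.
    rx = sum(abs(e) for e in xd.values())
    ry = sum(abs(e) for e in yd.values())
    if rx * ry != 0.0:
        devs[next(_fresh)] = rx * ry
    return (xv * yv, devs)

print(mul_fresh((2.0, {1: 0.1}), (3.0, {2: 0.2})))
```

Each product adds at most one symbol, so growth is linear in the number of
operations rather than combinatorial in the number of variables.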
Anyway, notice that, when using an interval as a wastebin for terms, you
can also use it to dump the rounding errors encountered during the
computations. That way, not only is your error analysis "reliable", it
is also guaranteed.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk