From: John Phillips (phillips_at_[hidden])
Date: 2006-06-09 11:55:17
Gerhard Wesp wrote:
> On Thu, Jun 08, 2006 at 06:02:54PM -0400, John Phillips wrote:
>
>>instead attempting to maximize numeric stability, and so choosing units
>>that keep things as close to order 1 as possible. Then, since real world
>
>
> Can you elaborate on this? In particular, an example of a simple problem that
> can be better conditioned by choosing the "right" units would interest
> me very much! (Assuming we can express numbers from about 1e-300 to
> 1e300).
>
> Regards
> -Gerhard
Try an experiment.
Using your favorite platform, compare the results of sin(x) for x in
the range [0, 2*pi] with those from [100*pi, 102*pi] or [10^13*pi,
(10^13+2)*pi]. In every implementation with which I'm familiar, the
computed value of the sine function drifts for large arguments. The
underlying approximation is a series expansion with a fixed number of
terms, and as the argument grows, the neglected later terms become more
and more important. So, the only way to stabilize it would be to check
the argument before the calculation and rescale it into the range [0,
2*pi]. However, that check imposes a cost on everyone using the
function, regardless of the size of their arguments, and so is
unacceptable to most users.
Thus, the choice is: start with arguments of order 1, sacrifice
accuracy, or sacrifice performance.
I hope that makes it clearer.
John
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk