# Boost :

From: Geoffrey Irving (irving_at_[hidden])
Date: 2006-06-09 12:09:25

On Fri, Jun 09, 2006 at 11:55:17AM -0400, John Phillips wrote:
> Gerhard Wesp wrote:
> > On Thu, Jun 08, 2006 at 06:02:54PM -0400, John Phillips wrote:
> >
> >>instead attempting to maximize numeric stability, and so choosing units
> >>that keep things as close to order 1 as possible. Then, since real world
> >
> >
> > Can you elaborate on this? In particular, an example of a simple problem that
> > can be better conditioned by choosing the "right" units would interest
> > me very much! (Assuming we can express numbers from about 1e-300 to
> > 1e300).
> >
> > Regards
> > -Gerhard
>
> Try an experiment.
> Using your favorite platform, compare the results of sin(x) for x in
> the range [0, 2*pi] with those from [100*pi, 102*pi] or [10^13*pi,
> (10^13+2)*pi]. In all cases with which I'm familiar, the underlying
> representation of the sine function drifts some for large arguments.
> This is because it is a series expansion, and the series only includes a
> set number of terms. As the argument gets big, the latter terms get more
> and more important. So, the only way to stabilize it would be to check
> the argument in advance of the calculation and rescale to the range [0,
> 2*pi]. However, that check implies a cost for everyone using the
> function, no matter what numbers they are using in the argument, and so
> is unacceptable to most users.
> Thus, the choice is: start with entries of order 1, sacrifice
> accuracy, or sacrifice performance.
> I hope that makes it more clear.
> John
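The experiment described above can be sketched as follows (a hedged illustration added here, not code from the original thread; the exact discrepancy will depend on the platform's libm):

```python
import math

# sin is periodic with period 2*pi, so shifting the argument by a large
# even multiple of pi should not change the result mathematically. In
# double precision it does, because the rounding error in the stored
# value of pi (and in the product) is magnified by the large multiplier.
x = 0.1
k = 1e13                            # large even multiple of pi
small = math.sin(x)                 # well-conditioned argument of order 1
large = math.sin(x + k * math.pi)   # mathematically equal to sin(x)

print(small, large, abs(large - small))
# On a typical IEEE-754 double platform the two results differ around
# the third decimal place, even though they are mathematically identical.
```

The point is that the inaccuracy enters before `sin` is ever called: the argument itself can only be represented to an absolute precision of about one ulp at magnitude 3e13, which is on the order of 1e-3.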

Arguments to sin are always unitless, so that particular example doesn't work
very well.

One easy way to get numbers that are too large is to start doing lazy
matrix analysis. This problem bit me a few days ago: