
From: Fernando Cacciola (fcacciola_at_[hidden])
Date: 2001-12-07 13:13:17


----- Original Message -----
From: rogeeff <rogeeff_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Thursday, December 06, 2001 4:13 PM
Subject: [boost] Re: Formal review: New Boost.Test Library

> --- In boost_at_y..., "Fernando Cacciola" <fcacciola_at_g...> wrote:
> >
> > ----- Original Message -----
> > From: rogeeff <rogeeff_at_m...>
> > To: <boost_at_y...>
> > Sent: Thursday, December 06, 2001 3:13 PM
> > Subject: [boost] Re: Formal review: New Boost.Test Library
> >
> >
> > > --- In boost_at_y..., "Fernando Cacciola" <fcacciola_at_g...> wrote:
> > >
> > > > > So, the system might provide functions to roughly estimate the
> > > > > expected error on the basis of the nature of the computation,
> > > > > the orders of magnitude of the numbers involved, and the
> > > > > properties of the current platform.
> > > > >
> > > > I agree that this would be *really* useful.
> > > > I'm not sure if there is any 'successful' research done in this
> > > > area, though.
> > > > Anyway, it's been quite some time since I last looked this up.
> > > > If I ever have the time, I would try to locate some recent reports
> > > > about this to see if an algorithm can be implemented.
> > >
> > > What about the algorithm implemented currently? It's documented on
> > > the Floating-Point Numbers comparison tools page. The floating-point
> > > comparison tools allow omitting the tolerance (the macro does not
> > > allow this, which is why number_of_rounding is not optional now).
> > >
> > I'm referring to an algorithm to find out the proper tolerance for a
> > given computation, which is what Ullrich was asking for.
> > Your implementations are of algorithms that 'scale' a given tolerance
> > to accommodate it to either the operands or the computation steps,
> > but the tolerance to be scaled must anyway be 'guessed' or fixed.
>
> In most cases you would want to 'scale' a ULP value.
>
Most cases, but not always.
Scaling ULPs (Units in the Last Place) is certainly a very good *default*;
that's why your code is very good as a default tool.
We were trying to get some heuristic for those other cases, such as
geometric computations, where this is not enough.
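
To make the distinction concrete, here is a minimal sketch (my own names,
not the Boost.Test interface): the tolerance is expressed as a count of
ULPs and scaled by the magnitude of the operands before comparing:

    #include <algorithm>
    #include <cmath>
    #include <limits>

    // Sketch only: 'n_ulps' plays the role of the tolerance to be scaled.
    template <typename T>
    bool close_within_ulps( T a, T b, unsigned n_ulps )
    {
        T scale = std::max( std::abs(a), std::abs(b) );
        T tol   = n_ulps * std::numeric_limits<T>::epsilon() * scale;
        return std::abs(a - b) <= tol;
    }

The open question is how to pick 'n_ulps' for a given computation, which
is precisely what such a heuristic would have to estimate.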

> >
> > BTW: In your document you say:
> >
> > "is based on the reliable comparison algorithm"...
> >
> > I wouldn't say 'reliable'.
> > Perhaps 'more confident'.
> In comparison with?
>
In comparison with the straightforward |a-b| <= tol, which is mentioned in
the introduction of your document.
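
To spell out the difference (illustration only, not the library's code):
the straightforward check uses a fixed absolute tolerance, whereas the
scaled form makes the tolerance relative to the operands:

    #include <algorithm>
    #include <cmath>

    // Absolute check: for a fixed 'tol' the outcome depends entirely on
    // the magnitude of a and b.
    bool absolute_close( double a, double b, double tol )
    {
        return std::fabs(a - b) <= tol;
    }

    // Relative check: the tolerance scales with the operands, which is
    // what makes it 'more confident' across magnitudes.
    bool relative_close( double a, double b, double tol )
    {
        return std::fabs(a - b) <= tol * std::max( std::fabs(a), std::fabs(b) );
    }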

> >
> >
> >
> > "The rounding error for a floating-point value should not exceed
> one half of
> > the std::numeric_limits<T>::epsilon(). "
> >
> > Actually, the error is for the "operation", not the "value".
> > Besides, and more importantly, this rule holds only with respect to
> > *arithmetic* operations: + - * /
> > It doesn't hold, for instance, if you call 'sqrt()'.
>
> If you write float f = 0.1;
> (f - real 0.1) could be different from 0, but should not exceed one
> half of the std::numeric_limits<T>::epsilon()

Right, because the sources of error are (1) the conversion and (2) the
'-' operation.
Error bounds apply to operations, including conversions, not to values.

> A more correct statement would be: for values and arithmetic operations.
>
IMO, "for (standard conformant) conversions and arithmetic operations".

Fernando Cacciola
Sierra s.r.l.
fcacciola_at_[hidden]
www.gosierra.com

