
From: Fernando Cacciola (fcacciola_at_[hidden])
Date: 2001-12-12 14:34:58


----- Original Message -----
From: rogeeff <rogeeff_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Tuesday, December 11, 2001 7:13 PM
Subject: [boost] Re: Floating Point comparisons

> --- In boost_at_y..., "Fernando Cacciola" <fcacciola_at_g...> wrote:
> >
> > ----- Original Message -----
> > From: Paul A. Bristow <boost_at_h...>
> > To: <boost_at_y...>
> > Sent: Tuesday, December 11, 2001 7:00 PM
> > Subject: RE: [boost] Re: Floating Point comparisons
> >
> >
> > > Are you sure we are not duplicating the IEEE compliance tests,
> > > for example a program called 'paranoia' whose source I could
> locate.
> > > (I tested MSVC with this and it passed).
> > >
> > > (It does not extend to trig etc functions, so the work discussed
> is still
> > > useful).
> > >
> > > Paul
> > >
> > AFAICT, the test programs that Gennadiy made and all the discussions
> > thereafter are intended to "understand" the behavior of floating
> point
> > computations, not to test a particular implementation.
> >
> > The intention, if I get it right, is to be able to write a document
> and
> > implement some C++ tools to guide and help average programmers to
> write
> > numerical oriented applications.
> >
> > Fernando Cacciola
> > Sierra s.r.l.
> > fcacciola_at_g...
> > www.gosierra.com
>
> Fernando,
>
> please take a look at the floating point comparison page in the update
> of the Boost.Test Library I have just uploaded. Is my understanding
> correct (better) now?
>
> Gennadiy.
>
I've been re-reading the relevant sections of my literature on floating
point and also scanning the net for online CS course notes on the subject.
Unfortunately, it seems that the matter of the upper limit on a rounding
error isn't uniformly understood by writers and professors, since I've found
contradictory statements.

I'm not a numerical analyst, so my interpretation of the error bounds might
be wrong.

I base my statements on the following excellent paper (actually, on my
interpretation of it :-):

"What Every Computer Scientist Should Know About Floating-Point Arithmetic",
by David Goldberg,

which can be found at:

http://docs.sun.com/htmlcoll/coll.648.2/iso-8859-1/NUMCOMPGD/ncgTOC.html

The comments:

1)
At the beginning, you mentioned "The simple solution like abs(f1-f2) < e"...
I recommend that you change '<' to '<=' here.
The reason is that if anyone reading the document thinks about using this
"simple solution", he/she will be surprised to find that it fails to detect
'exact equality' when fed with 'e=0' (because abs(2.3-2.3) < 0 happens to be
false).
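
To make the point concrete, here is a tiny sketch (the helper names are
mine, not part of Boost.Test) showing the difference with a zero tolerance:

#include <cmath>
#include <iostream>

// Two hypothetical helpers, differing only in the comparison operator
// used by the "simple solution".
bool close_lt(double f1, double f2, double e)
  { return std::fabs(f1 - f2) <  e; }
bool close_le(double f1, double f2, double e)
  { return std::fabs(f1 - f2) <= e; }

int main()
{
    double a = 2.3, b = 2.3;
    // With a zero tolerance, the '<' form rejects even exact equality,
    // while the '<=' form accepts it.
    std::cout << close_lt(a, b, 0.0) << '\n';  // prints 0 (false)
    std::cout << close_le(a, b, 0.0) << '\n';  // prints 1 (true)
    return 0;
}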

2)
Decimal-to-binary conversions are not required to be exactly rounded, which
means they are not guaranteed to have an error bound of half an ulp.
There are, however, requirements on the number of decimal digits needed to
accurately recover a value's binary representation, but I think this is
outside the scope of the document, so I would put decimal-to-binary
conversions in the 'unpredictable' bag.
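
A small aside that might help readers see why I'd treat these conversions
cautiously: the decimal literal 0.1 cannot be represented exactly in binary
at all, and how close the stored value comes to 1/10 depends on the quality
of the conversion. A sketch of mine, not from the document:

#include <iomanip>
#include <iostream>

int main()
{
    double d = 0.1;  // a decimal-to-binary conversion happens here
    // Printing with ~20 significant digits exposes the binary value that
    // was actually stored, which can only approximate 1/10.
    std::cout << std::setprecision(20) << d << '\n';
    // Typically prints something like 0.10000000000000000555
    return 0;
}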

3)
There still seems to be some confusion about the meaning of the error bound
of 1/2 ULP.

The "1/2 ULP" that bounds rounding errors doesn't translate directly into
"1/2 * epsilon".
"Relative error" and "1/2 ULP" are not exactly interchangeable, because the
relative error introduces a division.
In other words, when you consider "relative errors", you can't say that they
are bounded by (0.5 * epsilon), but rather by (k * epsilon), where k depends
on the operation (k=2, for instance, for subtraction and addition).
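
If it helps, here is how I picture the difference, as a sketch of my own
(using std::nextafter): an ulp is an absolute spacing that scales with the
magnitude of the value, while epsilon is just the ulp of 1.0, so turning a
1/2-ulp bound into a relative error involves a division by the value itself:

#include <cmath>
#include <iostream>
#include <limits>

// ulp(x): distance from x to the next representable double above it.
// It is an *absolute* spacing that grows with the magnitude of x,
// whereas epsilon is simply ulp(1.0).
double ulp(double x)
{
    return std::nextafter(x, std::numeric_limits<double>::infinity()) - x;
}

int main()
{
    double eps = std::numeric_limits<double>::epsilon();
    std::cout << "epsilon     = " << eps         << '\n';  // ulp of 1.0
    std::cout << "ulp(1.0)    = " << ulp(1.0)    << '\n';  // equals epsilon
    std::cout << "ulp(1000.0) = " << ulp(1000.0) << '\n';  // much larger
    // A rounding error of 1/2 ulp(x) therefore corresponds to a relative
    // error of roughly 0.5 * ulp(x) / x, which depends on x and is not
    // simply 0.5 * epsilon.
    return 0;
}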

Also, I mentioned that the only true source of errors is 'rounding' (w.r.t.
algebraic operations). This is strictly correct because of the requirement
of exactly rounded operations. However, the statement by itself can be
confusing without proper explanation, because in most of the floating point
literature a user will find statements such as 'the relative error of this
operation is <= 2e'.

In conclusion, I recommend the following:

Replace the statement:

 "The first three operations proved to have a relative rounding error that
does not exceed 1/2 * "machine epsilon value" for the appropriate floating
point type (represented by std::numeric_limits<FPT>::epsilon())."

with something intentionally weaker such as:

"The relative error of a floating point conversion or arithmetic operation
is usually around or below 1 (one) "machine epsilon" for the appropriate
floating point type (represented by std::numeric_limits<FPT>::epsilon())"

Remove the first two sentences, which start with "All theorems about..." and
"This means that the operation error", and start the paragraph directly with
"In order for numerical software".

4)
I don't understand the meaning of the statement

  "The underflow/overflow or division-by-zero errors cause unpredictable
errors."

They *are* errors, so they can't "cause errors", predictable or not.

5)

The words "strong" and "weak" used by the argument for "close_at_tolerance"
are not directly related to the referred inequations.
I suggest using those words also at the beginning of the document to
establish the relation.
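
To make the relation explicit, this is how I read the two inequalities (the
function name anticipates my suggestion in point 6 below; this is a sketch
of my interpretation, not the library's code):

#include <cmath>

// Tying the words "strong" and "weak" to the formulas:
//
//   strong : |f1 - f2| <= tol*|f1|  AND  |f1 - f2| <= tol*|f2|
//   weak   : |f1 - f2| <= tol*|f1|  OR   |f1 - f2| <= tol*|f2|
//
bool equal_with_tolerance(double f1, double f2, double tol, bool strong)
{
    double diff = std::fabs(f1 - f2);
    bool   d1   = diff <= tol * std::fabs(f1);
    bool   d2   = diff <= tol * std::fabs(f2);
    return strong ? (d1 && d2) : (d1 || d2);
}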

6)
I dislike the name 'close_at_tolerance' because the English term 'close' has
many meanings.
I would rather name it: 'equal_with_tolerance'.

7)
You could add the reference I've used.

Fernando Cacciola
Sierra s.r.l.
fcacciola_at_[hidden]
www.gosierra.com

