
From: Paul A. Bristow (boost_at_[hidden])
Date: 2001-12-12 18:26:44


You might also find helpful:

3. D. Priest, On Properties of Floating Point Arithmetics: Numerical
Stability and the Cost of Accurate Computations. Ph.D. dissertation,
Berkeley, 1992. More references in
http://www.cs.wisc.edu/~shoup/ntl/quad_float.txt.

15. William D. Clinger, How to Read Floating-Point Accurately. In
Proceedings of the 1990 ACM Conference on Principles of Programming
Languages, pages 92-101. ftp://ftp.ccs.neu.edu/pub/people/will/howtoread.ps
Abstract: Consider the problem of converting decimal scientific notation for
a number into the best binary floating-point approximation to that number,
for some fixed precision. This problem cannot be solved using arithmetic of
any fixed precision. Hence the IEEE Standard for Binary Floating-Point
Arithmetic does not require the result of such a conversion to be the best
approximation.
This paper presents an efficient algorithm that always finds the best
approximation. The algorithm uses a few extra bits of precision to compute
an IEEE-conforming approximation while testing an intermediate result to
determine whether the approximation could be other than the best. If the
approximation might not be the best, then the best approximation is
determined by a few simple operations on multiple-precision integers, where
the precision is determined by the input. When using 64 bits of precision to
compute IEEE double precision results, the algorithm avoids higher-precision
arithmetic over 99% of the time.

17. Cephes Mathematical Library, Stephen L. B. Moshier,
www.netlib.org/cephes/ and
http://people.ne.mediaone.net/moshier/index.html - in C. A Perl interface
and a DOS calculator are also available.
Methods and Programs for Mathematical Functions, Stephen L. B. Moshier,
J. Wiley (1989), ISBN 0-7458-0289-3, 0-470-21609-3, 0-7458-0805-0.

Paul

Dr Paul A Bristow, hetp Chromatography
Prizet Farmhouse
Kendal, Cumbria
LA8 8AB UK
+44 1539 561830
Mobile +44 7714 33 02 04
mailto:pbristow_at_[hidden]

> -----Original Message-----
> From: Fernando Cacciola [mailto:fcacciola_at_[hidden]]
> Sent: Wednesday, December 12, 2001 7:35 PM
> To: boost
> Subject: RE: [boost] Re: Floating Point comparisons
>
>
>
> ----- Original Message -----
> From: rogeeff <rogeeff_at_[hidden]>
> To: <boost_at_[hidden]>
> Sent: Tuesday, December 11, 2001 7:13 PM
> Subject: [boost] Re: Floating Point comparisons
>
>
> > --- In boost_at_y..., "Fernando Cacciola" <fcacciola_at_g...> wrote:
> > >
> > > ----- Original Message -----
> > > From: Paul A. Bristow <boost_at_h...>
> > > To: <boost_at_y...>
> > > Sent: Tuesday, December 11, 2001 7:00 PM
> > > Subject: RE: [boost] Re: Floating Point comparisons
> > >
> > >
> > > > Are you sure we are not duplicating the IEEE compliance tests,
> > > > for example a program called 'paranoia' whose source I could
> > locate.
> > > > (I tested MSVC with this and it passed).
> > > >
> > > > (It does not extend to trig etc functions, so the work discussed
> > is still
> > > > useful).
> > > >
> > > > Paul
> > > >
> > > AFAICT, the test programs that Gennadiy made and all the discussions
> > > thereafter are intended to "understand" the behavior of floating
> > point
> > > computations, not to test a particular implementation.
> > >
> > > The intention, if I get it right, is to be able to write a document
> > and
> > > implement some C++ tools to guide and help average programmers to
> > write
> > > numerically oriented applications.
> > >
> > > Fernando Cacciola
> > > Sierra s.r.l.
> > > fcacciola_at_g...
> > > www.gosierra.com
> >
> > Fernando,
> >
> > please take a look at the floating point comparison page in the update
> > of the Boost.Test Library I have just uploaded. Is my understanding
> > correct (better) now?
> >
> > Gennadiy.
> >
> I've been re-reading the relevant sections of my literature on floating
> point and also scanning the net for online CS course notes on the subject.
> Unfortunately, it seems that the matter of the upper limit of a rounding
> error isn't uniformly understood by writers and professors, since I've
> found contradictory statements.
>
> I'm not a numerical analyst, so my interpretation of the error bounds
> might be wrong.
>
> I base my statements on the following excellent paper (actually, on my
> interpretation of it :-)
>
> "What every computer scientist should know about floating-point
> arithmetic".
> D Goldberg:
>
> which can be found at:
>
> http://docs.sun.com/htmlcoll/coll.648.2/iso-8859-1/NUMCOMPGD/ncgTOC.html
>
>
>
> The comments:
>
> 1)
> At the beginning, you mention "the simple solution like abs(f1-f2) < e"
> ....
> I recommend that you change '<' to '<=' here.
> The reason is that anyone reading the document who tries this "simple
> solution" will be surprised to find that it fails to detect exact
> equality when fed e=0 (because (2.3-2.3) < 0 happens to be false).
>
> 2)
> Decimal-to-binary conversions are not required to be exactly rounded,
> which means they are not guaranteed to have an error bound of half an
> ulp.
> There are, however, requirements on the number of decimal digits needed
> to accurately recover the binary representation, but I think this is out
> of the scope of the document, so I would put decimal-to-binary
> conversions in the 'unpredictable' bag.
>
> 3)
> There still seems to be some confusion about the meaning of the error
> bound of 1/2 ulp.
>
> The "1/2 ulp" that bounds rounding errors doesn't translate directly
> into "1/2 * epsilon".
> "Relative error" and that 1/2 ulp are not exactly interchangeable,
> because the relative error introduces a division.
> In other words, when you consider relative errors, you can't say that
> they are bounded by (0.5 * epsilon), but rather by (k * epsilon), where
> k depends on the operation (k = 2, for instance, for subtraction and
> addition).
>
> Also, I mentioned that the only true source of errors is rounding
> (w.r.t. algebraic operations). This is strictly correct because of the
> requirement of exactly rounded operations. However, the statement by
> itself can be confusing without proper explanation, because in most
> floating point bibliography a user will find statements such as 'the
> relative error of this operation is <= 2e'.
>
> In conclusion I recommend:
>
> Replace the statement:
>
> "The first three operations proved to have a relative rounding error that
> does not exceed 1/2 * "machine epsilon value" for the appropriate floating
> point type (represented by std::numeric_limits<FPT>::epsilon())."
>
> with something intentionally weaker such as:
>
> "The relative error of a floating point conversion or arithmetic operation
> is usually around or below 1 (one) "machine epsilon" for the appropriate
> floating point type (represented by std::numeric_limits<FPT>::epsilon())"
>
> Remove the first two sentences that start with "All theorems about..." and
> "This means that the operation error".
> Start the paragraph directly with "In order for numerical software"
>
>
> 4)
> I don't understand the meaning of the statement
>
> "The underflow/overflow or division-by-zero errors cause unpredictable
> errors."
>
> They *are* errors so they can't "cause errors", predictable or not.
>
> 5)
>
> The words "strong" and "weak" used by the argument for
> "close_at_tolerance"
> are not directly related to the inequalities they refer to.
> I suggest using those words also at the beginning of the document to
> establish the relation.
>
> 6)
> I dislike the name 'close_at_tolerance' because the English term
> 'close' has
> many meanings.
> I would rather name it: 'equal_with_tolerance'.
>
> 7)
> You could add the reference I've used.
>
> Fernando Cacciola
> Sierra s.r.l.
> fcacciola_at_[hidden]
> www.gosierra.com
>
>
>


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk