
From: Kevin Lynch (krlynch_at_[hidden])
Date: 2001-09-05 12:37:45

In a previous message:

> Date: Tue, 04 Sep 2001 10:45:37 -0000
> From: boost_at_[hidden]
> Subject: Math Functions

> 1 My instinct is to only produce a double version(64 bit reals),
> rather than template versions.

I think you are missing the forest for the trees ... the important part
is getting the interface right, and that interface will depend on the
floating point type, so templates are a must. The implementation then
follows more or less for free: designing the interface for such a
library is the hard part, while the implementations have been done a
thousand times before.
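To make the point concrete, here is a minimal sketch of such a templated interface. The name erf_approx, the choice of a full specialization for double, and the particular rational approximation (Abramowitz & Stegun 7.1.26) are all illustrative assumptions, not a proposed Boost design:

```cpp
#include <cmath>

// The interface is templated on the floating point type, even if only
// one specialization is provided initially; other types (long double,
// interval, arbitrary precision) can be added without changing callers.
template <typename Real>
Real erf_approx(Real x);

// Double-precision specialization using an illustrative rational
// approximation (max absolute error about 1.5e-7).
template <>
double erf_approx<double>(double x)
{
    const double sign = (x < 0.0) ? -1.0 : 1.0;
    x = std::fabs(x);
    const double a1 = 0.254829592, a2 = -0.284496736, a3 = 1.421413741,
                 a4 = -1.453152027, a5 = 1.061405429, p = 0.3275911;
    const double t = 1.0 / (1.0 + p * x);
    const double y =
        1.0 - (((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t)
                  * std::exp(-x * x);
    return sign * y;
}
```

Code written against erf_approx<Real> then works unchanged when a higher-precision specialization is dropped in later.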

You are free, of course, to provide only a single implementation (a
double version, for example) because you only have the expansion
constants for that precision. But the possibility of providing
implementations for OTHER floating point types (for example, long
double, an interval type, an arbitrary precision type, etc.) should not
be precluded. I can't stress enough that (at least as far as I am
concerned) a good, well-designed, consistent interface with a reasonable
implementation under the hood is MUCH more important than a whiz-bang
implementation with a terrible interface ... the implementation is
(relatively) easily fixed, but you are really stuck with a bad
interface. And the proliferation of numerical libraries with terrible
interfaces (including most of my own homegrown solutions :-) exceeds
that of almost any other problem domain that I am aware of.

You will also need to consider implementation issues other than the
floating point type you use (in fact, that may be the least important
choice!) ... many implementation methods for these "special functions"
that you are talking about are NOT valid for arbitrary ranges of the
input parameters; in particular, you will need to document things like:
the valid range of input parameters; the algorithm used in the
implementation and the method used to derive all of the expansion
coefficients (do you use a Taylor expansion or perhaps a Pade
expansion?); the precision guarantees ("This implementation is +/- 1% of
the correct result for input values between 10^-5 and 10^5, and +/- 5%
for values between 10^-7 and 10^6"); and how you return "error terms"
(many algorithms allow you to calculate their likely accuracy, and that
data is as important to calculations as the result itself). Many
special functions have a number of possible implementation strategies;
you might want to ensure that there is some way to choose
implementations via template parameters or function arguments. For
example, you might have a Taylor expansion and a Pade expansion for
Gamma(x), one of which is much faster but less accurate... it might be
nice to provide access to both of them. Again, this type of interaction
must be designed into the interface, not thrown in as an afterthought.
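One way to expose that choice is through a tag type passed as a template parameter. Everything below is a hypothetical sketch: the tag names, my_lgamma, and the two-term Stirling formula are assumptions for illustration, not an existing interface:

```cpp
#include <cmath>

struct fast_tag {};      // cheaper, less accurate
struct accurate_tag {};  // slower, more accurate

// Fast variant: two-term Stirling approximation to log(Gamma(x)),
// reasonable only for moderately large x.
inline double lgamma_impl(double x, fast_tag)
{
    const double log_two_pi = 1.8378770664093454;  // ln(2*pi)
    return (x - 0.5) * std::log(x) - x + 0.5 * log_two_pi;
}

// Accurate variant: defer to the C library's lgamma.
inline double lgamma_impl(double x, accurate_tag)
{
    return std::lgamma(x);
}

// The default policy is the accurate one; callers who want speed
// opt in explicitly: my_lgamma<fast_tag>(x).
template <typename Policy = accurate_tag>
double my_lgamma(double x)
{
    return lgamma_impl(x, Policy());
}
```

The dispatch is resolved at compile time, so the unchecked common path pays nothing for the flexibility.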

> 2 Arguments
> Many need checks on validity of arguments.

The standard approach in most numerical libraries that I have used is to
leave out argument checking entirely ... when you use these functions,
you typically want the right answer very quickly, and don't want the
penalty of validity checks. You can't produce a perfectly safe
implementation of the special functions (one that never produces an
incorrect result) anyway, so why bother checking things that are
unlikely to be wrong in the usual case? If you provide some checks,
users may come to expect that a successful function call guarantees a
valid result; that is, of course, an unwarranted assumption, but it is
exactly the kind most users make.
The standard library math functions are an example of this; there is no
check, for example, that the argument to exp() is small enough to
prevent overflow in any implementation that I've used, and there is no
language in the standard suggesting it. There is also no guarantee in
the standard that special functions will not encounter internal overflow
even when both the inputs and outputs are perfectly representable,
although most implementations are correctly coded (see abs(complex<>)
for example).

I would suggest that you take one of two approaches:

1) Document the valid arguments for your implementation, and don't check
them. This will serve all users at least as well as the standard
library currently does.
2) Document the valid arguments, don't check them in the function body,
and provide an additional function with a longer name (erf_safe() or
somesuch) that performs the checks. Most people won't use it, but some
might be thankful for its presence. But don't contaminate the names of
the unchecked functions, or you will cut your potential user base down
to very small numbers.

> 3 Handling math errors like overflow/underflow.

Again, I would say: don't do anything. Users will not want the extra
overhead associated with such checking. Allow the error value to
propagate out in the return value (if beta(x,y) returns NaN, so be it).
This is what users will expect and desire in the general case. Again,
if you want to provide an additional "_safe" function that will detect
and protect, feel free to do so, but you probably don't want to drive
away a large proportion of your possible user base by imposing
unnecessary performance penalties.
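The "do nothing" policy leans on IEEE 754 arithmetic: a NaN or infinity produced anywhere in the computation simply propagates to the return value, where the caller can test for it. A minimal sketch (the name log_beta is an assumption for illustration):

```cpp
#include <cmath>
#include <limits>

// Compute unconditionally; no branch is spent detecting bad input.
// For x <= 0 the lgamma calls yield infinity, and a NaN argument
// flows straight through to the result.
inline double log_beta(double x, double y)
{
    return std::lgamma(x) + std::lgamma(y) - std::lgamma(x + y);
}
```

A caller who cares can write `if (std::isnan(result)) ...` after the fact; everyone else pays nothing.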

> 5 When checking for an argument value like zero or unity,
> should I also allow values within an epsilon of it?

You should (almost) never be checking for equality of floating point
types! If you do so, your implementation is very fragile, and not
likely to be used. In the few cases where you actually do need such a
check, the algorithm you are implementing will dictate the size of
epsilon ... too small and your calculation will be wrong; too big and it
will also be wrong. You can (almost) always safely check INequality,
but not equality, and most of the standard calculation algorithms use
inequalities instead of equalities.
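A typical example of an algorithm-dictated inequality is the termination test of a series summation. The sketch below sums the Taylor series for exp(x); the function name and the relative tolerance of 1e-16 are illustrative assumptions:

```cpp
#include <cmath>

// Sum exp(x) = sum_{n>=0} x^n / n!, stopping on an INequality: quit
// when the next term can no longer change the sum at the working
// precision. No floating point equality test appears anywhere.
double exp_series(double x)
{
    double term = 1.0;  // current term x^n / n!
    double sum = 1.0;   // running partial sum
    for (int n = 1; n < 200; ++n) {
        term *= x / n;
        sum += term;
        if (std::fabs(term) <= std::fabs(sum) * 1e-16)
            break;  // term is below the noise floor of the result
    }
    return sum;
}
```

Testing `term == 0.0` instead would be fragile: for many arguments the term never reaches exactly zero, and the loop would run to its iteration cap for no gain in accuracy.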

I hope this helps and hasn't been too disheartening ... implementing
efficient, widely used, portable numerical libraries is REALLY HARD.
There is a reason that I haven't already written and submitted stuff
like this myself :-) But I wish you luck, because we really do need
this stuff.

Kevin Lynch voice: (617) 353-6065
Physics Department Fax: (617) 353-6062
Boston University office: PRB-565
590 Commonwealth Ave. e-mail: krlynch_at_[hidden]
Boston, MA 02215 USA
