
From: Jens Maurer (Jens.Maurer_at_[hidden])
Date: 2001-09-04 16:44:34


boost_at_[hidden] wrote:
> I am working on some math functions like erf, beta, gamma,
> and have some questions on preferred practice

There hasn't been much practice with "from-scratch" math
functions in boost. We have a few functions in
boost/math/special_functions.hpp which are (easy) combinations of
existing transcendental functions such as exp() or sin().
However, your description seems to indicate that you're starting
from a Taylor series.

> 1 My instinct is to only produce a double version (64-bit reals),
> rather than template versions.
> This is because the polynomial coefficients and algorithms
> are only readily available for double.
> Most people will use these.

Is your implementation extensible? Can I later add a "long double"
or "my_really_long_double" version of the respective function?
Is it documented how I can compute the coefficients with some
unlimited-precision tool?
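One way to keep a double-only implementation open for later extension is to route the coefficients through a traits template that can be specialized for "long double" or a user-defined type once someone recomputes the constants. The names below (`my_poly_coefficients`, `evaluate_poly`) are purely illustrative, not an existing Boost interface; this is a minimal sketch of the idea:

```cpp
#include <cstddef>

// Hypothetical traits class: the primary template carries the
// double-precision coefficients; a user can later specialize it for
// long double (or a multiprecision type) with constants recomputed
// by an unlimited-precision tool.
template<typename T>
struct my_poly_coefficients {
    static const std::size_t count = 3;
    static T coeff(std::size_t i) {
        // Coefficients of an illustrative polynomial 1 + 2x + 3x^2.
        static const T c[count] = { T(1), T(2), T(3) };
        return c[i];
    }
};

// Horner evaluation, generic in T, so the algorithm itself never
// needs to change when a new precision is added.
template<typename T>
T evaluate_poly(T x) {
    T result = T(0);
    for (std::size_t i = my_poly_coefficients<T>::count; i-- > 0; )
        result = result * x + my_poly_coefficients<T>::coeff(i);
    return result;
}
```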

> 2 Arguments
>
> Many need checks on validity of arguments.
> For example, probability p must be 0 <= p <= 1.
>
> What should I do after the check fails?
> throw standard exception out_of_range?

That's often a longish debate. Documented preconditions
(such as the allowed input range) are sometimes checked with
"assert". That halts the program on failure. However, I think
for the maths functions, I'd rather see an exception instead
of an assert(), because it's very tedious to check all
preconditions in longish expressions.

On the other hand, the IEEE floating-point standard provides a
"NaN" which could be used as the result of an undefined operation.
The idea is that no operation throws or otherwise signals an
error, but you can check the result at the end and decide whether
a problem happened during the calculation. I could envision
that both alternatives are useful in different contexts.

> Should I distinguish between invalid_argument for wrong sign,
> and out_of_range for too big?

I'd just use "range_error" for all. It's a "runtime_error"
and I think that's the right category for that kind of problem.
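Concretely, the precondition check for a probability argument could look like the sketch below; the function name is hypothetical, the point is just that a single std::range_error covers both the wrong-sign and the too-big case:

```cpp
#include <stdexcept>

// Hypothetical helper: a probability argument must satisfy
// 0 <= p <= 1; any violation is reported with std::range_error,
// which is a std::runtime_error.
double check_probability(double p) {
    if (p < 0.0 || p > 1.0)
        throw std::range_error("probability argument outside [0, 1]");
    return p;   // the actual computation would follow here
}
```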

> I propose to provide the value of the offending argument,
> and perhaps an indication of which argument (many functions have
> several).

You could transport copies of the faulty arguments in the
exception object.
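A sketch of what that could look like; the class name and members are made up for illustration, not an established design:

```cpp
#include <stdexcept>

// Illustrative exception type: derives from std::range_error and
// copies the offending argument value so the caller can inspect it
// when the exception is caught.
class bad_probability : public std::range_error {
public:
    explicit bad_probability(double value)
        : std::range_error("probability argument outside [0, 1]"),
          value_(value) {}
    double value() const { return value_; }
private:
    double value_;
};
```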

> Does anyone have any specially good examples of doing this?

Do it the natural way first and present a specific idea to
talk about.

> 3 Handling math errors like overflow/underflow.

If you go the "exception" route, appropriate overflow_error
and underflow_error exception classes are at your disposal
in the standard library.
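As a sketch of the exception route (the helper name is hypothetical): under IEEE arithmetic an overflowed intermediate result becomes infinity, which compares greater than max(), so it can be caught and reported with the standard class:

```cpp
#include <limits>
#include <stdexcept>

// Illustrative guard: detect an overflowed intermediate result
// (infinity under IEEE arithmetic compares greater than max())
// and report it with std::overflow_error.
double require_finite(double x) {
    if (x > std::numeric_limits<double>::max())
        throw std::overflow_error("intermediate result overflowed");
    return x;
}
```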

> Checking for denormalisation works OK.

Note that you're relying on IEEE semantics if you ask for
denormalization. I think that's OK for today's environments,
but you should document this requirement very clearly.
Boost may run on platforms where IEEE numerics are not
available, and people need to know that your library doesn't
work properly on those platforms.

> product = std::numeric_limits<double>::min() / 2.; // should be denormalised!
> cout << "std::numeric_limits<double>::min() / 2. " << product
>      << ((product > 0 && product < std::numeric_limits<double>::min())
>          ? " IS denormalised!" : " is NOT denormalised") << nl;

Looks good.

> And so does checking for infinity:
>
> product = std::numeric_limits<double>::infinity();
> if (product == std::numeric_limits<double>::infinity() )

What about checking for > std::numeric_limits<double>::max() ?
Hm... No, == infinity() looks nicer.

> But can't use this for NaNs???
>
> if (product == std::numeric_limits<double>::quiet_NaN())
> { // If compare with standard limits quiet_NaN, then result is ALWAYS a NaN!!!

No, the result is always "false". This is a defining property
of NaNs: they are the only quantities for which x == x is false.
Thus, for IEEE numerics, you can say
   if(x != x)
        throw "x is NaN";

Do make sure to put your compiler into "strict IEEE conformance mode",
otherwise a greedy optimizer may throw away the "x != x" and replace
it by "false".

> 5 When bad things are detected, should I throw exceptions derived
> from the
> standard exceptions like
>
> class My_overflow_error : public std::overflow_error ...
> class My_range_error : public std::range_error

I'd simply use the standard exceptions and avoid defining
my own types except when I'd like to copy the offending function
argument values to the exception object. If you define your
own types, it's a good idea to derive from the standard exception
objects.

> 5 When checking for an argument value like zero or unity,
> should I also allow within an epsilon of it?
>
> if (p <= 0) ...
>
> or
>
> if (p <= 0 + std::numeric_limits<double>::epsilon() ) ...

I'd say no. If you have a rounding error, you should pass it on
and not try to be clever. In particular since the "epsilon" sized
quantity could be there on purpose. Also note that epsilon() is a
very large quantity (about 10^-16), so your check above flattens
all values between min() ( = 10^-308 ) and epsilon() ( = 10^-16 ).
Very surprising.
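To make the surprise concrete, here is the questioner's check written out as a predicate (the function name is mine): epsilon() is the gap between 1.0 and the next representable double, about 2.2e-16, while min() is about 2.2e-308, so the test treats nearly 300 decades of perfectly representable nonzero values as zero:

```cpp
#include <limits>

// Hypothetical predicate reproducing the proposed "within an
// epsilon of zero" check -- shown here to illustrate how much it
// flattens, not as recommended practice.
inline bool sloppy_is_zero(double p) {
    return p <= std::numeric_limits<double>::epsilon();
}
```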

> (The logic of this is that computation error in previous stages might
> result in this instead of an exact zero).

Tough. You use computer arithmetic, you get rounding errors.
That's the way of life you chose.

> 6 If the result is infinity (for limit values like 0 and 1, for example)
> should I return std::numeric_limits<double>::infinity()
>
> or std::numeric_limits<double>::max()

If you're dealing with IEEE numerics on the platform, infinity()
is the proper way to go (but only for exact 0, e.g. in case of 1/x).
For non-IEEE platforms, max() may be sensibly defined while
infinity() isn't.
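Both cases can be covered in one place by asking numeric_limits whether an infinity exists; a minimal sketch using 1/x as the example pole:

```cpp
#include <limits>

// Sketch: at the exact pole x == 0, return infinity() where the
// platform provides one, and fall back to max() where it doesn't
// (non-IEEE numerics). Shown for 1/x as the simplest example.
double reciprocal(double x) {
    if (x == 0.0)
        return std::numeric_limits<double>::has_infinity
            ? std::numeric_limits<double>::infinity()
            : std::numeric_limits<double>::max();
    return 1.0 / x;
}
```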

Jens Maurer


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk