Subject: Re: [boost] GGL Review
From: Barend Gehrels (barend_at_[hidden])
Date: 2009-11-17 07:00:20

> Is this why one group of users accepts that his input is uncertain and accepts
> (slightly) uncertain result
There are many discussions going on about (numerical) robustness and the
problems it can cause in libraries. We want to clarify our approach
once again, and give some background.

People seem to understand different things by the word robustness, as
pointed out on this list:
a) numerical floating point issues
b) algorithmic bugs
c) those two are distinct, but also related

I'll first explain c) in more detail. Intersections were in the past
often done using perturbations; see for example Greiner & Hormann (a
variant of Weiler-Atherton). This makes some lives easier: easier for
the implementer, but not for the user. It is *not* correct behaviour,
it results (on a large scale) in small deviations, and it might actually
cause bugs if moved vertices happen to cause new intersections (see e.g.
Kim & Kim). All this is *not* done in GGL. We do *not* follow this
approach of perturbing vertices. We also do not follow the approach of
Kim & Kim.
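
To illustrate what is meant (a hypothetical sketch with made-up names,
NOT GGL code): a perturbating clipper nudges a vertex that lies exactly
on another segment by a tiny epsilon, so the degenerate case never
reaches the algorithm, at the price of silently changing the input.

// Hypothetical sketch of the perturbation trick, NOT what GGL does.
// A vertex lying (nearly) on segment a-b is nudged by a tiny epsilon,
// so the clipping algorithm never sees the degenerate case.
#include <cmath>

struct point { double x, y; };

void perturb_if_degenerate(point& p, point const& a, point const& b)
{
    double const eps = 1e-9;
    // signed area of triangle (a, b, p); zero means p is collinear with a-b
    double det = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    if (std::fabs(det) < eps)
    {
        p.x += eps; // the vertex silently moves: easier for the
        p.y += eps; // implementer, but not correct for the user
    }
}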

These perturbations might be one cause of the fear people apparently
have of FP intersections.

Still about c): another approach, found in widely used GIS packages, is
the tolerance value. For intersecting or cleaning, a tolerance can be
specified (and there is a default tolerance as well), and all points
within that tolerance are merged. This results in moving vertices. This
tolerance approach is also *not* followed by GGL.
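
For illustration, a minimal sketch of such tolerance behaviour
(hypothetical, not code from any particular package): snapping
coordinates to a grid of the tolerance size merges all points within
that tolerance, and thereby moves vertices.

// Hypothetical sketch of the tolerance approach of some GIS packages,
// NOT followed by GGL: coordinates are snapped to a grid with cell size
// 'tolerance', so points within that tolerance collapse into one.
#include <cmath>

struct point { double x, y; };

point snap(point const& p, double tolerance)
{
    point result;
    result.x = std::floor(p.x / tolerance + 0.5) * tolerance;
    result.y = std::floor(p.y / tolerance + 0.5) * tolerance;
    return result; // the vertex has moved; deviations can accumulate
}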

This tolerance behaviour is probably another cause of problems people
have had in the past (and in the present!) using FP intersections.

About b): the perturbation option was invented to make programmers'
lives easier, but it can cause bugs, as explained. If it is *not* used,
the programmer's life is more difficult, and that can also lead to bugs.
This is probably another cause of problems; I do not have a reference
for it (one is welcome).

About a): numerical floating point calculation can lead to values being
considered equal while they are not, to overflow, to rounding errors,
etc. There are several ways to approach this:
- using high precision arithmetic
- using methods to avoid overflow or rounding errors. This is e.g. done
in the Boost.Math hypot function. A classical example is the calculation
of triangular area, where the terms can be sorted to get a more precise
result (e.g. Kahan's formula for needle-like triangles).
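
To make those two examples concrete, here is a sketch of both
techniques (my formulation of these well-known methods, not GGL
source): hypot factors out the larger operand so squaring cannot
overflow, and Kahan's triangle area sorts the sides and keeps the
parentheses exactly as written.

#include <algorithm>
#include <cmath>

// The idea behind a robust hypot (as in Boost.Math): factor out the
// larger magnitude so that the squared term can never overflow.
double stable_hypot(double x, double y)
{
    double ax = std::fabs(x), ay = std::fabs(y);
    if (ax < ay) std::swap(ax, ay);
    if (ax == 0.0) return 0.0;
    double r = ay / ax;            // r <= 1, so r * r cannot overflow
    return ax * std::sqrt(1.0 + r * r);
}

// Kahan's triangle area from side lengths: sort so that a >= b >= c and
// keep the parentheses exactly as written. Far more accurate than the
// naive Heron formula for thin, needle-like triangles.
double stable_triangle_area(double a, double b, double c)
{
    if (a < b) std::swap(a, b);
    if (b < c) std::swap(b, c);
    if (a < b) std::swap(a, b);    // now a >= b >= c
    return 0.25 * std::sqrt((a + (b + c)) * (c - (a - b))
                          * (c + (a - b)) * (a + (b - c)));
}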

So there are several methods to avoid instability; we don't know them
all, and some of them might be applicable to our algorithms. This is
what I meant by "not checked on 100% robustness". As pointed out in the
discussions on this list this spring (among others: "The dependent
concept should explicitely require unlimited precision since the library
specifically uses it to *guarantee* robustness. [...] Users should be
left with no choice but to pick some external component fulfilling the
unlimited precision requirement"), it seems that it is not possible to
solve all these issues using any FP number. Therefore we decided to go
for supporting high precision numeric types first.
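
A tiny example of what a high precision (here: exact rational)
calculation type buys you, assuming GMP's C++ bindings (gmpxx) are
available:

#include <gmpxx.h>
#include <iostream>

int main()
{
    // In double, 0.1 + 0.2 == 0.3 is false: values that should be equal
    // are not, which is exactly what breaks geometric predicates.
    std::cout << std::boolalpha << (0.1 + 0.2 == 0.3) << '\n'; // false

    // With exact rationals the same comparison is reliable.
    mpq_class a(1, 10), b(2, 10), c(3, 10);
    std::cout << (a + b == c) << '\n';                         // true
    return 0;
}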

Our approach (I presented this on the list earlier, but realized that I
forgot option 4):
1) library users can use points with double, long double or float
2) for higher numerical robustness: users can call algorithms using
another calculation type (e.g. GMP or CLN, ...)
3) idem: users can use points with GMP or CLN coordinates
4) GGL supports strategies. They are selected by default but can be
overridden by the user. This is like 2), but it is also possible to
*implement* a strategy yourself using another numeric approach (still
having the same coordinate type). The design allows this; see the
sketch below.
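
To show the design idea of option 4 (a simplified sketch with invented
names, NOT the actual GGL interface): the algorithm delegates all
arithmetic to a strategy, a default strategy is chosen from the
coordinate type, and the user can pass a strategy with another
calculation type without changing the point type.

#include <cstddef>
#include <vector>

struct point { double x, y; };

// A strategy owns the arithmetic: here the shoelace formula for the
// signed area of a closed ring, computed in CalculationType.
template <typename CalculationType>
struct shoelace_area_strategy
{
    typedef CalculationType return_type;

    template <typename Ring>
    return_type apply(Ring const& ring) const
    {
        return_type sum = 0;
        for (std::size_t i = 0; i + 1 < ring.size(); ++i)
        {
            return_type x1 = ring[i].x,     y1 = ring[i].y;
            return_type x2 = ring[i + 1].x, y2 = ring[i + 1].y;
            sum += x1 * y2 - x2 * y1;
        }
        return sum / 2;
    }
};

// The algorithm itself only forwards to the strategy (options 2 and 4) ...
template <typename Ring, typename Strategy>
typename Strategy::return_type area(Ring const& ring, Strategy const& strategy)
{
    return strategy.apply(ring);
}

// ... and a default strategy is selected when none is given (option 1).
template <typename Ring>
double area(Ring const& ring)
{
    return area(ring, shoelace_area_strategy<double>());
}

So area(ring) computes in double, while e.g.
area(ring, shoelace_area_strategy<mpq_class>()) would do all
intermediate arithmetic exactly, still on the same double points.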

This approach can be seen in area, distance and centroid, but not yet
in all algorithms.

If this approach doesn't fully make sense, please state exactly why.

Regards, Barend
