Subject: Re: [boost] [geometry] robustness approaches
From: Fernando Cacciola (fernando.cacciola_at_[hidden])
Date: 2009-03-20 18:26:46


Hi Luke,

> Fernando Cacciola wrote:
>> IIRC there has never been a precedent of a Boost library practically
>> requiring (to be actually useful) an external non-boost library, so this
>> deserves its own discussion.
>>
>> While in the case of GTL, it is technically up to the user to pick up
>> "their own" multiprecision number types, they actually can't do that
>> without an external non-boost library, since such number types do not exist
>> in the SCL.
>
> That may be too strong a statement. Long double provides quite a few bits of
> precision. There is some upper limit of input bit width up to which long double
> is still sufficient to compute the exact result. As that bit width is exceeded
> the odds of a fault increase as the input bit width increases. If we say the
> library is by default robust only up to 25 bits (or whatever I determine to
> be the case) then users have the option of performing robust calculations
> within that constraint.

The problem with this view is that users have no way to constrain the input in
such a way that all calculations are exact within N bits, for any given N.

Users are completely helpless: they just have their input, whatever it is, and
they can't do anything to it to make the algorithm work with a number type of N
bits, because they wouldn't know what transformation to apply.

That is, there is no known normalization process, applicable to general geometric
algorithms, that can be applied to the input so as to keep the required number of
working bits within long double.

There has been some research, for example, on using single-precision input so as
to keep exact results within long double when doing line intersections (which
applies to clipping), but it doesn't generalize to other algorithms and it poses
certain restrictions on the input data itself (in effect forcing a snapping onto
it), which may very well be unacceptable if the input uses floating-point to begin
with (instead of integer or fixed-precision coordinates).

Keep in mind that we don't see GTL as a "clipping" library only, so its
robustness must be general.

Floating-point operations won't even trap if they lose significant precision
(when programmed in portable C++), so the library will just silently fail to
produce a correct result.
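
Here is a minimal sketch of that silent failure mode (self-contained, nothing
GTL-specific): a value is completely absorbed in an addition and no trap, signal
or exception of any kind is raised.

  #include <cstdio>

  int main()
  {
      // (a + b) - a is mathematically equal to b, but b's bits are
      // entirely discarded when a dwarfs it. Portable C++ gives no
      // trap, no exception, no error code: just a silently wrong 0.
      long double a = 1e30L;
      long double b = 1e-30L;
      long double c = (a + b) - a;

      std::printf("expected %Lg, got %Lg\n", b, c);
      return 0;
  }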

I like to consider robustness as an all-or-nothing game: it can't work in just
(100% - epsilon) of the cases. It is either 100% robust or not robust at all.

> If they want to handle larger inputs they can
> provide high precision data types to override long double.

This won't work. There is no definition of "larger input" that a user can measure.

Here's a practical example:

Implement, using long double, a predicate that compares the distances from two
line intersection points (each intersection computed from two *lines*, each line
given as two points) to a third point. That is, the predicate takes 9 points.

Now feed into that two almost parallel lines like these:

the first being vertical and VERY SHORT, say with the top point 1e-2 away from
the bottom;
the second almost vertical but VERY LONG, with the top and bottom vertices
separated by, say, 1e4 or so, and with one of the vertices slightly shifted by a
very small amount, say 1e-3 (to make it NOT really vertical).

And a similar setup but with horizontal lines.

Now play with the small distances in there to see how far beyond long double
precision you need to go to compute that predicate exactly.
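
For the record, here is one possible instantiation of that setup (the coordinates
are just the figures quoted above, and the intersection routine is the textbook
determinant formula, not GTL code). Shifting one vertex of the long line by just
1e-3 moves the computed intersection point by thousands of units, which is exactly
why comparing distances from such points needs far more correct bits than long
double provides.

  #include <cstdio>

  struct Pt { long double x, y; };

  // Intersection of the infinite lines through (p1,p2) and (p3,p4),
  // via the usual determinant formula.
  Pt intersect(Pt p1, Pt p2, Pt p3, Pt p4)
  {
      long double d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
      long double a = p1.x * p2.y - p1.y * p2.x;
      long double b = p3.x * p4.y - p3.y * p4.x;
      return { (a * (p3.x - p4.x) - (p1.x - p2.x) * b) / d,
               (a * (p3.y - p4.y) - (p1.y - p2.y) * b) / d };
  }

  int main()
  {
      Pt a1{0.0L, 0.0L}, a2{0.0L, 1e-2L};      // vertical and VERY SHORT
      Pt b1{-1e-3L, -1e4L}, b2{1e-3L, 1e4L};   // almost vertical, VERY LONG

      Pt p = intersect(a1, a2, b1, b2);        // lands at (0, 0)

      Pt b2s{2e-3L, 1e4L};                     // shift one vertex by just 1e-3
      Pt q = intersect(a1, a2, b1, b2s);       // lands near (0, -3333.33)

      std::printf("p = (%Lg, %Lg)\nq = (%Lg, %Lg)\n", p.x, p.y, q.x, q.y);
      return 0;
  }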

> I have been quite successful in executing very large test cases without
> needing gmp. I performed the n-layer map overlay on dozens of layers of
> thousands of segmented circles each that were intersecting quite frequently
> and it ran just fine without gmp.
>

That experience just isn't sufficient.

Here's a tale of a similar experience:

When I first incorporated GPC (which, btw, easily has thousands of users) into a
project, I spent a lot of time studying its robustness by feeding it specially
designed test cases. It passed them all, so it appeared evident that it was
robust for all practical purposes.

The project evolved until one day I started to receive, with increasing
frequency, bug reports that boiled down (after many wasted hours) to
robustness issues within GPC. Soon a geometric pattern popped up as the culprit:
a complex figure F and another figure F2 obtained by shifting F a really
*really* small amount, then subtracting F2 from F.

That pattern totally choked GPC, even after years of it successfully producing
correct results. In the end my client had to buy the CGAL clipper, which is
slower and nowhere near cheap, but truly 100% robust.

For general geometric problems there is no *practical* input geometry constraint
(one that a user can measure and enforce) which would guarantee that only M bits
of working precision are needed, so 100% robustness can ONLY come from the ability
of the implementation to adaptively use as many bits as needed.

Only on a case-by-case basis is it possible to measure the upper bound on the
number of bits needed and then state whether long double is good enough or not.
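
To make "adaptively use as many bits as needed" concrete, here is a minimal
sketch of the usual floating-point-filter pattern (illustrative only, not GTL or
CGAL code; ExactNumber stands in for whatever multiprecision type the user plugs
in, e.g. GMP's mpq_class, and the error-bound constant is a placeholder rather
than a rigorously derived bound):

  #include <cfloat>
  #include <cmath>

  // Filtered 2D orientation predicate: evaluate quickly in long double,
  // and only fall back to exact arithmetic when the rounded result is too
  // close to zero for its sign to be trusted.
  template <class ExactNumber>
  int orientation(double ax, double ay, double bx, double by,
                  double cx, double cy)
  {
      long double detleft  = ((long double)bx - ax) * ((long double)cy - ay);
      long double detright = ((long double)by - ay) * ((long double)cx - ax);
      long double det = detleft - detright;

      // Illustrative forward error bound (a real one must be derived from
      // a rounding-error analysis of the expression above).
      long double err = 4 * LDBL_EPSILON * (std::fabs(detleft) + std::fabs(detright));

      if (det >  err) return  1;   // sign certain: counter-clockwise
      if (det < -err) return -1;   // sign certain: clockwise

      // Too close to call: redo it with as many bits as needed.
      ExactNumber exact = (ExactNumber(bx) - ExactNumber(ax)) * (ExactNumber(cy) - ExactNumber(ay))
                        - (ExactNumber(by) - ExactNumber(ay)) * (ExactNumber(cx) - ExactNumber(ax));
      if (exact > 0) return  1;
      if (exact < 0) return -1;
      return 0;
  }

A 100% robust library is, in effect, one in which every predicate has this shape
and every error bound is derived rigorously instead of guessed.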

But then, for the specific case of clipping, it turns out that the number of
required working bits for ALL possible inputs can be shown to be well beyond the
precision of long double, even if the input is restricted to floats, unless it is
further restricted to a float grid.
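
As a rough count (an illustration, not a tight bound): with p-bit input
coordinates, the exact intersection point of two segments is a pair of rationals
whose numerators take about 3p bits and whose denominators about 2p; evaluating
even a plain orientation predicate on such an intersection point, once the
denominators are cleared, multiplies that out to roughly 4p bits. For float input
(p = 24) that is already on the order of 100 bits, well past the 64-bit mantissa
of a typical long double.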

Best

--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com
