# Boost

Subject: Re: [boost] [geometry] robustness approaches
From: Fernando Cacciola (fernando.cacciola_at_[hidden])
Date: 2009-03-15 16:41:45

Hi Luke,

> Fernando Cacciola wrote:
>> I'll try to sketch the general ideas below to get you started.
> ... BIG SNIP ...
>> distance_result d1 = distance(a,b)
>> distance_result d2 = distance(c,d)
>> if ( d1 < d2 ) blabla
>>
>
> The sage has come down from the mountain. I like to see your posts
> to the list. It makes me feel less self-conscious about my own long
> winded emails.
>
:) it was a bit long, wasn't it?

> Your distance_result example is sort of like my
>
> template <typename T>
> struct coordinate_traits { typedef T difference_type; };
>
> template <>
> struct coordinate_traits<int> { typedef long long difference_type; };
>
Nice.
And gracefully correct for int... on most platforms :)

> template <typename coordinate_type>
> void foo(coordinate_type a, coordinate_type b,
>          coordinate_type c, coordinate_type d) {
>   typedef typename coordinate_traits<coordinate_type>::difference_type diff_type;
>   diff_type d1 = distance(a, b); // distance between coordinates is difference_type
>   diff_type d2 = distance(c, d);
>   if (d1 < d2) blabla
> }
>
> I am extremely careful about overflow, because as you point out, it is
> pretty much the one thing that causes robustness problems in integer.
> I need to promote input integers to a larger data type to preserve
> precision and avoid overflow. I need to register what types to promote
> to for given situations. It is really amazing how fast I exceed the
> limits of builtins and need to resort to gmp to represent results of
> simple algebraic expressions.

Indeed.
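Luke's traits idea can be sketched as a small self-contained example (the names below are illustrative, not his actual library interface): the promoted difference type is wide enough to hold any difference of two coordinates, so the comparison cannot overflow.

```cpp
#include <cassert>
#include <climits>
#include <cstdint>

// Illustrative sketch of the coordinate-traits idea (hypothetical names).
// For int coordinates, differences are computed in long long, which can
// represent any value of |a - b| for 32-bit a and b.
template <typename T>
struct coordinate_traits { typedef T difference_type; };

template <>
struct coordinate_traits<int> { typedef long long difference_type; };

template <typename C>
typename coordinate_traits<C>::difference_type distance(C a, C b) {
    typedef typename coordinate_traits<C>::difference_type diff_t;
    diff_t d = diff_t(a) - diff_t(b);   // promote *before* subtracting
    return d < 0 ? -d : d;
}

template <typename C>
bool closer(C a, C b, C c, C d) {
    // Compare |a-b| against |c-d| in the promoted type; with int
    // coordinates this is safe even near INT_MIN / INT_MAX.
    return distance(a, b) < distance(c, d);
}
```

Note that promoting before the subtraction is what matters; `diff_t(a - b)` would already have overflowed in `int`.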

> I have a scheme in mind to make floating point robust using my
> fixed-point implementation. The idea is to take the bounding box of
> the Boolean operation at hand and map that to the full integer
> extents. When mapping the floating point coordinates to integer we
> construct a map from integer back to floating point to ensure that we
> don't lose any precision by mapping back. Then we run the Boolean as
> fixed point, which is 100% robust and cannot fail. Finally we map
> the output of the Boolean back to floating point and look up each
> coordinate in our x and y integer to floating point maps to restore
> the original precision if any was lost in conversion. In this way,
> any and every floating point input can be handled robustly with a
> minimum loss of precision at the intersection points and none at all
> at the original vertices.

Indeed... this is a particular implementation of the well-known snap
rounding technique, which was first developed in the context of segment
intersections, so it naturally fits boolean operations between polylines.
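The mapping step Luke describes can be sketched roughly as follows (a one-axis toy illustration, not his implementation; all names are hypothetical): scale the bounding box onto the integer range, remember which original float produced each snapped integer, and on the way back restore originals exactly while interpolating any new coordinates.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <map>

// Simplified sketch of the float -> fixed-point -> float round trip for
// one axis. Collisions (two distinct floats snapping to the same integer)
// are ignored here; a real implementation would have to deal with them.
struct SnapMap {
    double lo, scale;                     // bbox origin, units per integer step
    std::map<std::int64_t, double> back;  // snapped integer -> original float

    SnapMap(double lo_, double hi, std::int64_t max_int)
        : lo(lo_), scale((hi - lo_) / double(max_int)) {}

    std::int64_t to_fixed(double x) {
        std::int64_t i = std::int64_t(std::llround((x - lo) / scale));
        back[i] = x;                      // remember the exact original
        return i;
    }

    double to_float(std::int64_t i) const {
        // Original vertices come back bit-exact; coordinates created by
        // the Boolean (e.g. intersection points) are reconstructed from
        // the grid with a bounded precision loss.
        std::map<std::int64_t, double>::const_iterator it = back.find(i);
        return it != back.end() ? it->second : lo + double(i) * scale;
    }
};
```

Using a wider integer range directly tightens `scale`, which is the knob Luke refers to for improving precision.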

> To improve precision just use a larger
> integer data type.

Who uses that? The library or the user?

> I think this scheme is workable and attractive
> because it avoids all the well documented pitfalls of making floating
> point robust.

FWIW snap rounding can be done directly on the float coordinates, but
that's a detail we don't need to discuss now.

> Also integer arithmetic is faster than floating point.

I'm not sure that's true these days... you'll quickly find arguments
against this old-timer truth on the internet.
I've never benchmarked it myself, though.

> This gives an integer algorithm a significant performance advantage
> that might offset the cost of sorting the input and the extra time and
> storage for the mapping vector.
>
It would be interesting to see some benchmarks of ints vs doubles to
measure how far that holds today.
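A minimal starting point for such a benchmark could look like this (a toy sketch only; real conclusions need warm-up, repeated runs, and attention to vectorization and data layout): run the same multiply-accumulate kernel over `int64` and `double` data and time both.

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <vector>

// Toy microbenchmark sketch: identical kernel instantiated for two
// numeric types, timed with a monotonic clock. Indicative only.
template <typename T>
T mul_acc(const std::vector<T>& v) {
    T acc = T(0);
    for (std::size_t i = 0; i + 1 < v.size(); i += 2)
        acc += v[i] * v[i + 1];
    return acc;
}

template <typename T>
double timed_ms(const std::vector<T>& v, T& result) {
    std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
    result = mul_acc(v);
    std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Comparing `timed_ms` for `std::vector<std::int64_t>` against `std::vector<double>` on a given machine gives a first, rough answer to the int-vs-double question.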

> Most applications need uniform precision (cartography is a good
> example) whereas graphics is the only application I can think of that
> needs high precision around the origin and low precision far from the
> origin.

Can you elaborate on this?

> This scheme ought to cover most of the needs for floating
> point geometry. Can you identify any problem with the idea?
>
Well, it doesn't cover most of the needs for floating point geometry,
just some.

That is, it is undoubtedly useful when doing booleans but, for example, it
can't be used for the straight skeleton without risking a complete
topological change (for the same reason perturbations can't be used).
And for, say, convex hulls or triangulations, it is overkill and most
likely slower than the general-purpose technique I outlined before
(floating-point filters).
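A floating-point filter of this kind can be sketched as follows (a toy semi-static filter; the error bound below is a crude over-estimate rather than a tight Shewchuk-style constant, and the exact fallback assumes integer-valued coordinates that fit in 32 bits, neither of which reflects any particular library's implementation). The fast path evaluates the 2D orientation determinant in `double`; only when the result is too small to be trusted does it fall back to exact arithmetic.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Exact fallback: assumes the double inputs hold integer values that fit
// in 32 bits, so the determinant fits in 64 bits.
int orient2d_exact_int(double ax, double ay, double bx, double by,
                       double cx, double cy) {
    std::int64_t det =
        (std::int64_t(bx) - std::int64_t(ax)) * (std::int64_t(cy) - std::int64_t(ay)) -
        (std::int64_t(by) - std::int64_t(ay)) * (std::int64_t(cx) - std::int64_t(ax));
    return det > 0 ? 1 : det < 0 ? -1 : 0;
}

// Filtered predicate: certify the sign in double when possible, else
// delegate to the exact version.
int orient2d_filtered(double ax, double ay, double bx, double by,
                      double cx, double cy) {
    double det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    // Crude bound on the rounding error of det (assumed margin, sketch only).
    double mag = std::fabs((bx - ax) * (cy - ay)) + std::fabs((by - ay) * (cx - ax));
    double bound = mag * 1e-12;
    if (det > bound)  return 1;    // fast path: certainly positive
    if (det < -bound) return -1;   // fast path: certainly negative
    return orient2d_exact_int(ax, ay, bx, by, cx, cy);  // uncertain: go exact
}
```

The point of the design is that the expensive exact path is taken only for near-degenerate inputs, which is why filters tend to beat snap rounding for hulls and triangulations.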

> What about the case where the skeleton of a 1x1 integer unit square
> results in degenerate topologies when the construction of its center
> point snaps to one of its corners?

This is a good example of the importance of separating topology from
geometry *in the algorithm*.

It shouldn't matter at all if the coordinates of the center point happen
to coincide with one of the corners. The topology must still be
correct.

The only way to achieve this is by simply not using the coordinate of
the center point (which is the result of a construction, i.e. an
intermediate result) *at all* when constructing the output topology.

My straight skeleton and polygon offset implementation in CGAL does
exactly that: it will always give you the correct topology even if
several output nodes end up at the same coordinate. They would
nevertheless be distinct nodes properly connected to each other.

You can't achieve this with snap rounding.
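The point that topology lives apart from coordinates can be illustrated with a minimal output structure (a hypothetical sketch, not CGAL's actual data structure): node identity is an index, edges connect indices, and coordinate equality says nothing about whether two nodes are the same node.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Minimal illustration: skeleton nodes are identified by index, not by
// coordinate, so two distinct nodes may carry identical coordinates
// (e.g. a snapped center point on top of a corner) and yet remain
// separate, properly connected entities.
struct Node { double x, y; };

struct Skeleton {
    std::vector<Node> nodes;
    std::vector<std::pair<std::size_t, std::size_t> > edges;  // node-index pairs

    std::size_t add_node(double x, double y) {
        Node n; n.x = x; n.y = y;
        nodes.push_back(n);
        return nodes.size() - 1;   // identity is the index, not (x, y)
    }
    void connect(std::size_t a, std::size_t b) {
        edges.push_back(std::make_pair(a, b));
    }
};
```

Snap rounding, by contrast, identifies points *by* their snapped coordinates, which is exactly why coincident nodes collapse there.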

> The algorithm has to be robust to
> the degenerate cases that it itself might produce,

If the algorithm follows the general principle I stated before of using
exclusively its input to drive any logic, then it is just naturally
immune to the degeneracies it might produce itself.

> as well as those present in the input.

Well, this is a different story. Handling degenerate input typically
involves an overhead some users won't appreciate, so this falls within
the design decisions of the algorithm in particular and the library
philosophy in general.

From a user POV I generally prefer algorithms that require a certain
degree of non-degeneracy in the input and provide orthogonal tools to
fix that if needed (say, to remove self-intersections).

> Furthermore, it is preferable not to output
> degenerate geometries because this can cause downstream code to
> choke.

Absolutely.

Following the general methodology I outlined before, since algorithms
have separate predicates and constructions, each with its own exactness
property, a user can choose exact constructions (with the added
overhead, usually significant) if they need to pipeline results into
further processing.
In all cases, of course, the topology must be correct, hence predicates
must be exact.
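The predicate/construction split can be made concrete with a tiny example (illustrative names, not any library's API): the predicate below is exact over `int` coordinates thanks to 64-bit promotion, while the construction returns an inexact `double` point. The output topology should be decided only by the former; the latter is just reported geometry.

```cpp
#include <cassert>
#include <cstdint>

// Predicate: which side of the directed line a->b does p lie on?
// Exact for 32-bit int coordinates because every intermediate value
// fits in 64 bits.
int side_of_line(int ax, int ay, int bx, int by, int px, int py) {
    std::int64_t det = (std::int64_t(bx) - ax) * (std::int64_t(py) - ay)
                     - (std::int64_t(by) - ay) * (std::int64_t(px) - ax);
    return det > 0 ? 1 : det < 0 ? -1 : 0;
}

// Construction: an intermediate coordinate, inexact in general. Feeding
// its result back into predicates is what breaks naive algorithms.
struct PointD { double x, y; };
PointD midpoint(int ax, int ay, int bx, int by) {
    PointD p;
    p.x = (ax + bx) / 2.0;
    p.y = (ay + by) / 2.0;
    return p;
}
```

Because the predicate never consumes the construction's output, its exactness guarantee survives no matter how the constructed coordinates are rounded downstream.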

> I find it very unconvincing when a booleans algorithm claims
> to handle, for example, self intersecting polygons, but upon closer
> examination leaves them self intersecting at the output. This is
> hardly handling the degeneracy in my opinion.

Indeed.
It is much better not to attempt to handle the degeneracy at all, and to
provide other tools to "fix" it before clipping.

> Robust to degeneracy
> should rightly mean robust to degeneracy at the input and free of
> degeneracy at the output, otherwise you're setting a trap in the path
> of the guy who uses the result.
>
Precisely.

Best

--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com