# Boost

Subject: Re: [boost] [geometry] robustness approaches
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2009-03-14 16:36:10

Fernando Cacciola wrote:
> I'll try to sketch the general ideas below to get you started.
... BIG SNIP ...
> distance_result d1 = distance(a,b)
> distance_result d2 = distance(c,d)
> if ( d1 < d2 )
> blabla
>

The sage has come down from the mountain. I like to see your posts to the list. It makes me feel less self-conscious about my own long-winded emails.

Your distance_result example is sort of like my

template <typename T>
struct coordinate_traits { typedef T difference_type; };

template <>
struct coordinate_traits<int> { typedef long long difference_type; };

template <typename coordinate_type>
void foo(coordinate_type a, coordinate_type b, coordinate_type c, coordinate_type d) {
  typedef typename coordinate_traits<coordinate_type>::difference_type diff_type;
  diff_type d1 = distance(a, b);
  diff_type d2 = distance(c, d); // distance between coordinates is difference_type
  if (d1 < d2)
    blabla
}

I am extremely careful about overflow because, as you point out, it is pretty much the one thing that causes robustness problems in integer arithmetic. I need to promote input integers to a larger data type even for addition and subtraction, because these operations add one bit of precision and may overflow. I need to register what types to promote to for given situations. It is really amazing how fast I exceed the limits of the builtins and need to resort to GMP to represent the results of simple algebraic expressions.

I have a scheme in mind to make floating point robust using my fixed-point implementation. The idea is to take the bounding box of the Boolean operation at hand and map it to the full integer extents. While mapping the floating point coordinates to integer, we construct a map from integer back to floating point so that we don't lose any precision on the way back. Then we run the Boolean as fixed point, which is 100% robust and cannot fail. Finally we map the output of the Boolean back to floating point, looking up each coordinate in our x and y integer-to-floating-point maps to restore the original precision if any was lost in conversion. In this way, any and every floating point input can be handled robustly, with a minimum loss of precision at the intersection points and none at all at the original vertices. To improve precision, just use a larger integer data type. I think this scheme is workable and attractive because it avoids all the well documented pitfalls of making floating point robust. Also, integer arithmetic is faster than floating point, which gives an integer algorithm a significant performance advantage that might offset the extra cost of scaling the input and storing the mapping.

Most applications need uniform precision (cartography is a good example), whereas graphics is the only application I can think of that needs high precision near the origin and low precision far from it. This scheme ought to cover most of the needs for floating point geometry. Can you identify any problem with the idea?

What about the case where the skeleton of a 1x1 integer unit square produces a degenerate topology because the construction of its center point snaps to one of its corners? The algorithm has to be robust to the degenerate cases that it itself might produce, as well as those present in the input. Furthermore, it is preferable not to output degenerate geometries, because these can cause downstream code to choke. I find it very unconvincing when a Booleans algorithm claims to handle, for example, self-intersecting polygons, but upon closer examination leaves them self-intersecting at the output. That is hardly handling the degeneracy, in my opinion. Robust to degeneracy should rightly mean robust to degeneracy at the input and free of degeneracy at the output; otherwise you're setting a trap in the path of the guy who uses the result.

Thanks,
Luke