Subject: Re: [boost] [geometry] robustness approaches
From: Thomas Klimpel (Thomas.Klimpel_at_[hidden])
Date: 2009-03-15 09:00:17
Simonson, Lucanus J wrote:
> I'm surprised, however, that unreported intersections are commonly
> considered allowable in floating point if they are below a threshold.
Whether such intersections are allowable or not depends on what you want to do, and on the tool you use for doing it. I didn't intend to say that tools taking their input as floating point (should) consider unreported intersections allowable if they are below a threshold.
The tagging with "integer philosophy" and "floating-point philosophy" was just meant to distinguish two possible choices for what is considered a valid result. My underlying statement is that one is forced to make a choice here. More precisely, I think that the following two statements apply in cases where the 'exact' result cannot be represented in the output format used:
If you "forbid" introducing new points into existing segments, you must allow for certain unreported (spurious) intersections.
If you "forbid" certain unreported (spurious) intersections, you must allow introducing new points into existing segments.
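To make the forced choice concrete, here is a small self-contained illustration (my own, not code from the thread), using the standard exact orientation test on integer coordinates:

```cpp
#include <cstdint>

// Exact orientation test on integer coordinates:
// cross product (b - a) x (p - a); zero iff p lies exactly on line ab.
// All inputs are exact integers, so the test itself involves no rounding.
long long side(long long ax, long long ay, long long bx, long long by,
               long long px, long long py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}
```

The segments (0,0)-(3,1) and (0,1)-(3,0) intersect at (1.5, 0.5), which is not a grid point. Snapping it to, say, (2,1) gives side(0, 0, 3, 1, 2, 1) == 1 and side(0, 1, 3, 0, 2, 1) == 2, i.e. the snapped point lies on neither segment. So one must either bend the segments through the snapped point (introducing new vertices) or keep the segments unchanged and accept that the reported intersection point does not lie on them.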
The tagging of these choices with "integer philosophy" and "floating-point philosophy" was probably a bad idea.
You asked "Can you identify any problem with the idea?", and I pointed out that you risk violating both conditions for what is considered a valid result. That said, I really think that this idea is the way to go, but it is important to be explicit about its limitations.
In order to be explicit, I will try to describe the conversion process in detail, so that the places where information is lost (without hope of recovery) can clearly be identified. The input is in floating point, so each x- and y-coordinate of the points has its own exponent. The conversion to fixed point involves fixing a common exponent for all x-coordinates and a common exponent for all y-coordinates. Choosing the highest occurring exponent of the x-coordinates as the common exponent for the x-coordinates, and likewise for the y-coordinates, looks like a good idea to me, so this is how I will proceed (remember we are just talking about a factor of 2 for binary numbers, not a factor of 10 as for decimal numbers).

Now I assume that the fixed-point type has at least as many bits as the mantissa (including sign) of the floating-point type, so that the coordinates with the highest exponent are representable without loss of accuracy. The points that lose the most accuracy are then the ones around the origin, which will basically all be mapped to 0 (assuming their exponents are sufficiently different from the highest occurring exponent). These points might have described some geometry with intersections, but we will not be able to report any of these intersections, because they are "invisible" on the fixed-point grid. So the statement "In this way, any and every floating point input can be handled robustly with a minimum loss of precision at the intersection points and none at all at the original vertices." might be a bit too optimistic about the loss of precision at the original vertices.
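A minimal sketch of that conversion (the function name and the choice of 30 value bits are mine, for illustration): pick the highest binary exponent occurring among the coordinates as the common exponent, then snap every coordinate to a signed 32-bit fixed-point grid.

```cpp
#include <climits>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the conversion described above: choose the highest
// binary exponent occurring among the coordinates as the common exponent,
// then snap every coordinate to a signed fixed-point grid.
std::vector<int32_t> to_fixed(const std::vector<double>& coords)
{
    int max_exp = INT_MIN;
    for (double c : coords) {
        int e;
        std::frexp(c, &e);                 // c == m * 2^e with 0.5 <= |m| < 1
        if (c != 0.0 && e > max_exp)
            max_exp = e;
    }
    if (max_exp == INT_MIN)                // all coordinates are zero
        return std::vector<int32_t>(coords.size(), 0);

    // 30 value bits keep the largest coordinate representable without
    // overflow; the grid spacing is then 2^(max_exp - 30).
    std::vector<int32_t> out;
    for (double c : coords)
        out.push_back(static_cast<int32_t>(
            std::lround(std::ldexp(c, 30 - max_exp))));
    return out;
}
```

For input like {1.0e9, 3.0, 1.0e-9} the highest exponent wins: 1.0e9 and 3.0 survive (as 1000000000 and 3 on a grid of spacing 1), while 1.0e-9 is snapped to 0 — exactly the "invisible near the origin" effect described above.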
Inserting the neighboring grid-points into such segments directly after the real endpoints might remove the unreported intersections, but the effect of this is basically equivalent to modifying the coordinates of the original point (only that we have now introduced overlapping segments, which might cause downstream code to choke, as you say).
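To sketch why the splice is problematic (my own illustration, with made-up names): inserting a neighboring grid point n right after endpoint a of segment a->b produces overlapping collinear segments whenever n happens to lie on the line through a and b but behind a.

```cpp
#include <vector>

struct Pt { long x, y; };

// Splice a neighboring grid point n into the segment a->b directly after a,
// as discussed above: the resulting polyline touches n explicitly.
std::vector<Pt> splice_after_endpoint(Pt a, Pt n, Pt b)
{
    return {a, n, b};
}

// The splice degenerates into overlapping segments when n is collinear with
// a->b but on the far side of a (cross product zero, dot product negative):
// the polyline then doubles back over itself between a and n.
bool creates_overlap(Pt a, Pt n, Pt b)
{
    long cross = (b.x - a.x) * (n.y - a.y) - (b.y - a.y) * (n.x - a.x);
    long dot   = (b.x - a.x) * (n.x - a.x) + (b.y - a.y) * (n.y - a.y);
    return cross == 0 && dot < 0;
}
```

For example, splicing n = (-1,0) into the segment (0,0)->(10,0) creates such an overlap, while splicing n = (1,1) merely bends the segment.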
The mapping to the original coordinates might still be useful, but I think that modifying some of the original coordinates might be OK in cases where the alternative would be to introduce overlapping segments.