
Subject: Re: [boost] [geometry] robustness approaches
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2009-03-14 21:20:17

Thomas Klimpel wrote:
> I really like the idea, but I can't help pointing out a mismatch
> between the integer and floating-point philosophies.
> The typical integer philosophy would allow introducing new points
> into existing segments, but would "forbid" the computed result (taken
> literally, as rounded to integer coordinates) to have more
> intersections than actually reported. The typical floating-point
> philosophy would allow the computed result (as represented by
> floating-point numbers) to have more intersections than actually
> reported (as long as these additional intersections are only
> artifacts of the floating-point representation), but would "forbid"
> introducing new points into existing segments.
> Because the initial topology might be different from the topology
> when converted to integer coordinates, the computed result might (in
> admittedly rare circumstances) have more intersections than actually
> reported, and the computation might also have introduced additional
> points into existing segments, in order to ensure that the computed
> result in integer coordinates doesn't have more intersections than
> actually reported.
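A toy numeric illustration of that mismatch (my own construction, not from Thomas's mail): two segments with integer endpoints can cross at a non-integer point, so snapping the crossing to the integer grid moves it off both segments, and the snapped vertex has to be introduced into each segment to keep the topology consistent.

```cpp
#include <cmath>

// y-coordinate of the segment (x0,y0)-(x1,y1) at abscissa x
double seg_y_at(double x0, double y0, double x1, double y1, double x) {
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0);
}

// Round a coordinate to the integer grid (half rounds away from zero).
long snap(double v) { return std::lround(v); }
```

For example, (0,0)-(10,1) and (0,1)-(10,0) cross at (5, 0.5); the snapped crossing (5, 1) lies on neither input segment, so both must be bent through the new vertex.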

You are right that I introduce intersection points on segments to ensure that the computed result doesn't have more intersections than actually reported. I'm surprised, however, that unreported intersections are commonly considered allowable in floating point as long as they fall below a threshold. Wouldn't mismatches between the thresholds used by different floating-point modules lead to unreported intersections in one module's output violating an invariant in another's input, causing that code to fail? Isn't it equally possible to introduce intersection points on floating-point segments that were "close enough" to intersecting, so that the computed floating-point result doesn't have more intersections than actually reported?

I was working on devising a way to compute x and y epsilon values from the bounding box of the floating-point input such that I could bound the lateral movement of segments due to floating-point grid snapping by those epsilons and ensure that the computed result contains no unreported intersections. It is basically an attempt to apply my integer methodology to floating point. Is this the opposite of the normal approach to floating point, which you suggest is to ignore the potential to create such intersections as artifacts of floating-point grid snapping, since they should fall below some threshold and can be ignored?
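A minimal sketch of the epsilon derivation I have in mind (the function name and the half-ULP bound are my assumptions, not finished library code): for one axis of the input bounding box, snapping any coordinate in [lo, hi] onto the double grid moves it laterally by at most half an ULP of the largest magnitude on that axis.

```cpp
#include <algorithm>
#include <cfloat>
#include <cmath>

// Hypothetical per-axis epsilon: bound on the lateral movement any
// coordinate in [lo, hi] can suffer when rounded to the double grid.
double axis_epsilon(double lo, double hi) {
    double m = std::max(std::fabs(lo), std::fabs(hi));
    return m * DBL_EPSILON * 0.5;  // half an ULP of the largest magnitude
}
```

Segments that pass closer to each other than this bound would then have to be treated as intersecting, so that no unreported intersection can survive snapping in the output.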

I know the commercial, floating-point-based 3D modeling, meshing, and physical simulation tools we use frequently fail due to numerical robustness issues. They place very tight restrictions on the cleanliness of their inputs, and meeting those kinds of restrictions should be a goal of a geometry library; otherwise it precludes interoperability with quite a few applications. Working around these numerical issues in commercial software consumes significant engineering effort inside Intel. It also keeps the software vendors busy providing us with support and software updates. By this evidence, floating-point robustness does not appear to be a solved problem.

There is no way that a Boost library can provide that kind of labor-intensive support. To be both free and useful, it has to be absolutely reliable: it must generate no robustness-related support load for the maintainer and must not need such support to be used in real-world applications. I would recommend against accepting a geometry library into Boost if I thought it would result in an avalanche of robustness issues being reported by unhappy users, nor would I want to be responsible for supporting such a library. I am only proposing my library because I'm confident in the reliability of what I've done for integer coordinates. Do you think employing my fixed-point scheme for floating-point coordinates would result in issues when using it in applications? If so, I need to understand what those issues are to decide whether going forward with the scheme is a good idea.
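To make the fixed-point scheme concrete, here is a rough sketch of the kind of mapping I mean (the struct, names, and constants are illustrative assumptions, not the actual library code): floating-point coordinates are scaled into a 32-bit integer grid spanning the input bounding box, the robust integer algorithm runs on that grid, and results are scaled back.

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical fixed-point mapping for one axis.
struct FixedPointMap {
    double lo;     // lower bound of the input bounding box on this axis
    double scale;  // grid cells per input unit
    int32_t to_fixed(double v) const {
        return static_cast<int32_t>(std::lround((v - lo) * scale));
    }
    double to_float(int32_t i) const { return lo + i / scale; }
};

FixedPointMap make_map(double lo, double hi) {
    // Spread [lo, hi] across most of the signed 32-bit range.
    return FixedPointMap{lo, 2.0e9 / (hi - lo)};
}
```

Round-tripping a coordinate through the grid moves it by at most half a grid cell, which is exactly the epsilon against which the output's intersections would have to be reported.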

Thanks for the LEDA reference, by the way; it will help me with my literature search for the BoostCon paper.

