Subject: Re: [boost] [geometry] robustness approaches
From: Fernando Cacciola (fernando.cacciola_at_[hidden])
Date: 2009-03-15 21:16:26
Hi Beman,
So we finally discovered your area of expertise besides C++ :)
> On Sat, Mar 14, 2009 at 2:26 PM, Fernando Cacciola
> <fernando.cacciola_at_[hidden]> wrote:
>> ...
>> For many, many geometric applications, it is the topology that needs to be
>> correct, not the geometry.
>
> In the early years of digital geography, it was not understood that
> the topology had to be correct, and virtually every digital mapping
> project turned into a disaster. Once it became known (in the late
> 1960's) that the topology had to always be totally correct (no
> topological errors whatsoever, no matter how large the database) then
> it became possible to build large scale digital maps, and digital
> mapping became a reality.
>
I can imagine.
>> If you do clipping for instance, it might very
>> well not matter at all (for instance in drawing applications) if the
>> vertices are slightly off.
>
> In commercial digital mapping, data like water features are often
> "cartooned" to save memory and speed processing. In effect,
> intermediate coordinates are deliberately distorted, but only when it
> does not break the topology.
>
Right.
>> But if an entire region is included in the result
>> when it should not, then that is a disaster for whatever application.
>
> The GIS industry has a bunch of colorful names for such topological
> errors: zingers, leaks, etc., based on the appearance of map plots with
> various characteristic mistakes.
>
:) interesting to know
> And, yes, once the underlying rules were understood, most or even all
> of the geometric computation code had to be not just rewritten, but
> usually thrown in the trash and a fresh start made. It was actually
> possible to reduce the precision of coordinates and yet produce far
> more robust code.
>
Indeed :)
The straight skeleton code that exists in CGAL, and which I'm using in
this thread to show how floating point numerical issues *can* be handled
correctly, was my third round for that algorithm.
Once I figured out "the right way to do it" I just had to trash my two
previous implementations and start over from scratch. There was no other
choice, because the algorithm as stated in the paper was based on computing
intersections between bisectors to generate new bisectors that would
subsequently intersect. There is just no way to make that robust (as it
uses intermediate results to drive the logic) without heavy support from
the input numeric type (meaning without asking users to buy a Cray to run
this). It just had to work with plain double, or even float, to be
usable in industry. So I had to refactor the algorithm itself, that is,
develop a new algorithm that works exclusively with the input
vertices... outputting skeleton nodes with approximated coordinates as a
sort of side effect which the algorithm itself never looks at.
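To make that concrete, here is a minimal sketch of the pattern, using plain
2D segment intersection rather than the skeleton's bisector events: the
decision (do the segments properly cross?) is made by predicates evaluated
only on the original input points, while the constructed intersection
coordinates are emitted as output that the logic never reads back. (A
production version would evaluate the orientation predicate exactly or
through a floating-point filter; plain double is used here just to show the
structure.)

#include <array>
#include <optional>
#include <iostream>

using Point = std::array<double, 2>;

// Orientation predicate on input points: >0 left turn, <0 right turn, 0 collinear.
// In a robust implementation this would be evaluated exactly or via a filter.
double orient(const Point& a, const Point& b, const Point& c) {
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

// Decision step: do segments pq and rs properly intersect?
// Driven entirely by predicates on the four *input* points.
bool properly_intersect(const Point& p, const Point& q,
                        const Point& r, const Point& s) {
    double o1 = orient(p, q, r), o2 = orient(p, q, s);
    double o3 = orient(r, s, p), o4 = orient(r, s, q);
    return (o1 * o2 < 0.0) && (o3 * o4 < 0.0);
}

// Construction step: approximate intersection point, emitted as output only.
// No subsequent decision ever looks at this value.
std::optional<Point> approximate_intersection(const Point& p, const Point& q,
                                              const Point& r, const Point& s) {
    if (!properly_intersect(p, q, r, s)) return std::nullopt;
    double d = (q[0]-p[0])*(s[1]-r[1]) - (q[1]-p[1])*(s[0]-r[0]);
    double t = ((r[0]-p[0])*(s[1]-r[1]) - (r[1]-p[1])*(s[0]-r[0])) / d;
    return Point{ p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]) };
}

int main() {
    Point p{0,0}, q{4,4}, r{0,4}, s{4,0};
    if (auto n = approximate_intersection(p, q, r, s))
        std::cout << "node at (" << (*n)[0] << ", " << (*n)[1] << ")\n";
}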
--
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com