
Subject: [ggl] Proposal for Delauney triangulation algorithm
From: John Swensen (jpswensen)
Date: 2011-10-20 15:34:36
I don't take anything harshly. I fully realize that most of the time I don't know what the heck I am doing and usually just hack away until something kind of works. I will definitely go and look for Andrii's Summer of Code results and try to take them for a spin. I only used CDT initially because I found it easy to understand/use and it worked for my simple polygons with holes.
John
On Oct 20, 2011, at 1:00 PM, Simonson, Lucanus J wrote:
> Perhaps I should reintroduce Andrii and expand a little on his comments so that new members of the list understand the context.
>
> During the 2010 Google Summer of Code, Andrii implemented a voronoi diagram algorithm for points and line segments in the plane, and he is currently working on incorporating it into the Boost.Polygon library. There is a trivial algorithm that combines the Delaunay triangulation of points with line segment intersection to realize a Delaunay triangulation of polygons iteratively. While not optimal, it would still be faster than O(n^2), and because both the voronoi diagram of points and line segment intersection are numerically robust in Polygon, the resulting Delaunay triangulation of polygons algorithm would also be numerically robust (though constrained to integer coordinates).
>
> Andrii has invested heroic effort into making his implementation both algorithmically optimal and numerically robust. Usually people pick one or neither of these goals for their implementation, since each is hard to achieve and together near impossible.
>
> To underscore the difficulty of implementing a robust and optimal voronoi diagram algorithm: Jonathan Shewchuk is literally famous for doing so, based on Priest's floating point expansion arithmetic; however, his algorithm is not in fact robust according to recent reports (and is also burdened with an unpleasant commercial license). See the quotation below for an experience report:
> http://vterrain.org/Implementation/Libs/triangulate.html
> - In 2008.01, I adapted the Triangle code with a wrapper to call it from vtlib. It works quite well, for a very large number of polygons. However, there are some (rare) cases of degenerate geometry where it will crash. In particular:
> - It does not like duplicate vertices or duplicate edges. 'Duplicate' in this case is relative to numeric precision: for a building 10-100 meters in size, two vertices within 8 cm of each other, defining a very short edge, can cause Triangle to crash.
> - It does not like it when a hole (inner ring) in the polygon has a vertex in the same location as one in the outer ring (crash).
> Specifically, there is no way to simply throw a big number type at Delaunay triangulation and make a non-robust algorithm robust. You can't use exact rational arithmetic because the circle test includes an irrational square root. Priest's expansions work by iteratively calculating more bits of the result until the comparison can reliably be made; however, in the case where the two irrationals are exactly equal you clearly have a problem, because you could expand forever without knowing whether you will eventually find a difference. Getting the implementation correct requires mathematical proofs about what the code is supposed to do, and then getting the code to do what you proved it should. Andrii's work has exceeded even my own capabilities in computational geometry, in my estimation, and is truly exceptional. His saying that the CDT implementation being discussed isn't numerically robust or algorithmically optimal shouldn't be taken too harshly.
>
> Regards,
> Luke
>
> From: gglbounces_at_[hidden] [mailto:gglbounces_at_[hidden]] On Behalf Of Andrii Sydorchuk
> Sent: Thursday, October 20, 2011 8:51 AM
> To: Generic Geometry Library Discussion
> Subject: Re: [ggl] Proposal for Delauney triangulation algorithm
>
> Hi John,
>
> I had a brief look through the c++ code of the CDT. Here are the conclusions that I made:
> 1) The implementation is not robust. It uses the double type for all computations and epsilons for comparison decisions. You should investigate whether this might corrupt the algorithm's internal data structures.
> 2) The complexity seems to be Θ(N^2), which is not optimal (O(N*log(N)) is possible). At least the advancing front doesn't use a BST (it looks like a doubly linked list). It might be possible to use std::map or std::set instead of that data structure, but I am not sure they are suitable for the algorithm's needs. You might also need to check the complexity requirements of all the other structures and operations.
>
> Andrii
>
> On Wed, Oct 19, 2011 at 2:18 PM, John Swensen <jpswensen_at_[hidden]> wrote:
>
> On Oct 19, 2011, at 8:07 AM, Bruno Lalande wrote:
>
> > Hi John,
> >
> > It could be interesting.
> >
> > First, have you checked that the license used by this library is compatible with what you want to do?
> >
> > Second, I'm not sure you fully understand what implementing a Boost.Geometry algorithm is really about. You're talking about replacing the classes in the library (point, polygon) with the ones provided by Boost.Geometry. But that's not the point: the purpose of Boost.Geometry algorithms is to be able to work on any data type, as long as it meets some requirements. So basically your algorithm will not know/use geometries from Boost.Geometry directly, but its concepts. You'll have to use all the metafunctions and other mechanisms that allow us to manipulate our data through concepts and not directly. Was this your intention?
> >
> > Regards
> > Bruno
>
> I guess maybe I don't understand the architecture of Boost.Geometry very well. How do you differentiate between algorithms that only work on polygons or only work on sets of points (using the term "algorithm" loosely here to describe the mathematical definition rather than the Boost.Geometry definition)? Or do operations that are specific to a very specific instantiation of a template not belong in Boost.Geometry?
>
> I was proposing to create a class that takes a model::polygon or model::multi_polygon (regardless of the point type), performs Delaunay triangulation on the polygon(s), and spits out a model::multi_polygon containing the resulting triangles as model::polygon objects. Does such a thing belong in Boost.Geometry, or does it not fit the paradigm of having an operation work on any data type? If it doesn't fit into Boost.Geometry, then I can easily just start a github project for "algorithms" that operate on Boost.Geometry objects but don't meet the genericity requirements. Either way is fine with me, but let me know.
>
> John
>
> P.S. It is BSD-licensed and the two core maintainers think it is an awesome idea.
> _______________________________________________
> ggl mailing list
> ggl_at_[hidden]
> http://lists.osgeo.org/mailman/listinfo/ggl
>
Geometry list run by mateusz at loskot.net