
Subject: Re: [boost] [geometry] area, length, centroid, behavior
From: Brandon Kohn (blkohn_at_[hidden])
Date: 2009-03-08 11:30:26

From: "Thomas Klimpel" <Thomas.Klimpel_at_[hidden]>
Sent: Sunday, March 08, 2009 5:49 AM
To: <boost_at_[hidden]>
Subject: Re: [boost] [geometry] area, length, centroid, behavior
>> Is it really just the problem of maintaining a stable sorting of the
>> segment intersections with the sweepline? This will surely require some
>> care, but doesn't sound like a fundamental issue ruling out
>> Bentley-Ottmann for floating point.
It does seem simple, doesn't it? Perhaps you should give it a try and see... I
would certainly welcome a more robust solution. I would point out that even
the LEDA Bentley-Ottmann implementation has issues with floating point (at
least the version I tried a couple of years ago).
Here's another paper describing (in sometimes painful detail ;) some of the
issues.
ftp://ftp.inria.fr/INRIA/publication/publipdf/RR/RR3270.pdf
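To make the failure mode these papers describe concrete: most of it comes
down to the sign of the 2x2 orientation determinant. A minimal, generic
Python sketch (not taken from LEDA or from my implementation):

```python
# Standard orientation predicate: sign of the cross product of (p -> q)
# and (p -> r). Segment intersection tests are typically built on it.
def orient(p, q, r):
    """> 0 if p->q->r turns left, < 0 if it turns right, 0 if collinear.
    Exact only in exact arithmetic; with doubles the computed sign can be
    wrong for nearly collinear inputs, which is what the robustness
    literature is largely about."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

# Well-separated inputs are classified reliably...
print(orient((0.0, 0.0), (1.0, 0.0), (0.5, 1.0)))   # left turn, positive
# ...but for nearly collinear triples the rounded determinant can carry
# the wrong sign, and two such wrong answers need not be consistent
# with each other.
```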
>> What looks more nasty to me is that for nearly parallel segments,
>> Bentley-Ottmann has to compute the intersection point of the
>> corresponding lines, even if it is not an intersection point of the
>> segments. But perhaps a simple modification of Bentley-Ottmann can avoid
>> the need to depend on "false" intersection points.
I don't have anything like this in my implementation. I use segment
intersection testing, not line intersection testing (though the segment tests
do involve some of the same mechanisms, with added constraints). Intersection
tests are only performed on segments which are adjacent to each other in the
sweep line. I don't see how being nearly parallel makes any difference: if
the segments (not the lines) do not intersect, then no 'false' intersection
is reported. The issues that I have encountered come from computing the
intersection point of a segment with the sweep line (not between the
segments) and the subsequent comparison with similar intersection points
from neighboring segments in close vicinity.
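To illustrate what I mean by that comparison, here is a rough Python sketch
(hypothetical, not my actual code): each active segment's crossing with the
sweep line is interpolated, and neighbors are ordered by the resulting x.
Two segments that are genuinely distinct can yield crossings that compare
equal in double precision:

```python
def sweep_x(seg, y):
    """x-coordinate where seg = ((x1, y1), (x2, y2)) meets the horizontal
    sweep line at height y. Assumes y1 <= y <= y2 and y1 != y2."""
    (x1, y1), (x2, y2) = seg
    t = (y - y1) / (y2 - y1)      # rounding here...
    return x1 + t * (x2 - x1)     # ...and here perturbs the result

# b's upper endpoint is one ulp to the right of a's, so just above the
# shared lower endpoint b truly crosses to the right of a...
a = ((1.0, 0.0), (2.0, 10.0))
b = ((1.0, 0.0), (2.0000000000000004, 10.0))

xa = sweep_x(a, 1e-12)
xb = sweep_x(b, 1e-12)
# ...yet the interpolated crossings compare equal in doubles, so a plain
# fp comparison cannot order the two segments in the sweep structure.
print(xa == xb)
```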
>> I'm just wondering whether there are more fundamental issues ruling out a
>> Bentley-Ottmann type algorithm applied to floating point than "simple"
>> stable sorting issues.
>> I have a
>> faithful implementation of the algorithm from a textbook which still
>> manages to fail on occasion (even with fuzzy comparisons, though not as
>> frequently as without).
>
> I don't think that fuzzy comparison is a good idea when you want to make
> Bentley-Ottmann robust for floating point. You will have to ensure that
> "important decisions" are only computed once, and enforce that no other
> intermediate/computed result can contradict these "decisions".
Perhaps you are right here, but I would have to think about it further. I'm
not sure that there are cases where the same information is calculated
multiple times. The segment intersections are calculated once. The sorting
is only done when a segment is inserted into the sweep-line structure (so
one comparison between each inserted segment and those already in the
structure). I suppose the one piece of information that is calculated many
times is the topological relation to neighboring segments when interpolating
the current intersection point with the sweep line over a range of event
points. When points become very close (to within an epsilon tolerance), the
ordering on the line becomes indeterminate under simple floating-point
comparison. It may be possible to do something along the lines of what you
suggest here.
Perhaps this idea is similar to what is done here:
http://cutebugs.net/files/curve/227.ps
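As a concrete note on why fuzzy comparisons only reduce the failure
frequency (a generic Python sketch, not from my code): an epsilon-tolerant
"less than" is not a strict weak ordering, because the equivalence it
induces is not transitive, while an ordered sweep-line structure silently
assumes that it is:

```python
EPS = 1e-9

def fuzzy_less(a, b):
    # Treat values within EPS of each other as equal.
    return a < b - EPS

x, y, z = 0.0, 0.6e-9, 1.2e-9
# x ~ y and y ~ z under the tolerance, yet x < z outright, so the
# induced "equality" is not transitive; an ordered container keyed on
# fuzzy_less can therefore be silently corrupted.
print(fuzzy_less(x, y), fuzzy_less(y, z), fuzzy_less(x, z))
# -> False False True
```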
If you would like to give this a go, I'd be happy to collaborate. The code
is available at:
http://sourceforge.net/svn/?group_id=240335 (project name gengeomalg).
svn checkout:
svn co https://gengeomalg.svn.sourceforge.net/svnroot/gengeomalg gengeomalg
> Assuming it is possible to make a Bentley-Ottmann type sweep-line
> algorithm robust against floating point issues, what other issues will
> have to be considered when deciding which algorithm to use?
>
I would say the same considerations as any other algorithm: correctness,
runtime complexity, and memory use.
Regards,
Brandon
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk