Lucas,

Thank you for the detailed response.  The data was actually from our application, and the only reason I was investigating was that the application was performing poorly.  However, as it turns out, the application was faulty and generating polygons that were inappropriate.

I had noticed the divide and conquer further up the stack, but as with any unfamiliar code, misunderstandings are easy to make, so I wanted to make sure.

Regards,
Josh Pieper

On Wed, Jan 19, 2011 at 12:37 PM, Simonson, Lucanus J <lucanus.j.simonson@intel.com> wrote:
Josh Pieper wrote:
>First: we are using boost 1.45.0 with gcc -O3 on Ubuntu 10.04.  (Although similar problems appear with VS2008 on Windows 7).
>
>We have been observing performance with boost::polygon that is significantly worse than we were
>expecting when working with any angle polygons.  Namely quadratic instead of the expected O(n log n).
>While profiling, I noticed that most of the time is spent in the overload of
>line_intersection::validate_scan at detail/scan_arbitrary.hpp:154 which is commented as: "//quadratic
>algorithm to do same work as optimal scan for cross checking".
>
>I have inlined our test program below, and some performance results measured by instruction count
>under "valgrind --tool=callgrind".  Is this poor performance because our data is degenerate, or
>because a vestigial quadratic double check is still being applied?

So a couple of things need explaining.  The validate scan is worst case quadratic.  The expected runtime of this function is O(n^1.5) if we assume that sqrt(n) edges intersect the sweepline at any given time.  It was originally written to validate the Bentley-Ottmann sweepline algorithm I was working on, but then I found it to be quite fast and put it into production use without changing the name, since it is an implementation detail.  Moreover, you will notice that it is being called from a function with divide and conquer in the name.  On top of the suboptimal validate_scan algorithm I am doing a divide and conquer.  This approximates the optimal O(n log n) performance in the cases where the data is amenable to divide and conquer (which is commonly the case).

Finally, your data appears to be randomly generated edges that you inflate by some small amount in the edge_plus object, which is a worst case for line intersection.  In randomly generated data the expected number of line intersections is O(n^2).  Any line intersection algorithm will have O(n^2) (or worse) runtime with respect to input points on randomly generated data.  Whenever I state the algorithmic complexity of the algorithm I always specify that n is input points plus line intersection points.  I also document that I use divide and conquer.  There are pathological cases where divide and conquer doesn't help, and even where the average number of line segments intersecting the sweepline is O(n) (think many parallel strips at 45 degrees).  These cases are fairly artificial.  In PCB or package layout, where such data would commonly be seen, divide and conquer is usually of benefit, because 45 degree strips across the entire package or board aren't going to implement a circuit.
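If you want to see why randomly generated segments are a worst case independent of any particular clipping implementation, here is a quick standalone sketch (plain C++, not Boost.Polygon code; the brute-force counting is only for illustration) that counts proper intersections among n random segments.  The count grows roughly 4x each time n doubles, i.e. Theta(n^2), so any algorithm whose output includes those intersection points must do at least that much work:

#include <cstdio>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };
struct Seg { Pt a, b; };

// Sign of the cross product (p2 - p1) x (p3 - p1): tells which side of
// the line through p1 and p2 the point p3 lies on.
static double cross(const Pt& p1, const Pt& p2, const Pt& p3) {
  return (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
}

// True when the two segments properly cross (degenerate touching and
// collinear cases are ignored; they have probability zero for random doubles).
static bool intersects(const Seg& s, const Seg& t) {
  double d1 = cross(s.a, s.b, t.a);
  double d2 = cross(s.a, s.b, t.b);
  double d3 = cross(t.a, t.b, s.a);
  double d4 = cross(t.a, t.b, s.b);
  return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

static double rnd() { return rand() / (double)RAND_MAX; }

int main() {
  for (int n = 250; n <= 2000; n *= 2) {
    std::vector<Seg> segs(n);
    for (int i = 0; i < n; ++i) {
      segs[i].a.x = rnd(); segs[i].a.y = rnd();
      segs[i].b.x = rnd(); segs[i].b.y = rnd();
    }
    long count = 0;
    for (int i = 0; i < n; ++i)
      for (int j = i + 1; j < n; ++j)
        if (intersects(segs[i], segs[j]))
          ++count;
    // The intersection count grows roughly 4x each time n doubles.
    std::printf("n = %5d  intersections = %ld\n", n, count);
  }
  return 0;
}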

Benchmarking polygon clipping can be tricky.  It is just as easy to benchmark a best case input and conclude fantastic performance as to benchmark a worst case and conclude terrible performance.  It is a little harder to get the real story.  I suggest using application data and collecting performance requirements for your application.  The performance isn't poor unless it doesn't meet requirements on your application data or you have an alternative that is way faster.
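For example, a minimal skeleton of what I mean by benchmarking against application data could look like the sketch below.  load_from_application() is just a hypothetical stand-in for however you read your real polygons (here it fabricates a single square so the program is self-contained); the point is to time the boolean op on your data rather than on synthetic random edges:

#include <boost/polygon/polygon.hpp>
#include <cstdio>
#include <ctime>
#include <vector>

namespace gtl = boost::polygon;
using namespace boost::polygon::operators;

typedef gtl::polygon_data<int> Polygon;
typedef gtl::polygon_set_data<int> PolygonSet;
typedef gtl::point_data<int> Point;

// Hypothetical stand-in: replace with whatever loads your real
// application polygons.  Here it just produces one 100x100 square.
static void load_from_application(std::vector<Polygon>& polys) {
  Point pts[] = { gtl::construct<Point>(0, 0),
                  gtl::construct<Point>(100, 0),
                  gtl::construct<Point>(100, 100),
                  gtl::construct<Point>(0, 100) };
  Polygon poly;
  gtl::set_points(poly, pts, pts + 4);
  polys.push_back(poly);
}

int main() {
  std::vector<Polygon> input_a, input_b;
  load_from_application(input_a);
  load_from_application(input_b);

  PolygonSet a, b, result;
  for (std::size_t i = 0; i < input_a.size(); ++i) a.insert(input_a[i]);
  for (std::size_t i = 0; i < input_b.size(); ++i) b.insert(input_b[i]);

  std::clock_t start = std::clock();
  gtl::assign(result, a + b);  // boolean OR of the two polygon sets
  double seconds = double(std::clock() - start) / CLOCKS_PER_SEC;

  // Touch the output so the work cannot be optimized away.
  std::printf("or took %.3f s, result area = %ld\n",
              seconds, (long)gtl::area(result));
  return 0;
}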

Regards,
Luke


_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users