
Subject: Re: [Boost-users] [geometry] Runtime dimensionality
From: Adam Wulkiewicz (adam.wulkiewicz_at_[hidden])
Date: 2017-09-26 13:40:28


The Maschine Via Boost-users wrote:
> This was presented at BoostCon I attended a number of years back.
> https://www.youtube.com/watch?v=RY_YSYz7b8I
> Pablo is a great guy to talk to about this.
>

Thanks for the link. That's interesting, but I don't have time right now
to watch the whole talk. Let me be more specific: what set of
Boost.Geometry operations would you like to use? E.g. would you like to
do spatial indexing using the rtree, perform set operations, perform
relational operations, etc.? And what's the maximum dimension?

The most generic approach would be to set the dimension to the highest
dimension you support and then, as Florian pointed out, return 0 for
dimensions higher than the actual dimension. Depending on the actual use
case the unneeded dimensions could still be processed by the library, so
this could harm performance. But maybe it would be good enough for you.
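For illustration, a rough sketch of such a point type (padded_point is a
hypothetical name, not something Boost.Geometry provides): the
compile-time dimension is the maximum you support, and coordinates beyond
the runtime dimension read as 0:

#include <cstddef>
#include <vector>
#include <boost/mpl/int.hpp>
#include <boost/geometry.hpp>

template <std::size_t MaxD>
struct padded_point
{
    std::vector<double> coords; // runtime dimension, <= MaxD

    double get(std::size_t i) const
    { return i < coords.size() ? coords[i] : 0.0; }

    void set(std::size_t i, double v)
    {
        if (coords.size() <= i)
            coords.resize(i + 1, 0.0);
        coords[i] = v;
    }
};

// Adapt it to the Point concept so the library sees dimension MaxD.
namespace boost { namespace geometry { namespace traits {

template <std::size_t MaxD>
struct tag<padded_point<MaxD> > { typedef point_tag type; };

template <std::size_t MaxD>
struct coordinate_type<padded_point<MaxD> > { typedef double type; };

template <std::size_t MaxD>
struct coordinate_system<padded_point<MaxD> > { typedef cs::cartesian type; };

template <std::size_t MaxD>
struct dimension<padded_point<MaxD> > : boost::mpl::int_<MaxD> {};

template <std::size_t MaxD, std::size_t I>
struct access<padded_point<MaxD>, I>
{
    static double get(padded_point<MaxD> const& p) { return p.get(I); }
    static void set(padded_point<MaxD>& p, double v) { p.set(I, v); }
};

}}} // namespace boost::geometry::traits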

In some cases it would be possible to optimize the code. E.g. if you
wanted to use the rtree you could pass your own equal_to getter checking
only the important dimensions, or overload e.g.
boost::geometry::intersects(), which is used by the rtree while
performing a boost::geometry::index::intersects() spatial query, etc.
Regarding set and relational operations for polygons, AFAIR only the
first 2 dimensions are used in most of the algorithm anyway (besides the
actual comparison of points, I think).
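E.g. a minimal sketch of the rtree part, assuming you store pairs of 4d
points and ids but only the first 2 dimensions carry data
(first_two_equal is a hypothetical name):

#include <utility>
#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

typedef bg::model::point<double, 4, bg::cs::cartesian> point4;
typedef std::pair<point4, unsigned> value;

// EqualTo comparing only the dimensions that actually carry data.
struct first_two_equal
{
    bool operator()(value const& l, value const& r) const
    {
        return bg::get<0>(l.first) == bg::get<0>(r.first)
            && bg::get<1>(l.first) == bg::get<1>(r.first)
            && l.second == r.second;
    }
};

int main()
{
    bgi::rtree<value, bgi::quadratic<16>, bgi::indexable<value>,
               first_two_equal> rt;

    point4 p;
    bg::set<0>(p, 1.0); bg::set<1>(p, 2.0);
    bg::set<2>(p, 0.0); bg::set<3>(p, 0.0);
    rt.insert(std::make_pair(p, 0u));
}

The equal_to is used e.g. by remove(), so cutting it down to the
meaningful dimensions avoids comparing the padding.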

If the operation you'd like to perform were CPU-bound, had complexity
linear or worse, and the maximum dimension were high while the actual
dimension was low, then it could be worth converting your geometry to
e.g. one of the Boost.Geometry models having a compile-time dimension.
The conversion would have linear complexity, so the overall complexity
would stay the same, and it could speed up the whole operation because
you'd avoid processing the unnecessary dimensions.
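E.g. a linear-time conversion, sketched under the assumption that 4 is
the maximum dimension (to_static_point is a hypothetical helper):

#include <vector>
#include <boost/geometry.hpp>

namespace bg = boost::geometry;
typedef bg::model::point<double, 4, bg::cs::cartesian> point4;

// O(n) per point; padding the missing dimensions with 0 keeps the
// compile-time type fully defined.
point4 to_static_point(std::vector<double> const& dyn)
{
    point4 p;
    bg::set<0>(p, dyn.size() > 0 ? dyn[0] : 0.0);
    bg::set<1>(p, dyn.size() > 1 ? dyn[1] : 0.0);
    bg::set<2>(p, dyn.size() > 2 ? dyn[2] : 0.0);
    bg::set<3>(p, dyn.size() > 3 ? dyn[3] : 0.0);
    return p;
}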

Converting the data could also increase performance if the pointer
indirection of a purely dynamic Eigen::VectorXd (whose coefficients live
in dynamically allocated memory) caused cache misses while accessing the
data. This would probably be the case if the operation had complexity
higher than linear, since the conversion itself is linear and I'm
assuming that most of the time the program would be waiting for data,
whether copying it or processing it with a linear algorithm.
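A minimal sketch of the layout difference I mean (standard Eigen and
Boost.Geometry usage, nothing project-specific):

#include <vector>
#include <boost/geometry.hpp>
#include <Eigen/Dense>

typedef boost::geometry::model::point
    <double, 4, boost::geometry::cs::cartesian> point4;

int main()
{
    // Each VectorXd owns a separately heap-allocated buffer, so a scan
    // over many of them chases pointers and tends to miss the cache.
    std::vector<Eigen::VectorXd> dynamic_pts(1000, Eigen::VectorXd::Zero(4));

    // point4 keeps its 4 doubles inline, so the vector's storage is one
    // contiguous block and scans are cache-friendly.
    std::vector<point4> static_pts(1000);
}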

And so on...

Adam


