Hi Bruno,
If we use a multipolygon concept for the polyhedron, then for a
slightly non-convex polyhedron the only way to compute its volume
is by tetrahedral decomposition. If the tetrahedral decomposition
fails for a multipolygon polyhedron, we will see inverted
tetrahedra in the decomposition (negative mixed-product volume).
That might be our test of choice, to be used with the
metafunction you mentioned. Of course, there is another, possibly
faster way to compute the volume: an analytical formula, but that
requires strict convexity of the polyhedron. We can probably
switch between those options somehow.
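To make that check concrete, here is a minimal standalone sketch
(plain structs of my own, not tied to any concept yet) of the signed
volume of one tetrahedron via the mixed product; summing these terms
over the decomposition gives the polyhedron volume, and a negative
term flags an inverted tetrahedron:

    #include <iostream>

    struct vec3 { double x, y, z; };

    // Signed volume of tetrahedron (o, a, b, c) via the mixed
    // (scalar triple) product: V = ((a-o) x (b-o)) . (c-o) / 6.
    // A negative result means the tetrahedron is inverted.
    double signed_tetra_volume(vec3 o, vec3 a, vec3 b, vec3 c)
    {
        double ux = a.x - o.x, uy = a.y - o.y, uz = a.z - o.z;
        double vx = b.x - o.x, vy = b.y - o.y, vz = b.z - o.z;
        double wx = c.x - o.x, wy = c.y - o.y, wz = c.z - o.z;

        double mixed = ux * (vy * wz - vz * wy)
                     - uy * (vx * wz - vz * wx)
                     + uz * (vx * wy - vy * wx);
        return mixed / 6.0;
    }

    int main()
    {
        vec3 o = { 0, 0, 0 }, a = { 1, 0, 0 },
             b = { 0, 1, 0 }, c = { 0, 0, 1 };
        // Unit tetrahedron, prints ~0.166667
        std::cout << signed_tetra_volume(o, a, b, c) << std::endl;
        return 0;
    }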
I'll start working with P1 as you suggested, since there is a lot
to try out. When I read about intersection algorithms for
polyhedra, the asymptotic complexity is often mentioned, but the
real question is how much a naive O(m*n) intersection can be sped
up with collision detection for the polygons of the polyhedron
(and/or axis-aligned bounding box [AABB] tests).
I'll try to get P1, collision detection and the AABB tests up and
running. Then we can apply the naive O(m*n) intersection like
this:
P1 intersect(P1 X, P1 Y):
    P1 result
    if AABB_intersect(X, Y):
        if collide(X, Y):
            for all polygons pX of X:
                for all polygons pY of Y:
                    if collide(pX, pY):
                        pResult = intersect(pX, pY)
                        append pResult to result
    return result
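And here is a minimal sketch of the AABB pre-test I have in mind,
using Boost.Geometry's 3D point and box (collide() and intersect()
for the polygons are of course still to be written):

    #include <vector>
    #include <iostream>
    #include <boost/geometry.hpp>

    namespace bg = boost::geometry;

    typedef bg::model::point<double, 3, bg::cs::cartesian> point3;
    typedef bg::model::box<point3> box3;

    // AABB of a vertex set: start from the first vertex and expand.
    box3 aabb(std::vector<point3> const& vertices)
    {
        box3 b(vertices.front(), vertices.front());
        for (std::size_t i = 1; i < vertices.size(); ++i)
            bg::expand(b, vertices[i]);
        return b;
    }

    int main()
    {
        std::vector<point3> vx, vy;
        vx.push_back(point3(0, 0, 0));
        vx.push_back(point3(1, 1, 1));
        vy.push_back(point3(0.5, 0.5, 0.5));
        vy.push_back(point3(2, 2, 2));

        // Cheap rejection test before the O(m*n) polygon-pair loop.
        if (bg::intersects(aabb(vx), aabb(vy)))
        {
            std::cout << "AABBs overlap, run collide()/intersect() "
                         "on the polygon pairs" << std::endl;
        }
        return 0;
    }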
P2, which is graph based, will wait for a while, but I would like
to do that as well. The question is which algorithm is faster for
a reasonable number of polygons (I am dealing with polyhedra that
have up to, say, 20 polygons): Preparata/Muller or the naive
intersection with collision detection (and/or the AABB test).
So I'll get started on P1 and see where it leads. As soon as I
have something to show or to ask, I'll let you know. :)
Best regards,
Tomislav
On 05/22/2013 11:11 AM, Bruno Lalande wrote:
Hi Tomislav,
Good point, I hadn't realized you were merely using the
existing multipolygon concept. I think it makes sense.
Moving forward we should also propose validation
functions to check that it actually is a polyhedron (as a
polygon soup can be anything), but this can be done
later.
Implementing the Separating Axis collision test is one of
the first things I wanted to do when I started to handle real
3D stuff, so I'm glad you're looking into this. The
algorithm indeed relies on the polyhedron being convex,
which essentially means that our concepts are going to
need a way to express that (i.e. to say "I am convex") so
that optimal algorithms like this one can be selected. Long
ago I simply thought there would be a Polyhedron and a
ConvexPolyhedron concept, but things are more
complicated in that there are tons of ways to represent
polyhedra (not just the ones you described), all valid
in some situations. I can see 2 solutions:
- ConvexPolyhedron could be a concept on its own, which a
model can fulfill in addition to another one (e.g. it can be
an AdjacencyGraph and a ConvexPolyhedron) - however this
requires changes to our current design, I think.
- We could have an is_convex<Geometry> metafunction that
algorithms would call when they're interested in whether a
geometry is convex (it could also be used with polygons and
plenty of other things - and could be hardcoded to true for
concepts like box).
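Roughly something like this (just a sketch to give the idea,
nothing final):

    #include <boost/geometry.hpp>
    #include <boost/static_assert.hpp>

    namespace bg = boost::geometry;

    // Default: assume nothing about convexity.
    template <typename Geometry>
    struct is_convex
    {
        static const bool value = false;
    };

    // A box is convex by construction, so hardcode it to true.
    template <typename Point>
    struct is_convex< bg::model::box<Point> >
    {
        static const bool value = true;
    };

    // Algorithms could then dispatch on is_convex<Geometry>::value.
    typedef bg::model::point<double, 3, bg::cs::cartesian> point3;
    BOOST_STATIC_ASSERT(( is_convex< bg::model::box<point3> >::value ));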
So all in all, starting with your P1 is a valid approach.
In terms of the IO stuff you've written, it looks right, since
the format seems to represent exactly that (a set of
polygons without any strong guarantee about their spatial
disposition). And you can try to work on other algorithms with
this concept as well to see what it gives.
Regards
Bruno