Subject: [ggl] spacial index
From: Adam Wulkiewicz (adam.wulkiewicz)
Date: 2011-05-02 08:09:37
Simonson, Lucanus J wrote:
> Adam Wulkiewicz wrote:
>> New nodes are smaller than the old ones. They contain a vector of
>> pointers or values and the variant's id. The old ones contain a vector,
>> a pointer to the parent and the level number. Leaf nodes also contain an
>> additional vector. Moreover, shared_ptrs are used, so there are reference
>> counters. The size of the new rtree is probably bigger because I don't
>> delete split nodes. I just clear the elements vector of the source node
>> and reuse it as the first node, so the memory is still allocated.
>
> Why shared_ptrs? Can't we assign ownership for the nodes to their parents? I would think that we could make deallocation of them explicit and dispense with the overhead of reference counting.
shared_ptrs were used in the old implementation and aren't used in the
new one, sorry for the misunderstanding.
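To make the node layout clearer, here is a rough sketch of what a
variant-based node like the one described above could look like. The names
are made up and this is not the actual ggl code; the point is that children
are owned by their parents and deleted explicitly by the tree, with no
reference counting involved:

// Hypothetical sketch of a variant-based rtree node, not the ggl source.
// The tree deletes child nodes explicitly when it is destroyed, so no
// shared_ptr / reference counters are needed.
#include <boost/variant.hpp>
#include <utility>
#include <vector>

template <typename Value, typename Box>
struct rtree_node;

template <typename Value, typename Box>
struct rtree_internal
{
    // (bounding box, child pointer) pairs; children are owned by the tree
    std::vector< std::pair<Box, rtree_node<Value, Box>*> > children;
};

template <typename Value, typename Box>
struct rtree_leaf
{
    // leaf nodes store the values themselves
    std::vector<Value> values;
};

template <typename Value, typename Box>
struct rtree_node
{
    // which() of the variant plays the role of the "variant's id"
    boost::variant<
        rtree_internal<Value, Box>,
        rtree_leaf<Value, Box>
    > data;
};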
>> Thanks, Lucanus, for reminding me about this technique. I'll give
>> it a try.
>
> I'm glad I could make a positive contribution.
After rearranging the code a little bit, the compiler seems to handle it
differently than before and the times are a lot shorter for searching and
slightly longer for creating the tree. This was not intended; only some
of the classes were moved. E.g. the default version of visitors::insert was
implemented directly instead of leaving the implementation empty and
providing a specialization in a different file. The recursive version of the
searching visitor wasn't changed at all. Times for the old implementation
haven't changed.
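Just to illustrate the kind of change I mean (a simplified sketch, not the
actual ggl source):

// Simplified illustration of the rearrangement; names and members are made
// up. Previously the primary template was only declared and the
// implementation lived as a specialization in another file; now the primary
// template carries the default implementation, so the compiler sees it at
// the point of use.
namespace visitors {

template <typename Value, typename Node>
struct insert
{
    explicit insert(Value const& v) : value(v) {}

    void operator()(Node & n) const
    {
        // default insertion logic defined right here instead of in a
        // specialization placed in a different file
    }

    Value const& value;
};

} // namespace visitors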
I've tested 5 different search implementations:
1. classic recursive.
2. loop with a stack implemented as an array of static size. This is only a
reference test, probably the fastest variant, and won't be used because the
size of the tree is not known at compile time.
3. loop with a stack implemented as std::deque. The memory is allocated for
each search.
4. loop with a stack implemented as an array of static size plus std::deque
(sketched below). The second container is used only if the number of
elements exceeds the maximum number the array can hold. In the tests the
second container was not used and no memory was allocated, but there is some
additional code, i.e. conditions which slow the process down.
5. loop with a stack implemented as std::vector, but there is one global
container in the tree which is passed to the searching visitor. The memory
is not allocated for each search, but the find method is no longer const, so
this probably won't be used either. Still, it's a nice test.
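For reference, variant 4 could look roughly like this. It is only a sketch:
the Node interface (is_leaf(), iteration over (box, child) pairs) and the
intersects() test are assumed for illustration, not taken from the ggl code.

// Sketch of variant 4: an explicit traversal stack kept in a fixed-size
// array, spilling into a std::deque only when the array overflows.
#include <cstddef>
#include <deque>

template <typename Node, typename Box, typename Visitor, std::size_t StaticSize>
void search_hybrid(Node * root, Box const& query, Visitor & vis)
{
    Node * static_stack[StaticSize];   // primary stack, no allocation
    std::size_t top = 0;
    std::deque<Node *> overflow;       // used only if the array is exhausted

    static_stack[top++] = root;

    while ( top > 0 || !overflow.empty() )
    {
        Node * n;
        if ( !overflow.empty() )       // these extra branches are the
        {                              // "additional code" mentioned in point 4
            n = overflow.back();
            overflow.pop_back();
        }
        else
        {
            n = static_stack[--top];
        }

        if ( n->is_leaf() )            // assumed node interface
        {
            vis(*n);                   // report values from the leaf
        }
        else
        {
            for ( typename Node::iterator it = n->begin(); it != n->end(); ++it )
            {
                if ( intersects(it->first, query) )     // assumed box test
                {
                    if ( top < StaticSize )
                        static_stack[top++] = it->second;
                    else
                        overflow.push_back(it->second); // allocation happens here
                }
            }
        }
    }
}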
max:8 min:4
old creation: 8.0s
old search: 3.4s
new creation: 3.2s
new search 1 re: 2.46s
new search 2 ar: 2.26s
new search 3 dq: 2.83s
new search 4 ad: 2.40s
new search 5 gl: 2.54s
max:20 min:6
old creation: 7.7s
old search: 1.6s
new creation: 3.4s
new search 1 re: 1.24s
new search 2 ar: 1.19s
new search 3 dq: 1.45s
new search 4 ad: 1.27s
new search 5 gl: 1.34s
max:32 min:8
old creation: 8.0s
old search: 3.4s
new creation: 3.6s
new search 1 re: 1.53s
new search 2 ar: 1.47s
new search 3 dq: 1.75s
new search 4 ad: 1.57s
new search 5 gl: 1.64s
I only take algorithms 1 and 4 into consideration because they are the
fastest. In general, the classic recursive algorithm is better. Algorithm 4
is better for trees whose nodes hold a small number of elements. For small
nodes, traversing the tree dominates over the iterations performed at each
traversal step. But if the tree has a lot of levels or overlapping nodes, or
the size of the primary array used in algorithm 4 is too small, memory will
be allocated, which will slow the algorithm down.
I could perform more tests to see at what node size both algorithms have the
same speed. Then one algorithm could be used for some sizes and the other
one for the rest. But I don't know if this is necessary.
I guess it is better to stay with the classic recursive version because it
is fast, clearer and works the same way for all sizes of the tree. In other
words, its characteristic is continuous: the speed of a search will be
proportional to the size of the tree. Algorithm 4 may be fast for some
trees, but if the tree grows it will suddenly become slower than someone
assumed at the beginning. Moreover, there is no good way to choose the
initial number of elements of the primary array. Well, it could be a
user-defined macro, but this just complicates the tree and it will still
behave as described above.
Btw, what do you think about making the min and max numbers of elements
compile-time template parameters instead of run-time parameters? Some
additional optimizations could be possible, e.g. the use of arrays of static
size on the stack instead of allocating memory. This could give us another
speedup.
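Something along these lines (a hypothetical interface, just to show the
idea; the capacities become part of the type, so node storage can live in a
fixed-size array instead of a heap-allocated vector):

// Hypothetical interface sketch, not the ggl code: Min/Max as compile-time
// template parameters allow static arrays for node storage.
#include <boost/array.hpp>
#include <cstddef>

template <typename Value, std::size_t Min, std::size_t Max>
class rtree
{
    struct leaf
    {
        boost::array<Value, Max> values;   // no allocation, capacity is Max
        std::size_t count;                 // number of values actually stored
    };
    // ...
};

// usage: capacities fixed at compile time
// rtree<my_value, 4, 8> tree;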
Regards,
Adam
Geometry list run by mateusz at loskot.net