

Subject: Re: [geometry] Crash in rtree after growing shared memory, VS2012 x64, Boost 1.55
From: Tim Finer (tim.finer_at_[hidden])
Date: 2014-11-17 16:11:59


Hi Adam,

It's been a few months, but I finally got back to trying out the
Boost.Geometry rtree with a resizable memory-mapped file. Things are
looking up, and I have another question.

I've edited our correspondence down to the essentials:

On 7/26/14, 4:08 AM, Adam Wulkiewicz wrote:
> Hi Tim,
>
> Tim Finer wrote:
>>> On Jul 25, 2014, at 6:16 PM, Adam Wulkiewicz
>>> <adam.wulkiewicz_at_[hidden]> wrote:
>>>
>>> In order to work the way you want the rtree requires an allocator
>>> which will be able to increase the size of shared memory if needed.
>>> I'm not sure if this is possible in general and/or with Interprocess.
>>> I'll try to play with it in my free time.
>>> If you were able to come up with something before me, please let me
>>> know.
>> Some other options I'm looking at are:
>>
>> 1 Scanning all the data first (it isn't in a convenient format, so
>> I'll need to store it in another mapped file), then allocating enough
>> space for the rtree.
>
> FYI, the packing algorithm done in the constructor (when you're
> creating the rtree from e.g. a pair of iterators) should construct an
> rtree having nodes "packed" as tightly as possible, so you could actually
> predict what'll be the actual number of nodes created, at least for the
> currently used algorithm. If you're curious, see line 109 of
> https://github.com/boostorg/geometry/blob/develop/include/boost/geometry/index/detail/rtree/pack_create.hpp.
> So maybe it'd be a good idea to first load the objects to a vector,
> calculate their size, then create a shared memory and construct an
> rtree using pack-create.
> Though I fear that if the memory could be fragmented this might not
> work in all cases.
>
I want to avoid a first pass and would prefer to trade space for time.
Is there a way that I could "actually predict what'll be the actual
number of nodes created" given a live rtree? I'd like to do this based
on the currently populated rtree, so that I could find the worst-case
scenario and allocate at least that much memory in a node-insertion wrapper.

>
>> 3. Monitor the capacity of the shared file and proactively
>> resize given a threshold (say 90%). I just thought of this one and I
>> like it because it sidesteps the entire exception problem
>> altogether. The more I think about it, the more I like this one.
>
> Yes, this is also worth trying.
> It seems that there is a get_free_memory() method in
> managed_shared_memory so the amount of free space could be easily
> returned.
> In case the memory could be fragmented, you could try to allocate
> a bigger block and see if it succeeds. Then resize the memory if
> necessary.
> When the exception is thrown get_free_memory() returns 968 in my test
> and I'm starting from 1024 * 64.
> The author of Boost.Interprocess could probably give us some pointers
> about how big the threshold should be and if the fragmentation should
> be taken into account.
>
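Adam's free-memory check can be isolated into a tiny predicate. This is a hedged sketch: in a real wrapper, `free_bytes` would come from the segment's get_free_memory(), and the grow itself would go through Boost.Interprocess (managed_mapped_file::grow, which, if I read the Interprocess docs correctly, requires the file to be unmapped first); the function name and the 90% default are mine.

```cpp
#include <cstddef>

// Returns true when used space exceeds `threshold` of the segment's
// capacity. `free_bytes` stands in for segment.get_free_memory();
// the 90% default mirrors the threshold discussed in this thread.
bool should_grow(std::size_t capacity, std::size_t free_bytes,
                 double threshold = 0.9)
{
    const std::size_t used = capacity - free_bytes;
    return static_cast<double>(used) > threshold * static_cast<double>(capacity);
}
```

With Adam's numbers (a 1024 * 64 byte segment with 968 bytes free at the point of the exception), should_grow(65536, 968) is already true well before the failed allocation.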

Hey! This worked. I arbitrarily chose 90%, resized at that point,
and my initial tests work! What I'd like to do now is figure out how
much memory it would require. I'm not terribly worried about
fragmentation (yet), but will get to that after I run some tests to
find appropriate thresholds.
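The grow-at-90% policy can be sanity-checked with plain counters before wiring it to Interprocess. A sketch under assumed numbers (fixed 64-byte insertions, a 1024-byte initial segment, doubling on each grow; none of these figures come from the thread):

```cpp
#include <cstddef>

// Simulates the proactive-resize policy: before each fixed-size
// insertion, double `capacity` while the insertion would push usage
// past 90%. Only the bookkeeping is modeled; a real implementation
// would unmap, grow, and remap the file instead of the `*= 2`.
std::size_t simulate_growth(int inserts, std::size_t elem_size,
                            std::size_t capacity)
{
    std::size_t used = 0;
    for (int i = 0; i < inserts; ++i) {
        while (static_cast<double>(used + elem_size) > 0.9 * capacity)
            capacity *= 2;              // stands in for the actual grow
        used += elem_size;
    }
    return capacity;
}
```

100 insertions of 64 bytes into a 1024-byte segment end with an 8192-byte segment (1024 -> 2048 -> 4096 -> 8192), so with doubling the number of remaps stays logarithmic in the data size.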

Best,
Tim


Geometry list run by mateusz at loskot.net