From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2005-10-02 15:35:50
Hi,
> Couldn't the allocator do this instead of asking the user to do it? It
> would be better if the container did not need special code for different
> allocators.
Well, the container does nothing: just as when new fails and
std::bad_alloc is thrown, the STL container doesn't manage the exception.
But maybe the allocator could try to grow the file. The problem is that
currently I unmap the file and then map it again, and this can be a
problem since the allocator itself lives in the mapped file. I don't know
whether it's safe to grow the mapping in place or whether I need to unmap
it first, nor whether that is portable.
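Roughly, the grow step I have in mind would look like this (a minimal
POSIX sketch, not the actual Shmem code; grow_mapping and its parameters
are just illustrative names):

#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Grow a file-backed mapping by unmapping the view, extending the file and
// mapping it again. Linux has mremap(), which can sometimes grow in place,
// but that is not portable.
void *grow_mapping(int fd, void *old_base, std::size_t old_size,
                   std::size_t new_size)
{
   if(munmap(old_base, old_size) != 0)   // drop the current view
      return MAP_FAILED;
   if(ftruncate(fd, new_size) != 0)      // extend the file on disk
      return MAP_FAILED;
   // Re-map: the OS is free to return a different base address, so every
   // raw pointer into the old mapping -- including the allocator's own
   // bookkeeping -- must be offset-based or rebuilt afterwards.
   return mmap(0, new_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}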
> If you can make the assumption that memory will not move, it would make
> the implementation a lot simpler. There is a certain overhead in
> offsetting pointers on each pointer dereference, and the red-black tree
> algorithms are quite pointer intensive. Mt_tree could certainly use the
> pointer type from the allocator, and I'll put that into my next release
> of RML.
I understand that you might want to develop your own container, but have
a look at the Shmem map/multimap/set/multiset family and see whether you
can use those to build your multi-index container.
I read somewhere that some operating systems don't even allow mappings at
fixed addresses, but I don't think that's the case for the most widely
used ones. I think some OSes reserve certain virtual addresses for DLLs
and shared memory, but in theory a malloc in one application can take the
very address I need to map the shared segment another process has just
allocated. This is something I need to investigate.
> Persist's approach uses a pool of mapped memory - thereby avoiding
> needing to move memory. [To people unfamiliar with mmap(): a file does
> not have to be mapped contiguously into the address space]. Allocating
> more memory means mapping another block, and no memory needs to be
> moved.
Sorry, I don't understand this, since I have not investigated Persist's
approach: what is a pool of mapped memory and how do you use it? When
your allocator runs out of memory, what do you do? Increase the file's
size and remap it?
> Although I haven't seen it in practice, it is certainly a
> theoretical possibility that the OS will refuse to map the file back to
> the same memory addresses the next time the program is run, and this is
> the one reason why I haven't been pushing the Persist library because I
> just can't guarantee its safety.
Just think of wanting to open, in the same program, two mapped files that
were created separately: if both were built assuming the same fixed base
address, you couldn't map them simultaneously.
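Something like this toy POSIX sketch (map_at_required is just an
illustrative name, not Shmem code) shows the problem: the second file
can't get the base address the first one already occupies.

#include <sys/mman.h>
#include <cstddef>

// Try to map a file at the exact address its absolute pointers assume.
// Without MAP_FIXED the address is only a hint, so we check whether the OS
// honoured it; with MAP_FIXED we would silently replace whatever is already
// mapped there (for example, the first file).
void *map_at_required(int fd, std::size_t size, void *required_base)
{
   void *p = mmap(required_base, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
   if(p == MAP_FAILED)
      return 0;
   if(p != required_base){   // the address is already taken
      munmap(p, size);
      return 0;
   }
   return p;
}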
> The other problem is that other threads won't be expecting objects to
> move. This means that you can't have concurrent access to your
> memory-mapped data. Also if the file is shared between processes and
> you grow the file in one process, when does another process detect the
> change?
The mapped file approach in Shmem is not meant for concurrent access
between processes for the moment. Obviously, it is easier to notify an
application that the mapped file has grown than to allocate a new shared
memory segment, discover the mapping address and hope it gets mapped
just where you want.
> My feeling is that safety is paramount, and that it is better to have a
> safe slower implementation using offset_ptrs, than to use absolute
> memory addresses and risk mmap() failure.
I think that with only a few mappings you could get fixed addresses to
work, but obviously, if you let the OS choose the address, it can
organize memory so that you can map the maximum number of bytes and the
maximum number of segments.
> You could perhaps provide two allocators in Shmem: one that uses
> offset_ptrs and another that does not.
In Shmem, the STL-like allocators (which, in the end, call the master
allocator that manages the mapped file) have a templatized pointer type,
so you can use raw pointers, if you map the memory at the same address,
and just use plain STL containers.
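As a rough sketch (made-up names, not the real Shmem declarations, and
with rebind/construct/destroy omitted), the idea is just that the pointer
type is a template parameter that the containers pick up:

#include <cstddef>

template<class T, class SegmentManager, class Pointer = T*>
class shared_allocator
{
   public:
   typedef T           value_type;
   typedef Pointer     pointer;   // T* when the segment is mapped at a fixed
                                  // address, an offset_ptr-like type otherwise
   typedef std::size_t size_type;

   explicit shared_allocator(SegmentManager *mngr) : m_mngr(mngr) {}

   pointer allocate(size_type n)
   {  return pointer(static_cast<T*>(m_mngr->allocate(n*sizeof(T))));  }

   void deallocate(pointer p, size_type)
   {  m_mngr->deallocate(&*p);  }

   private:
   SegmentManager *m_mngr;  // the master allocator living inside the mapped file
};

The containers store whatever pointer typedef they find there, so
switching between raw pointers and offset pointers needs no changes in
the container code.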
I think that a growing mechanism between processes is quite complicated
to achieve, but maybe I'm overlooking something.
Regards,
Ion