Boost Users:
From: Cory Nelson (phrosty_at_[hidden])
Date: 2008-02-19 10:18:17
On Feb 19, 2008 5:25 AM, Lang Stefan <SLang_at_[hidden]> wrote:
>
>
>
> Hello all,
>
> I've been trying to integrate pool allocators for lists within our project,
> but early test runs turned up quite a surprising number of questions and
> problems. I've tried to resolve those issues by closely reading the
> documentation, debugging, and - finally - checking this list for threads
> that might answer my questions, but unfortunately I found that this list
> raised more questions than it answered.
>
> Specifically, I noticed that Stephen Cleary doesn't seem to frequent this
> list, or stopped doing so some 5 or 6 years ago.
Indeed, this is very unfortunate.
> Nevertheless maybe there are other people out there who can answer one or
> more of my questions, so here I go:
>
> 1. When using fast_pool_allocator for std::list<> I see no way to influence
> the block size used for the next allocation. Since our application
> handles huge lists, I cannot have fast_pool_allocator simply keep doubling
> the next block's size ad infinitum! The pool class does offer methods to
> inquire and set the next_size property, but unfortunately the
> singleton_pool underlying fast_pool_allocator is not accessible through the
> allocator interface!
After a certain point it may actually be less efficient to keep doubling,
as address-space fragmentation will make the OS search harder for a
contiguous block. That may not be the reason pool behaves this way, of
course - it's just a thought.
> Is there a recommended way to set or limit next_size for pool allocators?
>
> I have found a workaround to achieve this (see below) but am not very
> comfortable doing it this way.
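One thing you can try, depending on your Boost version: fast_pool_allocator
accepts the starting block size as a NextSize template parameter (the default
is 32), so you can at least pick a larger first block without reaching into
the pool interface. A rough sketch follows - Record is a placeholder type,
the default-argument names for UserAllocator and Mutex are spelled out only
to get at the NextSize position and may differ between Boost releases, and
remember that std::list rebinds the allocator to its internal node type, so
the pool that actually grows is keyed on that node's size rather than
sizeof(T):

    #include <list>
    #include <boost/pool/pool_alloc.hpp>

    struct Record {          // placeholder payload type
        double values[4];
    };

    // Start each pool block at 1024 elements instead of the default 32.
    // The earlier template arguments are only restated to reach NextSize.
    typedef boost::fast_pool_allocator<
        Record,
        boost::default_user_allocator_new_delete,
        boost::details::pool::default_mutex,
        1024
    > RecordAllocator;

    int main() {
        std::list<Record, RecordAllocator> records;
        records.push_back(Record());   // node memory comes from the singleton pool
        return 0;
    }

The pool still doubles from that starting point; if I remember correctly,
later Boost versions also added a MaxSize parameter to cap the growth, but I
haven't checked exactly when that went in.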
>
> 2. I am under the impression that fast_pool_allocator leaks memory heavily!
> As far as I can tell, it works properly in the beginning, but ends up using
> up to 7 times the memory that a std::list<T, std::allocator<T> > of the same
> size uses! I am talking about millions of list nodes and gigabytes of
> required memory here, when it should only be some 200 MB. I ran a test to
> find out exactly how much memory was being allocated compared to the amount
> requested by inserting into the list, and found the factor to be
> exactly 7.0.
There is a bug in pool that causes it to leak memory severely if
sizeof(T) is not aligned to a boundary of at least sizeof(void*). This
could be the reason.
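For what it's worth, here is a quick check you can run to see whether a given
chunk size could even hit that case. Payload is just a placeholder - for a
std::list the size that actually reaches the pool is that of the library's
internal node (the value plus two links), which is implementation-specific,
so treat this purely as a diagnostic sketch:

    #include <cstddef>
    #include <iostream>

    // Placeholder payload; substitute whatever T your pools actually serve.
    struct Payload {
        char data[9];   // deliberately not a multiple of sizeof(void*)
    };

    int main() {
        const std::size_t chunk = sizeof(Payload);
        const std::size_t ptr   = sizeof(void*);
        std::cout << "chunk size    : " << chunk << '\n'
                  << "sizeof(void*) : " << ptr   << '\n'
                  << "chunk % void* : " << chunk % ptr << '\n'
                  << (chunk % ptr
                          ? "-> potentially affected by the alignment issue above\n"
                          : "-> chunk size is pointer-aligned\n");
        return 0;
    }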
-- Cory Nelson