Subject: Re: [boost] Proposal: Monotonic Containers
From: Christian Schladetsch (christian.schladetsch_at_[hidden])
Date: 2009-06-16 16:45:48
Hi David,
> DISCLAIMER: this post of mine might sound a bit harsh, but since Christian's
> "proposal revision factory" basically has occupied this list for a while, I
> think it is fair.
My proposal has hardly changed since I first introduced it. The only significant,
non-breaking change was an extra five lines to ensure that allocations for
different types made from the same storage are aligned correctly for each of
those types on every platform.
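For concreteness, that alignment change amounts to rounding the current offset in the buffer up to the alignment the requested type needs. A minimal sketch of the idea (illustrative names only, not the proposal's actual code):

#include <cstddef>
#include <boost/type_traits/alignment_of.hpp>

// Round 'offset' up to the next multiple of 'alignment'
// (alignment must be a power of two).
inline std::size_t align_up(std::size_t offset, std::size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

// Before handing out storage for a T, advance the bump offset so the
// returned address is correctly aligned for T on the current platform.
template <class T>
std::size_t aligned_offset_for(std::size_t current_offset)
{
    return align_up(current_offset, boost::alignment_of<T>::value);
}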
Due to requests from the list, the proposal has been extended to allow the
initial inline_storage to grow transparently to the heap if required.
All other 'changes' have been additions like chain<> (required to have a
sequence-like container that can literally span the stack/heap boundary),
and more benchmarks and tests.
> It is interesting how much leeway this group gives to a quite immature
> "proposal" and its stream of quick fixes.
Again, there has not been a "stream of quick fixes". I added generalised
alignment for different types. That is the only 'fix' that has been made to
the original proposal.
> No offense to Christian,
None taken.
> I truly enjoy the energy he brings and there is something to the *overall
> idea* of having a limited pre-allocated block used by multiple containers,
> including lists and trees.
There are a number of novel aspects to this proposal. I do not believe that
anyone else has even attempted, let alone succeeded in building, a system that
spans the stack and the heap to provide storage for general-purpose containers.
This system can start on the stack and grow transparently onto the heap,
producing containers where some elements are on the stack and others are on the
heap. I have demonstrated the efficacy of this approach repeatedly, and have
posted benchmarks showing real performance improvements.
> I am quite certain that a proper proposal for such an idea will not come
> from Christian himself, due to a lack of experience with advanced C++ issues
> and a bit of a focus problem. It might come from Thorsten, though.
Rather than resort to ad hominem attacks, I invite you to criticise the
proposal and the benchmarks as they stand.
> BUT, the truth is that as soon as we enter real high-performance use cases,
> we often end up with one of two scenarios:
>
> 1. We know how many elements we will deal with at most, and they are not
> too many; we can then reserve the space in a vector.
We also often need node-based containers. In any case, a vector on the stack
is faster than a vector on the heap.
> 2. There are many elements or they come and go during runtime. We then need
> a proper allocator, which also can deallocate.
If they come and go within a short span of time and there is some upper bound,
we do not need to deallocate; we can simply truncate the memory at the end. Even
if no upper bound is known in advance, we can still truncate at the end. By
deferring all memory management we can realise real performance gains.
I have posted benchmarks that demonstrate the practical benefit of this
approach, which are being ignored.
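To make the "defer everything, then truncate" point concrete, here is a self-contained sketch of the general technique, with hypothetical names rather than the proposed library's code: allocation is a pointer bump, per-object deallocation is a no-op, and all memory is reclaimed in one step at the end.

#include <cstddef>
#include <new>

// Hypothetical fixed-size arena illustrating deferred deallocation
// (alignment handling omitted for brevity).
template <std::size_t Size>
struct simple_arena
{
    char buffer[Size];
    std::size_t used;

    simple_arena() : used(0) {}

    // O(1): just bump the offset.
    void* allocate(std::size_t bytes)
    {
        if (used + bytes > Size)
            throw std::bad_alloc(); // a fuller version would fall back to the heap
        void* p = buffer + used;
        used += bytes;
        return p;
    }

    // Deliberately a no-op; nothing is freed per object.
    void deallocate(void*, std::size_t) {}

    // "Truncate at the end": reclaim everything in one step.
    void reset() { used = 0; }
};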
I should also point out that monotonic storage has the following interface,
and has no upper bound on the size of its buffer:
/// storage that spans the stack/heap boundary.
///
/// allocation requests first use inline fixed_storage of InlineSize bytes.
/// once that is exhausted, later requests are serviced from the heap.
///
/// all allocations remain valid at all times.
template <size_t InlineSize = 8*1024,
          size_t MinHeapIncrement = InlineSize*1024,
          class Al = std::allocator<char> >
struct storage;
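By way of illustration, and assuming the allocator is bound to a storage instance through the constructor-passing idiom shown further down in this post, usage of this interface would look roughly like this; once the inline buffer is exhausted, later allocations come from the heap while the earlier ones remain valid:

monotonic::storage<4096> store;                         // 4 KiB inline, then heap
std::vector<int, monotonic::allocator<int> > v(store);  // allocator bound to 'store'
for (int i = 0; i < 10000; ++i)
    v.push_back(i);  // early growth uses the inline buffer, later growth spills to the heap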
> It is not clear how frequent, or important, the cases not covered by these
> two scenarios are, and in some of the space between #1 and #2, we can use
> Thorsten's auto_buffer. The only cases that are really palatable to the idea
> behind Christian's stream of proposals
I have not made a stream of proposals. I have made one, which has hardly
changed. That one proposal was altered once to add correct alignment of
different types on different platforms. It was then extended to allow the
initial inline_storage to grow as required.
> is that of multiple small, and pretty fixed, containers being used for an
> extended time in a high performance scenario. I am not sure this case is
> that common.
>
> I.e., it might very well be that even the *overall idea* concerns a
> non-problem in real life.
>
> Questions to the populace here: Do we see a light in the tunnel here,
> i.e., is this problem trying to be solved a problem? Do we want Christian to
> refine his ideas further? Do we want him to do it interactively on the list?
I am happy to leave it be. I have posted the code and my results. I have
made my case.
I do not think that anyone has previously even thought of doing what this
proposal does, so I expected some friction against the idea, and yes, I was
quite defensive, which has not helped my case.
In truth, Thorsten's auto_buffer<> provides a subset of the functionality
provided by this library, so I guess I have ruffled some feathers:
monotonic::storage<> storage;
std::vector<T0, monotonic::allocator<T0> > v0(storage);
std::vector<T1, monotonic::allocator<T1> > v1(storage);
...
std::vector<Tn, monotonic::allocator<Tn> > vn(storage);
std::list<U0, monotonic::allocator<U0> > l0(storage);
...
std::map<int, std::list<std::vector< ... > > > m0(storage);
Use the containers as you wish. Storage for them will start on the stack,
using a limit that you can provide. It will grow onto the heap as required, in
chunk sizes that you can provide. Allocation is O(1). Deallocation is deferred
until the storage goes out of scope. Using these containers is demonstrably
faster than using the same containers with the default allocator.
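One consequence worth making explicit: because individual deallocations are no-ops, containers must not outlive the storage they draw from; everything is reclaimed together. A hedged sketch of the intended scoping:

void process()
{
    monotonic::storage<> storage;                           // lives for the whole function
    {
        std::list<int, monotonic::allocator<int> > work(storage);
        work.push_back(1);
        work.push_back(2);
        // 'work' is destroyed here, but its nodes are not individually freed
    }
    // further containers may keep allocating from the same storage
}   // all memory is released here, when 'storage' goes out of scope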
Using the stack for storage for multiple containers is a good idea. Allowing
that storage to then move transparently to the heap makes it more general
and safer to use. By making de-allocation a nop, I also make allocation
O(1), use less memory, and allow adjacent-in-time allocations to be
adjacent-in-memory.
All of these things make it easy to build small, fast and general data
structures for use in high-performance applications without having to resort
to custom containers. I have posted results that back up my claims.
> /David
Regards,
Christian.