Subject: Re: [boost] [string] proposal
From: Nevin Liber (nevin_at_[hidden])
Date: 2011-01-28 13:51:13
On 28 January 2011 12:22, Dean Michael Berris <mikhailberis_at_[hidden]> wrote:
> The fact that contiguous pages don't get stored in contiguous physical
> memory compounds the problem even more. You feel this a lot when your
> system starts thrashing and the VMM does the work of swapping pages
> and re-arranging page tables for you. The round-trips between a CPU
> page fault and the VMM page fault handler, along with potentially
> invalidating the L1/L2/L3 caches, are significant costs in environments
> where you need to get as many things done per second as physically
> possible. If you can avoid having to go through this dance with a sane
> memory management strategy, I think that's a win even in the trivial
> case.
>
If I have to keep data in physical memory, the way to use the fewest
pages is to store it contiguously in virtual memory.
Anything else will (except for degenerate cases) require *more* physical
pages of RAM.
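To put rough numbers on that (a back-of-the-envelope sketch; the 4 KiB
page size, 512-byte block size, and 16-byte allocation header are
illustrative assumptions, not measurements of any implementation):

#include <cstdio>

int main() {
    const unsigned long page = 4096;   // assumed page size
    const unsigned long n = 1000000;   // bytes of payload

    // Contiguous (vector-like): payload packed into consecutive
    // virtual pages, so the page count is just the ceiling.
    unsigned long contiguous = (n + page - 1) / page;   // 245 pages

    // Segmented (deque-like): per-block allocation header and slack
    // spread the same payload over at least as much memory.
    const unsigned long block = 512, header = 16;
    unsigned long blocks = (n + block - 1) / block;     // 1954 blocks
    unsigned long segmented =
        (blocks * (block + header) + page - 1) / page;  // 252 pages here

    std::printf("contiguous: %lu pages, segmented (best case): %lu pages\n",
                contiguous, segmented);
}

Even in the best case, where the blocks happen to pack densely, the
segmented layout touches at least as many pages as the contiguous one.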
There are many other reasons to break something up into a segmented data
structure, but avoiding thrashing when storing the object is not one of
them.
Honestly, trying to argue that your new class is more efficient *on all
fronts* is futile. You'd do far better showing the tradeoffs you are
making: which things are more efficient, and being honest about which things
are less efficient.
> > If what you're saying is true then std::deque
> > should be preferable to std::vector, but this
> > is exactly the opposite of what I've always
> > heard.
>
> It depends on what you're trying to do and what algorithms you're
> applying to your std::deque. This is the whole crux of the segmented
> iterators debate/paper by Matt Austern and others. Segmented versions
> of the algorithms would make using std::deque a lot more preferable
> if you really needed a data structure that doesn't promote heap
> fragmentation as much as a growing std::vector does.
>
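For context, the shape of the idea in Austern's "Segmented Iterators and
Hierarchical Algorithms" paper is that segment-aware algorithms run a
fast flat loop over each contiguous block. A minimal sketch; the
segment_begin()/segment_end() accessors are hypothetical, since
std::deque doesn't portably expose its blocks (which is part of the
debate):

#include <algorithm>

// Hierarchical algorithm sketch: iterate block by block, running the
// ordinary flat algorithm within each contiguous segment instead of
// advancing element by element across segment boundaries.
template <class SegmentedContainer, class F>
void segmented_for_each(SegmentedContainer& c, F f) {
    for (auto seg = c.segment_begin(); seg != c.segment_end(); ++seg)
        std::for_each(seg->begin(), seg->end(), f);  // tight loop per block
}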
Given that a deque requires a new heap allocation for every N elements, how
does it avoid heap fragmentation?
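To make that allocation pattern concrete, here's a quick instrumented
comparison (a sketch using a hypothetical counting allocator of my own;
the exact counts depend on the implementation's block size and growth
factor):

#include <cstddef>
#include <cstdio>
#include <deque>
#include <vector>

// Count raw heap allocations. A vector growing geometrically makes
// O(log n) allocations; a deque makes one per fixed-size block (plus
// occasional reallocations of its block map).
static std::size_t g_allocs = 0;

template <class T>
struct counting_allocator {
    typedef T value_type;
    counting_allocator() {}
    template <class U> counting_allocator(const counting_allocator<U>&) {}
    T* allocate(std::size_t n) {
        ++g_allocs;
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <class T, class U>
bool operator==(const counting_allocator<T>&, const counting_allocator<U>&)
{ return true; }
template <class T, class U>
bool operator!=(const counting_allocator<T>&, const counting_allocator<U>&)
{ return false; }

int main() {
    {
        std::vector<int, counting_allocator<int> > v;
        for (int i = 0; i < 1000000; ++i) v.push_back(i);
        std::printf("vector allocations: %lu\n",
                    (unsigned long)g_allocs);  // a few dozen at most
    }
    g_allocs = 0;
    {
        std::deque<int, counting_allocator<int> > d;
        for (int i = 0; i < 1000000; ++i) d.push_back(i);
        std::printf("deque allocations:  %lu\n",
                    (unsigned long)g_allocs);  // thousands: one per block
    }
}

Each of those small block allocations lands wherever the heap has room,
which is exactly the scattering that fragmentation arguments worry about.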
And I think this is straying off-topic from Boost...
--
Nevin ":-)" Liber <mailto:nevin_at_[hidden]> (847) 691-1404