From: dylan_at_[hidden]
Date: 2001-11-22 01:46:48


--- In boost_at_y..., dylan_at_m... wrote:
> --- In boost_at_y..., dylan_at_m... wrote:
> > --- In boost_at_y..., Carl Daniel <cpdaniel_at_p...> wrote:
> > > Funny you should ask...
> > >

> The other thing I would need to do is decide whether a more
> sophisticated paging scheme would be necessary - my current one
> simply swaps in a fixed-sized block when any part of it is required
> (hence swapping out any previously accessed block - it is assumed to
> be dirty if a non-const pointer is requested). Ideally of course the
> paging scheme should be replaceable, which is not particularly
> difficult with my current design.
>
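
(For reference, the scheme described above boils down to roughly the
following - the class name and the plain stdio calls are purely
illustrative, not the actual code:)

#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Illustrative only: a cache backed by a swap file that keeps a single
// fixed-size block resident.  Touching an address outside that block
// writes the block out (if dirty) and reads the containing block in.
class disk_cache
{
public:
    disk_cache(const char* path, std::size_t block_size)
        : file_(std::fopen(path, "w+b")),
          block_size_(block_size),
          block_(block_size),
          origin_(0),
          dirty_(false)
    {
    }

    ~disk_cache()
    {
        flush();
        if (file_)
            std::fclose(file_);
    }

    // const access - the block is not marked dirty
    const unsigned char* read(std::size_t offset)
    {
        make_resident(offset);
        return &block_[offset - origin_];
    }

    // non-const access - assume the caller will modify it
    unsigned char* write(std::size_t offset)
    {
        make_resident(offset);
        dirty_ = true;
        return &block_[offset - origin_];
    }

private:
    void make_resident(std::size_t offset)
    {
        std::size_t wanted = (offset / block_size_) * block_size_;
        if (wanted == origin_)
            return;
        flush();                                        // swap the old block out
        origin_ = wanted;
        std::memset(&block_[0], 0, block_size_);
        std::fseek(file_, long(origin_), SEEK_SET);
        std::fread(&block_[0], 1, block_size_, file_);  // swap the new block in
        dirty_ = false;
    }

    void flush()
    {
        if (!dirty_)
            return;
        std::fseek(file_, long(origin_), SEEK_SET);
        std::fwrite(&block_[0], 1, block_size_, file_);
        dirty_ = false;
    }

    std::FILE* file_;
    std::size_t block_size_;
    std::vector<unsigned char> block_;
    std::size_t origin_;
    bool dirty_;
};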

Continuing my ongoing conversation with myself...

I spent half of today (our source control server is down at the
moment so I can't do much else) coding up a more sophisticated paging
scheme that actually works in pages rather than swapping the entire
cache space in and out as necessary. But in actual fact under NT it
suffers a serious performance hit: although writing 8kb (the default
page size) out to disk normally takes much less time than writing 8Mb
(the default cache size), if the first page that gets swapped out
happens to be a long way from the 'origin', then writing 8kb at a
position, say, 12Mb into the file takes about as long as writing the
whole 12Mb - presumably because the file has to be extended (and
zero-filled) all the way up to that point.
Now the initial scheme of swapping the whole cache space as required
is obviously not tenable - if a series of accesses is required on
both sides of the "cache boundary" the performance would be
miserable. So there would seem to be two alternatives:

* Don't bother with paging, but when a request is made for memory
that is "just past the end" of the cache (which for an 8Mb cache
could mean, say, up to 2Mb past the end), swap out the first half of
the cache, shift the second half back to the beginning, then swap in
the new second half (a rough sketch of this follows the list).

* Use paging but don't necessarily store pages contiguously on disk -
i.e. just write each new page to the end of the swap file, along with
sufficient information to determine which pages represent which
portions of memory. I assume this is roughly what OSes do. It's
ultimately more flexible, as it means performance is reasonable even
if a huge amount of memory is used very sparsely, and it would
probably be able to act as an adequate "sparse array". I'm just not
convinced the extra complexity is worth it (just getting the paging
right as it is now was frustrating enough!). A sketch of this
approach also follows the list.
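
A rough sketch of the first alternative (sliding the cache window
forward by half its size) - the function name and the stdio calls are
just for illustration:

#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Illustration of the first alternative: the cache is one contiguous
// window into the swap file; when a request lands just past its end,
// write the first half out, shift the second half down, and read the
// next half-window in behind it.
void slide_window(std::FILE* swap_file,
                  std::vector<unsigned char>& cache, // e.g. the 8Mb window
                  std::size_t& window_origin)        // file offset of cache[0]
{
    const std::size_t half = cache.size() / 2;

    // swap out the first half of the window
    std::fseek(swap_file, long(window_origin), SEEK_SET);
    std::fwrite(&cache[0], 1, half, swap_file);

    // shift the second half back to the beginning of the cache
    std::memmove(&cache[0], &cache[half], half);

    // swap in the half-window that now follows the resident data
    window_origin += half;
    std::fseek(swap_file, long(window_origin + half), SEEK_SET);
    std::fread(&cache[half], 1, half, swap_file);
}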
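
And a sketch of the second alternative (non-contiguous pages, with a
table recording where each logical page currently lives in the swap
file) - again the names are made up:

#include <cstddef>
#include <cstdio>
#include <map>

// Illustration of the second alternative: pages are appended to the
// swap file in whatever order they get evicted, and a page table maps
// each logical page index to its offset in the file.
class page_table
{
public:
    page_table(std::FILE* swap_file, std::size_t page_size)
        : file_(swap_file), page_size_(page_size), end_(0)
    {
    }

    // write a page out: reuse its old slot if it has one, else append
    void swap_out(std::size_t page_index, const unsigned char* data)
    {
        std::map<std::size_t, long>::iterator it = offsets_.find(page_index);
        long pos;
        if (it != offsets_.end())
            pos = it->second;               // rewrite in place
        else
        {
            pos = end_;                     // append to the end of the file
            end_ += long(page_size_);
            offsets_[page_index] = pos;
        }
        std::fseek(file_, pos, SEEK_SET);
        std::fwrite(data, 1, page_size_, file_);
    }

    // read a page back in; returns false if it was never swapped out,
    // which is what makes sparsely used address ranges essentially free
    bool swap_in(std::size_t page_index, unsigned char* data)
    {
        std::map<std::size_t, long>::const_iterator it =
            offsets_.find(page_index);
        if (it == offsets_.end())
            return false;
        std::fseek(file_, it->second, SEEK_SET);
        std::fread(data, 1, page_size_, file_);
        return true;
    }

private:
    std::FILE* file_;
    std::size_t page_size_;
    long end_;                              // current end of the swap file
    std::map<std::size_t, long> offsets_;   // page index -> file offset
};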

The next challenge is to write an allocator that can be used for
list/map/set etc., which needs quite different behaviour, because it
will need to manage lists (used and free) of fixed-size blocks.
Unfortunately Dinkumware's implementation of these classes makes it
impossible to specialise the allocators, because of the non-standard
use of _Charalloc and the fact that it doesn't use allocator::pointer
for node pointer types.
I guess I need to look at STLport...
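
By "quite different behaviour" I mean roughly the following - an
in-memory sketch of the free-list handling only (construct/destroy,
max_size and the comparison operators are omitted, and the real thing
would carve its blocks out of the paged cache rather than operator
new):

#include <cstddef>
#include <new>

// Sketch of the free-list behaviour a node allocator needs: blocks are
// a fixed size (one node), so a freed node can simply be threaded onto
// a free list and handed straight back on the next allocation.
// Assumes sizeof(T) >= sizeof(void*), which holds for container nodes.
template <class T>
class node_allocator
{
public:
    typedef T value_type;
    typedef T* pointer;
    typedef const T* const_pointer;
    typedef std::size_t size_type;

    template <class U> struct rebind { typedef node_allocator<U> other; };

    node_allocator() : free_list_(0) {}
    template <class U>
    node_allocator(const node_allocator<U>&) : free_list_(0) {}

    pointer allocate(size_type n)
    {
        // node-based containers allocate one node at a time
        if (n == 1 && free_list_)
        {
            node* p = free_list_;
            free_list_ = p->next;
            return reinterpret_cast<pointer>(p);
        }
        return static_cast<pointer>(::operator new(n * sizeof(T)));
    }

    void deallocate(pointer p, size_type n)
    {
        if (n == 1)
        {
            // put the block back on the free list instead of freeing it
            node* q = reinterpret_cast<node*>(p);
            q->next = free_list_;
            free_list_ = q;
        }
        else
            ::operator delete(p);
    }

private:
    struct node { node* next; };
    node* free_list_;
};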

Dylan

