
From: Zeljko Vrba (zvrba_at_[hidden])
Date: 2008-05-29 16:21:11


On Thu, May 29, 2008 at 03:42:16PM -0400, Jason Sachs wrote:
>
> reasonable size is of shared memory to allocate. (probably in the 64K - 1MB
> range but I'm not sure)
>
From the low-level POV:

Modern systems use on-demand allocation: you can allocate a (for example) 32 MB
SHM chunk, but the actual resource usage (RAM) will correspond to what you
actually touch. For example:

0    1M                32M
[****|...................]
  |       |
  |       +-- unused part
  |
  +-- used part of the SHM segment

As long as your program does not touch the unused part (neither reads nor writes
it), the actual physical memory usage will be 1 MB plus a small amount for page
tables (worst case: 4 kB of page tables per 4 MB of virtual address space). This
is at least how SYSV SHM works on Solaris 10 (look up DISM, dynamic intimate
shared memory); I would expect newer Linux kernels to behave the same way. I'm not
sufficiently acquainted with the NT kernel to comment on it.

Bottom line: allocate as few, and as large, chunks as possible; modern VM systems
should be able to handle it gracefully.
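
For instance, with Boost.Interprocess (assuming that's what you're using; the
segment name "demo_shm" and the 32 MB / 1 MB sizes below are made up for
illustration), a sketch along these lines reserves a large segment but only ends
up using physical memory for the pages it actually touches:

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>

int main()
{
    using namespace boost::interprocess;

    // Reserve a 32 MB named segment; no physical memory is committed yet
    // on systems with on-demand page allocation.
    shared_memory_object shm(open_or_create, "demo_shm", read_write);
    shm.truncate(32 * 1024 * 1024);

    // Map the whole segment into this process.
    mapped_region region(shm, read_write);

    // Touch only the first 1 MB; only those pages (plus page tables)
    // end up backed by RAM.
    std::memset(region.get_address(), 0, 1024 * 1024);

    shared_memory_object::remove("demo_shm");
    return 0;
}

truncate() only sets the segment's size; on the Unix-like systems above the pages
are committed lazily, the first time they are written.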

===

If you don't know how much memory you will need in total, how do you handle
out-of-memory situations?

Alternatively, why not use files instead?
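
(A memory-mapped file via Boost.Interprocess would look almost identical to the
SHM sketch above; the file name "backing.bin" and the sizes are again invented
for illustration:)

#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <fstream>
#include <cstring>

int main()
{
    using namespace boost::interprocess;
    const char* path = "backing.bin";   // hypothetical backing file

    // Create the backing file and extend it to 32 MB (sparse on most
    // filesystems, so this does not actually write 32 MB of data).
    {
        std::ofstream f(path, std::ios::binary);
        f.seekp(32 * 1024 * 1024 - 1);
        f.put(0);
    }

    // Map it and touch only the first 1 MB, as in the SHM example.
    file_mapping mapping(path, read_write);
    mapped_region region(mapping, read_write);
    std::memset(region.get_address(), 0, 1024 * 1024);
    return 0;
}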

