Modern systems use on-demand allocation: you can allocate, say, a 32 MB SHM
chunk, but the actual resource usage (RAM) will correspond to what you
actually use.  For example:

0   1M                 32M
[****|..................]
  |           |
  |           +- unused part
  |
  +- used part of the SHM segment

As long as your program does not touch the unused part (neither reads nor
writes it), the actual physical memory usage will be 1 MB plus a small amount
for page tables (worst case: 4 kB of page tables per 4 MB of virtual address
space).  This is at least how SysV SHM works on Solaris 10 (look up DISM,
Dynamic Intimate Shared Memory); I would expect recent Linux kernels to behave
the same way.  I'm not sufficiently acquainted with the NT kernel to comment
on it.
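
To make this concrete, here is a rough sketch using Boost.Interprocess on a
POSIX-ish system (the segment name "demo_shm" is just an example, and I have
not verified the exact numbers on every platform): create a 32 MB segment,
map it, and touch only the first 1 MB -- the resident set should then grow by
roughly 1 MB, not 32 MB.

#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>

int main()
{
    using namespace boost::interprocess;

    // clean up any leftover segment from a previous run
    shared_memory_object::remove("demo_shm");

    // create a 32 MB shared memory segment
    shared_memory_object shm(create_only, "demo_shm", read_write);
    shm.truncate(32 * 1024 * 1024);

    // map the whole 32 MB into this process's address space
    mapped_region region(shm, read_write);

    // touch only the first 1 MB -- only these pages get physical backing
    std::memset(region.get_address(), 0, 1024 * 1024);

    // ...inspect resident memory here, e.g. /proc/self/status on Linux...

    shared_memory_object::remove("demo_shm");
    return 0;
}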

Interesting. I hadn't thought about that.

I tried a test program (running on Windows XP): I had a number of separate processes each create a new managed_windows_shared_memory object (with a different name for each process) of size 2^29 bytes (= 512 MB). I'm not exactly sure what resources that reserves; watching resource usage in TaskInfo, each process's "virtual KB" figure goes up by 512 MB, but its "working set KB" figure doesn't increase until I actually allocate memory within the shared memory segment.
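
Roughly the shape of my test program (a sketch from memory, not the exact code; the segment names here are made up):

#include <boost/interprocess/managed_windows_shared_memory.hpp>
#include <cstddef>
#include <cstdio>

int main(int argc, char* argv[])
{
    using namespace boost::interprocess;

    // each process passes a distinct segment name, e.g. "shm_test_1", "shm_test_2", ...
    const char* name = (argc > 1) ? argv[1] : "shm_test_default";

    // reserve a 2^29-byte (512 MB) segment
    managed_windows_shared_memory segment(create_only, name, std::size_t(1) << 29);

    std::printf("segment %s created; press Enter to allocate 1 MB inside it\n", name);
    std::getchar();

    // only after this should the "working set KB" column grow noticeably
    segment.allocate(1024 * 1024);

    std::printf("allocated; press Enter to exit\n");
    std::getchar();
    return 0;
}

Running several copies of this with different names is what produced the numbers above (and the failure below).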

Sounds good, but the sixth of these failed and I got a warning saying my system was low on virtual memory. So it seems there is a ~4 GB total system limit on WinXP even for just reserving virtual address space -- which seems silly, since each process should have its own address space; as long as I don't actually allocate the memory, and each process's reserved address space doesn't exceed 2^32 (or 2^31, or whatever the per-process limit is), I should be able to reserve an unlimited total amount of address space. No can do. :(

So strategy #1 of being profligate in choosing shared memory segment size fails on WinXP; there's a significant resource cost even if you don't actually allocate any memory. Drat.