Boost Users :
From: Jason Sachs (jmsachs_at_[hidden])
Date: 2008-05-30 14:25:10
>> of these going on at once (though usually just one or two). On my own
>> computer I have increased my max swap file size from 3GB to 7GB (so
>> the hard limit is somewhat adjustable), though it didn't take effect
>> until I restarted my PC. I'm going to be using my programs on several
>> computers + it seems silly to have to go to this extent.
>
>Ok, and you'll run your job on a machine with e.g. 1GB of swap[*], and this
>particular instance will need 4GB of swap. What will happen when the
>allocation fails? Note that growing the SHM segment in small chunks will
>not help you with insufficient virtual memory, so you might as well allocate
>M*N at once and exit immediately if the memory is not available.
I wouldn't allocate M*N at once. Each process could start/stop at
random times (this is triggered by users other than me who would start
multiple logs as necessary) so N changes as a function of time.
What I will probably do is just use one memory segment of size M[i]
per process #i, where M[i] has a default value M0, say 64MB, that I
can preset to a larger value if I know I'm going to have a
long-duration log.
It's not a huge deal to increase the swap file (even on an old
computer, which most of our lab PCs are, I could add a 2nd hard drive
if I needed), & it is almost certainly the most expedient solution for
the time being.
>(I'm sorry, I'm very pragmatic, and I don't seem to have enough info to really
>understand why you're making such a fuss over the swap size issue. I'm afraid
>I can't offer you any further suggestions, since I consider this a non-problem
>unless you have further constraints.)
Not a fuss, just trying to be aware of all the problems. This
discussion has been helpful. I have a career where my resources are
spread thinly among a wide range of things, & it's much more expensive
for me to design quickly for 90% success + then refactor 1-2 years
later when absolutely necessary than it is to spend the extra effort
up front to design for 99% success, understand where the 1% failure
lies, and move on to other things knowing I'm far less likely to have
to revisit. Especially since 90% success rates tend to be
overestimated, as there are customers who forget to mention certain
design requirements ;)
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net