From: William Kempf (williamkempf_at_[hidden])
Date: 2002-03-22 10:23:10
>From: "Moore, Dave" <dmoore_at_[hidden]>
>To: "'boost_at_[hidden]'" <boost_at_[hidden]>
>Subject: RE: [boost] thread_pool.zip added to Files section
>Date: Thu, 21 Mar 2002 23:15:43 -0500
> > That is the point. You imprint in your implementation the
> > following algorithm of idle-thread management:
> > 1. Minimum number of threads m1.
> > 2. Maximum number of threads m2.
> > 3. If a thread is idle longer than t1 -> destroy it.
> > 4. If more threads are needed -> create one.
> > Are you sure that this model will fit every possible usage
> > of a thread pool? What if I do not want to destroy my threads
> > ever during the program's lifetime, or vice versa destroy them
> > immediately when returned to the pool? What if I just want to
> > maintain some balance between the number of threads in use at
> > the moment and the number of idle ones? What if I want the
> > balance parameters to change depending on the time of day or
> > the load on the server? And so on.
> > I would prefer to provide a solution that allows more flexibility.
>I appreciate the desire for flexibility. However, I do think that the
>combination of min, max, and timeout covers a lot of the common cases.
>Having idle threads standing by to handle an influx of new jobs can
>be very important, but a longer timeout and/or a higher minimum thread
>count can help in this case, too.
>If we were to use the pool_management algorithms....
>First, a library would have to provide at least a "fixed size" algorithm
>and a min/max/timeout algorithm. Users of the library should -not- be
>forced to create their own algorithm just to use the thread pool....
>I think the model you presented would have to -also- take into account:
>1. A thread that is deciding whether to exit should take into account
>whether other jobs are queued up.
>2. A thread asking whether it should exit must express whether it has just
>-immediately- finished a job, or whether it has been idle for the timeout
>period.
>3. The algorithms -might- have to take timeouts into account. This is the
>most troubling problem, since the pooled threads tend to use the timeouts
>during a wait() operation.
>Basically, many dynamic algorithms would require these "facts" as inputs to
>get_object and put_object, so these arguments would also have to be added
>to enlarge_storage and hold_returned...
>This seems like a lot of complication, but it might be manageable with
>default algorithms and a small pool_management_alg interface so that users
>aren't scared away from implementing their own pool mgmt. algorithms. If the
>interface is too daunting, no one will customize the algorithm, and this
>will defeat the purpose of making thread_pool flexible.
>I would like to hear from Bill Kempf on this issue - how much flexibility...
My own personal opinion is that min, max, and timeout are enough to handle
all but corner cases. More flexibility, if it's not designed very
carefully, will only make usage vastly more complex just to allow the class
to be used in a few corner cases where rolling your own might be more
appropriate.
A policy sort of design could probably give you the flexibility to cover
even these corner cases without compromising ease of use for the more
typical cases. However, I'm not sure that it would do anyone any service,
since programming in a new policy would likely be as complex, or even more
so, than simply rolling your own to begin with.
Now that I've said a lot of things that will likely be opinion not held by
most, let me state that I am fairly new to policy usage and so my opinion
may be based on FUD and not fact, and I *am* totally open to explore such an
alternative design. So I'd like to: (1) hear more about what other
scheduling policies people think are needed (with concrete examples of where
it would be useful) and (2) see an interface proposal (not necessarily with
implementation) that illustrates how people think more flexibility can be
achieved (and hopefully illustrating that such a design doesn't complicate
things just for corner cases).
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk