
From: danl_miller (danl_miller_at_[hidden])
Date: 2002-03-13 20:16:48


--- In boost_at_y..., "bill_kempf" <williamkempf_at_h...> wrote:
> How do you disagree with anything I said? Nothing in your
> description differs from anything I've said, nor does it actually
> mention any need for any specific mutex scheduling. In a thread pool
> it doesn't matter what thread acquires the mutex, and thus acquires
> the "job". All that matters is that *some* thread acquires the
> mutex and thus the "job". It doesn't even matter if a thread (or
> set of threads) is starved in this case, because the jobs are still
> being executed in a timely manner. A thread can remain idle
> indefinitely and not cause any problems with the operation of the
> thread pool.
>
> Bill Kempf

  Bill, you claimed in the following

> > --- In boost_at_y..., "bill_kempf" <williamkempf_at_h...> wrote:
> > > LIFO must be applied to the queued jobs, but there need not be a
> > > LIFO guarantee on any synchronization primitives to do this. So
> > > I think this is a different topic from what was being discussed.

that a synchronization primitive which unpends its pending threads in
LIFO-order was unrelated to scheduling jobs in a thread-pool, which is
what Darryl & I were delving into in the postings included in their
entirety below.

That different-topic claim is false. I provided a factual
counter-example which dispels your false claim. The counter-example
showed precisely how a thread-pool whose threads pend on a single
message-queue is in fact a thread-pool which

  1) is a fully-functioning thread-pool which schedules its threads;

  2) makes direct & intimate use of an interthread message-queue which
  in turn is based directly & intimately on a thread-synchronization
  primitive (i.e., a semaphore, or, if you prefer, a reinvented
  semaphore implemented with condition-variables, for those people who
  do not consider semaphore to be in the set of thread-synchronization
  primitives, which was the line of reasoning which led to semaphore's
  unfortunate removal);

  3) permits a LIFO ordering/scheduling-priority, a FIFO
  ordering/scheduling-priority, or an undefined
  ordering/scheduling-priority; and

  4) makes use of a thread-synchronization primitive as the one &
  only way that the LIFO vs FIFO vs undefined
  ordering/scheduling-priority is provided in this thread-pool.

Because I have explained a decades-old thread-pool design which is in
common practice in embedded real-time systems, I have disproved your
false claim that the ordering-of-unpending/scheduling-priority of
thread-synchronization primitives is a different topic from
LIFO-ordering of jobs in thread-pools by showing how they can be
effectively used together.

[I offer vacuous agreement with your claim that there are alternatives
to a LIFO guarantee on thread-synchronization primitives. In that
artificially-narrow (inferior-)alternatives-do-exist sense, I agree
that we do not absolutely "need" such guarantees on
thread-synchronization primitives because a god-controller executing
in user-space can be written which could perform the scheduling of
threads in the thread-pool more complexly & less efficiently than the
threads-pend-on-a-single-message-queue lightweight technique of
teaching the kernel/operating-system to schedule a thread from the set
of message-queue-pending-threads for us. But using a different sense
of the word "need", I vehemently disagree. We need the
highly-efficient time-honored RTOS-style interthread message-queue
concept if for no other reason than to implement an efficient
lightweight thread-pool where all threads pend on a single
message-queue which doles out the work. In turn we need the semaphore
concept to implement the producer/consumer pattern in message-queue.
Also we need the semaphore concept to implement RTOS-style
pool-allocation pattern in an efficient manner in all pools of shared
resources-other-than-threads, which obviously are allocated by some
sort of pool-controlling unit(s) of software. (Why "other than
threads"? Because thread-pool scheduling is a truly unique case in
that the scheduling of threads from the set of threads pending on that
single message-queue can be performed by the beautiful intrinsic
interaction between the semaphore thread-synchronization primitive
teaching the operating-system/kernel which thread from the set of
pending threads to allocate&unpend as one unified operation without
the assistance of an external god-controller executing in yet another
thread.) Speaking of needs, note that we do not need the
message-queue (and thus its internal semaphore) to have LIFO order
unless we are paging in LRU threads in an operating-environment
equipped with VM (or one with some other penalty for threads which
have not executed recently).]

So in short, you made a claim with which I vehemently disagree. That
well-supported disagreement is perfectly valid, not malformed.

--- In boost_at_y..., "bill_kempf" <williamkempf_at_h...> wrote:
> --- In boost_at_y..., "danl_miller" <danl_miller_at_b...> wrote:
> > --- In boost_at_y..., "bill_kempf" <williamkempf_at_h...> wrote:
> > > --- In boost_at_y..., Darryl Green <green_at_t...> wrote:
> > > > > I have been working with multithread software designs for
> > > > > nearly 10 years now in embedded real-time telecom equipment.
> > > > > I have seen neither the need nor the use for the LIFO case,
> > > > > but maybe someone else has.
> > > > I think a thread pool would want to use LIFO - it would seem
> > > > reasonable to expect that the dispatch of making a more
> > > > recently run thread ready would be faster than a less recently
> > > > run one (it would be unlikely to be slower in any case). I
> > > > can't think of any other reason (off hand) for wanting LIFO. I
> > > > must admit that I have never explicitly requested LIFO.
> >
> > Darryl, this applies if there is some form of penalty regarding
> > the use of the least-recently-used (LRU) unallocated resource in a
> > pool and some sort of economy of using the most-recently-used
> > (MRU) unallocated resource in a pool.
> >
> > Regarding this allocation-of-a-thread-from-a-thread-pool topic, I
> > can see that one such penalty could arise from the use of virtual
> > memory (VM).
> >
> > In RTOSes and in embedded software and in real-time software, VM
> > usually falls somewhere in between thoroughly unacceptable and
> > frowned upon. I agree with a LRU/LIFO approach if in a VM OS where
> > there is a penalty for demand-paging in (portions of) the LRU
> > thread's address-space into core RAM.
> >
> > > LIFO must be applied to the queued jobs, but there need not be a
> > > LIFO guarantee on any synchronization primitives to do this. So
> > > I think this is a different topic from what was being discussed.
> > >
> > > Bill Kempf
> >
> > Bill, this is where you & I disagree. Semaphore is at its core a
> > mechanism of doling out (i.e., decrement) permission to access a
> > resource in a shared fixed-size pool (which must be put back when
> > finished: i.e., increment) or to consume/kill (i.e., decrement) a
> > disposable member of a population, whose members are dynamically
> > manufactured/born (i.e., increment). The debate which killed off
> > semaphore treated semaphore only as a list of mechanical
> > increment/decrement functionality without discussing this
> > pool-allocation and producer/consumer metaphor. The mechanical
> > viewpoint says that the semaphore wheel can be reinvented upon
> > demand with the use of condition-variables. So be it for the next
> > paragraph; avoid semaphore if that makes common ground for the
> > conversation possible.
> >
> > In real-time software, RTOSes, and embedded software,
> > message-queue falls into the second producer/consumer metaphor of
> > semaphore. Certain threads all produce work-requests and all push
> > those work-requests into a single interthread message-queue. Each
> > thread in a pool of threads (which might or might not be the
> > aforementioned threads) pends on that single message-queue when it
> > has nothing else to do (i.e., when that pooled thread has
> > completed its prior work-request if it had one, or at start-up if
> > it has not yet processed a prior work-request). Thus when more
> > than one thread in that thread-pool is idle, more than one thread
> > is pending on the same message-queue, which as mentioned is a case
> > of the producer/consumer application of semaphore. The
> > message-queue has as the core of its internal implementation a
> > semaphore governing the size of the work-requests pushed into the
> > shared(-among-producers) queue as well as governing the doling out
> > of the work-requests which each idle thread pops from the
> > shared(-among-consumers) queue.
> >
> > Thus the message-queue (and in fact its internal semaphore) is
> > performing the entirety of the scheduling behavior for all threads
> > in the thread-pool. Thus we have a case of interthread
> > synchronization primitives performing the entirety of the doling
> > out of work to the threads in a thread-pool. [This case is very
> > much like a single queue of people at the bank or post office
> > serviced by multiple tellers/clerks.]
> >
> > Another alternative is to have a god-controller with some sort of
> > per-thread FIFO (multiple queue-orifices into the community of
> > threads instead of a single queue/orifice that the prior
> > architecture had). But that architecture 1) has gratuitous
> > complexity and 2) has defects related to work-requests being
> > placed in suboptimal queuing strategies. [This case is very much
> > like a queue of people for each clerk/teller, where some
> > clerks'/tellers' lines move faster than other clerks'/tellers'.]
>
> How do you disagree with anything I said? Nothing in your
> description differs from anything I've said, nor does it actually
> mention any need for any specific mutex scheduling. In a thread pool
> it doesn't matter what thread acquires the mutex, and thus acquires
> the "job". All that matters is that *some* thread acquires the
> mutex and thus the "job". It doesn't even matter if a thread (or
> set of threads) is starved in this case, because the jobs are still
> being executed in a timely manner. A thread can remain idle
> indefinitely and not cause any problems with the operation of the
> thread pool.
>
> Bill Kempf


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk