
From: williamkempf_at_[hidden]
Date: 2001-10-13 10:22:39

--- In boost_at_y..., mf_dylan_at_y... wrote:
> I sent this as a mail to Bill Kempf first, he suggested I repost it
> here:

The main reason for the request was that the answer would benefit
others on the list as well.
> One thing though, how would I go about waiting on multiple
> conditions? This is actually the norm in much of my multi-threaded
> work... most of the time is spent waiting on various threads and
> conditions; usually one of them is a "shutdown" event, another is
> checking for work to do, etc. Doesn't seem to be any easy way of
> achieving this unless I'm missing something.

Windows is one of the few threading systems I'm aware of that let
you "wait" on multiple synchronization objects at once. However,
the lack of this ability shouldn't really be an issue in any real-
world code because of the nature of condition variables. A
condition, unlike a Win32 event, relies on three components: the
condition itself, some external shared data (the "state"), and a
mutex to ensure proper synchronization among the three. It's the
shared state that's the key here. Taking your description and
providing two example implementations, one in Win32 and one in
Boost.Threads, should illustrate this.

// Win32

struct ThreadData {
   HANDLE shutdown_event;
   HANDLE do_work_event;
};

// signature not accurate, but this simplifies the example
void do_thread(ThreadData* data)
{
   // Note that there are serious race conditions in the following
   // code... it exists only to illustrate how to handle multiple
   // "events" in both styles, not to demonstrate proper threading
   // techniques on Win32.
   for (;;) {
      HANDLE handles[2];
      handles[0] = data->shutdown_event;
      handles[1] = data->do_work_event;
      DWORD result = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
      if (result == WAIT_OBJECT_0) {
         // Handle shutdown_event
      } else if (result == WAIT_OBJECT_0 + 1) {
         // Handle do_work_event
      } else {
         // Handle error
      }
   }
}

// Boost.Threads

struct ThreadData {
   boost::mutex mutex;
   boost::condition condition;
   bool shutdown;
   int pending_work;
};

void do_thread(ThreadData* data)
{
   // Again, the synchronization details would need adjusting for
   // real code.
   boost::mutex::scoped_lock lock(data->mutex);
   for (;;) {
      while (!data->shutdown && data->pending_work == 0)
         data->condition.wait(lock);
      if (data->shutdown) {
         // data->condition.notify_*() was called because another
         // thread requested that we shut down. Handle this "event".
      } else {
         // There's work pending, so do it.
      }
   }
}

A single condition variable has been used for waiting, but we are
waiting on multiple states, which achieves the same effect as waiting
on multiple Win32 events.

> If some method for doing this was provided, it would arguably be
> essential to also be able to wait on other threads terminating (and
> ideally other processes, but this is probably out of the scope of
> Boost.Threads). I had thought some time back about how something
> like this could be possible using an extendable scheme where you
> provide "waitable" handles that can be, well, waited on! I'm not
> exactly sure how you could go about doing this with pthreads to be
> honest; it doesn't seem to have anything close
> to "WaitForMultipleObjects". I thought about doing it with select,
> but never got around to experimenting enough to tell
> if that was viable.

This would be done in pthreads the same way as illustrated above for
Boost.Threads. The open nature of what the "state" is in a monitor
pattern using condition variables allows for everything that WFMO
gives us in Win32, though the plumbing for implementing a "waitable
handle" is obviously going to be a little more complex since it's not
built in. However, I've never had code that could actually benefit
from "waitable handles" in Win32. The only time I've employed WFMO
has been with multiple events, and a condition variable allows for
this pattern without any need for "waitable handles".

Bill Kempf
> Dylan
