
From: Beman Dawes (bdawes_at_[hidden])
Date: 2001-08-13 15:31:07


At 02:55 PM 8/13/2001, Peter Dimov wrote:

>From: <duncan_at_[hidden]>
>
>> pdimov_at_[hidden] (Peter Dimov) wrote:
>>
>> > synchronized_queue<int> q;
>> > event e;
>> >
>> > void threadfunc()
>> > {
>> >     for(;;)
>> >     {
>> >         waitFor(e);
>> >
>> >         while(!q.empty())
>> >         {
>> >             int m = q.pop();
>> >             if(m & 1) do_something_a();
>> >             if(m & 2) do_something_b();
>> >             if(m & 4) do_something_c();
>> >         }
>> >     }
>> > }
>> >
>> > void notifyThread(bool a, bool b, bool c)
>> > {
>> >     q.push(a + 2 * b + 4 * c);
>> >     e.raise();
>> > }
>> >
>> > Did I utterly miss the point? :-)
>>
>> Surely notifyThread() can happen between end of while loop and before
>> the waitFor(e) is entered again. In which case you have a lost wakeup.
>
>Could you elaborate on that? It is possible to 'lose' a wakeup, since
>notifyThread() can be called several times while the thread is in the while
>loop, but what's the problem? The event will remain signalled.

I tried to look back at the older literature to see why events originally
came to be viewed as not the best solution. I came across "Concurrent
Programming Concepts", Per Brinch Hansen, Computing Surveys, December 1973.
If you are an ACM Digital Library member, it's online at
http://www.acm.org/pubs/citations/journals/surveys/1973-5-4/p223-hansen/

What follows is based on that, except errors are my own.

Events were "used in early multiprogramming systems to synchronize
concurrent processes."

Lost wakeups, in effect, are what gave events a bad name. They make the
robustness of the code dependent on the relative speed of the threads
unless augmented by some additional protection. What happens is something
like this:

If threadfunc() slows way down relative to notifyThread(), lots of wakeups
get lost. That isn't a problem in this particular program, but it does mean
that the queue grows, and eventually exhausts resources unless that's
internally protected against. There is a lot of opportunity for error.
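One way to add that internal protection is to bound the queue so that a fast
producer blocks instead of exhausting memory when the consumer lags. A minimal
sketch, using a mutex and condition variables; all names here (bounded_queue
and its members) are mine for illustration, not from the thread above:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

// Illustrative sketch: a capacity-bounded queue. push() blocks while the
// queue is full, so a slow consumer exerts backpressure on the producer
// instead of letting the queue grow without limit.
class bounded_queue {
public:
    explicit bounded_queue(std::size_t cap) : cap_(cap) {}

    void push(int m) {
        std::unique_lock<std::mutex> lock(mtx_);
        not_full_.wait(lock, [this] { return q_.size() < cap_; });
        q_.push(m);
        not_empty_.notify_one();
    }

    int pop() {
        std::unique_lock<std::mutex> lock(mtx_);
        not_empty_.wait(lock, [this] { return !q_.empty(); });
        int m = q_.front();
        q_.pop();
        not_full_.notify_one();
        return m;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(mtx_);
        return q_.size();
    }

private:
    std::size_t cap_;
    std::mutex mtx_;
    std::condition_variable not_full_, not_empty_;
    std::queue<int> q_;
};
```

The point of the bound is exactly the speed-independence discussed below:
however the scheduler interleaves producer and consumer, memory use stays
within the chosen capacity.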

This caused the early pioneers to decide that for reliable programs "The
effect of an interaction between two processes must be independent of the
speed at which it is carried out."

Since raw events didn't meet that criterion, Dijkstra, Brinch Hansen, Hoare,
and others invented synchronization tools which did.
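For comparison, here is a sketch of the same notify pattern rebuilt on one of
those tools, a mutex plus condition variable, where no wakeup can be lost
because the "signal" is the state of the queue itself, checked under the lock.
All names here (message_channel, run_demo) are mine, not from the thread:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative sketch: a channel whose wakeups cannot be lost. The condition
// the consumer waits on (!q_.empty() || closed_) is re-checked under the
// mutex, so correctness does not depend on the relative speed of the threads.
class message_channel {
public:
    void push(int m) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            q_.push(m);
        }
        cv_.notify_one();
    }

    // Block until a message is available or the channel is closed and drained.
    // Returns false only in the latter case.
    bool pop(int& out) {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !q_.empty() || closed_; });
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop();
        return true;
    }

    void close() {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            closed_ = true;
        }
        cv_.notify_all();
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<int> q_;
    bool closed_ = false;
};

// Push n messages from one thread while another consumes them. However the
// scheduler interleaves the two threads, the consumer always sees all n.
int run_demo(int n) {
    message_channel ch;
    int received = 0;
    std::thread consumer([&] {
        int m;
        while (ch.pop(m)) ++received;
    });
    for (int i = 0; i < n; ++i) ch.push(i);
    ch.close();
    consumer.join();
    return received;
}
```

Note the contrast with the event version quoted above: there, the signal
(the event) is separate from the state (the queue), which is what opens the
window for races; here the two are checked together under one lock.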

End of hopefully not too mangled history lesson. If you have access to it,
read the original publication. It is clearer than my summary.

--Beman


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk