From: Anthony Williams (anthony_at_[hidden])
Date: 2006-03-13 04:23:00
Roland Schwarz <roland.schwarz_at_[hidden]> writes:
> I thought we don't want the list implementation because it will
> interfere with the system scheduler, won't it? (I think I suggested
> this some time ago, but from the discussions I learned that we
> shouldn't use it. I don't remember every detail at the moment.)
That depends on what you mean. A condition variable isn't a native Windows
synchronization object, so the scheduler doesn't have anything to say about it
directly.
My current implementation identifies which thread will be woken by notify_one
using a FIFO list. It then leaves it up to the scheduler when this thread will
run. Of course, you could liken it to a win32 Event, in which case it is up to
the scheduler which thread will be woken, which is not necessarily FIFO. I
note also that the pthreads docs say that it is considered undesirable for a
synchronization primitive to specify an ordering rule.
The key benefit of the list implementation is that once a notify has been
triggered, further waits will not affect that notify cycle --- there is no
interference from threads that start waiting later. Consider the following
sequence:
Thread 1: wait
Thread 2: wait
Thread 3: start notify_all
Thread 4: wait
Thread 5: wait
The notify_all on thread 3 here will only wake threads 1 and 2, even if they
are not actually notified until after threads 4 and 5 have started
waiting. You can actually have multiple concurrent notifications running, as
they isolate the list before notifying the threads on it.
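To make the idea concrete, here is a rough portable sketch of the list-based scheme described above, using standard C++ primitives as stand-ins for the Win32 objects. All names are hypothetical and this is not the actual Boost code: each waiter registers a per-thread "event" on a FIFO list, and notify_all isolates the list under the lock before signalling, so threads that start waiting afterwards land on a fresh list.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <list>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// Per-waiter one-shot "event", standing in for a Win32 auto-reset Event.
struct waiter {
    std::mutex m;
    std::condition_variable cv;
    bool signalled = false;
    void wait() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return signalled; });
    }
    void signal() {
        { std::lock_guard<std::mutex> lk(m); signalled = true; }
        cv.notify_one();
    }
};

class fifo_condition {
    std::mutex list_mutex;                       // guards only the list
    std::list<std::shared_ptr<waiter>> waiters;  // FIFO order of arrival
public:
    void wait(std::unique_lock<std::mutex>& external) {
        auto w = std::make_shared<waiter>();
        {
            std::lock_guard<std::mutex> lk(list_mutex);
            waiters.push_back(w);
        }
        external.unlock();  // release the user's mutex before blocking
        w->wait();
        external.lock();
    }
    void notify_one() {
        std::shared_ptr<waiter> w;
        {
            std::lock_guard<std::mutex> lk(list_mutex);
            if (!waiters.empty()) {
                w = waiters.front();  // oldest waiter first: FIFO wakeup
                waiters.pop_front();
            }
        }
        if (w) w->signal();
    }
    void notify_all() {
        std::list<std::shared_ptr<waiter>> detached;
        {
            std::lock_guard<std::mutex> lk(list_mutex);
            detached.swap(waiters);  // isolate the list: later waiters
        }                            // go onto a fresh list
        for (auto& w : detached) w->signal();
    }
};
```

This sketch isolates the whole list in one swap; claiming the lock once per item, as in my actual notify_all, has the same effect on new waiters but holds the lock for shorter periods.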
My implementation of notify_all unblocks all threads *currently blocked*, but
not those that may start waiting subsequently. Part of the previous algorithm
was a "gate" which stopped threads from waiting whilst a notification was
underway. Though the gate has the same consequence, it also means that no new
waiters can be registered *until all unblocked threads have resumed*.
One alternative I can think of is to have multiple Event objects, such that if
a notification is currently underway, new waiting threads wait on a new
Event. Event objects can then be recycled once all the unblocked threads have
resumed.
> I haven't yet had time to study it thoroughly, but is the algorithm
> also free from the "overly strict locking" problem?
I believe so. The lock is only held whilst adding and removing items from the
list. During a notify_all, the lock is claimed once for each item in the
list, but is then released. This may have a performance hit on notify_all, but
will allow other threads to wait, or call notify, without interfering.
> I can remember a lot of past discussions about the best algorithm
> for condition variables on windows, and as far as I remember there
> was agreement on Terekhov's algorithm, wasn't there? Also, it is proven
> to work well and is the basis of the windows pthreads library. So why
> should we go for something different (yet unreviewed) now?
I believe my algorithm is better, but am open to discussion. I was unhappy
with my old rewrite, for reasons I will explain below, so decided to start
from scratch. I had an idea on how to do it, and this implementation is proof
of concept. If it turns out my idea was inefficient, or not usable for other
reasons, I'm happy to change it. Please consider my posting to be a review
request.
> Please don't get me wrong, but I was thinking we are about to
> rewrite the library and aren't going to change its design for the
> moment? I feel very uncomfortable about introducing such big changes
> without even a minimum of review. Having said this, I do not maintain
> that Anthony's code is bad; it might even turn out that it is better than
> everything we had so far, but I would like to see some sound reasoning
> about why the change.
Fair points. Rewriting the existing code "as is" seems a bit of a pointless
exercise. If I'm writing the code, then I'm going to think through every
aspect, and do as I see fit. I am, however, open to criticism. I posted the
message here, partly as a progress report, but also in the hope that someone
would review my code --- that's why I gave a simple explanation of my
approach, rather than just saying "conditions are done for windows".
The "overly strict locking" aspect --- preventing new waits/notifies whilst a
notify is in progress --- is the key issue that has been nagging at the back
of my mind about all this, and the primary driver for the new algorithm. I
have been thinking on-and-off about how to get Windows to unblock "any one of
these threads, but only one of these threads" for a while, and this is what I
came up with. Maybe the multiple Event objects technique would be better.
> E.g. I would like to see what "careful thought" really means.
I had concerns about a potential race condition between one thread locking and
another unlocking:
Thread 1 acquires the lock.
Thread 1 calls unlock.
Thread 2 calls lock.
Thread 2 tries to acquire the lock flag and fails.
Thread 1 releases the lock flag.
Thread 1 gets the current semaphore --- there isn't one.
Thread 1 now quits.
Thread 2 creates the semaphore and waits on it.
Thread 2 will now wait forever, as there is no one to release the semaphore.
Looking again now, I see that the semaphore is created with a count of 1, so
thread 2 will acquire the semaphore the first time round, and try again. Once
the semaphore is created, this problem goes away. Maybe my "careful thought"
wasn't so careful after all :-(
Anyway, this is related to the second aspect of my lightweight_mutex that I
was unhappy with --- it has to loop when locking, to handle spurious signals
of the semaphore such as happens here. I had to use a similar loop in my new
implementation of a timed mutex, and I'm not happy with that either.
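For illustration, the kind of locking loop in question might look like the sketch below. The names are hypothetical and this is not the actual lightweight_mutex code; a small condition-variable-based semaphore stands in for the Win32 one. The loop retries the lock flag after each semaphore wait, because a semaphore release only means "try again", and the semaphore's initial count of 1 is what closes the race described above:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Minimal counting semaphore (portable stand-in for a Win32 semaphore;
// std::counting_semaphore would need C++20).
struct semaphore {
    std::mutex m;
    std::condition_variable cv;
    long count;
    explicit semaphore(long initial) : count(initial) {}
    void wait() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return count > 0; });
        --count;
    }
    void post() {
        { std::lock_guard<std::mutex> lk(m); ++count; }
        cv.notify_one();
    }
};

class lightweight_mutex {
    std::atomic<bool> lock_flag{false};
    semaphore sem{1};  // initial count of 1 closes the lost-wakeup race
public:
    void lock() {
        // A semaphore release doesn't hand the lock directly to this
        // thread, so after every wakeup we must try the flag again.
        while (lock_flag.exchange(true)) {
            sem.wait();  // may return without the flag being free
        }
    }
    void unlock() {
        lock_flag.store(false);
        sem.post();  // wake one contender so it can retry
    }
};
```

The first contender consumes the initial count and retries immediately, as described above; after that, every wakeup corresponds to an unlock, but a freshly arriving thread may still steal the flag first, which is exactly why the loop is unavoidable in this design.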
-- Anthony Williams Software Developer Just Software Solutions Ltd http://www.justsoftwaresolutions.co.uk