From: William E. Kempf (williamkempf_at_[hidden])
Date: 2002-10-18 09:15:51
From: "David Abrahams" <dave_at_[hidden]>
> "William Kempf" <williamkempf_at_[hidden]> writes:
> > From: David Abrahams <dave_at_[hidden]>
> > > > > Well, sort of. Isn't deadlock supposed to be when two threads are
> > > > > waiting on one another to finish with some resource before
> > > > > Yes, that's how definitions.html specifies it.
> > I think maybe the definition needs some tweaking. A deadlock occurs
> > when a thread waits on a resource that it can never acquire.
> Well, OK, that's a different definition, and takes us away from the
> previous meaning of "deadly embrace".
It's a better definition, since a thread can "self deadlock", which the
current definition would obviously make impossible. I think the definition we
gave is simply not complete enough; it's not that we've misused the term
elsewhere in the document.
> > This case, where a thread simply never releases the resource, most
> > certainly can result in deadlock.
> Yes. There's a difference between "can" and "does".
Yep. I pointed that out below as well. We'll have to clean this up a bit.
Note, however, that the chances are VERY high you'll cause deadlock. The
only time you won't is when no other thread is currently waiting, no other
thread will wait in the future, and the thread that leaked either will not
wait in the future or the mutex is recursive. In practice it's going to be
highly unlikely that any of these cases occur, much less all of them.
> > > > > I think what happens if you fail to unlock a mutex is that the
> > > > > resource becomes permanently unavailable**, which is rather
> > > > > though the behavior may appear to be similar.
> > In what ways is this case different? Even disregarding the effect?
> Because for example all threads which use the mutex may decide to exit
> without locking it again.
Very unlikely. They have no way to detect the "leaked lock", so for this to
occur it would have to be by sheer chance, and the likelihood of it is very
small. But if the point hinges on adding "may" to the sentence about
deadlock occurring here, I fully agree.
> > >You can't possibly interpret that to cover the case where no thread
> > >tries to lock the mutex again after the lock is leaked.
> > Did you mean precisely this?
> Don't know what you mean by "this", but I meant precisely what I said.
Never mind... for whatever reason (probably a Hotmail issue) I'm receiving
messages from Boost out of order and with significant delay. In context
with the rest of the thread of discussion this makes perfect sense. Taken
as a snapshot I couldn't see how the same thread trying to lock the mutex
again had any significant bearing on the subject.
> > If so, that points out that the sentence should have read "The
> > result is likely deadlock.", rather than the certainty...
> That could certainly be one outcome, but I think it's much more likely
> that one thread hangs waiting for another thread to release the mutex
> (your new definition of deadlock) than that two threads wait on one
> another (or one waits on itself in the degenerate case) which is what
> our current definition says. It depends on the system of course.
No, my viewpoint is based solely on the definition given for deadlock being,
though not incorrect, not encompassing enough. The definition I gave above
is more in line with the accepted definition of deadlock.
> > > > - recursive mutex or not.
> > >
> > >Since this can't be described in terms of our definition of deadlock,
> > >I'm trying to describe which threads will block forever when trying to
> > >lock it. Whether it's recursive or not affects which threads will
> > >block forever when trying to lock the mutex.
> > How so?
> If a thread leaks a recursive mutex, it has a lock on that mutex, and
> it can "re-lock" the mutex as many times as it likes without
> blocking. So the leaking thread itself can't block forever waiting for
> the mutex.
Agreed, and that was the part that needed more context for me to understand
:). But the deadlock potential is more with other threads than with the
leaking thread itself.