From: David Abrahams (dave_at_[hidden])
Date: 2006-09-21 10:51:07


"Peter Dimov" <pdimov_at_[hidden]> writes:

> David Abrahams wrote:
>> "Peter Dimov" <pdimov_at_[hidden]> writes:
>>
>>> Anthony Williams wrote:
>>>
>>>> The case given in Alexander's email above is:
>>>>
>>>> A: lock [lockcount == 0]
>>>> B: lock [lockcount == 1, slow path]
>>>> A: unlock [lockcount == 0, slow path/post sema]
>>>> A: lock - [lockcount == 1, slow path]
>>>>
>>>> He claims that the second A: lock takes the slow path, when it could
>>>> have taken the fast path because the lock is free. I disagree. The
>>>> lock is NOT free, since B is waiting on it. If A can take the fast
>>>> path here, that's opening the door for B to be starved.
>>>
>>> This is correct. The mutex does not guarantee fairness (for the above
>>> definition of fairness) at the expense of performance. It is indeed
>>> true that in heavily contended scenarios thread A can finish well
>>> ahead of thread B; however, the total time taken by threads A+B can
>>> be (and is) significantly less than that with the "fairer" CS design.
>>
>> Probably dumb question: does this reasoning apply when A and B need to
>> run "continuously" (e.g. in a server)?
>
> I think that it does, because A and B are roles, not permanent labels. At
> the last lock above, A is the currently running thread; B is a thread that
> is currently blocked on the mutex. Since the scheduler distributes the
> "currently running" role among the threads based on its notion of fairness,
> a synchronization primitive that favors the currently running thread is as
> fair as the scheduler in the long run.
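
For concreteness, the lock/unlock trace quoted above maps onto a
lockcount-plus-semaphore mutex along the following lines. This is only a
minimal sketch, not the actual Boost.Threads or Win32 code: it uses C11
atomics and a POSIX semaphore, the lw_* names are made up for the sketch,
and lockcount starts at -1 so the numbers line up with the quoted trace.

    #include <stdatomic.h>
    #include <semaphore.h>

    typedef struct {
        atomic_int lockcount;  /* -1 = free, 0 = held, N > 0 = held with N threads queued */
        sem_t      sema;       /* slow-path waiters block here */
    } lw_mutex;

    static void lw_init(lw_mutex *m)
    {
        atomic_init(&m->lockcount, -1);
        sem_init(&m->sema, 0, 0);   /* unnamed POSIX semaphore, initial count 0 */
    }

    static void lw_lock(lw_mutex *m)
    {
        /* Fast path: -1 -> 0 means we took a free mutex with no kernel call. */
        if (atomic_fetch_add(&m->lockcount, 1) >= 0) {
            /* Slow path: the mutex is held or already promised to a waiter;
               block until some unlock posts the semaphore. */
            sem_wait(&m->sema);
        }
    }

    static void lw_unlock(lw_mutex *m)
    {
        /* If anyone is queued (old count >= 1), post the semaphore: ownership
           passes to a waiter and lockcount stays >= 0, so nobody else can
           sneak in on the fast path.  Otherwise the count drops back to -1. */
        if (atomic_fetch_sub(&m->lockcount, 1) >= 1) {
            sem_post(&m->sema);
        }
    }

Tracing the quoted sequence through this sketch: A's first lock is the
-1 -> 0 fast path; B's lock sees 0 and waits (lockcount 1); A's unlock sees
1 and posts the semaphore for B (lockcount 0); A's second lock sees 0, so
it queues as well (lockcount 1) instead of stealing the mutex from the
already-posted B. Nothing here guarantees FIFO order beyond whatever the
semaphore provides.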

I get it: A fair scheduler will force context switches at arbitrary
points (e.g. on some timer interval) anyway, so there's no point in
_encouraging_ an expensive context switch where access to resources
needs to be arbitrated.

Now that I said it so simply, I buy 100% into the arguments you and
Alexander have been putting forth... so I must be missing something
:). Nothing in threading is supposed to be simple.

OK, so here's what I think I missed: B is blocking on a resource. The
scheduler only distributes the running role among the ready threads.
If B never becomes ready, it starves. The only way B can become ready
is if A releases its lock to the system, so I guess the question is,
does A's unlock actually move B into the ready state, or at least
cause the scheduler to try to ready B in its next timeslice?
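
For what it's worth, in a sketch like the one above the slow-path unlock
ends in sem_post, and posting a semaphore that has waiters does unblock one
of them: the kernel moves that thread from blocked to ready, and the
scheduler runs it whenever it next sees fit, not necessarily immediately.
Here is a tiny self-contained demo of just that hand-off, again POSIX for
illustration only (unnamed semaphores are assumed to be available):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t sema;

    static void *thread_b(void *arg)
    {
        (void)arg;
        puts("B: waiting on the semaphore (blocked, not schedulable)");
        sem_wait(&sema);            /* B sleeps here until A posts        */
        puts("B: post received; I was made ready and then scheduled");
        return NULL;
    }

    int main(void)
    {
        pthread_t b;
        sem_init(&sema, 0, 0);      /* unnamed POSIX semaphore, count 0   */
        pthread_create(&b, NULL, thread_b, NULL);
        sleep(1);                   /* crude: give B time to block        */
        puts("A: posting (this is what the slow-path unlock does)");
        sem_post(&sema);            /* wakes B: blocked -> ready          */
        pthread_join(b, NULL);
        return 0;
    }

The post alone does not force a context switch on most systems; B merely
becomes runnable and competes for a CPU under the normal scheduling policy.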

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
