From: Howard Hinnant (hinnant_at_[hidden])
Date: 2004-07-13 09:56:16
On Jul 13, 2004, at 7:26 AM, Peter Dimov wrote:
> Howard Hinnant wrote:
>> I'm not familiar with how a native pthread_mutex is made recursive.
>
> This kind of answers my question. ;-)
>
> See pthread_mutexattr_settype and PTHREAD_MUTEX_RECURSIVE. Note also
> that an implementation is allowed to make the default pthread_mutex
> recursive; in this case your users pay for the recursive overhead
> twice. Not that they don't deserve it for using a recursive_mutex. ;-)
I really hate English sometimes. I think I need an English compiler to
tell me when I've written something ambiguous. :-\
What I meant was that I'm not familiar with what implementors do to the
internals of pthread_mutex in order to support PTHREAD_MUTEX_RECURSIVE,
compared with an implementation that doesn't support it. I.e. what does
it cost an implementation to provide PTHREAD_MUTEX_RECURSIVE?
I can only speculate that that cost is probably similar to what I have
to do to create a recursive mutex from a non-recursive one.
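To make that concrete, here's a minimal sketch (my own names and layout,
not any vendor's actual internals) of one way to layer recursion over a
non-recursive pthread_mutex: the pthread_mutex_t guards only the
bookkeeping, and an internal condition signals when the recursive mutex
becomes free.

#include <pthread.h>

struct recursive_mutex_sketch
{
    pthread_mutex_t state_lock_;  // guards owner_ and count_
    pthread_cond_t  unlocked_;    // signalled when count_ drops to 0
    pthread_t       owner_;       // meaningful only while count_ > 0
    unsigned        count_;       // recursion depth

    recursive_mutex_sketch() : count_(0)
    {
        pthread_mutex_init(&state_lock_, 0);
        pthread_cond_init(&unlocked_, 0);
    }

    ~recursive_mutex_sketch()
    {
        pthread_cond_destroy(&unlocked_);
        pthread_mutex_destroy(&state_lock_);
    }

    void lock()
    {
        pthread_mutex_lock(&state_lock_);
        if (count_ > 0 && pthread_equal(owner_, pthread_self()))
            ++count_;                          // recursive acquisition
        else
        {
            while (count_ > 0)                 // wait until free
                pthread_cond_wait(&unlocked_, &state_lock_);
            owner_ = pthread_self();
            count_ = 1;
        }
        pthread_mutex_unlock(&state_lock_);
    }

    void unlock()
    {
        pthread_mutex_lock(&state_lock_);
        if (--count_ == 0)
            pthread_cond_signal(&unlocked_);   // hand off to a waiter
        pthread_mutex_unlock(&state_lock_);
    }
};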
The question I was answering concerned whether overhead was imposed for
the use of a recursive mutex with a condition. I can only answer that
question for the case where I've built the recursive mutex myself out
of a non-recursive mutex. I've never implemented a native mutex
library and so am not familiar with the costs at that level.
If the OS provides a mutex that is recursive without documenting that
fact, then that sounds like a docs bug to me. If I need a recursive
mutex, I will gladly use an OS-supplied one if I can find such a beast.
Otherwise I have to build it myself out of an OS-supplied
non-recursive mutex. If I find that I've ended up paying for recursion
overhead twice, or unnecessarily re-implemented recursive mutexes for
that platform, I'll send a bug report to the OS vendor.
>> But with a native non-recursive mutex, the added space overhead simply
>> to handle recursive locking was also sufficient to negotiate use with
>> condition variables without further space overhead needed just for the
>> condition variables. To support the condition variables, a little more
>> code is needed (maybe a dozen lines of C++) executed from within the
>> wait function, and maybe a dozen or so bytes of stack space within the
>> condition's wait function. Essentially the wait function saves the
>> state of the mutex before the wait, then frees it for the wait, then
>> restores the state of the mutex after the wait.
>
> That's how Boost.Threads behaves, but (AFAICS) it doesn't protect
> itself against a thread switch and lock immediately after freeing the
> mutex for the wait, so it doesn't meet the "correctly" requirement. ;-)
I've just reviewed my code and I am not seeing a possibility of this
happening. After the state is saved and "freed", the code still owns a
pthread_mutex_t that keeps other threads from accessing (reading or
writing) this modified state. Other threads are only granted access by
the pthread_cond_wait call itself, which atomically releases the
pthread_mutex_t protecting the recursive mutex state as it begins the
wait. Upon return from the wait, the code again owns that
pthread_mutex_t and can restore the state atomically, without fear of
another thread interfering (after checking, of course, for a spurious
wake up).
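For concreteness, here is a minimal sketch of that wait logic, written
against the hypothetical recursive_mutex_sketch I sketched above; it
illustrates the scheme I described, not the Metrowerks (or Boost)
source. The caller is still responsible for re-checking its own
predicate after wait() returns.

struct condition_sketch
{
    pthread_cond_t cv_;

    condition_sketch()  { pthread_cond_init(&cv_, 0); }
    ~condition_sketch() { pthread_cond_destroy(&cv_); }

    void wait(recursive_mutex_sketch& m)   // caller must own m
    {
        pthread_mutex_lock(&m.state_lock_);

        // 1. Save the recursive state, then "free" the mutex. Other
        //    threads cannot see this intermediate state because
        //    state_lock_ is still held.
        unsigned saved_count = m.count_;
        m.count_ = 0;
        pthread_cond_signal(&m.unlocked_);

        // 2. Atomically release state_lock_ and block. This is the only
        //    point at which other threads gain access to the freed mutex.
        pthread_cond_wait(&cv_, &m.state_lock_);

        // 3. We own state_lock_ again. Re-acquire the recursive mutex
        //    (another thread may hold it after our wakeup), then restore.
        while (m.count_ > 0)
            pthread_cond_wait(&m.unlocked_, &m.state_lock_);
        m.owner_ = pthread_self();
        m.count_ = saved_count;

        pthread_mutex_unlock(&m.state_lock_);
    }

    void notify_one() { pthread_cond_signal(&cv_); }
};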
<shrug> I haven't reviewed the boost code, so I can neither agree nor
disagree with your assessment of that implementation. I can only
assert that I believe it is possible to do correctly, and that I
believe the Metrowerks implementation does so on at least one platform
(and also does it wrong on at least one platform).
-Howard