Boost :
From: Howard Hinnant (hinnant_at_[hidden])
Date: 2008-02-01 12:39:32
On Feb 1, 2008, at 11:31 AM, Phil Endecott wrote:
> Kowalke Oliver wrote:
>> what's the best practice in the following scenario:
>> m threads and n resources (each resource is protected by a mutex)
>> thread x needs read/write access to resources (...i,j,k,..)
>> threads y,z,... access another subset of resources at the same
>> time
>>
>> in order to prevent deadlocks I would do the following in the code of
>> the different threads (the code executed by each thread is different):
>>
>> ...
>> try_lock: // label
>> unique_lock< mutex > lk1( mtx1, try_to_lock);
>> unique_lock< mutex > lk2( mtx2, try_to_lock);
>> unique_lock< mutex > lk3( mtx3, try_to_lock);
>> if ( ! ( lk1 && lk2 && lk3) ) goto try_lock;
>> // all mutexes are locked
>> // now safely modify resources 1, 2, 3
>> ...
>>
>> Maybe you have a better pattern to solve this problem?
>
> N2447 proposes a generic locking algorithm. Quote:
>
> template <class L1, class L2, class ...L3> void lock(L1&, L2&, L3&...);
>
> Requires: Each template parameter type must supply the following
> member functions with semantics corresponding to the Mutex concept,
> except that try_lock is allowed to throw an exception. [Note: The
> unique_lock class template meets these requirements when suitably
> instantiated. --end note]
>
> void lock();
> bool try_lock();
> void unlock();
>
> Effects: All arguments are locked with an algorithm that avoids
> deadlock. If an exception is thrown by a call to lock() or
> try_lock(), there are no effects.
>
> End Quote.
>
> Presumably the intention is that you use it like this:
>
> {
>     unique_lock< mutex > lk1( mtx1, defer_lock);
>     unique_lock< mutex > lk2( mtx2, defer_lock);
>     unique_lock< mutex > lk3( mtx3, defer_lock);
>     lock(lk1,lk2,lk3);
>     ....critical section....
> } // locks are released when they go out of scope
>
> I guess that a multi-lock could be built on top of this, for more
> concise code:
>
> struct multilock {
>     unique_lock<mutex> lk1; unique_lock<mutex> lk2; unique_lock<mutex> lk3;
>     multilock(mutex& mtx1, mutex& mtx2, mutex& mtx3):
>         lk1(mtx1, defer_lock), lk2(mtx2, defer_lock), lk3(mtx3, defer_lock)
>     { lock(lk1,lk2,lk3); }
> };
>
> {
>     multilock mlk(mtx1,mtx2,mtx3);
>     ...critical section....
> }
>
>
> This is not included in Boost.Threads (AFAIK), and doing so without
> C++0x variadic templates is a bit messy.
>
> The obvious (to me!) implementation of the lock algorithm is to order
> the mutexes. This can be done based on their addresses, if every
> instance of the same underlying mutex has the same address. This
> doesn't work for shared-memory mutexes; maybe the mutex itself should
> be required to be less-than-comparable?
>
> Anyway, you can easily write a version for a fixed number of locks:
>
> void lock(lock& lk1, lock& lk2) {
>     if (lk1.mutex() < lk2.mutex()) {
>         lk1.lock();
>         lk2.lock();
>     } else {
>         lk2.lock();
>         lk1.lock();
>     }
> }
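With variadic templates the multilock wrapper falls out naturally. A rough
sketch, assuming std::tuple, the later-standard std::apply, and the variadic
lock() algorithm below exposed as std::lock; "multilock" is just an
illustrative name:

#include <mutex>
#include <tuple>

// Sketch: a scope-based multi-lock built on the deadlock-avoiding
// lock() algorithm. Requires at least two mutexes (std::lock takes 2+).
template <class ...Mutexes>
struct multilock
{
    std::tuple<std::unique_lock<Mutexes>...> lks;

    explicit multilock(Mutexes&... mtxs)
        : lks(std::unique_lock<Mutexes>(mtxs, std::defer_lock)...)
    {
        // forward every deferred lock to the variadic lock algorithm
        std::apply([](auto&... lk) { std::lock(lk...); }, lks);
    }
};  // every mutex is unlocked when the multilock leaves scope

// usage:
//     std::mutex m1, m2, m3;
//     { multilock<std::mutex, std::mutex, std::mutex> mlk(m1, m2, m3);
//       /* ...critical section... */ }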
Here's the reference implementation for N2447:
// Copyright Howard Hinnant 2007. Distributed under the Boost
// Software License, Version 1.0. (see http://www.boost.org/LICENSE_1_0.txt)

// returns index of failed lock, or -1 on success
template <class L1, class L2>
int
try_lock(L1& l1, L2& l2)
{
    unique_lock<L1> u(l1, try_to_lock);
    if (u.owns_lock())
    {
        if (l2.try_lock())
        {
            u.release();  // success: keep both locked, drop ownership
            return -1;
        }
        return 1;         // l2 failed; u's destructor unlocks l1
    }
    return 0;             // l1 failed; nothing is locked
}
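For illustration, the return convention with two hypothetical mutexes m1
and m2:

    mutex m1, m2;
    int failed = try_lock(m1, m2);
    if (failed == -1)
    {
        // success: this thread now owns both and must unlock them
        m1.unlock();
        m2.unlock();
    }
    // otherwise failed is 0 or 1, the index of the lock that could
    // not be taken, and nothing is left locked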
template <class L1, class L2, class ...L3>
int
try_lock(L1& l1, L2& l2, L3&... l3)
{
    int r = 0;
    unique_lock<L1> u(l1, try_to_lock);
    if (u.owns_lock())
    {
        r = try_lock(l2, l3...);
        if (r == -1)
            u.release();  // everything locked: drop ownership of l1 too
        else
            ++r;          // shift the failed index to account for l1
    }
    return r;
}
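Concretely, try_lock(a, b, c) returns -1 with all three locked, or the
index (0, 1, or 2) of the first lock that could not be acquired, with
nothing left locked.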
template <class L1, class L2>
void
lock(L1& l1, L2& l2)
{
    while (true)
    {
        {
            unique_lock<L1> __u1(l1);  // block on l1
            if (l2.try_lock())
            {
                __u1.release();        // both locked
                break;
            }
        }                              // l2 was busy: unlock l1
        std::this_thread::yield();
        {
            unique_lock<L2> __u2(l2);  // now block on l2
            if (l1.try_lock())
            {
                __u2.release();
                break;
            }
        }
        std::this_thread::yield();
    }
}
template <class L1, class L2, class ...L3>
void
__lock_first(int __i, L1& l1, L2& l2, L3&... l3)
{
    // __i is the index of the lock to acquire with a blocking lock()
    while (true)
    {
        switch (__i)
        {
        case 0:
            {
                unique_lock<L1> __u1(l1);
                __i = try_lock(l2, l3...);
                if (__i == -1)
                {
                    __u1.release();
                    return;
                }
            }
            ++__i;  // shift the failed index to account for l1
            std::this_thread::yield();
            break;
        case 1:
            {
                unique_lock<L2> __u2(l2);
                __i = try_lock(l3..., l1);
                if (__i == -1)
                {
                    __u2.release();
                    return;
                }
            }
            if (__i == sizeof...(L3))
                __i = 0;   // l1, rotated to the back, was the one that failed
            else
                __i += 2;  // shift to account for l1 and l2
            std::this_thread::yield();
            break;
        default:
            // rotate the arguments so the failed lock comes first
            __lock_first(__i - 2, l3..., l1, l2);
            return;
        }
    }
}
template <class L1, class L2, class ...L3>
inline
void
lock(L1& l1, L2& l2, L3&... l3)
{
    __lock_first(0, l1, l2, l3...);
}
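In the original poster's scenario, thread x would then do something like
this (a sketch; mi, mj, mk stand for the mutexes guarding resources i, j,
k):

    void thread_x(mutex& mi, mutex& mj, mutex& mk)
    {
        unique_lock<mutex> li(mi, defer_lock);
        unique_lock<mutex> lj(mj, defer_lock);
        unique_lock<mutex> lk(mk, defer_lock);
        lock(li, lj, lk);  // blocks until all three are held, no deadlock
        // ...read/write resources i, j, k...
    }  // everything unlocked when the unique_locks are destroyed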
Notes:
1. It doesn't require Lock::mutex().
2. It is *much* faster than the lock-in-order algorithm.
3. The yields are absolutely necessary to get the high performance
mentioned in the previous note.
4. I've beaten the heck out of this code (with extremely high
contention tests). Live-lock just doesn't happen. Neither does
thread starvation. The above code is very "fair".
5. Lock-in-order can result in thread starvation (experimentally
determined under high contention tests).
6. These algorithms have the strong exception safety guarantee for a
throwing lock() or try_lock(). If one of these throws, nothing is
locked. unlock() must not throw.
7. Anthony's previous post:
> A better version of try-and-back-off is to lock the first mutex
> unconditionally, and then try to lock the others in turn. If a lock
> fails, then unlock ALL the mutexes and start again, but this time
> with a blocking lock on the failed mutex. This means that you're
> waiting for the mutex that you couldn't get last time, but without
> holding any others, so you won't deadlock with another thread.
is a good description of the above algorithms.
8. The above algorithms can painstakingly be converted to not use
variadic templates for a fixed number of locks (I've done up to 4
before the hair loss became too great ;-)). A sketch of the 3-lock
case follows.
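For instance, the 3-lock version might look roughly like this (a sketch
built on the 2-argument try_lock above, not the tested variadic code):

    template <class L1, class L2, class L3>
    void
    lock(L1& l1, L2& l2, L3& l3)
    {
        int i = 0;  // index of the lock to acquire with a blocking lock()
        while (true)
        {
            switch (i)
            {
            case 0:
                {
                    unique_lock<L1> u1(l1);
                    i = try_lock(l2, l3);
                    if (i == -1)
                    {
                        u1.release();
                        return;
                    }
                }
                ++i;  // map to the overall index (l2 -> 1, l3 -> 2)
                break;
            case 1:
                {
                    unique_lock<L2> u2(l2);
                    i = try_lock(l3, l1);
                    if (i == -1)
                    {
                        u2.release();
                        return;
                    }
                }
                i = (i == 1) ? 0 : 2;  // l1 failed -> 0, l3 failed -> 2
                break;
            case 2:
                {
                    unique_lock<L3> u3(l3);
                    i = try_lock(l1, l2);
                    if (i == -1)
                    {
                        u3.release();
                        return;
                    }
                }
                // i is already the overall index (l1 -> 0, l2 -> 1)
                break;
            }
            std::this_thread::yield();  // spin while holding nothing
        }
    }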
-Howard