
From: Howard Hinnant (hinnant_at_[hidden])
Date: 2006-01-01 10:28:00

On Jan 1, 2006, at 5:38 AM, Preston A. Elder wrote:

> Hey all,
> What happened to the multi-lock semantics?
> I had expected by now to see something like:
> // Takes two mutexes as arguments, locking them in a pre-defined order
> // (ie. lowest ptr first). Blocks until both locked.
> boost::mutex::dual_scoped_lock

If this is to be done, I recommend a series of templated algorithms:

template <class TryLock1, class TryLock2>
void lock(TryLock1& l1, TryLock2& l2);

template <class TryLock1, class TryLock2, class TryLock3>
void lock(TryLock1& l1, TryLock2& l2, TryLock3& l3);


As the template parameter names suggest, the locks must model the
try-lock concept.

I once did a performance study on the best algorithm to use for
implementing the two and three-lock versions of these. On single
processor machines it didn't seem to make a lot of difference. On
dual processor machines, the "try and back off" algorithm was
significantly faster than the "lock in order" algorithm, especially
if a thread::yield() was strategically placed just before one waits
on a lock that just failed to lock via a try_lock. The lock-in-order
algorithm tended to rapidly put the machine in a "partial deadlock"
which would have the effect of temporarily idling a processor which
was otherwise available for work. A brief analysis reveals why:

Thread 1 locks mutex A and then waits for mutex B to be unlocked.
Thread 2 is busy with mutex B (but not A).
Thread 3 blocks on mutex A, even though it doesn't need B, only
because thread 1 is sitting on it doing nothing.

If Thread 1 had released A before waiting on B, then both thread 2
and thread 3 could get work done.

In a "dining philosophers" simulation, system performance was greatly
increased by try-and-back-off while starvation was virtually
eliminated (on dual processor machines).
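In today's C++ terms, a minimal sketch of the two-lock try-and-back-off
algorithm might look like the following (lock_both is a hypothetical
name; a deferred std::unique_lock over std::mutex stands in for the
TryLock model):

```cpp
#include <mutex>
#include <thread>

// Hypothetical sketch of the two-lock "try and back off" algorithm.
// Works with any types providing lock(), try_lock(), and unlock(),
// e.g. std::unique_lock<std::mutex> constructed with std::defer_lock.
template <class TryLock1, class TryLock2>
void lock_both(TryLock1& l1, TryLock2& l2)
{
    while (true)
    {
        l1.lock();                  // block on the first lock
        if (l2.try_lock())          // then merely try the second
            return;                 // got both
        l1.unlock();                // back off: hold nothing while waiting
        std::this_thread::yield();  // let the owner of l2 make progress
        l2.lock();                  // now block on the lock that just failed
        if (l1.try_lock())
            return;
        l2.unlock();
        std::this_thread::yield();
    }
}
```

Note how the roles swap each iteration: the thread always blocks on the
lock that most recently failed, and never sleeps while holding the
other one, which is exactly what avoids the partial deadlock sketched
above.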

I also found these algorithms handy:

template <class TryLock1, class TryLock2>
int try_lock(TryLock1& l1, TryLock2& l2);

template <class TryLock1, class TryLock2, class TryLock3>
int try_lock(TryLock1& l1, TryLock2& l2, TryLock3& l3);


These return 0 if all locks were locked. Otherwise they return 1 if
the first lock failed to lock, 2 if the second lock failed to lock,
etc. On return, all TryLocks are either locked (if the return is 0)
or unlocked (if the return is non-zero). It turns out that try_lock
(lock,lock) is very handy in implementing lock(lock,lock,lock) (and
so on). The general pattern is:

Pick a lock to lock first, and lock it (blocking).
Call try_lock on the remaining locks.
If the try_lock fails, record which lock failed, unlock the first
lock, yield, and then repeat with the failed lock first. Otherwise
the try_lock succeeded, so return.

I know the use of the yield is counterintuitive. I tested it on two
different OSes and it increased performance. At least try it and
stress test it (on a multiprocessor platform).
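Putting the pattern together, a hedged sketch of the three-lock lock
(lock_three and the local try_lock2 helper are hypothetical names):

```cpp
#include <mutex>
#include <thread>

// All-or-nothing two-lock try: 0 = both locked, 1/2 = which failed.
template <class L1, class L2>
int try_lock2(L1& l1, L2& l2)
{
    if (!l1.try_lock())
        return 1;
    if (!l2.try_lock())
    {
        l1.unlock();
        return 2;
    }
    return 0;
}

// Hypothetical three-lock lock following the pattern above: block on
// one lock, try the other two, and on failure back off, yield, and
// restart blocking on the lock that just failed.
template <class L1, class L2, class L3>
void lock_three(L1& l1, L2& l2, L3& l3)
{
    int first = 1;  // which lock to block on this round
    while (true)
    {
        int failed;
        switch (first)
        {
        case 1:
            l1.lock();
            failed = try_lock2(l2, l3);
            if (failed == 0) return;
            l1.unlock();
            first = (failed == 1 ? 2 : 3);  // l2 or l3 failed
            break;
        case 2:
            l2.lock();
            failed = try_lock2(l3, l1);
            if (failed == 0) return;
            l2.unlock();
            first = (failed == 1 ? 3 : 1);  // l3 or l1 failed
            break;
        default:
            l3.lock();
            failed = try_lock2(l1, l2);
            if (failed == 0) return;
            l3.unlock();
            first = (failed == 1 ? 1 : 2);  // l1 or l2 failed
            break;
        }
        std::this_thread::yield();  // before blocking on the failed lock
    }
}
```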

A use case for the 2-lock lock algorithm might look like:

struct A
{
    typedef rw_mutex::sharable_lock read_lock;
    typedef rw_mutex::scoped_lock write_lock;

    A& operator=(const A& a)
    {
        if (this != &a)
        {
            write_lock w( mut_, defer_lock);
            read_lock r(a.mut_, defer_lock);
            lock(w, r); // Use generic 2-lock algorithm
            data_ = a.data_; // *this now write-locked, a now read-locked
        }
        return *this;
    }

    int data_;
    mutable rw_mutex mut_;
};
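For comparison, a compilable analogue of this use case in today's
standard library terms. This is an assumption-laden translation:
std::shared_mutex plays the role of rw_mutex, and the standard's
generic std::lock supplies the 2-lock algorithm:

```cpp
#include <mutex>
#include <shared_mutex>

struct A
{
    explicit A(int d = 0) : data_(d) {}

    A& operator=(const A& a)
    {
        if (this != &a)
        {
            std::unique_lock<std::shared_mutex> w(mut_, std::defer_lock);
            std::shared_lock<std::shared_mutex> r(a.mut_, std::defer_lock);
            std::lock(w, r);  // generic deadlock-avoiding 2-lock algorithm
            data_ = a.data_;  // *this write-locked, a read-locked
        }
        return *this;
    }

    int data_;
    mutable std::shared_mutex mut_;  // non-copyable, so A is move/copy-
                                     // constructible only via operator=
};
```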

