
From: Tyson Whitehead (twhitehe_at_[hidden])
Date: 2004-07-05 14:43:56

Aaron W. LaFramboise wrote:
> I am not sure what the status of these are. Are other components
> supposed to be using these?

I came upon them when I was checking out the smart pointers implementation.
Grepping the source seems to indicate 'lightweight_mutex.hpp' is only used by
the smart pointers (only included by 'quick_allocator.hpp' and
'shared_count.hpp') under multithreaded conditions.

> In addition to what you mentioned, they're broken in other ways:
> If a low priority thread is preempted by a high priority thread
> immediately before __atomic_add, and the high priority thread is
> attempting to grab the spinlock, deadlock is achieved, despite the
> sched_yield().

I missed that one. You're right though, and not only that, but the
sched_yield manpage states (I'm assuming this applies to threads -- threads
in Linux are actually processes -- right?):

"Note: If the current process is the only process in the highest priority list
at that time, this process will continue to run after a call to

The implementer seemed to be aware there could be priority problems though.
The relevant comments in 'lightweight_mutex.hpp' are:

// * Used by the smart pointer library
// * Performance oriented
// * Header-only implementation
// * Small memory footprint
// * Not a general purpose mutex, use boost::mutex, CRITICAL_SECTION or
// pthread_mutex instead.
// * Never spin in a tight lock/do-something/unlock loop, since
// lightweight_mutex does not guarantee fairness.
// * Never keep a lightweight_mutex locked for long periods.
// The current implementation can use a pthread_mutex, a CRITICAL_SECTION,
// or a platform-specific spinlock.
// You can force a particular implementation by defining
// If neither macro has been defined, the default is to use a spinlock on
// Win32, and a pthread_mutex otherwise.
// Note that a spinlock is not a general synchronization primitive. In
// particular, it is not guaranteed to be a memory barrier, and it is
// possible to "livelock" if a lower-priority thread has acquired the
// spinlock but a higher-priority thread is spinning trying to acquire the
// same lock.
// For these reasons, spinlocks have been disabled by default except on
// Windows, where a spinlock can be several orders of magnitude faster than a

The current Win32 implementation doesn't have the counter problems, and
claims to be able to yield to lower-priority threads, so it should be okay.
The relevant lock routine is (m_.l_ is initialized to 0):

while( InterlockedExchange(&m_.l_, 1) ){
  // Note: changed to Sleep(1) from Sleep(0).
  // According to MSDN, Sleep(0) will never yield
  // to a lower-priority thread, whereas Sleep(1)
  // will. Performance seems not to be affected.
  Sleep(1);
}

The unlock routine is:
InterlockedExchange(&m_.l_, 0);

-T

--
 Tyson Whitehead (twhitehe_at_[hidden])
 Computer Engineer, Graduate Student
 Dept. of Applied Mathematics, University of Western Ontario
 London, Ontario, Canada
 GnuPG Key ID# 0x8A2AB5D8