
Threads-Devel :

From: Matt Hurd (matt.hurd_at_[hidden])
Date: 2006-03-04 22:24:03

> > Doesn't just apply to shared-memory threading models. As the name
> > suggests it can be just to prevent concurrent access to a resource which
> > may be re-entrancy protection in a single threaded model.
> Unless you're now talking about process-level parallelism, I don't see
> how any mutex other than one supporting try-locks could be useful in a
> single-threaded program. Anything else leads to deadlocks or
> pointless waiting. And a try-lock, in a single-threaded program, is
> just a good old bool. Am I missing something?

You might need to execute a transaction against an I/O register, or a set of
them, that must be atomic. A simple daemon may just always block waiting on
input... There is other concurrency in a singly threaded program where
conflict is rare... Concurrency != threads.
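For instance, a re-entrancy guard in a singly threaded program really is just
a good old bool dressed up as a try-lock. A small sketch (reentrancy_guard,
handler and the counters are all made-up names, not anything from Boost):

```cpp
#include <cassert>

// A "mutex" for a single-threaded program: a bool, with try-lock semantics.
// Re-entrant calls fail to acquire and bail out instead of corrupting state.
struct reentrancy_guard {
    bool& busy_;
    bool acquired_;
    explicit reentrancy_guard(bool& busy)
        : busy_(busy), acquired_(!busy)
    { if (acquired_) busy_ = true; }
    ~reentrancy_guard() { if (acquired_) busy_ = false; }
    bool acquired() const { return acquired_; }
};

static bool in_handler = false;
static int reentrant_calls_refused = 0;

void handler() {
    reentrancy_guard g(in_handler);
    if (!g.acquired()) {            // try-lock failed: we are re-entering
        ++reentrant_calls_refused;
        return;
    }
    handler();                      // simulate an accidental re-entrant call
}
```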

> > The concept is mutual exclusion as per its name. Might be a
> > physical thing that can only be accessed one at time, like hardware
> > registers of a device or some such.
> >
> > It is also important to remember that this may apply to a singly
> > threaded restricted area in an application that is multi-threaded.
> What may apply?

That mutual exclusion may be necessary whether or not threads are involved.
Other patterns of concurrent access may require it. Concurrency != threads.

> > For example, it is unfortunate that boost::shared_ptr adopts its
> > concurrency policy based on whether or not it is a multi-threaded
> > build.
> It's a design choice with certain advantages and limitations. Other
> design choices have their own advantages and limitations. A different
> choice would be unfortunate in its own way.

Agreed. It is similar to the argument for favouring shared_ptr over a
policy-based one: simplicity for general use versus configuration and interop
issues. We're lucky to have a simple, safe, well-designed shared_ptr.

> > It is the conservative safe approach but restricts the
> > usefulness of the pointer. I use boost::shared_ptr by default in my
> > apps and replace it with a policy based pointer, usually Loki's,
> > where I'm performance critical w.r.t. a lack of concurrent access.
> That's appropriate. I would consider boost::intrusive_ptr, or, if
> possible, unique_ptr, where performance was critical, though. I
> don't see why it makes anything about shared_ptr "unfortunate."

True. "Unfortunate" is a poor choice of words on my part. The reference
counting on a shared_ptr is very slow (relative to an unsafe inc/dec) in a
multithreaded env due to the atomic ops or mutex-protected reference count.
intrusive_ptr would be ok, as you are rolling your own
intrusive_ptr_add_ref/release mechanism and storage, but all you really want
is the shared_ptr in non-threaded mode. shared_ptr is exactly what you need,
but it is an unfortunate configuration setting away...
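To illustrate the cheap end of the trade-off, here is a sketch of
single-threaded intrusive counting (not Boost's actual implementation; the
bool return from release is my own addition for illustration):

```cpp
#include <cassert>

// An intrusive_ptr-style counted base where intrusive_ptr_add_ref/release
// work on a plain int -- the cheap, non-atomic counting you would want from
// a non-threaded smart pointer build.
struct counted {
    int refs_;
    counted() : refs_(0) {}
};

inline void intrusive_ptr_add_ref(counted* p) { ++p->refs_; }  // no atomic op

// Returns true when the object was deleted (illustrative; Boost's returns void).
inline bool intrusive_ptr_release(counted* p) {
    if (--p->refs_ == 0) { delete p; return true; }
    return false;
}
```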

90% of the time I use a shared_ptr. 5% of the time I just use a raw pointer
but probably shouldn't, and 5% of the time I use a Loki smart pointer.
(Statistics made up to reflect how I think I use ptrs ;-) ).

> >> In C++, there are several primary usage scenarios for mutex objects:
> >>
> >> 1. Standalone, as an object of static storage duration at namespace or
> >> block scope.
> >>
> >> 2. As a class member, for a class object of static storage duration.
> >>
> >> 3. As a class member, for a class object created on the stack or heap.
> >>
> >> In each case, the mutex object is used to synchronize access to other
> >> objects at the same scope.
> >
> > Not always. Different scoping is not that uncommon in practice I think.
> >
> > On some platforms mutexes are not necessarily cheap resources and
> > their number has a finite boundary. You may wish for object level
> > locking for a big collection but you can't swing it. Older versions
> > of windows are like this for example where you are restricted by
> > "system resources" of 64 to 128K.
> >
> > Here a common structure might be a mutex pool, for example a vector
> > of mutexes where the access is hashed on object address, and your
> > object is initialised with a reference to a mutex in the pool that
> > it shares with other objects.
> >
> > A common structure we should support to prevent deadlocks is the
> > idea of a group of mutexes for simultaneous locking. Deadlock is
> > guaranteed not to occur if they are always accessed in the same
> > order. Enforcement of ordered access will prevent deadlock, a noble
> > goal.
> Sure, although it isn't always the best way. I recently put together
> an example of deadlock prevention using ordered access. I was
> surprised at the complexity of the resulting locking strategy required
> (see enclosed). A loop around a pair of locking attempts with an
> intervening unlocked section was required in order to make it work; it
> reminded me of some of those lock-free algorithms. It was suggested to
> me that it would have been cleaner to use timed locks (actually
> try-locks sound better to me now, but I'm not a threading expert),
> since I needed a loop anyway; then you can lock in any order and just
> come back and try again if a deadlock would have occurred. It would
> have saved the two-phase locking dance I do in the code.

I had a look at the code. The use case wasn't clear to me. What is the
locking trying to protect and what is decouple for?

A basic safe partnering might look like this:

void switch_and_report(collab* new_partner)
{
    // gates must be locked
    synco() << "partnered " << this << " with " << new_partner << std::endl;
    this->partner = new_partner;
    new_partner->partner = this;
}

void couple(collab* new_partner)
{
    if (new_partner == this) {
        boost::mutex::scoped_lock lk(gate);
        switch_and_report(new_partner);
    } else {
        lockpair<mutex> lk(gate, new_partner->gate);
        switch_and_report(new_partner);
    }
}

But I'm clearly missing the point of your algorithm. If you explain the use
case I'll have a crack at it with the real use in mind.

If you want the lock to outlive the collaboration acquisition then your
locking has to live over the life of the acquisition, unless it is up to the
acquisition to free itself after doing work. Perhaps that is what decouple
is about?
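By the way, lockpair<> above is my own helper, not a Boost type. A minimal
non-template sketch of the idea over raw pthreads: lock by ascending address,
so any two threads coupling the same pair always take the gates in the same
order and cannot deadlock:

```cpp
#include <pthread.h>
#include <cassert>

// Scoped acquisition of two mutexes in a globally consistent order
// (lowest address first), released in reverse order on destruction.
struct lockpair {
    pthread_mutex_t* first_;
    pthread_mutex_t* second_;
    lockpair(pthread_mutex_t& a, pthread_mutex_t& b) {
        first_  = (&a < &b) ? &a : &b;   // always lock lower address first
        second_ = (&a < &b) ? &b : &a;
        pthread_mutex_lock(first_);
        pthread_mutex_lock(second_);
    }
    ~lockpair() {
        pthread_mutex_unlock(second_);
        pthread_mutex_unlock(first_);
    }
};
```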

> >> It is essential that the mutex be correctly initialized by the time the
> >> first thread attempts to lock it. For objects of static storage
> >> duration, this requires that the mutex can be initialized with a static
> >> initializer, and not rely on dynamic initialization, since dynamic
> >> initialization exposes the possibility of race conditions.
> That's what once routines are for, no?
> For concurrently-accessed objects of static storage duration, you need
> a once routine anyway, so you may as well initialize the object *and*
> its mutex at the same time (?)

> > Static mutexes are really handy and very annoying due to the static
> > syntax messiness of their static initialisation requirements. It would
> be nice if there were some syntax sugar that led to their safe
> > concurrent
> > initialisation at start-up. The syntax you suggest below of the pthread
> > like BOOST_MUTEX_INITIALIS(Z)ER is good. It is necessary to support this
> > as the initialisation can then be at compile time and end up as a zero in
> > the executable image for example.
> I have serious doubts about whether this can work and still have the
> mutex be useful and safe in other contexts. You'd have to do that
> sort of initialization always, even when the mutex is used as a class
> member, and if you forgot it, your program would be broken. I hate
> some of the restrictions the current Boost threads interface imposes
> in the name of safety, but I strongly agree with the basic idea that
> C++ abstractions should be leveraged to avoid common errors like this
> one. If you absolutely *have* to have static initialization (I don't
> buy that idea yet), I think there should be a separate
> user_initialized_mutex type for that purpose.

Static initialisation is much more common than I'd like. You often end up
with _global_ protection of external APIs to prevent concurrent access.
Perhaps it is not a boost.thread problem, though, as it is really just a
singleton issue.
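With raw pthreads the pattern looks like this (call_legacy_api is a stand-in
name for whatever non-thread-safe API you are wrapping):

```cpp
#include <pthread.h>
#include <cassert>

// A namespace-scope mutex guarding a non-thread-safe external API.
// PTHREAD_MUTEX_INITIALIZER is a static initializer, so the mutex is
// usable before main() with no dynamic-initialisation race.
static pthread_mutex_t api_gate = PTHREAD_MUTEX_INITIALIZER;

void call_legacy_api()
{
    pthread_mutex_lock(&api_gate);
    // ... call the non-thread-safe API here ...
    pthread_mutex_unlock(&api_gate);
}
```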

> > The messiness of always needing external initialisation of the
> > static to the class and therefore cpp location or an anonymous
> > namespace trick makes me wonder if a registration process/macro
> > where all the static mutexes get initialised in a one-off singleton
> > object on start up might be neat... Dunno.
> Interesting, but I'm not sure it's necessary. You need once routines
> for those static objects anyway.
Maybe a standard thread-safe singleton is appropriate, as it solves the
generic problem rather than having a special mutex.
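A sketch of that with pthread_once (instance and the payload value are made
up for illustration):

```cpp
#include <pthread.h>
#include <cassert>

// pthread_once runs the initialiser exactly once, even under concurrent
// first use -- the generic solution to static-initialisation races.
static pthread_once_t once = PTHREAD_ONCE_INIT;
static int* instance_ptr = 0;

static void init_instance()
{
    static int value = 42;   // the singleton payload (illustrative)
    instance_ptr = &value;
}

int& instance()
{
    pthread_once(&once, init_instance);
    return *instance_ptr;
}
```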

> >> Constructor calls constitute dynamic initialization, so therefore we
> >> need mutex objects which are aggregates, with no members that require
> >> constructor calls, so there are no constructor calls. Ideally,
> >> zero-initialization would be sufficient, as then the user would not have
> >> to specify an initializer, but failing that, a static initializer akin
> >> to PTHREAD_MUTEX_INITIALIZER would work. e.g.
> >>
> >> boost::mutex m=BOOST_MUTEX_INITIALIZER;
> >>
> >> At the other end of the application lifetime, we have destruction. Since
> >> we have no idea how many threads will be running during application
> >> shutdown, or what order objects will be destructed, we need to be
> >> conservative here, too --- we need the mutex objects to remain usable
> >> for as long as possible. On the platforms which I know about, it is
> >> acceptable to leak OS resources such as mutexes, as these will be
> >> cleaned up by the OS on program exit.
> >>
> >> Therefore, I suggest that the most straightforward way to deal with
> >> destruction order race conditions is for the mutex objects to have no
> >> destructors (which is consistent with being POD).
> >
> > This is unacceptable I feel because of the resource consumption issues on
> > some platforms.
> Mutexes being a system resource, you mean? I agree. Finally, this is
> C++. If I want to program against the bare metal using no
> abstractions, type safety, constructors, or destructors, I'll be doing
> it in C. *If* it can be shown the cost of the usual C++ abstractions
> will be too high for a significant part of the C++ community, it might
> be worth developing a portable lightweight-but-dangerous sublibrary on
> which the abstractions can be built, but you need to be *very* careful
> about the knee-jerk "if there's a cost, they can't pay it" attitude
> towards the embedded community. There are a broad range of embedded
> applications, and most of those people can afford to pay a little for
> safety. This library should not be skewed to serve requirements on
> the margins at the expense of more mainstream requirements, like
> safety.

Not just the embedded community; win95 and friends have restrictions on
"system resources" which prohibit large collections of objects all with
their own mutex. That is bad enough, but if you had a time dimension where
the resources aren't freed till program exit then I think you are buying a
lot of trouble.
> >> I therefore suggest that we supply two mutex types, one of which is POD,
> >> and requires a static initializer, and one of which has a default
> >> constructor and a destructor. Of course, if we equip the first one with
> >> explicit initialize() and destroy() functions, then the second can just
> >> be a wrapper for the first that calls initialize() in the constructor,
> >> and destroy() in the destructor.
> >>
> >> I suggest that these two different variants are named
> >> boost::static_mutex and boost::mutex.
> Okay, two different types. But why is static_mutex needed? I still
> haven't seen a good argument, given that you need once routines to
> initialize the shared data anyway.


> >> The next challenge is timed_mutex --- this cannot be written as an
> >> extension of the current scheme, since that would expose a race
> >> condition. I might have to revert to my lightweight_mutex scheme for
> >> that.
> >
> > Hmmm, the mutex isn't timed. The lock is timed, or rather, the attempt
> > to lock is timed. Important difference.
> In the boost design, not all mutexes have the capability of a timed
> lock, and a mutex that does have that capability may consume more
> resources. By having different mutex types, you prevent people from
> attempting to use a timed lock on a mutex that was only initialized to
> support blocking locks. I think that's the right design unless the
> assumption that not all mutexes can support timed locking doesn't
> reflect the reality of any real implementations.

Spinlocks typically don't have a timed locking mechanism as you shouldn't be
using a spin lock if it might time out.

I can imagine a mutex requiring some housekeeping or something special to
support a timed lock, but I don't think this happens on windows or posix
AFAIK. POSIX.1d introduced int pthread_mutex_timedlock(pthread_mutex_t
*mutex, const struct timespec *abstime). There is a similar beast for rw
stuff on posix. Even a windows mutex supports a time out via
WaitForSingleObject.

On these main systems (posix and win), timed locking is relevant to the lock
acquisition and independent of the mutex. So whilst I can imagine a timed
lock interface being dependent on a particular mutex, I don't know of one;
but my experience is quite limited.
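For example, with the POSIX.1d call the timeout belongs entirely to the lock
attempt; any ordinary pthread mutex will do (lock_with_timeout is my own
wrapper name):

```cpp
#include <pthread.h>
#include <time.h>
#include <cassert>

// Timed acquisition of a plain pthread mutex: build an absolute deadline
// and hand it to pthread_mutex_timedlock. Returns 0 on success, or an
// error such as ETIMEDOUT if the deadline passes first.
int lock_with_timeout(pthread_mutex_t* m, int seconds)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);  // now, as an absolute time
    deadline.tv_sec += seconds;
    return pthread_mutex_timedlock(m, &deadline);
}
```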

> >> I also have a "checked" version of basic_mutex (not in CVS), that checks
> >> which thread owns the mutex, and throws if the same thread tries to lock
> >> it twice, or a thread that doesn't own it tries to unlock it.
> Uh, throws? Why? Isn't that a precondition violation? If so,
> BOOST_ASSERT would be much more appropriate.
> >> This requires additional storage for the ID of the locking thread,
> >> and also has the overhead of the additional checks. Does anyone
> >> think it's worth adding?
> >
> > Perhaps, especially for debugging.
> For debugging, it *definitely* should not be a throw. It should be an
> assertion.

It would be a very useful assertion.

> > I also wonder if we should require a name concept for a mutex as
> > this would derive some orthogonality across shared memory mutexes
> > across process space.
> I don't think a _requirement_ is appropriate. People who aren't
> targeting shared memory will be extremely frustrated with the need to
> provide unique names for every mutex.

Would be nice if there were an automagic way to generate one at least in a
debug mode so you'd be able to diagnose issues. For example, I can imagine a
debug lock that would assert after a long enough timeout with some debug info
on what it was waiting for and who had it.
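A sketch of such a debug lock (debug_lock and its reporting are hypothetical;
it just rides on pthread_mutex_timedlock):

```cpp
#include <pthread.h>
#include <time.h>
#include <cstdio>
#include <cassert>

// A scoped lock that, instead of waiting forever, asserts after a generous
// timeout and reports the mutex's name -- turning a silent deadlock into a
// diagnosable failure in debug builds.
struct debug_lock {
    pthread_mutex_t* m_;
    debug_lock(pthread_mutex_t& m, const char* name, int timeout_secs = 30)
        : m_(&m)
    {
        timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += timeout_secs;
        if (pthread_mutex_timedlock(m_, &deadline) != 0) {
            std::fprintf(stderr, "probable deadlock waiting on %s\n", name);
            assert(!"debug_lock timed out");
        }
    }
    ~debug_lock() { pthread_mutex_unlock(m_); }
};
```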

> > Also a debug version of a mutex could have concurrent tests as well
> > as logging to simplify debugging.
> I don't know what that means.

Debug mutexes and locks could have extra tests and perhaps even optionally
traced output to assist in debugging.

> > Debugging errors in concurrent access is one of the nastiest and
> > hardest things in programming. A debug mutex that can report its
> > name and a deadlock condition and the names of the calling threads,
> > for example, would be rather magic and very simple to implement.
> > Worth its weight in electrons me thinks.
> Sounds useful.
> > Some more food for thought.
> >
> > A condition variable relies on a mutex. We haven't considered the
> > orthogonality of a condition being able to be built around any mutex
> > concept. This is important to consider as it would be great to do but
> > makes building conditions very difficult as we have to write one rather
> > than rely on just a posix wrapper on posix platforms.
> >
> > A shared mutex and null mutex need to be considered.
> >
> > Boost misses this requirement completely and it is devastating.
> Boost has a read/write mutex, notwithstanding that it's currently
> broken :)

But the interfaces are not generic and thus not substitutable. Easy to fix,
if a little weird. I did it in my wrapper mainly by providing a constructor
that requires a lock_priority, and locks that always require a lock_status
type. For the non-shared ones these default to exclusive.


The basic enums (values sketched; shared matters only for the shareable
variants):

struct lock_priority
{
    enum type { exclusive, shared };
};

struct lock_status
{
    enum type { exclusive, shared };
};

A minor lock adaption:

template<class Mutex>
struct simple_lock
{
    simple_lock( Mutex& m, lock_status::type ls = lock_status::exclusive )
        : lock_(m) {}

    bool try_lock() { return lock_.try_lock(); }

    bool timed_lock( const boost::xtime& xt )
    { return lock_.timed_lock(xt); }

private:
    simple_lock( const simple_lock& );            // noncopyable
    simple_lock& operator=( const simple_lock& );

    typename Mutex::scoped_lock lock_;
};

Then a small mutex adaption to make it work:

struct simple
{
    struct adapted_mutex : public boost::mutex
    {
        adapted_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_mutex() {}
    };

    struct adapted_try_mutex : public boost::try_mutex
    {
        adapted_try_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_try_mutex() {}
    };

    struct adapted_timed_mutex : public boost::timed_mutex
    {
        adapted_timed_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~adapted_timed_mutex() {}
    };

    typedef adapted_mutex mutex;
    typedef adapted_try_mutex try_mutex;
    typedef adapted_timed_mutex timed_mutex;

    typedef simple_lock<mutex> lock;
    typedef simple_lock<try_mutex> try_lock;
    typedef simple_lock<timed_mutex> timed_lock;

    template < typename T >
    struct atomic_op : public zomojo::synch::atomic_op_synch< T > {};

    template < typename T >
    struct atomic_holder : public synch::atomic_holder< T >
    {
        atomic_holder( T& t ) : synch::atomic_holder< T >(t) {}
    };
};


Similar generic classes are then defined for no_synch, recursive and
shareable:

struct no_synch
{
    struct nil_mutex
    {
        nil_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
        ~nil_mutex() {}
    };
    // ... typedefs as for simple ...
};

struct recursive
{
    struct adapted_mutex : public boost::recursive_mutex
    {
        adapted_mutex( lock_priority::type lp = lock_priority::exclusive ) {}
    };
    // ... typedefs as for simple ...
};

// struct shareable
// {
// protected:
//     struct adapted_mutex : public boost::read_write_mutex
//     {
//         adapted_mutex( lock_priority::type lp = lock_priority::exclusive )
//             : read_write_mutex( convert( lp ) ) {}
//         ~adapted_mutex() {}
//     };
// };

> > Libraries like ACE have supported this for many years. We can go a
> > step further and support a substitutable conceptual hierarchy like
> > this:
> >
> > shared_mutex
> > ^
> >
> > recursive_mutex
> > ^
> >
> > simple_mutex
> > ^
> >
> > null_mutex
> Concept hierarchies (refinement) imply substitutability. You don't
> need to say it. Normally they're written in the opposite order,
> though ;-)
> Are you sure that a shared_mutex can always support recursion without
> penalty?

As I said in a later mail, I think it was a mistake to include recursiveness
in this conceptual mix. It is separate, as the substitutability breaks.

> > This is a conceptual hierarchy which gives the Liskov substitutability and
> > enables generic code to be written to shared mutex with the knowledge
> > that the don't pay for what you don't use concept is alive and good
> It's not alive and good unless the assumptions I mentioned before
> about the cost of lockability are false, or you're willing to
> compromise safety by allowing code to create a timed lock on a mutex
> only initialized for plain locking.

Timed locking is ok. In my current wrapper I still have separate timed_mutex
and try_mutex types. The argument about the efficacy of that is separate.

You create a simple::mutex or simple::timed_mutex.
Likewise you have no_synch::mutex and shareable::mutex.

You then have the mechanism to write:

template< class T, class S >
struct whatever
{
    typename S::mutex guard_;
    // or, where timed locking is needed:
    // typename S::timed_mutex guard_;
};

When you lock, you use an S::lock or S::timed_lock.

The locking defaults to exclusive locking. If you request a shared lock on a
non-shareable mutex you get an exclusive lock.

If you use a no_synch::lock or no_synch::mutex then you get no-ops by the
magic of compiler optimization.
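The no-op trick in isolation (nil_mutex here is a minimal stand-alone
version of the earlier snippet, and counter is just a toy class):

```cpp
#include <cassert>

// nil_mutex and its scoped_lock are empty types, so the compiler inlines
// the "locking" away entirely -- a single-threaded instantiation pays
// nothing for its synchronisation hooks.
struct nil_mutex {
    struct scoped_lock {
        explicit scoped_lock(nil_mutex&) {}  // no-op acquire/release
    };
};

template<class Mutex>
struct counter {
    Mutex guard_;
    int n_;
    counter() : n_(0) {}
    int next() {
        typename Mutex::scoped_lock lk(guard_);  // optimised away for nil_mutex
        return ++n_;
    }
};
```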

> > as you can throw in a null mutex and have a normal class/object.
> >
> > I use this pattern a lot and have a synch wrapper that wraps the
> > boost mutex implementation to give me this. Such a wrapper also
> > exposes other synch stuff, like atomic ops and fencing, so these can
> > be used w.r.t. to the conceptual need at the time, so to speak.
> >
> > In my opinion all of STL should support such a concept for the
> > containers and this could be an extension that boost offers.
> That's very vague, sorry.

See the above code snippets. Like an allocator, you could parameterise a
collection's (or any other class's) behaviour with such a generic template,
knowing that you'd be enabling it for shared, exclusive or nil concurrency.

I also throw atomic ops into my wrappers to allow those to be parameterised in
a similar way. Fencing could be exposed similarly.

Having a standard way to introduce such parameterisation to STL collections
would be quite useful I think. It is quite a small task to achieve this with
a little rejig of the locking and mutex concepts.
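A toy version of what I mean (synched_vector and this minimal no_synch are
illustrative only, not proposed interfaces):

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// A minimal no_synch policy: an empty mutex whose scoped_lock is a no-op.
struct no_synch {
    struct mutex {
        struct scoped_lock { explicit scoped_lock(mutex&) {} };
    };
};

// A container parameterised on a synch policy S, allocator-style: swap in
// no_synch for single-threaded use, or a real policy for shared access.
template<class T, class S>
class synched_vector {
    std::vector<T> data_;
    mutable typename S::mutex guard_;
public:
    void push_back(const T& t) {
        typename S::mutex::scoped_lock lk(guard_);
        data_.push_back(t);
    }
    std::size_t size() const {
        typename S::mutex::scoped_lock lk(guard_);
        return data_.size();
    }
};
```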

> > I think the statically initialised mutex concept is orthogonal to
> > this but I don't have a clear picture in my head.
> >
> > Going back to the containers for a second there is a natural two
> > level mutex concept that we should think about doing out of the box.
> > A container has activity is at a container level _and_ each object
> > in the container may have its own mutex requirement. There is a
> > relationship here as a common pattern is for resource pools of
> > mutexes for a container to limit resource consumption and provide
> > better scalability than a single mutex for all objects. So there is
> > a container access mutex and a mutex per object and the mutex may be
> > one and the same for simple requirements.
> >
> > Deadlock becomes a real possibility with such things as the
> > potential ordering of access needs to be constrained. E.g. if you
> > need to lock on two or more objects in the collection then the locks
> > should be required to be accessed in the same order or a scheme
> > where if you are locked and need another object then you may have to
> > give up your lock and then attempt to acquire both,
> > otherwise you're inviting a deadlock.
> >
> > Containers of containers make this even more interesting :-)
> I don't think this has anything in particular to do with containers.
> I see no reason that locks needing to be acquired in order have to be
> related by being grouped into the same data structure, as my enclosed
> example illustrates. And conversely, I see no reason that objects in
> the same data structure are more likely to be locked together in the
> way you describe.

I'm not sure I follow w.r.t. the enclosed example, as I mentioned above.
I'm not clever enough to have a clear picture in my head of what the example
is trying to achieve. If you have time to explain a little I'll give it some
more thought.



Threads-Devel list run by bdawes at, david.abrahams at, gregod at, cpdaniel at, john at