
Threads-Devel :

From: Anthony Williams (anthony_at_[hidden])
Date: 2006-03-01 12:13:20


Matt Hurd <matt.hurd_at_[hidden]> writes:

> G'day,
>
> On Thursday 02 March 2006 00:58, Anthony Williams wrote:
>> Hi all,
>>
>> I thought I'd start off some discussion about mutexes, to inaugurate our
>> new mailing list.
>>
>> The mutex is the basic synchronization primitive for shared-memory
>> threading models, such as that of boost.thread, so it is very important
>> that the model, and implementation, is sound. With a working mutex, you can
>> build more expressive synchronization mechanisms, such as the smart shared
>> pointers proposed by Roland.
>
> Doesn't just apply to shared-memory threading models. As the name suggests, it
> can be used just to prevent concurrent access to a resource, which may be
> re-entrancy protection in a single-threaded model. The concept is mutual
> exclusion, as per its name. Might be a physical thing that can only be
> accessed one at a time, like hardware registers of a device or some such.

Agreed --- mutexes can be useful in lots of places, and the concept is very
widespread. However, my point was that it's important for threading.

> It is also important to remember that this may apply to a singly threaded
> restricted area in an application that is multi-threaded. For example, it is
> unfortunate that boost::shared_ptr adopts its concurrency policy based on
> whether or not it is a multi-threaded build. It is the conservative safe
> approach but restricts the usefulness of the pointer. I use
> boost::shared_ptr by default in my apps and replace it with a policy based
> pointer, usually Loki's, where I'm performance critical w.r.t. a lack of
> concurrent access. In practice this has only really been necessary where I
> need reference-counted access where I know there will only be one per thread,
> possibly different threads at different parts of the lifecycle but never
> concurrent access.

Yes, it can be useful to be able to have versions of the same type of thing
that support concurrent access, alongside ones that don't, in the same app.
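
For the sake of discussion, the usual way to get both in one app is to make
the locking a policy, with a do-nothing "null mutex" for the single-threaded
case. A minimal sketch (the names are purely illustrative, not a proposed
boost interface):

#include <pthread.h>

// Illustrative sketch: a real mutex and a do-nothing stand-in with the
// same interface, selected per instantiation.
class posix_mutex
{
    pthread_mutex_t m;
public:
    posix_mutex() { pthread_mutex_init(&m,0); }
    ~posix_mutex() { pthread_mutex_destroy(&m); }
    void lock() { pthread_mutex_lock(&m); }
    void unlock() { pthread_mutex_unlock(&m); }
private:
    posix_mutex(posix_mutex const&);            // non-copyable
    posix_mutex& operator=(posix_mutex const&);
};

struct null_mutex
{
    void lock() {}
    void unlock() {}
};

template<typename Mutex>
class counter
{
    long n;
    Mutex m;
public:
    counter(): n(0) {}
    long increment()
    {
        m.lock();
        long const result=++n;
        m.unlock();
        return result;
    }
};

counter<posix_mutex> would be used where the object is shared between
threads, and counter<null_mutex> where it's known to be touched by only one
thread at a time.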

>> In C++, there are several primary usage scenarios for mutex objects:
>>
>> 1. Standalone, as an object of static storage duration at namespace or
>> block scope.
>>
>> 2. As a class member, for a class object of static storage duration.
>>
>> 3. As a class member, for a class object created on the stack or heap.
>>
>> In each case, the mutex object is used to synchronize access to other
>> objects at the same scope.
>
> Not always. Different scoping is not that uncommon in practice, I think.

[snip examples]

Ok. My main point here was the different scoping of the mutex, and the
requirements that imposes.

> A common structure we should support to prevent deadlocks is the idea of a
> group of mutexes for simultaneous locking. Deadlock is guaranteed not to
> occur if they are always accessed in the same order. Enforcement of ordered
> access will prevent deadlock, a noble goal.

That would be useful. Something akin to the Win32 API WaitForMultipleObjects?
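
Even without OS support, the fixed-order part is easy to sketch: pick an
arbitrary but consistent global order, such as the mutex addresses, and
always lock in that order. Illustrative only, not a proposed interface:

#include <pthread.h>
#include <algorithm>
#include <functional>
#include <vector>
#include <cstddef>

// Illustrative sketch: lock a group of mutexes in a single global order
// (their addresses), so two threads locking overlapping groups can never
// each hold one lock while waiting for the other. Assumes no mutex
// appears twice in the group.
void lock_group(std::vector<pthread_mutex_t*> group)
{
    std::sort(group.begin(),group.end(),std::less<pthread_mutex_t*>());
    for(std::size_t i=0;i<group.size();++i)
        pthread_mutex_lock(group[i]);
}

Unlocking in reverse order is conventional, though not required for
deadlock-freedom.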

>> It is essential that the mutex be correctly initialized by the time the
>> first thread attempts to lock it. For objects of static storage duration,
>> this requires that the mutex can be initialized with a static initializer,
>> and not rely on dynamic initialization, since dynamic initialization
>> exposes the possibility of race conditions.
>
> Static mutexes are really handy and very annoying due to the static syntax
> messiness of their static initialisation requirements. It would be nice if
> there were some syntax sugar that led to their safe concurrent
> initialisation at start-up. The syntax you suggest below of the pthread-like
> BOOST_MUTEX_INITIALIS(Z)ER is good. It is necessary to support this as the
> initialisation can then be at compile time and end up as a zero in the
> executable image for example. The messiness of always needing external
> initialisation of the static to the class and therefore cpp location or an
> anonymous namespace trick makes me wonder if a registration process/macro
> where all the static mutexes get initialised in a one-off singleton object on
> start up might be neat... Dunno.

Singletons aren't ideal, so I'd rather not go there.

>> Constructor calls constitute dynamic initialization, so therefore we need
>> mutex objects which are aggregates, with no members that require
>> constructor calls, so there are no constructor calls. Ideally,
>> zero-initialization would be sufficient, as then the user would not have to
>> specify an initializer, but failing that, a static initializer akin to
>> PTHREAD_MUTEX_INITIALIZER would work. e.g.
>>
>> boost::mutex m=BOOST_MUTEX_INITIALIZER;
>>
>> At the other end of the application lifetime, we have destruction. Since we
>> have no idea how many threads will be running during application shutdown,
>> or what order objects will be destructed, we need to be conservative here,
>> too --- we need the mutex objects to remain usable for as long as possible.
>> On the platforms which I know about, it is acceptable to leak OS resources
>> such as mutexes, as these will be cleaned up by the OS on program exit.
>>
>> Therefore, I suggest that the most straightforward way to deal with
>> destruction order race conditions is for the mutex objects to have no
>> destructors (which is consistent with being POD).
>
> This is unacceptable I feel because of the resource consumption issues on some
> platforms.

What did you have in mind? I'm only suggesting this for mutexes with static
storage duration, which therefore have a lifetime equal to that of the
program. All the POSIX OSes I know of, and Windows, reclaim their mutexes at
program termination, so there's not a lot of point in explicitly freeing them
just beforehand and exposing the consequent race conditions.
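
To be concrete about what I have in mind, on a pthreads platform a
statically-initializable mutex with no constructor and no destructor could
be as simple as this sketch (illustrative only, not the actual boost.thread
code):

#include <pthread.h>

// Sketch only: a POD aggregate. Objects of static storage duration are
// initialized by the macro before any code runs, and are never explicitly
// destroyed; the OS reclaims the resource at process exit.
struct static_mutex
{
    pthread_mutex_t m;

    void lock() { pthread_mutex_lock(&m); }
    void unlock() { pthread_mutex_unlock(&m); }
};

#define STATIC_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER }

static_mutex global_mutex=STATIC_MUTEX_INITIALIZER;

There's no equivalent static initializer for a Win32 CRITICAL_SECTION, which
is part of what makes the Windows side harder.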

>> When I last looked at this, I named the basic_mutex version
>> lightweight_mutex, and had a discussion about it with Roland. The earlier
>> version is still in CVS, but is no longer used for boost::mutex. I believe
>> the current version is marginally simpler, and also
>> boost::detail::lightweight_mutex is a name used by the shared_ptr
>> implementation, which is why I have changed it to basic_mutex. The old
>> implementation has been left in, as I haven't yet changed my implementation
>> of read_write_mutex, which uses it.
>>
>> One of Roland's concerns was that this essentially duplicates
>> CRITICAL_SECTION functionality.
>
> Nothing wrong with that; a critical section is a mutual exclusion guarantee.

Yes, but I think Roland's point was "why not then use a CRITICAL_SECTION?",
which I address below.

>> The next challenge is timed_mutex --- this cannot be written as an
>> extension of the current scheme, since that would expose a race condition.
>> I might have to revert to my lightweight_mutex scheme for that.
>
> Hmmm, the mutex isn't timed. The lock is timed, or rather, the attempt to
> lock is timed. Important difference.

Absolutely, though the current boost interface uses boost::timed_mutex to mean
a mutex that supports timed_lock operations. I think the interface could do
with changing here, if only to give things better names.

My current implementation of basic_mutex doesn't support timed_lock
operations, whereas my old lightweight_mutex does, at the expense of being
slightly more cumbersome overall.
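
To illustrate the distinction: it's the lock attempt that carries the
timeout. On POSIX platforms that provide pthread_mutex_timedlock it looks
roughly like this (sketch only; the function name is just for illustration):

#include <pthread.h>
#include <time.h>

// Sketch: a timed attempt to lock, returning true if the mutex was
// acquired within the given number of seconds.
bool try_lock_for_seconds(pthread_mutex_t& m,int seconds)
{
    timespec deadline;
    clock_gettime(CLOCK_REALTIME,&deadline);
    deadline.tv_sec+=seconds;
    return pthread_mutex_timedlock(&m,&deadline)==0;
}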

>> I also have a "checked" version of basic_mutex (not in CVS), that checks
>> which thread owns the mutex, and throws if the same thread tries to lock it
>> twice, or a thread that doesn't own it tries to unlock it. This requires
>> additional storage for the ID of the locking thread, and also has the
>> overhead of the additional checks. Does anyone think it's worth adding?
>
> Perhaps, especially for debugging. I also wonder if we should require a name
> concept for a mutex, as this would give some orthogonality with shared-memory
> mutexes across process space. Also, a debug version of a mutex could
> have concurrency tests as well as logging to simplify debugging. Debugging
> errors in concurrent access is one of the nastiest and hardest things in
> programming. A debug mutex that can report its name and a deadlock condition
> and the names of the calling threads, for example, would be rather magic and
> very simple to implement. Worth its weight in electrons, methinks.

The only deadlock you can detect with a single mutex is a recursive
lock. Otherwise, deadlocks come from sets of mutexes. I don't fancy writing
code that identifies that thread A is trying to lock mutex X, whilst holding a
lock on mutex Y, and whilst thread B is trying to lock mutex Y, whilst holding
a lock on mutex X, let alone the multiple thread/multiple mutex case. That
said, it would be useful.
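
For what it's worth, the checking semantics I described can be sketched on
POSIX with an error-checking mutex; this is illustrative only, not the
actual checked basic_mutex (which stores the owning thread's ID itself, as
described above):

#include <pthread.h>
#include <stdexcept>
#include <cerrno>

// Illustrative sketch: an error-checking POSIX mutex reports a recursive
// lock and an unlock by a non-owning thread, and we turn those error
// codes into exceptions.
class checked_mutex
{
    pthread_mutex_t m;
public:
    checked_mutex()
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr,PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&m,&attr);
        pthread_mutexattr_destroy(&attr);
    }
    ~checked_mutex()
    {
        pthread_mutex_destroy(&m);
    }
    void lock()
    {
        if(pthread_mutex_lock(&m)==EDEADLK)
            throw std::logic_error("recursive lock on non-recursive mutex");
    }
    void unlock()
    {
        if(pthread_mutex_unlock(&m)==EPERM)
            throw std::logic_error("unlock by a thread that doesn't own the mutex");
    }
private:
    checked_mutex(checked_mutex const&);            // non-copyable
    checked_mutex& operator=(checked_mutex const&);
};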

> Some more food for thought.
>
> A condition variable relies on a mutex. We haven't considered the
> orthogonality of a condition being able to be built around any mutex concept.
> This is important to consider: it would be great to do, but it makes building
> conditions very difficult, as we would have to write one ourselves rather than
> rely on just a posix wrapper on posix platforms.

Yes. It just extends to Posix the difficulties we're going to face on Windows.

> A shared mutex and null mutex need to be considered.

What is a shared mutex?

> I use this pattern a lot and have a synch wrapper that wraps the boost mutex
> implementation to give me this. Such a wrapper also exposes other synch
> stuff, like atomic ops and fencing, so these can be used w.r.t. the
> conceptual need at the time, so to speak.
>
> In my opinion all of STL should support such a concept for the containers and
> this could be an extension that boost offers.

> Going back to the containers for a second there is a natural two level mutex
> concept that we should think about doing out of the box. A container has
> activity at the container level _and_ each object in the container may have
> its own mutex requirement. There is a relationship here as a common pattern
> is for resource pools of mutexes for a container to limit resource
> consumption and provide better scalability than a single mutex for all
> objects. So there is a container access mutex and a mutex per object and the
> mutex may be one and the same for simple requirements.

Interesting thought.
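
The pool idea might look something like this (purely illustrative; the
hashing scheme is made up here, and it assumes a pointer fits in size_t):

#include <pthread.h>
#include <cstddef>

// Purely illustrative: map each object's address onto a small fixed pool
// of mutexes, so per-object locking doesn't cost one OS mutex per object.
// Two objects may share a mutex, so locking more than one object at a
// time needs care (it ties back into the ordering problem above).
class mutex_pool
{
    enum { pool_size=16 };
    pthread_mutex_t pool[pool_size];
public:
    mutex_pool()
    {
        for(int i=0;i<pool_size;++i)
            pthread_mutex_init(&pool[i],0);
    }
    ~mutex_pool()
    {
        for(int i=0;i<pool_size;++i)
            pthread_mutex_destroy(&pool[i]);
    }
    pthread_mutex_t& mutex_for(void const* object)
    {
        std::size_t const address=reinterpret_cast<std::size_t>(object);
        return pool[(address/sizeof(void*))%pool_size];
    }
private:
    mutex_pool(mutex_pool const&);            // non-copyable
    mutex_pool& operator=(mutex_pool const&);
};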

> Deadlock becomes a real possibility with such things as the potential ordering
> of access needs to be constrained. E.g. if you need to lock on two or more
> objects in the collection then the locks should be required to be accessed in
> the same order, or a scheme where, if you are locked and need another object,
> then you may have to give up your lock and then attempt to acquire
> both, otherwise you're inviting a deadlock.

I wrote a double-locker once that did this --- it acquired one lock, then
tried to acquire the second. If that failed, it released the lock it had, and
then tried again, in the other order. It kept switching orders until it had
both mutexes locked.
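
From memory it was something along these lines (reconstructed sketch with
plain pthreads, not the original code):

#include <pthread.h>

// Sketch of that double-locker: lock the first mutex, try the second; if
// the second is busy, release the first and retry in the opposite order,
// repeating until both are held.
void lock_both(pthread_mutex_t& a,pthread_mutex_t& b)
{
    pthread_mutex_t* first=&a;
    pthread_mutex_t* second=&b;
    for(;;)
    {
        pthread_mutex_lock(first);
        if(pthread_mutex_trylock(second)==0)
            return;                          // both locks held
        pthread_mutex_unlock(first);         // back off
        pthread_mutex_t* const temp=first;   // swap the order for the retry
        first=second;
        second=temp;
    }
}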

Anthony

-- 
Anthony Williams
Software Developer
Just Software Solutions Ltd
http://www.justsoftwaresolutions.co.uk
