From: Matt Hurd (matt.hurd_at_[hidden])
Date: 2004-07-16 00:21:26


On Thu, 15 Jul 2004 22:30:37 -0400, Howard Hinnant <hinnant_at_[hidden]> wrote:
> On Jul 15, 2004, at 7:42 PM, Matt Hurd wrote:
>
> > I'm not sure I see the case for deferring anything; a stack-based
> > scoped lock, as in:
> >
> > void folly()
> > {
> >     {
> >         scoped_lock l(m);
> >         do_stuff();
> >     }
> >
> >     {
> >         scoped_lock l(m);
> >         do_more_stuff();
> >     }
> > }
> >
> > is still preferable to me to allowing explicit locking of a lock.
>
> I have a couple of places where I need a "deferred" lock: Namely I
> have 2 generic templated lock classes that take other locks as template
> parameters, and store references to those locks:
>
> template <class Lock1, class Lock2> class transfer_lock;
> template <class Lock1, class Lock2> class lock_both;
>
> The constructor for transfer_lock takes two locks, referencing the same
> mutex, the first of which must be unlocked, and the second locked.

In principle, transferring the true from one bool to another sounds
like a nice thing to do; however:

I'm not sure it is a good idea. For a single thread this could
probably be handled with an alternative design or a refactor.
Transferring locks between threads might be desirable, but it could
introduce a bevy of complexities elsewhere, since a mutex lock is
usually assumed to be associated with a particular thread.

For example, we have talked about a vector of mutexes and about
locking a collection where a mutex is potentially associated with
many objects. A transfer lock would complicate this, because models
that allow recursive locking by the same thread (to avoid deadlock)
need to map threads to objects.

You might be right, though; I'm a slave to efficiency. Do you have a
specific use case in mind?
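
For what it's worth, here is how I'm picturing it: move the ownership
flag rather than unlock and relock the mutex. This is only a guess at
your semantics:

    // Purely a sketch of how I'm reading transfer_lock; claim()/disown() are
    // invented names for "flip the owns-it bool without touching the mutex"
    // and presumably don't match your actual interface.
    #include <cassert>

    template <class Lock1, class Lock2>
    class transfer_lock
    {
    public:
        // lk1 must be unlocked and lk2 locked, both on the same mutex;
        // ownership of the already-held mutex moves from lk2 to lk1.
        transfer_lock(Lock1& lk1, Lock2& lk2) : lk1_(lk1), lk2_(lk2)
        {
            assert(!lk1_.locked() && lk2_.locked());
            lk2_.disown();   // clear lk2's flag, leave the mutex held
            lk1_.claim();    // set lk1's flag, the mutex is already held
        }

        ~transfer_lock()     // hand ownership back on the way out
        {
            lk1_.disown();
            lk2_.claim();
        }

    private:
        Lock1& lk1_;
        Lock2& lk2_;

        transfer_lock(transfer_lock const&);            // non-copyable
        transfer_lock& operator=(transfer_lock const&);
    };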

> The
> constructor for lock_both takes two locks, referencing different
> mutexes, both of which must be unlocked.
>
> Example use:
>
> mutex m1, m2;
> scoped_lock lock1(m1, deferred); // or whatever syntax for not locking
> scoped_lock lock2(m2, deferred);
> lock_both<scoped_lock, scoped_lock> lock(lock1, lock2);
> // m1 and m2 atomically locked here, without fear of deadlock
>
> Without the ability to construct but defer locking of lock1 and lock2,
> the construction of lock_both becomes inefficient and awkward.

My immediate reaction was to think this was a bad idea, but a moment
later I'm not so sure.

My initial thought, and I think it is still valid, was that locking
many things atomically doesn't make sense: you have to lock them one
after another, and always in the same order, to prevent deadlock.
Perhaps you have a scheme in mind where you try-lock them all,
release everything if you can't get them all, and then try again,
effectively a group spin lock? I think I'm missing part of your
picture; let me know what you're thinking.
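
Something like this is the scheme I'm imagining, written against
generic mutexes with lock()/try_lock()/unlock() members (names are
placeholders, not the current Boost.Threads interface):

    // Generic two-mutex "try, back off, retry" locking.
    template <class Mutex1, class Mutex2>
    void lock_both_no_deadlock(Mutex1& m1, Mutex2& m2)
    {
        for (;;)
        {
            m1.lock();              // block on the first mutex only
            if (m2.try_lock())
                return;             // got both, done
            m1.unlock();            // couldn't get both: give it back, retry

            m2.lock();              // block on the mutex that was busy last
            if (m1.try_lock())      // time, so a retry is more likely to
                return;             // succeed
            m2.unlock();
        }
    }

The key property is that a thread never blocks while already holding
one of the two, which is what rules out deadlock.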

If I forget about the atomicity aspect then locking groups of things
does make sense.

Now, consider the locking of many mutexes (mutexi? ;-) ).

I'm not sure that passing deferred locks to the collection, in your
case a two-lock collection, is the best approach.

Something like
 
    scoped_multi_lock lk( mutex_vector );
or
    scoped_multi_lock lk( m1, m2 );
or
    scoped_multi_lock lk( some_mpl_mutex_vector_thing );

might work just as well.

Specification of ordering for locking the mutexes is another matter.
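
One simple policy is to lock in ascending address order, so that any
two threads locking overlapping sets agree on the order and can't
deadlock. A rough sketch against a vector of mutex pointers, with the
mutex interface again a placeholder:

    // Sketch of a scoped_multi_lock that fixes the ordering by always
    // locking in ascending address order.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    template <class Mutex>
    class scoped_multi_lock
    {
    public:
        explicit scoped_multi_lock(std::vector<Mutex*> const& mutexes)
            : mutexes_(mutexes)
        {
            // Sort by address so every thread agrees on the locking order,
            // and drop duplicates so no mutex is locked twice.
            std::sort(mutexes_.begin(), mutexes_.end());
            mutexes_.erase(std::unique(mutexes_.begin(), mutexes_.end()),
                           mutexes_.end());
            for (std::size_t i = 0; i < mutexes_.size(); ++i)
                mutexes_[i]->lock();
        }

        ~scoped_multi_lock()
        {
            for (std::size_t i = mutexes_.size(); i > 0; --i)
                mutexes_[i - 1]->unlock();   // unlock in reverse order
        }

    private:
        std::vector<Mutex*> mutexes_;

        scoped_multi_lock(scoped_multi_lock const&);            // non-copyable
        scoped_multi_lock& operator=(scoped_multi_lock const&);
    };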

Very interested in your thoughts on this.

> > I worry about paying for the overhead, perhaps in space, for a scoped
> > timed lock that records its timing versus a blocking lock that needs no
> > such information, but I'm not sure that it is a big deal, as the
> > difference is negligible and these will typically be stack based.
> > Implementation will reveal all, I guess.
>
> There is no overhead in the lock types, at least with the current boost
> design. All locks I've coded contain a reference to the mutex, and a
> bool indicating locked status (even the read_lock, write_lock and
> upgradable_read_lock). But there is overhead in the mutex types, and
> it can vary significantly among the different types: (basic, try,
> timed, recursive, read/write). Sometimes the overhead is on the stack,
> sometimes not (depends on the OS and on the mutex implementation).
>
> Here is a sample survey of sizeof() on one of my platforms I need to
> support:
>
> sizeof(mutex) = 44
> sizeof(try_mutex) = 44
> sizeof(timed_mutex) = 76
> sizeof(recursive_mutex) = 44
> sizeof(recursive_try_mutex) = 44
> sizeof(recursive_timed_mutex) = 80
> sizeof(rw_mutex) = 108
>
> On another platform it looks like:
>
> sizeof(mutex) = 44
> sizeof(try_mutex) = 44
> sizeof(timed_mutex) = 76
> sizeof(recursive_mutex) = 80
> sizeof(recursive_try_mutex) = 80
> sizeof(recursive_timed_mutex) = 80
> sizeof(rw_mutex) = 108
>
> On a third platform I support there are differences between mutex and
> try_mutex, but the difference is not well illustrated by sizeof()
> because the "mutex type" is just a 4 byte handle in some cases (but not
> others).
>
> sizeof(mutex) = 24
> sizeof(try_mutex) = 4
> sizeof(timed_mutex) = 4
> sizeof(recursive_mutex) = 24
> sizeof(recursive_try_mutex) = 4
> sizeof(recursive_timed_mutex) = 4
> sizeof(rw_mutex) = not implemented
>
> And of course all of this is subject to change. But if I'm forced to
> include all capability into one mutex type, then there is definitely
> going to be a price paid on some platforms by those who only want a
> mutex (which is also by far the most often needed type).

Thanks for the information. I take your point that a lock is pretty
much the same size regardless of the mutex. The mutex size variation
does not concern me too much, but I can see it will concern some,
e.g. a resource-poor embedded system, or a design where you go crazy
and have a million mutexes, which is something I'm thinking of as a
space/time trade-off.
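
Just to check I've read you right, the lock layout you describe is
essentially this (a sketch, member names mine):

    // A lock is a reference plus a bool, whatever the underlying mutex costs.
    template <class Mutex>
    class scoped_lock_sketch
    {
    public:
        explicit scoped_lock_sketch(Mutex& m) : m_(m), locked_(false)
        {
            m_.lock();
            locked_ = true;
        }

        ~scoped_lock_sketch()
        {
            if (locked_)
                m_.unlock();
        }

        bool locked() const { return locked_; }

    private:
        Mutex& m_;
        bool locked_;

        scoped_lock_sketch(scoped_lock_sketch const&);            // non-copyable
        scoped_lock_sketch& operator=(scoped_lock_sketch const&);
    };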

Regards,

Matt Hurd.

