
From: Harro Verkouter (verkouter_at_[hidden])
Date: 2008-02-01 08:28:46


That's a hairy one :) I dealt with something like this at some point.
This approach doesn't sound too bad.

One thing you should not forget: when you find that you didn't manage
to lock all the resources you wanted, unlock the ones you did get and
sleep a while before retrying. Otherwise you will end up in a deadlock
at some point, or the lock attempts will eat your CPU cycles. Some
OS/thread schedulers don't handle a tight loop well, i.e. the other
thread(s) will hardly get any CPU time at all anymore.
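A minimal sketch of that back-off loop with Boost.Thread (the function
name and the 1 ms sleep are just illustrative choices; the mutex names
mirror your example below):

#include <boost/thread.hpp>

boost::mutex mtx1, mtx2, mtx3;

void modify_resources_1_2_3()
{
    for (;;)
    {
        {
            boost::unique_lock< boost::mutex > lk1( mtx1, boost::try_to_lock );
            boost::unique_lock< boost::mutex > lk2( mtx2, boost::try_to_lock );
            boost::unique_lock< boost::mutex > lk3( mtx3, boost::try_to_lock );
            if ( lk1 && lk2 && lk3 )
            {
                // all three mutexes are locked: modify resources 1,2,3 safely
                return;
            }
        } // whichever locks we did get are released here
        // back off briefly so the competing threads can make progress
        boost::this_thread::sleep( boost::posix_time::milliseconds( 1 ) );
    }
}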

Probably, though, depending on the access frequency and/or pattern, the
ratio of the number of threads to the number of resources, and the
amount of time each thread spends modifying or working with the
resources, you may be better off with just a single mutex ...
If you need to spend more time waiting to lock exactly the objects you
need than it actually takes to work with the resources, fine-grained
locking is not a win.
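For comparison, a minimal sketch of that single-mutex variant (the
names are made up for illustration; one mutex simply guards the whole
set of resources):

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex all_resources_mtx; // one lock for all n resources

void modify_any_subset()
{
    boost::unique_lock< boost::mutex > lk( all_resources_mtx );
    // work with any subset of the resources: no retry loop and no
    // deadlock, at the price of serializing all the threads
}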

HTH,

Harro Verkouter

Kowalke Oliver (QD IT PA AS) wrote:
> Hi,
> what's the best practice in following scenario:
> m threads and n resources (each resource is protected by a mutex)
> thread x needs read/write access to resources (...i,j,k,..)
> threads y,z,... do access another subset of resources at the same time
>
> in order to prevent deadlocks I would do the following in the code of the different threads (the code executed by the threads differs):
>
> ...
> try_lock: // label
> unique_lock< mutex > lk1( mtx1, try_to_lock );
> unique_lock< mutex > lk2( mtx2, try_to_lock );
> unique_lock< mutex > lk3( mtx3, try_to_lock );
> if ( ! ( lk1 && lk2 && lk3 ) ) goto try_lock;
> // all mutexes are locked
> // now modify safely resource 1,2,3
> ...
>
> Maybe you have a better pattern to solve this problem?
>
> best regards,
> Oliver
>

