
From: Howard Hinnant (howard.hinnant_at_[hidden])
Date: 2007-08-21 16:48:06


On Aug 21, 2007, at 3:22 PM, Yuval Ronen wrote:

> Howard Hinnant wrote:
>> On Aug 21, 2007, at 8:46 AM, Yuval Ronen wrote:
>>
>>> Howard Hinnant wrote:
>>>> Here is a link to a reference implementation and a FAQ for mutexes,
>>>> locks and condition variables I am currently anticipating proposing
>>>> for C++ standardization (or subsequent TR).
>>>>
>>>> http://home.twcny.rr.com/hinnant/cpp_extensions/concurrency_rationale.html
>>> After some not-so-thorough reading of this, a few comments:
>>>
>>> 1. I couldn't understand what defer_lock is good for, even after
>>> reading
>>> Q.9 of the FAQ. I believe the use-case shown in Q.9 should actually
>>> use
>>> accept_ownership instead. Can you elaborate please?
>>
>> See if this is any clearer:
>>
>> http://home.twcny.rr.com/hinnant/cpp_extensions/concurrency_rationale.html#unique_lock_defer_lock
>
> I'm afraid not...
>
> This example has 3 lines: the first 2 create unique_locks with
> defer_lock, and the 3rd calls std::lock. Those unique_locks don't
> lock, because std::lock does the locking. OK. But who unlocks? The
> unique_locks don't own the mutexes, and therefore don't unlock them.
> But someone needs to unlock, and it sounds logical that the
> unique_locks would... Had we used accept_ownership, the unique_locks
> would have owned the mutexes, and unlocked them. That's the
> difference between defer_lock and accept_ownership, the ownership,
> isn't it?

Ok, the lightbulb went off in my head and I think I understand your
question now. Thanks for not giving up on me.

I've tried again here:

http://home.twcny.rr.com/hinnant/cpp_extensions/concurrency_rationale.html#unique_lock_defer_lock

and see the next question (#10) as well. If that doesn't do it, see:

http://home.twcny.rr.com/hinnant/cpp_extensions/mutex_base

and search for "defer_lock_type" and "accept_ownership_type" for the
unique_lock implementation of these constructors. Neither constructor
does anything to the mutex; each simply sets the owns() flag to false
or true respectively. The unique_lock destructor will unlock the
mutex iff it owns() the mutex.

>> Either way, I believe this design wouldn't meet the use case which I
>> didn't effectively communicate in #14:
>>
>> Given a read-write mutex and its associated condition variable:
>>
>> my::shared_mutex rw_mut;
>> std::condition<my::shared_mutex> cv(rw_mut);
>>
>> client code wants to wait on that cv in two different ways:
>>
>> 1. With rw_mut read-locked.
>> 2. With rw_mut write-locked.
>>
>> If we initialized the condition variable with cv(rw_mut.exclusive()),
>> then cv.wait() would wait with rw_mut write-locked, but we wouldn't
>> be
>> able to wait on cv with rw_mut read-locked.
>>
>> If we initialized the condition variable with cv(rw_mut.shared()),
>> then cv.wait() would wait with rw_mut read-locked, but we wouldn't be
>> able to wait on cv with rw_mut write-locked.
>>
>> This use case desires *both* types of waits on the *same* mutex/cv
>> pair.
>
> The last sentence starts with "This use case", but I see no use
> case. Do
> we really have such a use case? I haven't seen one yet. But even if we
> had, then maybe the solution is the same solution to the requirement
> you
> phrased as "The freedom to dynamically associate mutexes with
> condition
> variables" or "The ability to wait on general mutex / lock types"
> (what's the difference between those two sentences anyway?) in your
> response to Peter. Add a 'set_mutex(mutex_type &)', or maybe even
> 'set_mutex(mutex_type *)' to std::condition. I think it will solve
> this
> rare case.

Ok, perhaps I'll clean the following use case up, and include it in
the faq. First I'll try it out here. :-)

I've got a "communication node" class. It serves as a node in a
network. It gets data from somewhere and temporarily stores it while
it uses several threads to forward (or relay) the information to other
nodes in the network. For simplicity I'm using vector<int> for the
data, and only two relay threads. The example is incomplete (not even
compiled), just illustrative right now:

class communication_node
{
     std::vector<int>* from_;
     std::vector<int> data_;
     std::vector<int> to_[2];

     typedef std::tr2::shared_mutex Mutex;
     Mutex mut_;
     std::condition<Mutex> cv_;
     bool get_data_;
     bool fresh_data_;
     bool data_relayed_[2];
public:
     void supplier()
     {
         while (true)
         {
             std::unique_lock<Mutex> write_lock(mut_);
             while (!get_data_ || !data_relayed_[0] || !data_relayed_[1])
                 cv_.wait(write_lock);
             std::copy(from_->begin(), from_->end(), data_.begin());
             get_data_ = false;
             fresh_data_ = true;
             data_relayed_[0] = false;
             data_relayed_[1] = false;
             cv_.notify_all();
         }
     }
     void relayer(int id)
     {
         while (true)
         {
             std::tr2::shared_lock<Mutex> read_lock(mut_);
             while (data_relayed_[id])
                 cv_.wait(read_lock);
             std::copy(data_.begin(), data_.end(), to_[id].begin());
             data_relayed_[id] = true;
             cv_.notify_all();
         }
     }
};

One thread will be running "supplier", while two other threads will
run "relayer". Additionally some fourth thread (not shown) will tell
"supplier" when there is new data that it needs to go get.

In this design (which may not be the best way to do things, but looks
like reasonable client-written code to me), there is one shared_mutex
and one condition to control the data flow. The supplier waits for
there to be new data to get, and also waits until the relayers have
done their jobs, before getting new data. It needs write access to
the data, so it holds mut_ write-locked and waits on the cv_ with it.

The relayer threads only need read access to the data. So they each
have mut_ read-locked, and wait on the cv until they get the
instruction that it is time to relay the data. Once they relay the
data, they notify everyone else that they're done.

This example is meant to demonstrate a reasonable use case where one
thread wants to wait on a cv with a read-lock while another thread
wants to wait on the same cv/mutex with a write lock. Both readers
and the writer may all be waiting at the same time for there to be
data available from an upstream node (and a fourth thread would have
to notify them when said data is available). Because of this, it is
not possible (in the above use case) for there to be a set_mutex on
the condition to change the facade, since both facades are
simultaneously in use.

Did I make more sense this time? I often complain when people use too
much English and not enough C++ in their arguments, and then I find
myself being guilty of the same thing. :-)

-Howard


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk