From: Howard Hinnant (howard.hinnant_at_[hidden])
Date: 2007-08-21 18:46:56


On Aug 21, 2007, at 6:02 PM, Yuval Ronen wrote:

> Howard Hinnant wrote:
>> On Aug 21, 2007, at 3:22 PM, Yuval Ronen wrote:
>>
>>> Howard Hinnant wrote:
>>>> On Aug 21, 2007, at 8:46 AM, Yuval Ronen wrote:
>>>>
>>>>> 1. I couldn't understand what defer_lock is good for, even after
>>>>> reading Q.9 of the FAQ. I believe the use-case shown in Q.9
>>>>> should actually use accept_ownership instead. Can you elaborate
>>>>> please?
>>>> See if this is any clearer:
>>>>
>>>> http://home.twcny.rr.com/hinnant/cpp_extensions/concurrency_rationale.html#unique_lock_defer_lock
>>> I'm afraid not...
>>>
>>> This example has 3 lines: the first 2 create unique_locks with
>>> defer_lock, and the 3rd calls std::lock. Those unique_locks don't
>>> lock, because std::lock is the one to lock. OK. But who unlocks?
>>> The unique_locks don't own the mutexes, and therefore don't unlock
>>> them. But someone needs to unlock, and it sounds logical that the
>>> unique_locks would... Had we used accept_ownership, the
>>> unique_locks would have owned the mutexes, and unlocked them.
>>> That's the difference between defer_lock and accept_ownership, the
>>> ownership, isn't it?
>>
>> Ok, the lightbulb went off in my head and I think I understand your
>> question now. Thanks for not giving up on me.
>>
>> I've tried again here:
>>
>> http://home.twcny.rr.com/hinnant/cpp_extensions/concurrency_rationale.html#unique_lock_defer_lock
>>
>> and see the next question (#10) as well. If that doesn't do it, see:
>>
>> http://home.twcny.rr.com/hinnant/cpp_extensions/mutex_base
>>
>> and search for "defer_lock_type" and "accept_ownership_type" for the
>> unique_lock implementation of these constructors. Neither
>> constructor does anything to the mutex; each simply sets the owns()
>> flag to false or true respectively. The unique_lock destructor will
>> unlock the mutex iff it owns() the mutex.
>
> Not yet... :)
>
> One of the added sentences is "After std::lock locks l1 and l2,
> these locks now own their respective mutexes". How does that happen?
> I looked at the implementation code, but could see nothing that
> changes owns() from false to true after construction. Have I missed
> it?

I think so, but it is rather subtle. Re-quoting std::lock (the two-
lock version) here:

template <class _L1, class _L2>
void
lock(_L1& __l1, _L2& __l2)
{
     while (true)
     {
         {
         // lock __l1 first; __u1 will unlock it unless released
         unique_lock<_L1> __u1(__l1);
         if (__l2.try_lock())
         {
             // got both: release so __l1 stays locked past the block
             __u1.release();
             break;
         }
         }   // __l2 was busy: __u1's destructor unlocks __l1 here
         std::this_thread::yield();
         {
         // try again in the opposite order
         unique_lock<_L2> __u2(__l2);
         if (__l1.try_lock())
         {
             __u2.release();
             break;
         }
         }   // __l1 was busy: __u2's destructor unlocks __l2 here
         std::this_thread::yield();
     }
}

And I'm going to answer your second question below before continuing:

> Another related question is why std::lock works with locks and not
> mutexes? Can't see the benefit in that.

You're absolutely right. This is my fault due to lack of proper
documentation at the moment. All that is required of _L1 and _L2 in
the above algorithm is that they support:

void lock();
bool try_lock();
void unlock();

It will work whether _L1 is a lock or a mutex (and the same for _L2).
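
For example, a caller can mix the two (illustrative client code, not
from the proposal; it assumes a std::mutex type and the defer_lock
constructor discussed earlier):

std::mutex m1;
std::mutex m2;

void f()
{
     std::unique_lock<std::mutex> l1(m1, std::defer_lock); // a lock
     std::lock(l1, m2);  // _L1 deduces to a lock, _L2 to a mutex
     // ... l1 owns m1, and m2 is locked directly ...
     m2.unlock();        // the raw mutex must be unlocked manually
}                        // l1's destructor unlocks m1 since l1.owns()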

Ok, now back to how std::lock works:

This line:

         unique_lock<_L1> __u1(__l1);

implicitly calls __l1.lock() inside of the unique_lock constructor.
If __l1 is a mutex, the deed is done. If __l1 is a lock, hopefully
that will forward to the referenced mutex's lock() function in the
proper manner. And in the process, that should set the lock's owns()
data to true as well.

template <class _Mutex>
void
unique_lock<_Mutex>::lock()
{
     if (__m == 0 || __owns)
         throw lock_error();
     __m->lock();
     __owns = true; // owns() set here!!!
}

Similarly for:

         if (__l2.try_lock())
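
In unique_lock, try_lock presumably follows the same pattern as
lock() above (a sketch in that spirit, not quoted from the actual
mutex_base source):

template <class _Mutex>
bool
unique_lock<_Mutex>::try_lock()
{
     if (__m == 0 || __owns)
         throw lock_error();
     __owns = __m->try_lock(); // owns() set here too
     return __owns;
}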

And if all that fails, then the second half of the algorithm just does
the same thing in the reverse order.

The reason for the local unique_lock is that I'm anticipating that
the try_lock() could throw an exception. If it does, the local
unique_lock will unlock its referenced lock/mutex as the exception
propagates out. If an exception isn't thrown, and the try_lock
succeeds, then the unique_lock just releases the lock/mutex (much
like auto_ptr). Both locks remain locked and the algorithm returns
normally.
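
Putting the pieces together, the Q.9 pattern from the FAQ looks
something like this (illustrative only):

std::mutex m1;
std::mutex m2;

void f()
{
     std::unique_lock<std::mutex> l1(m1, std::defer_lock); // owns() == false
     std::unique_lock<std::mutex> l2(m2, std::defer_lock); // owns() == false
     std::lock(l1, l2);  // locks both; each lock's owns() becomes true
     // ... work while holding both mutexes ...
}    // both destructors see owns() == true and unlock both mutexes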

This algorithm (std::lock) is actually the cornerstone of why I like
uncoupled - non-nested - lock types. The local unique_lock can be
applied to any mutex, or any lock, even another unique_lock,
transparently. It just does its job without caring what type of mutex
or lock it needs to temporarily hold on to. unique_lock is very, very
analogous to unique_ptr (formerly auto_ptr). One holds lock/mutex
ownership state, the other holds heap-allocated memory. Both are
resource holders. Both have virtually the same responsibilities and
design. Their names are purposefully similar.
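
As a contrived illustration of that transparency (same assumptions
as above), a unique_lock can even hold another unique_lock:

std::mutex m;

void g()
{
     std::unique_lock<std::mutex> outer(m, std::defer_lock);
     std::unique_lock<std::unique_lock<std::mutex> > inner(outer);
     // inner's constructor called outer.lock(), which locked m and
     // set outer.owns() to true
}    // inner unlocks outer, which in turn unlocks m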

>>>> Either way, I believe this design wouldn't meet the use case
>>>> which I didn't effectively communicate in #14:
>>>>
>>>> Given a read-write mutex and its associated condition variable:
>>>>
>>>> my::shared_mutex rw_mut;
>>>> std::condition<my::shared_mutex> cv(rw_mut);
>>>>
>>>> client code wants to wait on that cv in two different ways:
>>>>
>>>> 1. With rw_mut read-locked.
>>>> 2. With rw_mut write-locked.
>>>>
>>>> If we initialized the condition variable with
>>>> cv(rw_mut.exclusive()), then cv.wait() would wait with rw_mut
>>>> write-locked, but we wouldn't be able to wait on cv with rw_mut
>>>> read-locked.
>>>>
>>>> If we initialized the condition variable with cv(rw_mut.shared()),
>>>> then cv.wait() would wait with rw_mut read-locked, but we wouldn't
>>>> be able to wait on cv with rw_mut write-locked.
>>>>
>>>> This use case desires *both* types of waits on the *same* mutex/cv
>>>> pair.
>>> The last sentence starts with "This use case", but I see no use
>>> case. Do we really have such a use case? I haven't seen one yet.
>>> But even if we had, then maybe the solution is the same solution
>>> to the requirement you phrased as "The freedom to dynamically
>>> associate mutexes with condition variables" or "The ability to
>>> wait on general mutex / lock types" (what's the difference between
>>> those two sentences anyway?) in your response to Peter. Add a
>>> 'set_mutex(mutex_type &)', or maybe even 'set_mutex(mutex_type *)'
>>> to std::condition. I think it will solve this rare case.
>>
>> Ok, perhaps I'll clean the following use case up, and include it in
>> the faq. First I'll try it out here. :-)
>>
>> I've got a "communication node" class. It serves as a node in a
>> network. It gets data from somewhere and temporarily stores it
>> while it uses several threads to forward (or relay) the information
>> to other nodes in the network. For simplicity I'm using vector<int>
>> for the data, and only two relay threads. The example is incomplete
>> (not even compiled), just illustrative right now:
>>
>> class communication_node
>> {
>>     std::vector<int>* from_;
>>     std::vector<int>  data_;
>>     std::vector<int>  to_[2];
>>
>>     typedef std::tr2::shared_mutex Mutex;
>>     Mutex mut_;
>>     std::condition<Mutex> cv_;
>>     bool get_data_;
>>     bool fresh_data_;
>>     bool data_relayed_[2];
>> public:
>>     void supplier()
>>     {
>>         while (true)
>>         {
>>             std::unique_lock<Mutex> write_lock(mut_);
>>             while (!get_data_ || !data_relayed_[0] || !data_relayed_[1])
>>                 cv_.wait(write_lock);
>>             std::copy(from_->begin(), from_->end(), data_.begin());
>>             get_data_ = false;
>>             fresh_data_ = true;
>>             data_relayed_[0] = false;
>>             data_relayed_[1] = false;
>>             cv_.notify_all();
>>         }
>>     }
>>     void relayer(int id)
>>     {
>>         while (true)
>>         {
>>             std::tr2::shared_lock<Mutex> read_lock(mut_);
>>             while (data_relayed_[id])
>>                 cv_.wait(read_lock);
>>             std::copy(data_.begin(), data_.end(), to_[id].begin());
>>             data_relayed_[id] = true;
>>             cv_.notify_all();
>>         }
>>     }
>> };
>
> The relayer, which is supposed to be a reader, is actually a writer
> to data_relayed_[id], so I believe a read_lock is not enough...

I should have made to_:

     std::vector<int>* to_[2];

I.e. an array of pointers to data outside the node. And now that I
think more about it, the relayer thread is the "missing supplier"
thread not demonstrated above, but for the next node downstream. It
only needs read access to this node, but it will need write access to
a downstream node. The example definitely needs more work.
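
With that change, the relayer's copy would presumably become:

     std::copy(data_.begin(), data_.end(), to_[id]->begin());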

> You keep on improving. I hope we all are ;-)

:-)

-Howard

