
From: Yuval Ronen (ronen_yuval_at_[hidden])
Date: 2007-08-21 19:34:44


Howard Hinnant wrote:
> On Aug 21, 2007, at 6:02 PM, Yuval Ronen wrote:
>
>> One of the added sentences is "After std::lock locks l1 and l2, these
>> locks now own their respective mutexes". How does that happen? I looked
>> at the implementation code, but could see nothing that changes owns()
>> from false to true after construction. Have I missed it?
>
> I think so, but it is rather subtle. Re-quoting std::lock (the
> two-lock version) here:
>
> template <class _L1, class _L2>
> void
> lock(_L1& __l1, _L2& __l2)
> {
>     while (true)
>     {
>         {
>             unique_lock<_L1> __u1(__l1);
>             if (__l2.try_lock())
>             {
>                 __u1.release();
>                 break;
>             }
>         }
>         std::this_thread::yield();
>         {
>             unique_lock<_L2> __u2(__l2);
>             if (__l1.try_lock())
>             {
>                 __u2.release();
>                 break;
>             }
>         }
>         std::this_thread::yield();
>     }
> }
>
> And I'm going to answer your second question below before continuing:
>
>> Another related question is why std::lock works with locks and not
>> mutexes? Can't see the benefit in that.
>
> You're absolutely right. This is my fault due to lack of proper
> documentation at the moment. All that is required of _L1 and _L2 in
> the above algorithm is that they support:
>
>     void lock();
>     bool try_lock();
>     void unlock();
>
> It will work for _L1 being a lock or mutex (and same for _L2).
>
> Ok, now back to how std::lock works:
>
> This line:
>
> unique_lock<_L1> __u1(__l1);
>
> implicitly calls __l1.lock() inside the unique_lock constructor.
> If __l1 is a mutex, the deed is done. If __l1 is a lock, hopefully
> that will forward to the referenced mutex's lock() function in the
> proper manner. And in the process, that should set the lock's owns()
> data to true as well.
>
> template <class _Mutex>
> void
> unique_lock<_Mutex>::lock()
> {
>     if (__m == 0 || __owns)
>         throw lock_error();
>     __m->lock();
>     __owns = true;  // owns() set here!!!
> }
>
> Similarly for:
>
> if (__l2.try_lock())

OK, I finally see how it works. I completely missed the part that
std::lock can work both for mutexes and for locks. However...

> This algorithm (the std::lock) is actually my cornerstone of "why I
> like uncoupled - non-nested - lock types". The local unique_lock is
> applied to any mutex, or any lock, even another unique_lock,
> transparently. It just does its job without caring what type of mutex
> or lock it needs to temporarily hold on to. unique_lock is very, very
> analogous to unique_ptr (formerly auto_ptr). One holds lock/mutex
> ownership state, the other holds heap allocated memory. Both are
> resource holders. Both have virtually the same responsibilities and
> design. Their names are purposefully similar.

... I don't agree that this is needed. I don't see any reason why any
algorithm would need to work both for mutexes and locks, just as you
wouldn't write an algorithm that works both for unique_ptrs and the
objects they point to. They are different things.

All this complexity is simply unnecessary, IMO. You can drop the
lock/try_lock/unlock functions from the locks, keep them for mutexes,
and declare that std::lock/scoped_lock/unique_lock work only for
mutexes. You can then drop defer_lock also, and everything is much
simpler...

>>> Ok, perhaps I'll clean the following use case up, and include it in
>>> the faq. First I'll try it out here. :-)
>>>
>>> I've got a "communication node" class. It serves as a node in a
>>> network. It gets data from somewhere and temporarily stores it while
>>> it uses several threads to forward (or relay) the information to
>>> other nodes in the network. For simplicity I'm using vector<int> for
>>> the data, and only two relay threads. The example is incomplete (not
>>> even compiled), just illustrative right now:
>>>
>>> class communication_node
>>> {
>>>     std::vector<int>* from_;
>>>     std::vector<int> data_;
>>>     std::vector<int> to_[2];
>>>
>>>     typedef std::tr2::shared_mutex Mutex;
>>>     Mutex mut_;
>>>     std::condition<Mutex> cv_;
>>>     bool get_data_;
>>>     bool fresh_data_;
>>>     bool data_relayed_[2];
>>> public:
>>>     void supplier()
>>>     {
>>>         while (true)
>>>         {
>>>             std::unique_lock<Mutex> write_lock(mut_);
>>>             while (!get_data_ || !data_relayed_[0] || !data_relayed_[1])
>>>                 cv_.wait(write_lock);
>>>             std::copy(from_->begin(), from_->end(), data_.begin());
>>>             get_data_ = false;
>>>             fresh_data_ = true;
>>>             data_relayed_[0] = false;
>>>             data_relayed_[1] = false;
>>>             cv_.notify_all();
>>>         }
>>>     }
>>>     void relayer(int id)
>>>     {
>>>         while (true)
>>>         {
>>>             std::tr2::shared_lock<Mutex> read_lock(mut_);
>>>             while (data_relayed_[id])
>>>                 cv_.wait(read_lock);
>>>             std::copy(data_.begin(), data_.end(), to_[id].begin());
>>>             data_relayed_[id] = true;
>>>             cv_.notify_all();
>>>         }
>>>     }
>>> };
>> The relayer, which is supposed to be a reader, is actually a writer to
>> data_relayed_[id], so I believe a read_lock is not enough...
>
> I should have made to_:
>
> std::vector<int>* to_[2];

Now you've lost me. I was talking about 'data_relayed_', not 'to_'. Or maybe
you did answer the point and I failed to understand? Certainly possible.

> I.e. an array of pointers to data outside the node. And now that I
> think more about it, the relayer thread is the "missing supplier"
> thread not demonstrated above, but for the next node downstream. It
> only needs read access to this node, but it will need write access to
> a downstream node. The example definitely needs more work.

AFAICU this example, the main point is that the relayer thread needs
write access, at least to the flag that means "dear supplier thread, I'm
ready for whatever you have for me, bring it on". That same flag that
the supplier checks with the call to wait() - someone needs to set it,
right? And that someone is the relayer, because I don't see anyone else
around...

OK, it's time for me to go to sleep. See ya tomorrow :)


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk