
Subject: Re: [Boost-users] boost::shared_locks and boost::upgrade_locks
From: Kelvin Chung (kelvSYC_at_[hidden])
Date: 2011-12-07 22:51:53


On 2011-12-08 03:17:27 +0000, Brian Budge said:

> On Wed, Dec 7, 2011 at 4:05 PM, Kelvin Chung <kelvSYC_at_[hidden]> wrote:
>> I'm trying to implement a read-write lock, and I was told to use
>> boost::upgrade_lock.  However, only one thread can have a
>> boost::upgrade_lock, and everyone else gets blocked, so it's not much on the
>> read end.  (Correct me if I am wrong, but boost::upgrade_lock only waits if
>> someone has exclusive, but someone with boost::upgrade_lock will cause
>> everyone else to block)
>>
>> But reads are cheap in my case - I just need to use locks for a cache.  So,
>> suppose I have the following:
>>
>
> You don't necessarily want to release your lock before upgrading.
> This is more or less how I'd think about it (through example
> pseudocode):
>
> struct Example {
> boost::shared_mutex m_mtx;
> ...
>
> Data doRead() {
> boost::shared_lock<boost::shared_mutex> rlock(m_mtx);
> ...read and return
> }
> void doWrite(OtherData d) {
> boost::unique_lock<boost::shared_mutex> wlock(m_mtx);
> ...write d somewhere protected by m_mtx
> }
> void getCachedOrComputed(Data &d) {
> boost::upgrade_lock<boost::shared_mutex> uplock(m_mtx);
> if(found in cache)
> ...read data from a cache into d
> d = compute_output()
> //upgrade the lock to unique for write
> boost::upgrade_to_unique_lock<SharedMutex> wlock(uplock);
> ... write d into the cache
> }
> }
>
> Obviously "getCachedOrComputed" is most akin to what you want to do
> (including where I would place compute_output()), but I included the
> other functions to illustrate other cases (no need for upgrade). I
> had a difficult time figuring this out myself a while back, so I hope
> this helps.

The problem is that only one thread can have the upgrade lock, but any
number of threads can have shared locks. Going back to my example, if
I had

Output query(const Input& in) {
        boost::upgrade_lock<boost::shared_mutex> readLock;
        
        if (cache.count(in) == 0) {
                boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(readLock);
                
                cache[in] = compute_output(in);
        }
        return cache[in];
}

Then only one thread could access query() at a time - this is no better
than just doing things serially (i.e. I could have just used
boost::unique_lock and assumed that all writes are unconditional). My
impression is that boost::upgrade_lock basically means "I will need write
access, but I don't need to write right now". So I'm thinking that either
of the two solutions I proposed is the "right" way to do query(), since
it lets other threads calling query() on Inputs already in the cache
through, rather than locking them out.
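
For concreteness, here is a rough sketch of the kind of thing I have in
mind (assuming the cache is a std::map<Input, Output> guarded by a
boost::shared_mutex named cacheMutex - those names are illustrative, as
are the declarations at the top):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>

std::map<Input, Output> cache;   // illustrative: the cache being guarded
boost::shared_mutex cacheMutex;  // illustrative: guards all access to cache

Output query(const Input& in) {
        {
                // Fast path: any number of threads can hold a shared_lock at once.
                boost::shared_lock<boost::shared_mutex> readLock(cacheMutex);
                std::map<Input, Output>::const_iterator it = cache.find(in);
                if (it != cache.end())
                        return it->second;
        }

        // Cache miss: take the upgrade_lock. Only one thread at a time can hold
        // it, but shared_lock readers are not blocked while we merely hold it.
        boost::upgrade_lock<boost::shared_mutex> upLock(cacheMutex);

        // Re-check: another thread may have filled the entry between the locks.
        std::map<Input, Output>::const_iterator it = cache.find(in);
        if (it != cache.end())
                return it->second;

        Output out = compute_output(in);

        // Upgrade to exclusive only for the actual insertion.
        boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(upLock);
        cache[in] = out;
        return out;
}

That way, threads calling query() on Inputs that are already cached only
ever take a shared_lock, so they are not held up by a thread that is busy
computing a new entry; the exclusive section is limited to the insertion
itself.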

