
Subject: Re: [Boost-users] boost::shared_locks and boost::upgrade_locks
From: Kelvin Chung (kelvSYC_at_[hidden])
Date: 2011-12-08 12:30:16


On 2011-12-08 16:58:00 +0000, Vicente Botet said:

> Kelvin Chung wrote
>>
>> On 2011-12-08 04:27:51 +0000, Brian Budge said:
>>>
>>> Upgrade ownership is just shared ownership that can be upgraded to
>>> exclusive ownership.
>>
>> My understanding of the documentation of the UpgradeLockable concept
>> seems to suggest otherwise. And I quote:
>>
>> "a single thread may have upgradeable ownership at the same time as
>> others have shared ownership"
>>
>> This seems to imply that shared and upgrade are very different levels.
>> Especially when it later says "upgradeable ownership can be downgraded
>> to plain shared ownership". If upgrade is just shared with a license
>> to upgrade, why would you ever need to downgrade? Why would "downgrade
>> to shared" even exist in the first place?
>>
>
> I guess that it is to free the single slot for upgrade ownership,
> so that another thread could take an upgrade_lock.
>
>
>
>> Here's a scenario that proves my point: Suppose cache::query() is just
>> implemented with upgrade locks. Suppose you have two Inputs, foo and
>> bar. Suppose you also have two threads, one calling Cache::query(foo)
>> and the other Cache::query(bar). Both will be trying to get the
>> upgrade_lock, but according to the UpgradeLockable concept, only one
>> thread gets it, and the other one will be blocked. So, if foo is not
>> in the cache and bar is in the cache, and bar is the one that gets
>> blocked, then bar has to wait for foo to finish (which, as
>> compute_output() could be expensive, could take a while) - this is no
>> better than doing things serially, when you could just let the bar go
>> through (since it only needs to read from the cache, which is cheap) as
>> foo is waiting for the exclusive lock upgrade.
>>
>
> You are right, as you are using the same function query. Now suppose that
> query spends some time in compute_output() but that you need to do some
> more things afterwards. There you could downgrade the lock so that bar will
> be unblocked.

Would either try-locking the upgrade lock or doing compute_output()
before upgrading work? Something like

Output Cache::query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
                readLock.unlock();

                // Option 1 - try-lock rereadLock, and while waiting for
                // it, do the calculation
                boost::upgrade_lock<boost::shared_mutex> rereadLock(mutex, boost::try_to_lock);
                Output out = compute_output(in);
                if (!rereadLock.owns_lock()) rereadLock.lock(); // nothing left to do, forced to wait for the lock

                if (cache.count(in) == 0) {
                        // Option 2 - with rereadLock held, do the
                        // calculation before upgrading
                        out = compute_output(in);
                        boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(rereadLock);
                        cache[in] = out;
                        return cache[in];
                } else {
                        // Another thread has written to the cache while
                        // waiting, so out has been wasted :-(
                        return cache[in];
                }
        } else {
                return cache[in];
        }
}

(There appears to be no way to "try-upgrade", which I'd imagine would
be the best place to do compute_output(). Why is that?)
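
(If a Boost version recent enough to offer it is available - and I'm only assuming such a member exists, since upgrade_to_unique_lock doesn't surface anything like it - the underlying boost::shared_mutex could express a "try-upgrade" directly. A rough sketch of what I mean, with compute_output() standing in for the useful work:

// Sketch only - assumes boost::shared_mutex provides
// try_unlock_upgrade_and_lock(); compile with -lboost_thread.
#include <boost/thread/shared_mutex.hpp>

void query_with_try_upgrade(boost::shared_mutex& m) {
    m.lock_upgrade();                        // take upgrade (shared + intent) ownership
    if (!m.try_unlock_upgrade_and_lock()) {  // atomically try to go exclusive
        // Upgrade not immediately available: other readers still hold
        // the mutex, so this would be the place to run compute_output()...
        m.unlock_upgrade_and_lock();         // ...then block for the upgrade
    }
    // exclusive ownership here: write to the cache, then release
    m.unlock();
}

With no other threads around, both the lock_upgrade() and the try-upgrade succeed immediately.)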

Speaking of downgrading, to downgrade, you just assign to a
"lower-class" lock, right? So it's alright to do something like

Output Cache::query(const Input& in) {
        boost::shared_lock<boost::shared_mutex> readLock(mutex);
        if (cache.count(in) == 0) {
                readLock.unlock();

                boost::upgrade_lock<boost::shared_mutex> rereadLock(mutex);
                if (cache.count(in) == 0) {
                        Output out = compute_output(in);
                        boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(rereadLock);
                        cache[in] = out;
                }

                // Downgrade upgrade lock to shared lock here
                readLock = boost::move(rereadLock);
        }
        return cache[in];
}
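
(Setting the upgrade mechanics aside for a moment, the unlock / relock-exclusive / re-read pattern itself is easy to check in isolation. Here is a self-contained sketch using the standard library's std::shared_mutex (C++17) as a stand-in - it has no upgrade ownership, so the fallback is exactly the drop-and-re-check above; Input, Output, and compute_output() are toy stand-ins of my own:

#include <cassert>
#include <map>
#include <mutex>
#include <shared_mutex>

// Toy stand-ins; compute_calls just counts invocations.
typedef int Input;
typedef int Output;
static int compute_calls = 0;
Output compute_output(const Input& in) { ++compute_calls; return in * 2; }

class Cache {
    std::shared_mutex mutex;
    std::map<Input, Output> cache;
public:
    Output query(const Input& in) {
        {   // Fast path under a shared (read) lock.
            std::shared_lock<std::shared_mutex> readLock(mutex);
            std::map<Input, Output>::iterator it = cache.find(in);
            if (it != cache.end()) return it->second;
        }
        // Slow path: exclusive lock, then re-read, since another thread
        // may have filled the entry while we held no lock at all.
        std::unique_lock<std::shared_mutex> writeLock(mutex);
        std::map<Input, Output>::iterator it = cache.find(in);
        if (it == cache.end())
            it = cache.emplace(in, compute_output(in)).first;
        return it->second;
    }
};

int main() {
    Cache c;
    assert(c.query(21) == 42);  // miss: computed and inserted
    assert(c.query(21) == 42);  // hit: served from the cache
    assert(compute_calls == 1); // compute_output() ran only once
    return 0;
}

The re-read under the exclusive lock is what keeps compute_output() from running twice for the same key when two threads miss at the same time.)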

>> Thus, my conception is that you get a shared lock when you don't know
>> that you need to write to cache. When it turns out that you do, you
>> unlock and get an upgrade lock, which expresses the intention of
>> writing to the cache, while still letting other threads get shared
>> locks for their cache lookups. After checking whether your input is in
>> the cache again (since another thread may have written what you needed
>> into the cache while waiting for the upgrade lock), you upgrade to
>> exclusive (you have the upgrade lock and so no one else could have
>> written to the cache while all the other threads with shared locks
>> leave), where you actually write to cache. Then downgrade from
>> exclusive to upgrade to shared when you are done, and return the cached
>> value.
>
>
> In general, unlocking a shared lock and then locking an upgrade lock is not
> a good idea, as the data read under the shared lock can be changed by another
> thread as soon as you release the lock, and so your computation is not
> coherent.

My data is "single-assignment" style: once an Input/Output pair is
written into the Cache it is never modified, so that isn't an issue for
me as long as I re-read after getting the upgrade lock. However,
wouldn't re-reading after getting the upgrade lock address this in
general?


Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net