Subject: Re: [Boost-users] Help needed with boost::shared_mutex
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2011-11-26 00:00:56


AMDG

On 11/24/2011 04:00 PM, Kelvin Chung wrote:
> I need assistance on how to properly use boost::shared_mutex. Suppose I
> have this routine, which I want to make thread-safe:
>
> class FunctionEvaluationCache { // Assume singleton
>     // cache of results to Function::evaluate()
>     std::unordered_map<Function, std::map<Input, int>> cache;
> };
>
> class Function { // Copyable
>     // All Functions that operator==() share a cache entry
>     bool operator==(const Function& fn2);
>
>     // Expensive operation, hence the cache
>     int evaluate(const Input& in) {
>         if (cache.count(*this) == 0 || cache[*this].count(in) == 0) {
>             // Not in cache, do expensive operation
>
>             // Add result to cache
>             cache[*this][in] = result;
>         }
>
>         return cache[*this][in];
>     }
> };
>
> Clearly, the big obstacle to a threadsafe version of
> Function::evaluate() is this cache. I believe that I will need a
> boost::shared_mutex for cache, as well as each cache entry.

It's probably easier to use only one mutex.
Every call has to lock this mutex anyway,
so it's doubtful whether you'd gain anything
by having a separate mutex for each entry.

> I'm not
> sure on how to use boost::shared_mutex, however: the straightforward way
> would be to expose the boost::shared_mutex instances publicly and modify
> evaluate() to get the proper locks, while the other line of thinking is
> that I should not expose the boost::shared_mutex instances at all and
> have a series of methods that evaluate() can use. But what I would like
> to do is not modify evaluate() in any way. Is it possible to do this?
>

It's possible, but probably not a good idea.

You're going to want the code to work something
like this:

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>
#include <unordered_map>

typedef std::map<Input, int> CacheItem;

struct Cache
{
    typedef std::unordered_map<
        Function,
        CacheItem > cache_type;
    typedef cache_type::iterator iterator;
    cache_type cache;
    boost::shared_mutex mutex;
};

Cache global_cache;

int Function::evaluate(const Input& in) {
  {
    boost::shared_lock<boost::shared_mutex>
      lock( global_cache.mutex );
    Cache::iterator pos = global_cache.cache.find( *this );
    if ( pos != global_cache.cache.end() ) {
      CacheItem::iterator inner = pos->second.find( in );
      if ( inner != pos->second.end() ) {
        return inner->second;
      }
    }
  }
  {
    // do expensive operation
    int result = 0; // placeholder for the computed value

    boost::unique_lock<boost::shared_mutex>
      lock( global_cache.mutex );
    global_cache.cache[*this][in] = result;
    return result;
  }
}

The main limitation of this that you should be
aware of is that the expensive operation can
run more than once if multiple threads
reach it at the same time. I'm assuming
that this is acceptable.
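
If recomputing is not acceptable in your case,
one option (just a sketch, reusing the same
Cache/global_cache names and headers as above)
is to take a boost::upgrade_lock on a cache
miss. Only one thread at a time can own the
upgrade lock, so the first thread to miss does
the work while the others wait and then find
the entry when they re-check:

int Function::evaluate(const Input& in) {
  {
    // Fast path: shared lock, exactly as above.
    boost::shared_lock<boost::shared_mutex>
      lock( global_cache.mutex );
    Cache::iterator pos = global_cache.cache.find( *this );
    if ( pos != global_cache.cache.end() ) {
      CacheItem::iterator inner = pos->second.find( in );
      if ( inner != pos->second.end() ) {
        return inner->second;
      }
    }
  }
  // Slow path: an upgrade lock excludes other
  // upgrade/unique owners but coexists with shared
  // locks, so readers keep going while misses are
  // serialized here (even for unrelated keys).
  boost::upgrade_lock<boost::shared_mutex>
    lock( global_cache.mutex );
  Cache::iterator pos = global_cache.cache.find( *this );
  if ( pos != global_cache.cache.end() ) {
    CacheItem::iterator inner = pos->second.find( in );
    if ( inner != pos->second.end() ) {
      // someone else computed it while we waited
      return inner->second;
    }
  }
  // do expensive operation
  int result = 0; // placeholder for the computed value
  boost::upgrade_to_unique_lock<boost::shared_mutex>
    unique( lock );
  global_cache.cache[*this][in] = result;
  return result;
}

The price is that all misses queue up on the
upgrade lock, even for unrelated keys, so
whether that helps depends on how often you
expect misses.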

It's possible to make evaluate() work as is,
but it's a bit tricky, and I wouldn't recommend
it. The problem is that cache[*this][in] is
used for both reading and writing, so you'd
need to create a proxy reference.
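
To illustrate why, here is a rough sketch of
what such a proxy might look like, with the
same headers as the sketch above (the names
are made up, it only handles a single level of
map, and it assumes the key is present when
read, so it's nowhere near a complete solution):

template<class Key, class Value>
class locked_map {
public:
    typedef std::map<Key, Value> map_type;

    class proxy {
    public:
        proxy( locked_map& m, const Key& k ) : map_( m ), key_( k ) {}
        // Reading through the "reference" takes a shared lock.
        operator Value() const {
            boost::shared_lock<boost::shared_mutex> lock( map_.mutex_ );
            return map_.data_.find( key_ )->second;
        }
        // Assigning through the "reference" takes a unique lock.
        proxy& operator=( const Value& v ) {
            boost::unique_lock<boost::shared_mutex> lock( map_.mutex_ );
            map_.data_[key_] = v;
            return *this;
        }
    private:
        locked_map& map_;
        Key key_;
    };
    friend class proxy;

    proxy operator[]( const Key& k ) { return proxy( *this, k ); }

    typename map_type::size_type count( const Key& k ) const {
        boost::shared_lock<boost::shared_mutex> lock( mutex_ );
        return data_.count( k );
    }

private:
    map_type data_;
    mutable boost::shared_mutex mutex_;
};

Since the original code indexes two levels deep
(cache[*this][in]), the proxy would itself have
to hand out another proxy, and the count() check
followed by operator[] still isn't atomic as a
whole, which is why it isn't really worth it.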

In Christ,
Steven Watanabe

