
Subject: Re: [Boost-users] Help needed with boost::shared_mutex
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2011-11-26 11:50:25


AMDG

On 11/25/2011 11:19 PM, Kelvin Chung wrote:
> On 2011-11-26 05:00:56 +0000, Steven Watanabe said:
>> It's probably easier to use only one mutex. Every call has to lock
>> this mutex anyway, so it's doubtful whether you'd gain anything by
>> having a separate mutex for each entry.
>
> I was hoping that the outer mutex (for FunctionEvaluationCache) would
> only be needed to retrieve a reference to the inner table, so that a
> cache lookup for one Function wouldn't block a cache lookup for a
> different Function.
>

It will still block around the outer table lookup.
Separate mutexes for the inner tables would just
make the shared critical section smaller. I don't
know how much that would actually save you.

> However, it would appear that the inner mutexes (for the
> Function-specific std::map<Input, int>) would make the Function-specific
> cache noncopyable; which is fine, since the cache need only return
> references. However, I have to compile on a C++03 compiler (so I'm
> using boost::unordered_map), and it would seem that in this environment,
> inserting a value into a boost::unordered_map would involve copying the
> value (which I can't do since I have a mutex).
>

The usual solution is to use shared_ptr.
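For example (a sketch only; Function and Input are reduced to
stand-in typedefs here, and CacheEntry is a name I'm making up,
not something from your code):

#include <map>
#include <string>
#include <boost/shared_ptr.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <boost/unordered_map.hpp>

typedef std::string Function; // stand-in for your Function type
typedef int Input;            // stand-in for your Input type

struct CacheEntry {
    boost::shared_mutex mutex;    // per-Function inner mutex
    std::map<Input, int> results;
};

struct FunctionEvaluationCache {
    boost::shared_mutex mutex;    // outer mutex
    // Store shared_ptr<CacheEntry> rather than CacheEntry, so
    // that inserting never copies the noncopyable mutex member.
    boost::unordered_map<Function, boost::shared_ptr<CacheEntry> > entries;
};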

>>
>>> But what I would
>>> like to do is not modify evaluate() in any way. Is it possible to do
>>> this?
>>>
>>
>> It's possible, but probably not a good idea.
>>
>> You're going to want the code to work something like this:
>>
>> <snip>
>>
>> The main limitation of this, that you should be aware of, is that
>> <expensive operation> can run more than once if multiple threads reach
>> it at the same time. I'm assuming that this is acceptable.
>
> I'd think not, but given my circumstances (the expensive operation
> returns the same value every time the same input is passed in), I'd have
> to reconsider.
>

To avoid this, you need to lock around the expensive
operation. It gets a bit more complex, because you
have to recheck the cache. Also, in this case, it
would definitely help to have a separate mutex
for each Function, since you don't want
other functions to block during the expensive
operation. As long as the expensive operation is
idempotent, it's a lot easier just to let it
run multiple times. If you're concerned about
performance, you can track the number of duplicate
calls to decide whether it's a real issue.
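Sketched against the layout above (evaluate_impl is a stand-in
for the expensive operation, declared but not defined here):

int evaluate_impl(const Function& f, const Input& in); // assumed

int evaluate(FunctionEvaluationCache& cache,
             const Function& f, const Input& in)
{
    boost::shared_ptr<CacheEntry> entry;
    {
        // Find or create the per-Function entry under the outer mutex.
        boost::unique_lock<boost::shared_mutex> outer(cache.mutex);
        boost::shared_ptr<CacheEntry>& slot = cache.entries[f];
        if (!slot) slot.reset(new CacheEntry);
        entry = slot;
    } // outer mutex released; other Functions proceed unblocked

    {
        // Fast path: shared lock, value already cached.
        boost::shared_lock<boost::shared_mutex> inner(entry->mutex);
        std::map<Input, int>::const_iterator it = entry->results.find(in);
        if (it != entry->results.end()) return it->second;
    }

    // Slow path: exclusive lock, then recheck, because another
    // thread may have computed the value while we were unlocked.
    // Note this also serializes different Inputs of the same Function.
    boost::unique_lock<boost::shared_mutex> inner(entry->mutex);
    std::map<Input, int>::const_iterator it = entry->results.find(in);
    if (it != entry->results.end()) return it->second;
    int result = evaluate_impl(f, in); // runs at most once per (f, in)
    entry->results[in] = result;
    return result;
}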

>>
>> It's possible to make evaluate work as is, but it's a bit tricky, and
>> I wouldn't recommend it. The problem is that cache[*this][in] is used
>> for both reading and writing, so you'd need to create a proxy reference.
>
> How would that work?
>

Here's the basic structure that you'd need:

struct int_proxy {
  Cache * cache;
  Function * func;
  Input * in;
  /// \pre the element exists
  operator int() {
    boost::shared_lock<boost::shared_mutex> l(cache->mutex);
    // Read via find() rather than operator[], since operator[]
    // can insert, which isn't safe under a shared (read) lock.
    return cache->cache.find(*func)->second.find(*in)->second;
  }
  void operator=(int val) {
    boost::unique_lock<boost::shared_mutex> l(cache->mutex);
    (cache->cache[*func])[*in] = val;
  }
};

// Both levels mirror the map interface that evaluate() already
// uses; each member function locks the mutex internally.
struct cache_item_proxy
{
    int count(const Input& in);
    int_proxy operator[](const Input& in);
};

struct Cache
{
    cache_item_proxy operator[](const Function& in);
    int count(const Function& in);
};

If the cache performance is significant, then
this is going to be quite a bit worse than
if you modify evaluate, because it has to
lock the mutex and look up the elements
multiple times.

If you count the number of operations,
you do 3 outer lookups and 2 inner lookups,
in 3 critical sections, for elements already
in the cache. If you're willing to do
completely crazy stuff, like expression
templates, you can get rid of one outer lookup
and one lock by evaluating the expression
cache.count(*this) == 0 || cache[*this].count(in) == 0
as a unit. *It isn't worth it.* It's more
work to write, harder to understand, and
less efficient.

In Christ,
Steven Watanabe

