

From: John Maddock (john_at_[hidden])
Date: 2004-04-25 06:00:46


> Actually I'm not convinced about this. I have the same concerns as
> Bronek. They are only concerns. I am not sure of myself. But here's
> a data point:
>
> In the Metrowerks std::tr1::shared_ptr I give the option to the client
> whether or not shared_ptr contains a mutex (via a #define flag). This
> decision can be made independent of whether the entire C++ lib is
> compiled in multithread mode or not. And at the same time I use
> shared_ptr in the implementation of std::locale.
>
> In the std::locale implementation I am careful to wrap each use of the
> locale implementation with a mutex lock. I believe I would have to do
> this whether or not shared_ptr protected its count. And yet
> std::locale is just a library, not an application.

I've used shared_ptr a lot for reference-counted copy-on-write pimpls
(which is basically what std::locale is, right?), and I'm not sure that you
are right here - the problem is avoiding a race condition inside
shared_ptr::~shared_ptr, which is pretty difficult to do correctly IMO. Here's
my best attempt so far (note: not necessarily correct!). Getting this right
is distinctly non-trivial IMO; I would much rather have shared_ptr thread
safe to begin with:

class pimpl
{
   class implementation;
   shared_ptr<implementation> m_pimp;
public:

   pimpl()
      : m_pimp(new implementation())
   {
   }
   pimpl(const pimpl& p)
   {
      // compiler generated version will compile but do the wrong thing:
      // we have to lock the mutex in the shared implementation before
      // copying the source's pointer.
      mutex::scoped_lock l(p.m_pimp->get_mutex());
      m_pimp = p.m_pimp;
   }
   pimpl& operator=(const pimpl& p)
   {
      // compiler generated version will compile but do the wrong thing:
      mutex::scoped_lock l(p.m_pimp->get_mutex());
      m_pimp = p.m_pimp;
      return *this;
   }
   ~pimpl()
   {
      // this one is tricky: drop our reference while the mutex in the
      // shared implementation is still held, so that the count decrement
      // can't race with a concurrent copy.
      mutex::scoped_lock l(m_pimp->get_mutex());
      if(!m_pimp.unique())
      {
         shared_ptr<implementation> old;
         old.swap(m_pimp);
      }
      // when we get here m_pimp must be either empty or unique:
      assert(m_pimp.unique() || !m_pimp.get());
   }
};

In contrast, the thread safe version "just plain works", whereas in the
idiom I've sketched above there are at least three potential traps for the
unwary, which can only be detected if your code actually happens to hit a
race condition (probably not until the product has shipped ;-) ).

> Perhaps a more correct statement is that if you expose a shared_ptr as
> part of your interface, and advertise that multiple threads can operate
> on copies of that exposed shared_ptr, then shared_ptr must be
> internally protected. But if your shared_ptr is used as an
> implementation detail within your library, an internal mutex is not
> necessarily beneficial, and may even be a handicap.

Maybe, but let's not forget that shared_ptr doesn't need a mutex at all: what
it needs is an atomic counter (actually less than that, because we never
need the value of the counter, just how it compares to zero). There are
plenty of platforms that support this natively, and on those the hit should
be much smaller than a mutex would impose.
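For instance, on Win32 the heart of such a count needs nothing more than
this (a rough sketch only, not the actual shared_ptr implementation - the
atomic_ref_count name is just for illustration):

#include <windows.h>

// A lock-free reference count: we never need the counter's value, only
// whether a decrement has just released the last reference.
class atomic_ref_count
{
   LONG m_count;
public:
   atomic_ref_count() : m_count(1) {}

   void add_ref()
   {
      InterlockedIncrement(&m_count);
   }
   bool release()   // true when the last reference has just gone away
   {
      return InterlockedDecrement(&m_count) == 0;
   }
};

Most other platforms have an equivalent single instruction (lock xadd on
x86, for example), so the cost is one atomic operation rather than a full
mutex lock/unlock round trip.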

Having said all that, I think I would be in favour of something like:

#ifdef BOOST_HAVE_THREADS
#define BOOST_SP_DEFAULT true
#else
#define BOOST_SP_DEFAULT false
#endif

template <class T, bool threaded = BOOST_SP_DEFAULT>
class shared_ptr;

Which I think I'm right in saying would be std conforming (because we're
permitted additional defaulted template parameters, right?), and would also
quash any potential ODR violations. We could even make the non-thread-safe
version use a linked-list implementation if that really is so much faster,
but please, don't change the default thread-safe behaviour!
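Usage would then look something like this (widget is just a placeholder
type, and the second template argument is the proposed flag, not anything
that exists today):

shared_ptr<widget> p(new widget);        // gets BOOST_SP_DEFAULT, i.e.
                                         // thread safe in a multithreaded build
shared_ptr<widget, false> q(new widget); // explicit opt-out: unsynchronised
                                         // count for known single-threaded code

Existing code that just says shared_ptr<T> keeps today's safe behaviour;
anyone who wants the cheaper count has to ask for it explicitly.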

Thoughts?

John.

