Boost Users :
Subject: Re: [Boost-users] Shared mutex vs normal mutex
From: Alessandro Bellina (abellina_at_[hidden])
Date: 2009-08-26 14:57:56
Stefan,
I think you are right, and this also leads to a completely different way of
looking at my example. In this application I would want to make sure that I am
accessing a multi_index that isn't being changed... so the lock should be
outside of the for loop (including the call to size()). I'll look into that
example using both a plain mutex and a shared_mutex. I expect that shared_mutex
would be faster; I don't see how it couldn't be.
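
Something like this is what I have in mind for the reader now -- just a rough
sketch, reusing the typedefs and employee_set from the code below, and iterating
over the container instead of calling size(), so a single shared lock covers the
whole traversal:

  void operator()() {
      read_lock_type l(rw_mutex);   // one shared lock, taken outside the loop
      for (employee_set::const_iterator it = es->begin(); it != es->end(); ++it) {
          // read *it here; the writer stays blocked for the whole traversal
      }
  }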
Thanks,
Alessandro
On Wed, Aug 26, 2009 at 2:51 PM, Stefan Strasser <strasser_at_[hidden]> wrote:
>
> I haven't done any performance comparisons, but locking the mutex for each
> element in the reader thread is not only inefficient but erroneous:
> you are reading the size of your container outside of any lock while the
> writer thread is modifying it.
> I don't know about the implementation of multi_index; if you're lucky, size()
> is an atomic read, but that is certainly not guaranteed by the interface.
> Most std::vector implementations, for instance, return (end - begin) as
> size(), which can cause your reader to crash.
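> Roughly, a simplified sketch of such an implementation (not real library
> code, just to illustrate why this is not an atomic read):
>
>   size_type size() const { return size_type(end_ - begin_); }
>
> If the writer reallocates between the loads of begin_ and end_, the result is
> garbage, and anything the reader does with it afterwards can touch freed
> memory.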
>
>
>
> On Wednesday 26 August 2009 19:10:23, Alessandro Bellina wrote:
> > Guys,
> > I know my question is long, but it is actually very simple. It can be
> > summarized as: "Have you used shared_mutex and shared_locks and have you
> > seen an improvement against regular locks?"
> >
> > Thanks, any comments appreciated
> >
> > Alessandro
> >
> > On Aug 25, 2009, at 7:27 AM, Alessandro Bellina <abellina_at_[hidden]>
> > wrote:
> > > Hello
> > > I am testing the Boost mutex/lock classes in order to implement a
> > > multiple-reader, single-writer model.
> > >
> > > Because of this I thought that a shared_mutex with shared_lock and
> > > unique_lock would be perfect, but I'm finding that the performance is no
> > > better than with a simple mutex.
> > >
> > > My writer is adding a bunch of elements, locking the shared container
> > > exclusively until it is done. The readers look at all elements of the
> > > container, locking on a per-element basis.
> > >
> > > Any ideas as to why the shared mutex would be slower in this case than
> > > the simple mutex?
> > >
> > > This is the code for my producer thread and my consumers (as a side
> > > note, employee_set is a Boost.MultiIndex container):
> > >
> > >
> > > typedef boost::shared_mutex mutex_type;
> > > typedef boost::shared_lock<mutex_type> read_lock_type;
> > > typedef boost::unique_lock<mutex_type> write_lock_type;
> > >
> > > static mutex_type rw_mutex;
> > >
> > > struct ConsumerThread {
> > >     ConsumerThread(employee_set* e) : es(e) {}
> > >     void operator()() {
> > >         for (int i = 0; i < es->size(); i++) {
> > >             read_lock_type l(rw_mutex);   // shared lock taken per element
> > >             es->get<1>();                 // just grabs a reference to index #1
> > >         }
> > >     }
> > >     employee_set* es;
> > > };
> > >
> > > struct ProducerThread {
> > >     ProducerThread(int n, employee_set* e) : N(n), es(e) {}
> > >     void operator()() {
> > >         write_lock_type l(rw_mutex);      // exclusive lock until all insertions are done
> > >         for (int i = 0; i < N; i++) {
> > >             es->insert(employee(i, "TestEmployee", 100 - i));
> > >         }
> > >         l.unlock();
> > >     }
> > >     int N;
> > >     employee_set* es;
> > > };
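> > >
> > > For reference, the threads are launched roughly like this (a simplified
> > > sketch; the element count and the number of reader threads are just
> > > placeholder values):
> > >
> > > #include <boost/thread.hpp>   // boost::thread_group
> > >
> > > employee_set es;
> > > boost::thread_group threads;
> > > threads.create_thread(ProducerThread(100000, &es));  // single writer
> > > for (int i = 0; i < 4; ++i)                          // a few readers
> > >     threads.create_thread(ConsumerThread(&es));
> > > threads.join_all();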
>
>
> _______________________________________________
> Boost-users mailing list
> Boost-users_at_[hidden]
> http://lists.boost.org/mailman/listinfo.cgi/boost-users
>