
From: bill_kempf (williamkempf_at_[hidden])
Date: 2002-02-12 14:04:43


--- In boost_at_y..., "Peter Dimov" <pdimov_at_m...> wrote:
> From: "bill_kempf" <williamkempf_at_h...>
> > --- In boost_at_y..., "Peter Dimov" <pdimov_at_m...> wrote:
> > > If that's true, why do people bother with critical sections and
> > > pthread mutexes instead of simply using spinlocks? :-)
> >
> > It is true. There are a couple of reasons to not use a spin lock,
> > though.
> >
> > 1) Spin locks, by themselves, are prone to memory visibility
> > issues.
> >
> > 2) Spin locks use "busy waits" and thus chew up CPU resources
> > while trying to acquire the lock. Unless the lock contention is
> > low this overhead can quickly become detrimental to the
> > performance of the application.
>
> Actually (1) is rarely an issue, since the atomic instructions used
> to implement a spinlock usually act as memory barriers.

This is not necessarily the case. The atomic instructions may only
guarantee the memory visibility of the integral type being acted on;
a full memory barrier need not be issued.
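
For example, given a hypothetical atomic_exchange() primitive that is
an atomic swap with no barrier semantics (some architectures provide
exactly this), mutual exclusion on the flag itself holds, but the
protected data may still be stale. A rough sketch:

long atomic_exchange(volatile long* target, long value); // hypothetical:
                                                         // atomic swap,
                                                         // no barriers

volatile long g_lock = 0;  // 0 = free, 1 = held
int g_shared = 0;          // data "protected" by the lock

void lock()
{
    while (atomic_exchange(&g_lock, 1) != 0)
        ; // spin: the swap is atomic, so only one thread gets in
}

void unlock()
{
    atomic_exchange(&g_lock, 0);
}

void update()
{
    lock();
    g_shared = 42; // nothing orders this write relative to the flag
    unlock();      // store, so the next locker can acquire the lock
                   // and still read a stale g_shared
}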

> (2) is only part of the story. The main problem with spinlocks is
> that, on a single processor system (or when the threads that compete
> for a spinlock are executing on the same CPU), a busy-wait spinlock
> is useless. It spins until the timeslice elapses and can never grab
> the lock, because the other thread cannot release it - it isn't
> running.

Yes, but a spin lock need not behave this way. It can relinquish its
timeslice on the first failed acquisition attempt on single processor
systems. Of course, the act of performing the context switch at this
point can be expensive, but generally not as expensive as most kernel
level synchronization. I've made use of such spin locks on single
processor systems and saw a definite performance boost over Win32
mutexes (though on some Win32 platforms I saw worse performance than
critical sections).
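
Something along these lines (a rough sketch with my own names, built
on the Win32 primitives; not code from any Boost library):

#include <windows.h>

class spin_mutex
{
public:
    spin_mutex() : m_lock(0) { }

    void lock()
    {
        // InterlockedExchange atomically sets the flag and returns
        // its previous value; non-zero means another thread holds
        // the lock. Rather than spin, give up the timeslice so the
        // holder can run - essential on a single processor box.
        while (InterlockedExchange(&m_lock, 1) != 0)
            ::Sleep(0);
    }

    void unlock()
    {
        InterlockedExchange(&m_lock, 0);
    }

private:
    LONG volatile m_lock;
};

On Win32/x86 InterlockedExchange also happens to act as a full memory
barrier, which conveniently addresses point (1) as well.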
 
> Another, less important, problem is that when the threads use a
> tight acquire-operate-release loop, a spinlock is not fair - thread
> A always gets the lock.

That, again, depends on the implementation, though it is true that
spin locks make no fairness guarantees.
 
> Both can be fixed with inserting a ::Sleep(0), or sched_yield,
> although it can be argued that the second problem is user's fault.

The fact is, though, that in this particular case a spin lock might
well prove to be effective... just not very portable.
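
To make the tight-loop case concrete, the yield goes between
iterations (reusing the spin_mutex sketch above; the loop body is
illustrative):

void worker(spin_mutex& mx)
{
    for (;;)
    {
        mx.lock();
        // ... operate on the shared state ...
        mx.unlock();
        ::Sleep(0); // or sched_yield() on POSIX; without this, the
                    // running thread can re-acquire the lock before
                    // the other thread ever sees it released
    }
}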

Bill Kempf

