From: bill_kempf (williamkempf_at_[hidden])
Date: 2002-02-12 14:25:34
--- In boost_at_y..., "Peter Dimov" <pdimov_at_m...> wrote:
> From: "bill_kempf" <williamkempf_at_h...>
> > > Actually (1) is rarely an issue, since the atomic instructions to
> > > implement a spinlock usually act as memory barriers.
> > This is not necessarily the case. The atomic instructions may
> > guarantee the memory visibility of the integral type being acted
> > upon. A full memory barrier need not be used.
> Depends on how the atomic operation is documented to perform. The
> instruction itself may or may not synchronize memory, but the atomic
> primitive is _usually_ documented to act as a memory barrier. It is
> very difficult to use otherwise (see the comments in
> In particular, ::InterlockedXXX (Win32) and atomic_* (linux) are
> documented to be memory barriers... if you can call that
> documentation, but that's another story.
Huh? Win32 has no need for memory barriers and I'm unaware of any
documentation indicating the Interlocked* methods are memory barriers.
> > > (2) is only part of the story. The main problem with spinlocks is
> > > that, on a single processor system (or when the threads that
> > > compete for a spinlock are executing on the same CPU), a
> > > busy-wait spinlock simply spins until the timeslice elapses and
> > > can never grab the lock, because the other thread cannot release
> > > it - it isn't running.
> > Yes, but a spin lock need not behave this way. It can relinquish
> > the timeslice on the first failure in single processor systems.
> Well, it's either a busy-wait loop or it's not, right? By "spinlock"
> above I meant the busy-wait variant that you described in (2).
Giving up the time slice does not mean the wait is no longer a busy
wait. There's much more to non-busy waiting than this.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk