
From: Peter Dimov (pdimov_at_[hidden])
Date: 2002-06-03 10:36:24


From: "Alexander Terekhov" <terekhov_at_[hidden]>
> Darin Adler wrote:
> [...]
> > Once we found this out, Peter Dimov and others changed smart_ptr
> > so that it doesn't use this header any more. They also made other
> > improvements to the thread safety code to make it simpler and faster.
> > The current version of Boost, 1.28.0, has this fixed.
>
> Memory visibility (acquire-on-lock/release-on-unlock semantics) aside,
> the 'lwm' stuff in CVS (the one full of sched_yield()/winapi::Sleep(0))
> is BROKEN... unless, of course, ALL boost MT client applications are
> meant for deployment on scheduling allocation domains of size ONE
> (uniprocessors) ONLY... WITH FIFO/RR priority scheduling {POSIX realtime
> option} among *ALL-EQUAL-PRTY* threads.
>
> regards,
> alexander.
>
> P.S. "....Similarly, sched_yield() can be used to resolve some problems
> on a uniprocessor, though such problems can usually be solved more
> cleanly in other ways. Using sched_yield() will never solve a problem
> (unless the problem statement is "the performance is too good") on a
> multiprocessor, and you should never use it there."
> --Butenhof_at_c.p.t

Butenhof is right; sched_yield does not affect the correctness of the code,
only its performance.

In this particular case, the yield is necessary. On a single processor, once
the first attempt fails the lock cannot be acquired at all during that time
slice, because the thread holding it cannot run to release it; without the
yield the thread would just spin madly until its time slice elapses. On
multiple CPUs, there is a chance that a subsequent iteration will succeed, but
my timing tests showed that, contrary to my intuitive understanding, an
immediate yield offers the best performance.
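
To make the shape of the loop concrete, here is a minimal sketch of a
spin-then-yield acquire. The names and the use of std::atomic are illustrative
only; this is not the actual lwm_* code, which uses platform-specific
primitives:

    // Sketch only: spin-then-yield lock acquisition (illustrative names)
    #include <sched.h>      // sched_yield
    #include <atomic>

    class spinlock
    {
        std::atomic_flag busy_ = ATOMIC_FLAG_INIT;

    public:
        void lock()
        {
            // If the test-and-set fails, yield immediately instead of
            // spinning: on a uniprocessor the lock holder cannot run
            // (and release the lock) until we give up our time slice.
            while (busy_.test_and_set(std::memory_order_acquire))
            {
                sched_yield();
            }
        }

        void unlock()
        {
            busy_.clear(std::memory_order_release);
        }
    };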

sched_yield aside, lwm_* is most certainly "broken" as a general
synchronization primitive; it's only useful in "lock, do something quickly
(like increment a count), unlock" scenarios.
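
As a rough illustration of the kind of scenario meant here, the critical
section should be nothing more than a quick update, such as bumping a use
count. The names below are hypothetical, not the actual smart_ptr code:

    // Sketch only: a short critical section under a lightweight spin lock
    #include <sched.h>
    #include <atomic>

    std::atomic_flag count_lock = ATOMIC_FLAG_INIT;
    long use_count = 0;

    void add_ref()
    {
        while (count_lock.test_and_set(std::memory_order_acquire))
            sched_yield();              // contended: give up the time slice

        ++use_count;                    // quick work only; no I/O, no blocking calls

        count_lock.clear(std::memory_order_release);
    }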

