
Boost Users :

Subject: Re: [Boost-users] Q: N00b spinlock
From: Tim Blechmann (tim_at_[hidden])
Date: 2010-01-21 14:48:14


>>>> Once you are experienced with MT programming, STILL stick to simple
>>>> mutex locking/unlocking.
>>>
>>> So why would these people use spinlocks?
>>
>> spinlocks are usually faster to acquire and release than mutexes, but
>> require busy waiting, which may cause an overall performance impact
>> if the critical section takes some time to execute (especially if the
>> critical section itself is blocking).
>> a calling thread would be suspended when waiting for a mutex to be
>> locked, which could lead to some issues for certain (actually very
>> few) use cases.
>
> I have experimented with spinlocks within my library, and even if I
> only use them for mutexes whose locks are usually very short-lived, I
> could produce use cases that end up in a performance disaster, with 80%
> of the CPU consumed by yield() system calls,
> especially when 3 or more threads contend for a mutex, so 2 of them
> are yield()ing:
> if thread 1 has acquired the mutex and threads 2 and 3 are yield()ing,
> it seems the scheduler constantly switches between threads 2 and 3
> until they've used up a full time slice, before thread 1 is continued
> and releases the lock.

as a rule of thumb, i am using spinlocks in the following cases:
- locks are acquired very rarely
- the critical region is very short (a few instructions)
- the number of threads to be synchronized is lower than the number of
cpu cores
- the synchronized threads have a similar (possibly real-time) priority

btw, using yield() inside a spinlock may cause unwanted behavior, since
it preempts the calling thread, but the scheduler keeps it as `ready'
instead of `blocked'. so it may wake up before it can acquire the
mutex, burning cpu cycles.
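a sketch of such a spin-then-yield lock (the spin count is an arbitrary
assumption, not a tuned value), showing where the cycles go:

```cpp
#include <atomic>
#include <thread>

// spin-then-yield lock sketch: spins a bounded number of times, then
// gives up the time slice. the yielding thread stays `ready' in the
// scheduler, so under contention it may be rescheduled before the lock
// is released, exactly the behavior described above.
class yielding_spinlock
{
    std::atomic_flag flag;

public:
    yielding_spinlock(void)
    {
        flag.clear();
    }

    void lock(void)
    {
        for (;;)
        {
            // bounded spin before giving up the time slice;
            // 1000 iterations is an illustrative value only
            for (int i = 0; i != 1000; ++i)
                if (!flag.test_and_set(std::memory_order_acquire))
                    return;
            std::this_thread::yield(); // thread remains `ready', not `blocked'
        }
    }

    void unlock(void)
    {
        flag.clear(std::memory_order_release);
    }
};
```

a real mutex avoids this by putting waiters into a `blocked' state, so
the scheduler never wakes them until the lock is actually released.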

tim

-- 
tim_at_[hidden]
http://tim.klingt.org
Lesser artists borrow, great artists steal.
  Igor Stravinsky



Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net