Subject: Re: [boost] [thread] semaphore
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2013-09-19 11:46:58


On 18 Sep 2013 at 13:03, Tim Blechmann wrote:

> >> many use cases is not necessary. and locks are not free, as they involve
> >> memory barriers and atomic operations. and how many platforms implement
> >> PI mutexes?
> >
> > Don't semaphores involve memory barriers and/or atomics too? POSIX
> > semaphores do.
> >
> > (N.B. I'm not disputing that semaphores have less overhead than CVs.)
>
> they do, but much less: according to some micro-benchmarks with much
> contention, dispatch semaphores on osx are about 80 times faster than
> CVs, while posix semaphores on linux are about 30 times faster.

Semaphores are ripe for misuse. There are good reasons their use
isn't encouraged, especially as it's extremely easy to roll your own
from an atomic, a mutex and a condvar which will be just as fast as
any native implementation. I might also add that your figures may be
good for an uncontended semaphore, but I'd bet you top dollar that
semaphores perform terribly when contended, whereas CVs will be less
pathological (agreed, this depends on the CPU in question).
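To illustrate what I mean (a minimal sketch only, assuming C++11
<atomic>, <mutex> and <condition_variable>; the class and member
names are mine, not anything Boost provides): the atomic counter
takes the uncontended fast path, and the mutex/condvar pair is only
touched when a waiter actually has to block.

#include <atomic>
#include <condition_variable>
#include <mutex>

// Illustrative only: a counting semaphore built from an atomic
// counter plus a mutex/condvar slow path. Not a proposed interface.
class fast_semaphore
{
  std::atomic<int> count_;
  std::mutex m_;
  std::condition_variable cv_;
  int wakeups_ = 0; // protected by m_

public:
  explicit fast_semaphore(int initial = 0) : count_(initial) {}

  void post()
  {
    // A negative old count means at least one thread is waiting
    // (or about to wait), so hand it a wakeup under the lock.
    if (count_.fetch_add(1, std::memory_order_release) < 0)
    {
      std::lock_guard<std::mutex> lk(m_);
      ++wakeups_;
      cv_.notify_one();
    }
  }

  void wait()
  {
    // Fast path: a positive old count means we consumed a unit
    // without ever touching the mutex.
    if (count_.fetch_sub(1, std::memory_order_acquire) > 0)
      return;
    // Slow path: block until a post() hands us a wakeup.
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return wakeups_ > 0; });
    --wakeups_;
  }
};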

I would also be far more concerned with correctness than performance
in any threading primitive. If the power users really need
performance, they'll roll their own. Boost is no place for such
brittle threading primitives with many caveats and/or
architecture-specific assumptions.

(That said, Boost could do with providing the underlying mechanism
for a future<> wakeup so you don't need the baggage of a full future.
future<void> would work just fine by me.)
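
Purely as a sketch of the sort of thing I mean, a
promise<void>/future<void> pair already gives you a one-shot wakeup
today (std:: shown here, the boost:: equivalents work the same way);
it just drags in more machinery than the wakeup itself needs:

#include <future>
#include <thread>

int main()
{
  std::promise<void> ready;
  std::future<void> wakeup = ready.get_future();

  std::thread waiter([&wakeup] {
    wakeup.wait(); // blocks until signalled; no value is carried
    // ... carry on once woken ...
  });

  ready.set_value(); // one-shot wakeup of the waiter
  waiter.join();
}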

Niall

-- 
Currently unemployed and looking for work.
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/



Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk