Subject: Re: [boost] [atomic] comments
From: Helge Bahmann (hcb_at_[hidden])
Date: 2011-10-31 16:33:19
On Monday 31 October 2011 19:10:05 Andrey Semashev wrote:
> On Monday, October 31, 2011 15:01:35 Helge Bahmann wrote:
> > a) IMHO atomics for inter-process coordination is the exception, while
> > inter-thread coordination is the norm
> I have to disagree. Atomics may be used for communication between processes
> just as well as between threads, if not better.
the question is not whether they *can* be used, but which case is more
common -- and considering the enormous number of simple atomic
counters, "init-once" atomic pointers etc. found in typical applications, I
doubt that inter-process coordination accounts for more than 1% of use cases
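To make the point concrete, here is a minimal sketch of the two intra-process patterns mentioned above (a plain atomic counter and an "init-once" atomic pointer). It uses std::atomic for brevity; boost::atomic offers the same interface. The names `ref_count`, `config`, and `get_config` are illustrative, not from Boost.

```cpp
#include <atomic>

// A plain reference counter -- the most common atomic use case.
std::atomic<int> ref_count{0};

// An "init-once" published pointer.
struct config { int value; };
std::atomic<config*> global_config{nullptr};

config* get_config() {
    config* p = global_config.load(std::memory_order_acquire);
    if (!p) {
        config* fresh = new config{42};
        // Publish only if nobody else got there first.
        if (global_config.compare_exchange_strong(
                p, fresh,
                std::memory_order_release,
                std::memory_order_acquire)) {
            p = fresh;
        } else {
            delete fresh;  // lost the race; p now holds the winner
        }
    }
    return p;
}
```

Both patterns are strictly intra-process: the atomics live in ordinary (non-shared) memory and coordinate threads only.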
> Well, yes and no. Consider this example, which illustrates the control
> structure of a lock-free ring buffer:
> struct index_t {
>     uint32_t version;
>     uint32_t index;
> };
> I want to be able to write atomic< index_t > so that it compiles and works
> on any platform, even without 64-bit CAS support in hardware. It may work
> slower, yes, but it will.
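The control structure being described might look like the following sketch: the version field is bumped on every update so a single wide compare-and-swap can detect ABA. Whether `atomic<index_t>` is lock-free then depends on whether the platform provides a 64-bit CAS; the `head` and `advance` names are illustrative, not from an actual ring-buffer implementation.

```cpp
#include <atomic>
#include <cstdint>

struct index_t {
    std::uint32_t version;
    std::uint32_t index;
};

// Combined control word: lock-free only if the target has 64-bit CAS,
// otherwise it needs some fallback (the subject of this discussion).
std::atomic<index_t> head{{0, 0}};

// Advance the ring index, bumping the version to defeat ABA.
void advance(std::uint32_t capacity) {
    index_t old_val = head.load(std::memory_order_relaxed);
    index_t new_val;
    do {
        new_val.version = old_val.version + 1;
        new_val.index   = (old_val.index + 1) % capacity;
    } while (!head.compare_exchange_weak(old_val, new_val,
                  std::memory_order_acq_rel,
                  std::memory_order_relaxed));
}
```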
what's wrong with just implementing a platform-specific "ipc queue"?
mind that you are going to rely on platform specifics as soon as you start
considering things such as sleep/wakeup for congestion control
> > I have serious difficulties justifying such a change, maybe others can
> > offer their opinion?
> I think having a mutex per atomic instance is overkill. However, a
> spinlock per instance might just be the silver bullet. The size overhead
> should be quite modest (1 to 4 bytes, I presume) and the performance would
> still be decent. After all, atomic<> is intended to be used with relatively
> small types and simple operations, such as copying and arithmetic. In
> other cases it is natural to use explicit mutexes, and we could emphasise
> this in the docs.
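The spinlock-per-instance fallback under discussion could be sketched roughly as follows: each instance carries a one-byte flag that is spun on for the duration of the operation. The class and member names here are illustrative, not Boost.Atomic's actual internals.

```cpp
#include <atomic>

// Fallback atomic<T> guarded by a per-instance spinlock; used when no
// hardware primitive covers sizeof(T). Adds ~1 byte per instance.
template <typename T>
class locked_atomic {
    mutable std::atomic_flag lock_ = ATOMIC_FLAG_INIT;
    T value_;
public:
    explicit locked_atomic(T v) : value_(v) {}

    T load() const {
        while (lock_.test_and_set(std::memory_order_acquire)) {}
        T v = value_;
        lock_.clear(std::memory_order_release);
        return v;
    }

    void store(T v) {
        while (lock_.test_and_set(std::memory_order_acquire)) {}
        value_ = v;
        lock_.clear(std::memory_order_release);
    }

    bool compare_exchange(T& expected, T desired) {
        while (lock_.test_and_set(std::memory_order_acquire)) {}
        bool ok = (value_ == expected);
        if (ok) value_ = desired; else expected = value_;
        lock_.clear(std::memory_order_release);
        return ok;
    }
};
```

Note this sketch only coordinates threads within one process; a spinlock in shared memory would need additional care (and is exactly why the inter-process case is harder).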
might be possible; the problem is that this assumes atomic<something> is
available at all -- as soon as you hit a platform where everything
hits the fallback, you just have to use a mutex and pay its cost
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk