Subject: Re: [boost] [atomic] comments
From: Tim Blechmann (tim_at_[hidden])
Date: 2011-10-22 14:32:44
> > then we need some kind of interprocess-specific atomic ... maybe as part
> > of boost.interprocess ... iac, maybe we should provide an
> > implementation which somehow matches the behavior of c++11 compilers
> > ...
> well if the atomics are truly atomic, then BOOST_ATOMIC_*_LOCK_FREE == 2
> and I find a platform where you cannot use them safely between processes
> difficult to imagine (not that something like that could not exist)
one would have to do the dispatching logic in the preprocessor, since one
cannot dispatch on a typedef at compile time.
> if they are not atomic, then you are going to hit a "fallback-via locking"
> path in which case you are almost certainly better off picking an
> interprocess communication mechanism that just uses locking directly
true, but at the cost of complicating the program logic. however, there are
cases where you are happy not to have to change the program, at the cost of
performance on legacy hardware.
> > it would be equally correct to have something like:
> > static bool has_cmpxchg16b = query_cpuid_for_cmpxchg16b()
> > if (has_cmpxchg16b)
> > use_cmpxchg16b();
> > else
> > use_fallback();
> > less bloat and prbly only a minor performance hit ;)
> problematic because the compiler must insert a lock to ensure thread-safe
> initialization of the "static bool" (thus it is by definition not
> "lock-free" any more)
well, one could also set a static variable from a function that runs before
main (e.g. via __attribute__((constructor)))
> > in the average, but not in the worst case. for real-time systems it is
> > not acceptable that the os preempts a real-time thread while it is
> > holding a spinlock.
> prio-inheriting mutexes are usually much faster than cmpxchg16b -- use these
> for hard real-time (changing the fallback path to use PI mutexes as well
> might even be something to consider)
do you have some numbers which latencies can be achieved with PI mutexes?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk