Boost Users :
From: David Greene (greened_at_[hidden])
Date: 2006-02-22 11:31:06
Gottlob Frege wrote:
> Very long answer: more precisely, the problem isn't really 'cache
> coherence' in the traditional sense (that the cache for your CPU is
> consistent with main memory, etc.); it is the ordering of memory reads
> and writes. Mutexes are guaranteed to do whatever is necessary to make
> sure all pending reads complete before you get the mutex lock (i.e.,
> they force a memory 'acquire' barrier), and to make sure all writes
> are written out before the mutex is released (a 'release' memory
> barrier).
I understand what you're saying, and I agree that this is the way
hardware and software are currently implemented in the vast majority
of cases.
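In that conventional model the barriers sit exactly where you describe.
A minimal sketch of a spinlock, written with the C++11 atomics (which
postdate this thread) purely to make the barrier placement concrete:

    #include <atomic>

    // Sketch only: a spinlock whose lock() is an acquire barrier and
    // whose unlock() is a release barrier, matching the guarantees
    // described above.
    class spinlock {
        std::atomic<bool> locked{false};
    public:
        void lock() {
            // Acquire ordering: reads/writes in the critical section
            // cannot be reordered before the lock is taken.
            while (locked.exchange(true, std::memory_order_acquire))
                ; // spin until the previous holder releases
        }
        void unlock() {
            // Release ordering: all writes made while holding the lock
            // become visible before the lock is seen as free.
            locked.store(false, std::memory_order_release);
        }
    };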
However, the concepts of serializing access and maintaining memory
consistency and coherence are orthogonal. There have been architectures
(mostly in academia) that require explicit software cache control, for
example; on such a machine, your examples would also have to include a
cache flush, roughly as sketched below. The theory is that by
separating these concerns, the programmer (or compiler) has more
freedom to loosen up the implementation to match the application's
weaker requirements, thereby gaining performance.
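For instance, on a machine with software-managed caches, a
mutex-protected update might have to look something like this (the
cache_invalidate_region/cache_writeback_region names are invented for
illustration; such a machine would supply its own primitives):

    #include <cstddef>

    // Hypothetical cache-control primitives -- the names are invented
    // here; a machine with software-managed caches provides its own.
    void cache_invalidate_region(void* p, std::size_t n);
    void cache_writeback_region(void* p, std::size_t n);

    struct Mutex { void lock(); void unlock(); };
    struct SharedData { int value; };

    void locked_increment(Mutex& m, SharedData& d) {
        m.lock();
        // No hardware coherence: explicitly discard any stale cached
        // copy before reading the shared data.
        cache_invalidate_region(&d, sizeof d);
        d.value += 1;
        // Explicitly flush the dirty lines back to memory before the
        // next acquirer can observe the unlock.
        cache_writeback_region(&d, sizeof d);
        m.unlock();
    }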
We're starting to see this much more in HPC systems, for example, where
a multitude of synchronization primitives is available, with varying
semantics that imply different performance tradeoffs. Some machines
cache remote memory (often under software control); others don't.
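As a small illustration of that tradeoff (again in post-2006 C++
atomics): a counter that needs only atomicity can run with relaxed
ordering and pay for ordering at a single publication point:

    #include <atomic>

    std::atomic<int>  hits{0};
    std::atomic<bool> done{false};

    void producer() {
        for (int i = 0; i < 1000; ++i)
            hits.fetch_add(1, std::memory_order_relaxed); // atomicity only
        done.store(true, std::memory_order_release);      // one ordered publish
    }

    int consumer() {
        while (!done.load(std::memory_order_acquire))
            ; // this acquire pairs with the release above
        return hits.load(std::memory_order_relaxed);      // guaranteed 1000
    }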
So I agree with you for the typical machine architecture of today, but
that won't necessarily hold in the future.
-Dave