Subject: [Boost-commit] svn:boost r81976 - in branches/release: boost boost/atomic boost/atomic/detail boost/lockfree boost/lockfree/detail boost/lockfree/detail/atomic boost/lockfree/detail/atomic/atomic boost/lockfree/detail/atomic/atomic/detail doc doc/html doc/src libs libs/atomic libs/atomic/build libs/atomic/doc libs/atomic/src libs/atomic/test libs/lockfree libs/lockfree/doc libs/lockfree/examples libs/lockfree/test status
From: andrey.semashev_at_[hidden]
Date: 2012-12-15 13:28:35


Author: andysem
Date: 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
New Revision: 81976
URL: http://svn.boost.org/trac/boost/changeset/81976

Log:
Boost.Atomic and Boost.Lockfree merged from trunk.
Added:
   branches/release/boost/atomic/
   branches/release/boost/atomic.hpp (contents, props changed)
   branches/release/boost/atomic/atomic.hpp (contents, props changed)
   branches/release/boost/atomic/detail/
   branches/release/boost/atomic/detail/base.hpp (contents, props changed)
   branches/release/boost/atomic/detail/cas32strong.hpp (contents, props changed)
   branches/release/boost/atomic/detail/cas32weak.hpp (contents, props changed)
   branches/release/boost/atomic/detail/cas64strong.hpp (contents, props changed)
   branches/release/boost/atomic/detail/config.hpp (contents, props changed)
   branches/release/boost/atomic/detail/gcc-alpha.hpp (contents, props changed)
   branches/release/boost/atomic/detail/gcc-armv6plus.hpp (contents, props changed)
   branches/release/boost/atomic/detail/gcc-cas.hpp (contents, props changed)
   branches/release/boost/atomic/detail/gcc-ppc.hpp (contents, props changed)
   branches/release/boost/atomic/detail/gcc-sparcv9.hpp (contents, props changed)
   branches/release/boost/atomic/detail/gcc-x86.hpp (contents, props changed)
   branches/release/boost/atomic/detail/generic-cas.hpp (contents, props changed)
   branches/release/boost/atomic/detail/interlocked.hpp (contents, props changed)
   branches/release/boost/atomic/detail/linux-arm.hpp (contents, props changed)
   branches/release/boost/atomic/detail/lockpool.hpp (contents, props changed)
   branches/release/boost/atomic/detail/platform.hpp (contents, props changed)
   branches/release/boost/atomic/detail/type-classification.hpp (contents, props changed)
   branches/release/boost/atomic/detail/windows.hpp (contents, props changed)
   branches/release/boost/lockfree/
   branches/release/boost/lockfree/detail/
   branches/release/boost/lockfree/detail/atomic/
   branches/release/boost/lockfree/detail/atomic.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/atomic/atomic/
   branches/release/boost/lockfree/detail/atomic/atomic/detail/
   branches/release/boost/lockfree/detail/branch_hints.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/copy_payload.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/freelist.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/parameter.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/prefix.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/tagged_ptr.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/tagged_ptr_dcas.hpp (contents, props changed)
   branches/release/boost/lockfree/detail/tagged_ptr_ptrcompression.hpp (contents, props changed)
   branches/release/boost/lockfree/policies.hpp (contents, props changed)
   branches/release/boost/lockfree/queue.hpp (contents, props changed)
   branches/release/boost/lockfree/spsc_queue.hpp (contents, props changed)
   branches/release/boost/lockfree/stack.hpp (contents, props changed)
   branches/release/doc/html/atomic.html (contents, props changed)
   branches/release/doc/html/lockfree.html (contents, props changed)
   branches/release/libs/atomic/
   branches/release/libs/atomic/build/
   branches/release/libs/atomic/build/Jamfile.v2 (contents, props changed)
   branches/release/libs/atomic/doc/
   branches/release/libs/atomic/doc/Jamfile.v2 (contents, props changed)
   branches/release/libs/atomic/doc/atomic.hpp (contents, props changed)
   branches/release/libs/atomic/doc/atomic.qbk (contents, props changed)
   branches/release/libs/atomic/doc/examples.qbk (contents, props changed)
   branches/release/libs/atomic/doc/platform.qbk (contents, props changed)
   branches/release/libs/atomic/index.html (contents, props changed)
   branches/release/libs/atomic/src/
   branches/release/libs/atomic/src/lockpool.cpp (contents, props changed)
   branches/release/libs/atomic/test/
   branches/release/libs/atomic/test/Jamfile.v2 (contents, props changed)
   branches/release/libs/atomic/test/api_test_helpers.hpp (contents, props changed)
   branches/release/libs/atomic/test/atomicity.cpp (contents, props changed)
   branches/release/libs/atomic/test/fallback_api.cpp (contents, props changed)
   branches/release/libs/atomic/test/lockfree.cpp (contents, props changed)
   branches/release/libs/atomic/test/native_api.cpp (contents, props changed)
   branches/release/libs/atomic/test/ordering.cpp (contents, props changed)
   branches/release/libs/lockfree/
   branches/release/libs/lockfree/doc/
   branches/release/libs/lockfree/doc/Jamfile.v2 (contents, props changed)
   branches/release/libs/lockfree/doc/lockfree.qbk (contents, props changed)
   branches/release/libs/lockfree/examples/
   branches/release/libs/lockfree/examples/Jamfile.v2 (contents, props changed)
   branches/release/libs/lockfree/examples/queue.cpp (contents, props changed)
   branches/release/libs/lockfree/examples/spsc_queue.cpp (contents, props changed)
   branches/release/libs/lockfree/examples/stack.cpp (contents, props changed)
   branches/release/libs/lockfree/index.html (contents, props changed)
   branches/release/libs/lockfree/test/
   branches/release/libs/lockfree/test/Jamfile.v2 (contents, props changed)
   branches/release/libs/lockfree/test/freelist_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/queue_bounded_stress_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/queue_fixedsize_stress_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/queue_interprocess_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/queue_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/queue_unbounded_stress_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/spsc_queue_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/stack_bounded_stress_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/stack_fixedsize_stress_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/stack_interprocess_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/stack_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/stack_unbounded_stress_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/tagged_ptr_test.cpp (contents, props changed)
   branches/release/libs/lockfree/test/test_common.hpp (contents, props changed)
   branches/release/libs/lockfree/test/test_helpers.hpp (contents, props changed)
Text files modified:
   branches/release/doc/Jamfile.v2 | 5 +++++
   branches/release/doc/src/boost.xml | 4 ++++
   branches/release/libs/libraries.htm | 4 ++++
   branches/release/libs/maintainers.txt | 2 ++
   branches/release/status/Jamfile.v2 | 2 ++
   5 files changed, 17 insertions(+), 0 deletions(-)
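
For reference, a minimal usage sketch of the two libraries this merge brings into the release branch (illustrative only, not part of the commit; targets without native atomic support additionally need the compiled lock pool from libs/atomic/src/lockpool.cpp):

    #include <boost/atomic.hpp>
    #include <boost/lockfree/queue.hpp>
    #include <cassert>

    int main()
    {
        // lock-free atomic counter on most platforms
        boost::atomic<int> counter(0);
        counter.fetch_add(1, boost::memory_order_relaxed);

        // queue with a fixed number of preallocated nodes
        boost::lockfree::queue<int> q(128);
        q.push(42);

        int value = 0;
        bool popped = q.pop(value);
        assert(popped && value == 42);

        return counter.load();
    }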

Added: branches/release/boost/atomic.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,18 @@
+#ifndef BOOST_ATOMIC_HPP
+#define BOOST_ATOMIC_HPP
+
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// This header includes all Boost.Atomic public headers
+
+#include <boost/atomic/atomic.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+#endif

Added: branches/release/boost/atomic/atomic.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/atomic.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,162 @@
+#ifndef BOOST_ATOMIC_ATOMIC_HPP
+#define BOOST_ATOMIC_ATOMIC_HPP
+
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+
+#include <boost/memory_order.hpp>
+
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/platform.hpp>
+#include <boost/atomic/detail/type-classification.hpp>
+#include <boost/type_traits/is_signed.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+
+#ifndef BOOST_ATOMIC_CHAR_LOCK_FREE
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_CHAR16_T_LOCK_FREE
+#define BOOST_ATOMIC_CHAR16_T_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_CHAR32_T_LOCK_FREE
+#define BOOST_ATOMIC_CHAR32_T_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_WCHAR_T_LOCK_FREE
+#define BOOST_ATOMIC_WCHAR_T_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_SHORT_LOCK_FREE
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_INT_LOCK_FREE
+#define BOOST_ATOMIC_INT_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_LONG_LOCK_FREE
+#define BOOST_ATOMIC_LONG_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_LLONG_LOCK_FREE
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_POINTER_LOCK_FREE
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 0
+#endif
+
+#define BOOST_ATOMIC_ADDRESS_LOCK_FREE BOOST_ATOMIC_POINTER_LOCK_FREE
+
+#ifndef BOOST_ATOMIC_BOOL_LOCK_FREE
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 0
+#endif
+
+#ifndef BOOST_ATOMIC_THREAD_FENCE
+#define BOOST_ATOMIC_THREAD_FENCE 0
+inline void atomic_thread_fence(memory_order)
+{
+}
+#endif
+
+#ifndef BOOST_ATOMIC_SIGNAL_FENCE
+#define BOOST_ATOMIC_SIGNAL_FENCE 0
+inline void atomic_signal_fence(memory_order order)
+{
+ atomic_thread_fence(order);
+}
+#endif
+
+template<typename T>
+class atomic :
+ public atomics::detail::base_atomic<T, typename atomics::detail::classify<T>::type, atomics::detail::storage_size_of<T>::value, boost::is_signed<T>::value >
+{
+private:
+ typedef T value_type;
+ typedef atomics::detail::base_atomic<T, typename atomics::detail::classify<T>::type, atomics::detail::storage_size_of<T>::value, boost::is_signed<T>::value > super;
+public:
+ atomic(void) : super() {}
+ explicit atomic(const value_type & v) : super(v) {}
+
+ atomic & operator=(value_type v) volatile
+ {
+ super::operator=(v);
+ return *const_cast<atomic *>(this);
+ }
+private:
+ atomic(const atomic &) /* =delete */ ;
+ atomic & operator=(const atomic &) /* =delete */ ;
+};
+
+typedef atomic<char> atomic_char;
+typedef atomic<unsigned char> atomic_uchar;
+typedef atomic<signed char> atomic_schar;
+typedef atomic<uint8_t> atomic_uint8_t;
+typedef atomic<int8_t> atomic_int8_t;
+typedef atomic<unsigned short> atomic_ushort;
+typedef atomic<short> atomic_short;
+typedef atomic<uint16_t> atomic_uint16_t;
+typedef atomic<int16_t> atomic_int16_t;
+typedef atomic<unsigned int> atomic_uint;
+typedef atomic<int> atomic_int;
+typedef atomic<uint32_t> atomic_uint32_t;
+typedef atomic<int32_t> atomic_int32_t;
+typedef atomic<unsigned long> atomic_ulong;
+typedef atomic<long> atomic_long;
+typedef atomic<uint64_t> atomic_uint64_t;
+typedef atomic<int64_t> atomic_int64_t;
+#ifdef BOOST_HAS_LONG_LONG
+typedef atomic<boost::ulong_long_type> atomic_ullong;
+typedef atomic<boost::long_long_type> atomic_llong;
+#endif
+typedef atomic<void*> atomic_address;
+typedef atomic<bool> atomic_bool;
+typedef atomic<wchar_t> atomic_wchar_t;
+#if !defined(BOOST_NO_CXX11_CHAR16_T)
+typedef atomic<char16_t> atomic_char16_t;
+#endif
+#if !defined(BOOST_NO_CXX11_CHAR32_T)
+typedef atomic<char32_t> atomic_char32_t;
+#endif
+
+#ifndef BOOST_ATOMIC_FLAG_LOCK_FREE
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 0
+class atomic_flag
+{
+public:
+ atomic_flag(void) : v_(false) {}
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst)
+ {
+ return v_.exchange(true, order);
+ }
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ v_.store(false, order);
+ }
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ atomic<bool> v_;
+};
+#endif
+
+}
+
+#endif
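
As a usage note, the atomic_flag fallback defined above follows the C++11 interface; a typical application is a minimal spin lock (illustrative sketch only, not part of the committed sources):

    #include <boost/atomic.hpp>

    class spin_lock
    {
    public:
        spin_lock() {}

        void lock()
        {
            // spin until the flag was previously clear
            while (flag_.test_and_set(boost::memory_order_acquire))
            {
            }
        }

        void unlock()
        {
            flag_.clear(boost::memory_order_release);
        }

    private:
        boost::atomic_flag flag_;   // noncopyable, initially clear
    };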

Added: branches/release/boost/atomic/detail/base.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/base.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,519 @@
+#ifndef BOOST_ATOMIC_DETAIL_BASE_HPP
+#define BOOST_ATOMIC_DETAIL_BASE_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// Base class definition and fallback implementation.
+// To be overridden (through partial specialization) by
+// platform implementations.
+
+#include <string.h>
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/lockpool.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+#define BOOST_ATOMIC_DECLARE_BASE_OPERATORS \
+ operator value_type(void) volatile const \
+ { \
+ return load(memory_order_seq_cst); \
+ } \
+ \
+ this_type & \
+ operator=(value_type v) volatile \
+ { \
+ store(v, memory_order_seq_cst); \
+ return *const_cast<this_type *>(this); \
+ } \
+ \
+ bool \
+ compare_exchange_strong( \
+ value_type & expected, \
+ value_type desired, \
+ memory_order order = memory_order_seq_cst) volatile \
+ { \
+ return compare_exchange_strong(expected, desired, order, calculate_failure_order(order)); \
+ } \
+ \
+ bool \
+ compare_exchange_weak( \
+ value_type & expected, \
+ value_type desired, \
+ memory_order order = memory_order_seq_cst) volatile \
+ { \
+ return compare_exchange_weak(expected, desired, order, calculate_failure_order(order)); \
+ } \
+ \
+
+#define BOOST_ATOMIC_DECLARE_ADDITIVE_OPERATORS \
+ value_type \
+ operator++(int) volatile \
+ { \
+ return fetch_add(1); \
+ } \
+ \
+ value_type \
+ operator++(void) volatile \
+ { \
+ return fetch_add(1) + 1; \
+ } \
+ \
+ value_type \
+ operator--(int) volatile \
+ { \
+ return fetch_sub(1); \
+ } \
+ \
+ value_type \
+ operator--(void) volatile \
+ { \
+ return fetch_sub(1) - 1; \
+ } \
+ \
+ value_type \
+ operator+=(difference_type v) volatile \
+ { \
+ return fetch_add(v) + v; \
+ } \
+ \
+ value_type \
+ operator-=(difference_type v) volatile \
+ { \
+ return fetch_sub(v) - v; \
+ } \
+
+#define BOOST_ATOMIC_DECLARE_BIT_OPERATORS \
+ value_type \
+ operator&=(difference_type v) volatile \
+ { \
+ return fetch_and(v) & v; \
+ } \
+ \
+ value_type \
+ operator|=(difference_type v) volatile \
+ { \
+ return fetch_or(v) | v; \
+ } \
+ \
+ value_type \
+ operator^=(difference_type v) volatile \
+ { \
+ return fetch_xor(v) ^ v; \
+ } \
+
+#define BOOST_ATOMIC_DECLARE_POINTER_OPERATORS \
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS \
+ BOOST_ATOMIC_DECLARE_ADDITIVE_OPERATORS \
+
+#define BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS \
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS \
+ BOOST_ATOMIC_DECLARE_ADDITIVE_OPERATORS \
+ BOOST_ATOMIC_DECLARE_BIT_OPERATORS \
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+inline memory_order
+calculate_failure_order(memory_order order)
+{
+ switch(order) {
+ case memory_order_acq_rel:
+ return memory_order_acquire;
+ case memory_order_release:
+ return memory_order_relaxed;
+ default:
+ return order;
+ }
+}
+
+template<typename T, typename C, unsigned int Size, bool Sign>
+class base_atomic {
+private:
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef lockpool::scoped_lock guard_type;
+public:
+ base_atomic(void) {}
+
+ explicit base_atomic(const value_type & v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+
+ void
+ store(value_type const& v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<char *>(v_));
+
+ memcpy(const_cast<char *>(v_), &v, sizeof(value_type));
+ }
+
+ value_type
+ load(memory_order /*order*/ = memory_order_seq_cst) volatile const
+ {
+ guard_type guard(const_cast<const char *>(v_));
+
+ value_type v;
+ memcpy(&v, const_cast<const char *>(v_), sizeof(value_type));
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order /*success_order*/,
+ memory_order /*failure_order*/) volatile
+ {
+ guard_type guard(const_cast<char *>(v_));
+
+ if (memcmp(const_cast<char *>(v_), &expected, sizeof(value_type)) == 0) {
+ memcpy(const_cast<char *>(v_), &desired, sizeof(value_type));
+ return true;
+ } else {
+ memcpy(&expected, const_cast<char *>(v_), sizeof(value_type));
+ return false;
+ }
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order /*order*/=memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<char *>(v_));
+
+ value_type tmp;
+ memcpy(&tmp, const_cast<char *>(v_), sizeof(value_type));
+
+ memcpy(const_cast<char *>(v_), &v, sizeof(value_type));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return false;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+
+ char v_[sizeof(value_type)];
+};
+
+template<typename T, unsigned int Size, bool Sign>
+class base_atomic<T, int, Size, Sign> {
+private:
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef lockpool::scoped_lock guard_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ v_ = v;
+ }
+
+ value_type
+ load(memory_order /*order*/ = memory_order_seq_cst) const volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type v = const_cast<const volatile value_type &>(v_);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ = v;
+ return old;
+ }
+
+ bool
+ compare_exchange_strong(value_type & expected, value_type desired,
+ memory_order /*success_order*/,
+ memory_order /*failure_order*/) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ if (v_ == expected) {
+ v_ = desired;
+ return true;
+ } else {
+ expected = v_;
+ return false;
+ }
+ }
+
+ bool
+ compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ += v;
+ return old;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ -= v;
+ return old;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ &= v;
+ return old;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ |= v;
+ return old;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ ^= v;
+ return old;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return false;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, unsigned int Size, bool Sign>
+class base_atomic<T *, void *, Size, Sign> {
+private:
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+ typedef lockpool::scoped_lock guard_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+ v_ = v;
+ }
+
+ value_type
+ load(memory_order /*order*/ = memory_order_seq_cst) const volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type v = const_cast<const volatile value_type &>(v_);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ = v;
+ return old;
+ }
+
+ bool
+ compare_exchange_strong(value_type & expected, value_type desired,
+ memory_order /*success_order*/,
+ memory_order /*failure_order*/) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ if (v_ == expected) {
+ v_ = desired;
+ return true;
+ } else {
+ expected = v_;
+ return false;
+ }
+ }
+
+ bool
+ compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type fetch_add(difference_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ += v;
+ return old;
+ }
+
+ value_type fetch_sub(difference_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ -= v;
+ return old;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return false;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<unsigned int Size, bool Sign>
+class base_atomic<void *, void *, Size, Sign> {
+private:
+ typedef base_atomic this_type;
+ typedef void * value_type;
+ typedef lockpool::scoped_lock guard_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+ v_ = v;
+ }
+
+ value_type
+ load(memory_order /*order*/ = memory_order_seq_cst) const volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type v = const_cast<const volatile value_type &>(v_);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order /*order*/ = memory_order_seq_cst) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ value_type old = v_;
+ v_ = v;
+ return old;
+ }
+
+ bool
+ compare_exchange_strong(value_type & expected, value_type desired,
+ memory_order /*success_order*/,
+ memory_order /*failure_order*/) volatile
+ {
+ guard_type guard(const_cast<value_type *>(&v_));
+
+ if (v_ == expected) {
+ v_ = desired;
+ return true;
+ } else {
+ expected = v_;
+ return false;
+ }
+ }
+
+ bool
+ compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return false;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+}
+}
+}
+
+#endif
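
As a note on this header: the primary template above is the lock pool fallback, which guards a raw byte buffer with a scoped lock and reports is_lock_free() == false. The sketch below shows where it typically kicks in, assuming a payload wider than any native atomic on the target (illustrative only; when this fallback is selected, the compiled lock pool from libs/atomic/src/lockpool.cpp must also be linked):

    #include <boost/atomic.hpp>
    #include <cstdio>
    #include <cstring>

    struct payload
    {
        char data[16];   // wider than the native atomic width on most 32-bit targets
    };

    int main()
    {
        payload p;
        std::memset(&p, 0, sizeof(p));

        boost::atomic<payload> a(p);    // often ends up on the lock pool fallback
        std::printf("lock free: %d\n", static_cast<int>(a.is_lock_free()));

        payload q = a.load();           // copy out under the lock
        a.store(q);                     // copy in under the lock
        return 0;
    }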

Added: branches/release/boost/atomic/detail/cas32strong.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/cas32strong.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,872 @@
+#ifndef BOOST_ATOMIC_DETAIL_CAS32STRONG_HPP
+#define BOOST_ATOMIC_DETAIL_CAS32STRONG_HPP
+
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+//
+// Copyright (c) 2011 Helge Bahmann
+
+// Build 8-, 16- and 32-bit atomic operations from
+// a platform_cmpxchg32_strong primitive.
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/memory_order.hpp>
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/base.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+/* integral types */
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ storage_type expected_s = (storage_type) expected;
+ storage_type desired_s = (storage_type) desired;
+
+ bool success = platform_cmpxchg32_strong(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ expected = (value_type) expected_s;
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ storage_type expected_s = (storage_type) expected;
+ storage_type desired_s = (storage_type) desired;
+
+ bool success = platform_cmpxchg32_strong(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ expected = (value_type) expected_s;
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32_strong(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* pointer types */
+
+template<bool Sign>
+class base_atomic<void *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32_strong(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32_strong(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* generic types */
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+ bool success = platform_cmpxchg32_strong(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+ bool success = platform_cmpxchg32_strong(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+ bool success = platform_cmpxchg32_strong(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+}
+}
+}
+
+#endif
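
As a note on this header: every fetch_xxx above is built from a single strong 32-bit compare-and-swap plus a retry loop. The sketch below shows the same loop in isolation; cmpxchg32_strong is a hypothetical, single-threaded stand-in for the platform_cmpxchg32_strong primitive that the platform headers provide (illustrative only, not part of the committed sources):

    #include <boost/cstdint.hpp>

    // Stand-in for platform_cmpxchg32_strong: if *ptr equals expected, store desired
    // and report success; otherwise copy the observed value back into expected.
    // A real primitive performs this comparison and store as one atomic instruction.
    inline bool cmpxchg32_strong(boost::uint32_t & expected, boost::uint32_t desired,
                                 volatile boost::uint32_t * ptr)
    {
        if (*ptr == expected)
        {
            *ptr = desired;
            return true;
        }
        expected = *ptr;
        return false;
    }

    // The retry loop used by the fetch_xxx operations: repeat the CAS until no other
    // thread has modified the value between the snapshot and the exchange.
    inline boost::uint32_t fetch_add_via_cas(volatile boost::uint32_t * ptr, boost::uint32_t v)
    {
        boost::uint32_t original = *ptr;   // relaxed snapshot of the current value
        do {
        } while (!cmpxchg32_strong(original, original + v, ptr));
        return original;                   // value observed before the addition
    }

    int main()
    {
        volatile boost::uint32_t x = 5;
        return static_cast<int>(fetch_add_via_cas(&x, 3));   // returns 5, leaves x == 8
    }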

Added: branches/release/boost/atomic/detail/cas32weak.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/cas32weak.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,916 @@
+#ifndef BOOST_ATOMIC_DETAIL_CAS32WEAK_HPP
+#define BOOST_ATOMIC_DETAIL_CAS32WEAK_HPP
+
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+//
+// Copyright (c) 2011 Helge Bahmann
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/memory_order.hpp>
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/base.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+/* integral types */
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ storage_type expected_s = (storage_type) expected;
+ storage_type desired_s = (storage_type) desired;
+
+ bool success = platform_cmpxchg32(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ expected = (value_type) expected_s;
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ storage_type expected_s = (storage_type) expected;
+ storage_type desired_s = (storage_type) desired;
+
+ bool success = platform_cmpxchg32(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ expected = (value_type) expected_s;
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* pointer types */
+
+template<bool Sign>
+class base_atomic<void *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* generic types */
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before_store(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg32(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ for(;;) {
+ value_type tmp = expected;
+ if (compare_exchange_weak(tmp, desired, success_order, failure_order))
+ return true;
+ if (tmp != expected) {
+ expected = tmp;
+ return false;
+ }
+ }
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+}
+}
+}
+
+#endif
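
Every fetch_xxx above follows the same load-then-retry pattern built on compare_exchange_weak. A minimal standalone sketch of that pattern, using std::atomic purely as a stand-in for the platform_cmpxchg32 primitive (fetch_add_via_cas is an illustrative name, not part of the commit):

    #include <atomic>

    // Rebuild fetch_add from compare_exchange_weak, mirroring the retry
    // loop used throughout cas32weak.hpp.
    template<typename T>
    T fetch_add_via_cas(std::atomic<T>& a, T v,
                        std::memory_order order = std::memory_order_seq_cst)
    {
        T original = a.load(std::memory_order_relaxed);
        // On failure, compare_exchange_weak reloads 'original', so each
        // retry recomputes 'original + v' against the freshest value.
        while (!a.compare_exchange_weak(original, original + v,
                                        order, std::memory_order_relaxed))
        {
        }
        return original;
    }

Spurious failures of the weak CAS are harmless here: the loop simply retries with the reloaded value.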

Added: branches/release/boost/atomic/detail/cas64strong.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/cas64strong.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,438 @@
+#ifndef BOOST_ATOMIC_DETAIL_CAS64STRONG_HPP
+#define BOOST_ATOMIC_DETAIL_CAS64STRONG_HPP
+
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+//
+// Copyright (c) 2011 Helge Bahmann
+
+// Build 64-bit atomic operations from the platform_cmpxchg64_strong
+// primitive. It is assumed that 64-bit loads/stores are not
+// atomic, so they are funnelled through cmpxchg as well.
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/memory_order.hpp>
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/base.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+/* integral types */
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ platform_store64(v, &v_);
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = platform_load64(&v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg64_strong(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original & v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original | v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original ^ v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* pointer types */
+
+template<bool Sign>
+class base_atomic<void *, void *, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ platform_store64(v, &v_);
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = platform_load64(&v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg64_strong(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before_store(order);
+ platform_store64(v, &v_);
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = platform_load64(&v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+
+ bool success = platform_cmpxchg64_strong(expected, desired, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ }
+
+ return success;
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original + v, order, memory_order_relaxed));
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, original - v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* generic types */
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint64_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& value, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type value_s = 0;
+ memcpy(&value_s, &value, sizeof(value_s));
+ platform_fence_before_store(order);
+ platform_store64(value_s, &v_);
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type value_s = platform_load64(&v_);
+ platform_fence_after_load(order);
+ value_type value;
+ memcpy(&value, &value_s, sizeof(value_s));
+ return value;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original = load(memory_order_relaxed);
+ do {
+ } while (!compare_exchange_weak(original, v, order, memory_order_relaxed));
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ platform_fence_before(success_order);
+ bool success = platform_cmpxchg64_strong(expected_s, desired_s, &v_);
+
+ if (success) {
+ platform_fence_after(success_order);
+ } else {
+ platform_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ }
+
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+}
+}
+}
+
+#endif
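
With a strong platform primitive, compare_exchange_weak above simply forwards to compare_exchange_strong. The inverse construction, used by the cas32weak backend, layers a strong CAS on top of a weak one; a sketch of that loop with std::atomic as a stand-in (strong_from_weak is an illustrative name only):

    #include <atomic>

    // Distinguish a genuine value mismatch from a spurious weak-CAS failure.
    template<typename T>
    bool strong_from_weak(std::atomic<T>& a, T& expected, T desired,
                          std::memory_order success_order,
                          std::memory_order failure_order)
    {
        for (;;) {
            T tmp = expected;
            if (a.compare_exchange_weak(tmp, desired, success_order, failure_order))
                return true;           // 'desired' was stored
            if (tmp != expected) {     // real mismatch, report it
                expected = tmp;
                return false;
            }
            // spurious failure: observed value still equals 'expected', retry
        }
    }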

Added: branches/release/boost/atomic/detail/config.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/config.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,54 @@
+#ifndef BOOST_ATOMIC_DETAIL_CONFIG_HPP
+#define BOOST_ATOMIC_DETAIL_CONFIG_HPP
+
+// Copyright (c) 2012 Hartmut Kaiser
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/config.hpp>
+
+#if (defined(_MSC_VER) && (_MSC_VER >= 1020)) || defined(__GNUC__) || defined(BOOST_CLANG) || defined(BOOST_INTEL) || defined(__COMO__) || defined(__DMC__)
+#define BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#endif
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+///////////////////////////////////////////////////////////////////////////////
+// Set up dll import/export options
+#if (defined(BOOST_ATOMIC_DYN_LINK) || defined(BOOST_ALL_DYN_LINK)) && \
+ !defined(BOOST_ATOMIC_STATIC_LINK)
+
+#if defined(BOOST_ATOMIC_SOURCE)
+#define BOOST_ATOMIC_DECL BOOST_SYMBOL_EXPORT
+#define BOOST_ATOMIC_BUILD_DLL
+#else
+#define BOOST_ATOMIC_DECL BOOST_SYMBOL_IMPORT
+#endif
+
+#endif // building a shared library
+
+#ifndef BOOST_ATOMIC_DECL
+#define BOOST_ATOMIC_DECL
+#endif
+
+///////////////////////////////////////////////////////////////////////////////
+// Auto library naming
+#if !defined(BOOST_ATOMIC_SOURCE) && !defined(BOOST_ALL_NO_LIB) && \
+ !defined(BOOST_ATOMIC_NO_LIB)
+
+#define BOOST_LIB_NAME boost_atomic
+
+// tell the auto-link code to select a dll when required:
+#if defined(BOOST_ALL_DYN_LINK) || defined(BOOST_ATOMIC_DYN_LINK)
+#define BOOST_DYN_LINK
+#endif
+
+#include <boost/config/auto_link.hpp>
+
+#endif // auto-linking disabled
+
+#endif
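
A brief sketch of how the macros above are meant to be consumed; my_exported_function is hypothetical and only illustrates the expansion of BOOST_ATOMIC_DECL:

    #include <boost/atomic/detail/config.hpp>

    // Inside the library (BOOST_ATOMIC_SOURCE defined, dynamic linking
    // requested) BOOST_ATOMIC_DECL expands to BOOST_SYMBOL_EXPORT; in user
    // code it expands to BOOST_SYMBOL_IMPORT, and to nothing for static builds.
    BOOST_ATOMIC_DECL void my_exported_function();

On compilers with Boost auto-linking, defining BOOST_ATOMIC_NO_LIB (or BOOST_ALL_NO_LIB) before including suppresses the automatic selection of the boost_atomic library named above.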

Added: branches/release/boost/atomic/detail/gcc-alpha.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/gcc-alpha.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,359 @@
+#ifndef BOOST_ATOMIC_DETAIL_GCC_ALPHA_HPP
+#define BOOST_ATOMIC_DETAIL_GCC_ALPHA_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/base.hpp>
+#include <boost/atomic/detail/builder.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+/*
+ Refer to http://h71000.www7.hp.com/doc/82final/5601/5601pro_004.html
+ (HP OpenVMS systems documentation) and the alpha reference manual.
+ */
+
+/*
+ NB: The most natural thing would be to write the increment/decrement
+ operators along the following lines:
+
+ __asm__ __volatile__(
+ "1: ldl_l %0,%1 \n"
+ "addl %0,1,%0 \n"
+ "stl_c %0,%1 \n"
+ "beq %0,1b\n"
+ : "=&b" (tmp)
+ : "m" (value)
+ : "cc"
+ );
+
+ However, according to the comments on the HP website and matching
+ comments in the Linux kernel sources, this defeats branch prediction,
+ as the CPU assumes that backward branches are always taken; so we
+ instead copy the trick from the Linux kernel and introduce a forward
+ branch and back again.
+
+ I have, however, had a hard time measuring the difference between
+ the two versions in microbenchmarks -- I am leaving it in nevertheless
+ as it apparently does not hurt either.
+*/
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+inline void fence_before(memory_order order)
+{
+ switch(order) {
+ case memory_order_consume:
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("mb" ::: "memory");
+ default:;
+ }
+}
+
+inline void fence_after(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("mb" ::: "memory");
+ default:;
+ }
+}
+
+template<>
+inline void platform_atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_consume:
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("mb" ::: "memory");
+ default:;
+ }
+}
+
+template<typename T>
+class atomic_alpha_32 {
+public:
+ typedef T integral_type;
+ explicit atomic_alpha_32(T v) : i(v) {}
+ atomic_alpha_32() {}
+ T load(memory_order order=memory_order_seq_cst) const volatile
+ {
+ T v=*reinterpret_cast<volatile const int *>(&i);
+ fence_after(order);
+ return v;
+ }
+ void store(T v, memory_order order=memory_order_seq_cst) volatile
+ {
+ fence_before(order);
+ *reinterpret_cast<volatile int *>(&i)=(int)v;
+ }
+ bool compare_exchange_weak(
+ T &expected,
+ T desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ fence_before(success_order);
+ int current, success;
+ __asm__ __volatile__(
+ "1: ldl_l %2, %4\n"
+ "cmpeq %2, %0, %3\n"
+ "mov %2, %0\n"
+ "beq %3, 3f\n"
+ "stl_c %1, %4\n"
+ "2:\n"
+
+ ".subsection 2\n"
+ "3: mov %3, %1\n"
+ "br 2b\n"
+ ".previous\n"
+
+ : "+&r" (expected), "+&r" (desired), "=&r"(current), "=&r"(success)
+ : "m" (i)
+ :
+ );
+ if (desired) fence_after(success_order);
+ else fence_after(failure_order);
+ return desired;
+ }
+
+ bool is_lock_free(void) const volatile {return true;}
+protected:
+ inline T fetch_add_var(T c, memory_order order) volatile
+ {
+ fence_before(order);
+ T original, modified;
+ __asm__ __volatile__(
+ "1: ldl_l %0, %2\n"
+ "addl %0, %3, %1\n"
+ "stl_c %1, %2\n"
+ "beq %1, 2f\n"
+
+ ".subsection 2\n"
+ "2: br 1b\n"
+ ".previous\n"
+
+ : "=&r" (original), "=&r" (modified)
+ : "m" (i), "r" (c)
+ :
+ );
+ fence_after(order);
+ return original;
+ }
+ inline T fetch_inc(memory_order order) volatile
+ {
+ fence_before(order);
+ int original, modified;
+ __asm__ __volatile__(
+ "1: ldl_l %0, %2\n"
+ "addl %0, 1, %1\n"
+ "stl_c %1, %2\n"
+ "beq %1, 2f\n"
+
+ ".subsection 2\n"
+ "2: br 1b\n"
+ ".previous\n"
+
+ : "=&r" (original), "=&r" (modified)
+ : "m" (i)
+ :
+ );
+ fence_after(order);
+ return original;
+ }
+ inline T fetch_dec(memory_order order) volatile
+ {
+ fence_before(order);
+ int original, modified;
+ __asm__ __volatile__(
+ "1: ldl_l %0, %2\n"
+ "subl %0, 1, %1\n"
+ "stl_c %1, %2\n"
+ "beq %1, 2f\n"
+
+ ".subsection 2\n"
+ "2: br 1b\n"
+ ".previous\n"
+
+ : "=&r" (original), "=&r" (modified)
+ : "m" (i)
+ :
+ );
+ fence_after(order);
+ return original;
+ }
+private:
+ T i;
+};
+
+template<typename T>
+class atomic_alpha_64 {
+public:
+ typedef T integral_type;
+ explicit atomic_alpha_64(T v) : i(v) {}
+ atomic_alpha_64() {}
+ T load(memory_order order=memory_order_seq_cst) const volatile
+ {
+ T v=*reinterpret_cast<volatile const T *>(&i);
+ fence_after(order);
+ return v;
+ }
+ void store(T v, memory_order order=memory_order_seq_cst) volatile
+ {
+ fence_before(order);
+ *reinterpret_cast<volatile T *>(&i)=v;
+ }
+ bool compare_exchange_weak(
+ T &expected,
+ T desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ fence_before(success_order);
+ int current, success;
+ __asm__ __volatile__(
+ "1: ldq_l %2, %4\n"
+ "cmpeq %2, %0, %3\n"
+ "mov %2, %0\n"
+ "beq %3, 3f\n"
+ "stq_c %1, %4\n"
+ "2:\n"
+
+ ".subsection 2\n"
+ "3: mov %3, %1\n"
+ "br 2b\n"
+ ".previous\n"
+
+ : "+&r" (expected), "+&r" (desired), "=&r"(current), "=&r"(success)
+ : "m" (i)
+ :
+ );
+ if (desired) fence_after(success_order);
+ else fence_after(failure_order);
+ return desired;
+ }
+
+ bool is_lock_free(void) const volatile {return true;}
+protected:
+ inline T fetch_add_var(T c, memory_order order) volatile
+ {
+ fence_before(order);
+ T original, modified;
+ __asm__ __volatile__(
+ "1: ldq_l %0, %2\n"
+ "addq %0, %3, %1\n"
+ "stq_c %1, %2\n"
+ "beq %1, 2f\n"
+
+ ".subsection 2\n"
+ "2: br 1b\n"
+ ".previous\n"
+
+ : "=&r" (original), "=&r" (modified)
+ : "m" (i), "r" (c)
+ :
+ );
+ fence_after(order);
+ return original;
+ }
+ inline T fetch_inc(memory_order order) volatile
+ {
+ fence_before(order);
+ T original, modified;
+ __asm__ __volatile__(
+ "1: ldq_l %0, %2\n"
+ "addq %0, 1, %1\n"
+ "stq_c %1, %2\n"
+ "beq %1, 2f\n"
+
+ ".subsection 2\n"
+ "2: br 1b\n"
+ ".previous\n"
+
+ : "=&r" (original), "=&r" (modified)
+ : "m" (i)
+ :
+ );
+ fence_after(order);
+ return original;
+ }
+ inline T fetch_dec(memory_order order) volatile
+ {
+ fence_before(order);
+ T original, modified;
+ __asm__ __volatile__(
+ "1: ldq_l %0, %2\n"
+ "subq %0, 1, %1\n"
+ "stq_c %1, %2\n"
+ "beq %1, 2f\n"
+
+ ".subsection 2\n"
+ "2: br 1b\n"
+ ".previous\n"
+
+ : "=&r" (original), "=&r" (modified)
+ : "m" (i)
+ :
+ );
+ fence_after(order);
+ return original;
+ }
+private:
+ T i;
+};
+
+template<typename T>
+class platform_atomic_integral<T, 4> : public build_atomic_from_typical<build_exchange<atomic_alpha_32<T> > > {
+public:
+ typedef build_atomic_from_typical<build_exchange<atomic_alpha_32<T> > > super;
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+
+template<typename T>
+class platform_atomic_integral<T, 8> : public build_atomic_from_typical<build_exchange<atomic_alpha_64<T> > > {
+public:
+ typedef build_atomic_from_typical<build_exchange<atomic_alpha_64<T> > > super;
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+
+template<typename T>
+class platform_atomic_integral<T, 1>: public build_atomic_from_larger_type<atomic_alpha_32<uint32_t>, T> {
+public:
+ typedef build_atomic_from_larger_type<atomic_alpha_32<uint32_t>, T> super;
+
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+
+template<typename T>
+class platform_atomic_integral<T, 2>: public build_atomic_from_larger_type<atomic_alpha_32<uint32_t>, T> {
+public:
+ typedef build_atomic_from_larger_type<atomic_alpha_32<uint32_t>, T> super;
+
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+
+}
+}
+}
+
+#endif
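
The 1- and 2-byte specializations above reuse a 32-bit Alpha atomic through build_atomic_from_larger_type. A sketch of the general embed-in-a-word idea, mirroring the memcpy-based generic base_atomic classes earlier in this commit rather than the builder.hpp helpers themselves (small_payload, pack and unpack are illustrative names):

    #include <cstdint>
    #include <cstring>

    // A hypothetical 2-byte payload embedded in a zero-initialized 32-bit
    // word, so that a word-sized LL/SC or CAS can operate on it.
    struct small_payload { std::uint8_t a; std::uint8_t b; };

    inline std::uint32_t pack(small_payload p)
    {
        std::uint32_t s = 0;             // unused bytes stay zero
        std::memcpy(&s, &p, sizeof(p));
        return s;
    }

    inline small_payload unpack(std::uint32_t s)
    {
        small_payload p;
        std::memcpy(&p, &s, sizeof(p));
        return p;
    }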

Added: branches/release/boost/atomic/detail/gcc-armv6plus.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/gcc-armv6plus.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,250 @@
+#ifndef BOOST_ATOMIC_DETAIL_GCC_ARMV6PLUS_HPP
+#define BOOST_ATOMIC_DETAIL_GCC_ARMV6PLUS_HPP
+
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+//
+// Copyright (c) 2009 Helge Bahmann
+// Copyright (c) 2009 Phil Endecott
+// ARM Code by Phil Endecott, based on other architectures.
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+// From the ARM Architecture Reference Manual for architecture v6:
+//
+// LDREX{<cond>} <Rd>, [<Rn>]
+// <Rd> Specifies the destination register for the memory word addressed by <Rn>
+// <Rn> Specifies the register containing the address.
+//
+// STREX{<cond>} <Rd>, <Rm>, [<Rn>]
+// <Rd> Specifies the destination register for the returned status value.
+// 0 if the operation updates memory
+// 1 if the operation fails to update memory
+// <Rm> Specifies the register containing the word to be stored to memory.
+// <Rn> Specifies the register containing the address.
+// Rd must not be the same register as Rm or Rn.
+//
+// ARM v7 is like ARM v6 plus:
+// There are half-word and byte versions of the LDREX and STREX instructions,
+// LDREXH, LDREXB, STREXH and STREXB.
+// There are also double-word versions, LDREXD and STREXD.
+// (Actually it looks like these are available from version 6k onwards.)
+// FIXME these are not yet used; should be mostly a matter of copy-and-paste.
+// I think you can supply an immediate offset to the address.
+//
+// A memory barrier is effected using a "co-processor 15" instruction,
+// though a separate assembler mnemonic is available for it in v7.
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+// "Thumb 1" is a subset of the ARM instruction set that uses a 16-bit encoding. It
+// doesn't include all instructions and in particular it doesn't include the co-processor
+// instruction used for the memory barrier or the load-locked/store-conditional
+// instructions. So, if we're compiling in "Thumb 1" mode, we need to wrap all of our
+// asm blocks with code to temporarily change to ARM mode.
+//
+// You can only change between ARM and Thumb modes when branching using the bx instruction.
+// bx takes an address specified in a register. The least significant bit of the address
+// indicates the mode, so 1 is added to indicate that the destination code is Thumb.
+// A temporary register is needed for the address and is passed as an argument to these
+// macros. It must be one of the "low" registers accessible to Thumb code, specified
+// using the "l" attribute in the asm statement.
+//
+// Architecture v7 introduces "Thumb 2", which does include (almost?) all of the ARM
+// instruction set. So in v7 we don't need to change to ARM mode; we can write "universal
+// assembler" which will assemble to Thumb 2 or ARM code as appropriate. The only thing
+// we need to do to make this "universal" assembler mode work is to insert "IT" instructions
+// to annotate the conditional instructions. These are ignored in other modes (e.g. v6),
+// so they can always be present.
+
+#if defined(__thumb__) && !defined(__ARM_ARCH_7A__)
+// FIXME also other v7 variants.
+#define BOOST_ATOMIC_ARM_ASM_START(TMPREG) "adr " #TMPREG ", 1f\n" "bx " #TMPREG "\n" ".arm\n" ".align 4\n" "1: "
+#define BOOST_ATOMIC_ARM_ASM_END(TMPREG) "adr " #TMPREG ", 1f + 1\n" "bx " #TMPREG "\n" ".thumb\n" ".align 2\n" "1: "
+
+#else
+// The tmpreg is wasted in this case, which is non-optimal.
+#define BOOST_ATOMIC_ARM_ASM_START(TMPREG)
+#define BOOST_ATOMIC_ARM_ASM_END(TMPREG)
+#endif
+
+#if defined(__ARM_ARCH_7A__)
+// FIXME ditto.
+#define BOOST_ATOMIC_ARM_DMB "dmb\n"
+#else
+#define BOOST_ATOMIC_ARM_DMB "mcr\tp15, 0, r0, c7, c10, 5\n"
+#endif
+
+inline void
+arm_barrier(void)
+{
+ int brtmp;
+ __asm__ __volatile__ (
+ BOOST_ATOMIC_ARM_ASM_START(%0)
+ BOOST_ATOMIC_ARM_DMB
+ BOOST_ATOMIC_ARM_ASM_END(%0)
+ : "=&l" (brtmp) :: "memory"
+ );
+}
+
+inline void
+platform_fence_before(memory_order order)
+{
+ switch(order) {
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ arm_barrier();
+ case memory_order_consume:
+ default:;
+ }
+}
+
+inline void
+platform_fence_after(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ arm_barrier();
+ default:;
+ }
+}
+
+inline void
+platform_fence_before_store(memory_order order)
+{
+ platform_fence_before(order);
+}
+
+inline void
+platform_fence_after_store(memory_order order)
+{
+ if (order == memory_order_seq_cst)
+ arm_barrier();
+}
+
+inline void
+platform_fence_after_load(memory_order order)
+{
+ platform_fence_after(order);
+}
+
+template<typename T>
+inline bool
+platform_cmpxchg32(T & expected, T desired, volatile T * ptr)
+{
+ int success;
+ int tmp;
+ __asm__ (
+ BOOST_ATOMIC_ARM_ASM_START(%2)
+ "mov %1, #0\n" // success = 0
+ "ldrex %0, %3\n" // expected' = *(&i)
+ "teq %0, %4\n" // flags = expected'==expected
+ "ittt eq\n"
+ "strexeq %2, %5, %3\n" // if (flags.equal) *(&i) = desired, tmp = !OK
+ "teqeq %2, #0\n" // if (flags.equal) flags = tmp==0
+ "moveq %1, #1\n" // if (flags.equal) success = 1
+ BOOST_ATOMIC_ARM_ASM_END(%2)
+ : "=&r" (expected), // %0
+ "=&r" (success), // %1
+ "=&l" (tmp), // %2
+ "+Q" (*ptr) // %3
+ : "r" (expected), // %4
+ "r" (desired) // %5
+ : "cc"
+ );
+ return success;
+}
+
+}
+}
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+inline void
+atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ atomics::detail::arm_barrier();
+ default:;
+ }
+}
+
+#define BOOST_ATOMIC_SIGNAL_FENCE 2
+inline void
+atomic_signal_fence(memory_order)
+{
+ __asm__ __volatile__ ("" ::: "memory");
+}
+
+class atomic_flag {
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ uint32_t v_;
+public:
+ atomic_flag(void) : v_(false) {}
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before_store(order);
+ const_cast<volatile uint32_t &>(v_) = 0;
+ atomics::detail::platform_fence_after_store(order);
+ }
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before(order);
+ uint32_t expected = v_;
+ do {
+ if (expected == 1)
+ break;
+ } while (!atomics::detail::platform_cmpxchg32(expected, (uint32_t)1, &v_));
+ atomics::detail::platform_fence_after(order);
+ return expected;
+ }
+};
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+}
+
+#undef BOOST_ATOMIC_ARM_ASM_START
+#undef BOOST_ATOMIC_ARM_ASM_END
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR16_T_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR32_T_LOCK_FREE 2
+#define BOOST_ATOMIC_WCHAR_T_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE 2
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 0
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 2
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+#include <boost/atomic/detail/cas32weak.hpp>
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+#endif
+
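
The atomic_flag defined above exposes only test_and_set and clear, which is exactly what a spinlock needs. A minimal sketch, written against std::atomic_flag so it stays self-contained (the flag in this header offers the same two calls):

    #include <atomic>

    class spinlock
    {
        std::atomic_flag flag_;
    public:
        spinlock() { flag_.clear(); }      // start unlocked
        void lock()
        {
            // spin until the previous state was "clear"
            while (flag_.test_and_set(std::memory_order_acquire)) {}
        }
        void unlock()
        {
            flag_.clear(std::memory_order_release);
        }
    };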

Added: branches/release/boost/atomic/detail/gcc-cas.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/gcc-cas.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,155 @@
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// Use the gnu builtin __sync_val_compare_and_swap to build
+// atomic operations for 32 bit and smaller.
+
+#ifndef BOOST_ATOMIC_DETAIL_GENERIC_CAS_HPP
+#define BOOST_ATOMIC_DETAIL_GENERIC_CAS_HPP
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+inline void
+atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ break;
+ case memory_order_release:
+ case memory_order_consume:
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __sync_synchronize();
+ break;
+ }
+}
+
+namespace atomics {
+namespace detail {
+
+inline void
+platform_fence_before(memory_order)
+{
+ /* empty, as compare_and_swap is synchronizing already */
+}
+
+inline void
+platform_fence_after(memory_order)
+{
+ /* empty, as compare_and_swap is synchronizing already */
+}
+
+inline void
+platform_fence_before_store(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_acquire:
+ case memory_order_consume:
+ break;
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __sync_synchronize();
+ break;
+ }
+}
+
+inline void
+platform_fence_after_store(memory_order order)
+{
+ if (order == memory_order_seq_cst)
+ __sync_synchronize();
+}
+
+inline void
+platform_fence_after_load(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_release:
+ break;
+ case memory_order_consume:
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __sync_synchronize();
+ break;
+ }
+}
+
+template<typename T>
+inline bool
+platform_cmpxchg32_strong(T & expected, T desired, volatile T * ptr)
+{
+ T found = __sync_val_compare_and_swap(ptr, expected, desired);
+ bool success = (found == expected);
+ expected = found;
+ return success;
+}
+
+class atomic_flag {
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ uint32_t v_;
+public:
+ atomic_flag(void) : v_(false) {}
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before_store(order);
+ const_cast<volatile uint32_t &>(v_) = 0;
+ atomics::detail::platform_fence_after_store(order);
+ }
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before(order);
+ uint32_t expected = v_;
+ do {
+ if (expected == 1)
+ break;
+ } while (!atomics::detail::platform_cmpxchg32_strong(expected, (uint32_t)1, &v_));
+ atomics::detail::platform_fence_after(order);
+ return expected;
+ }
+};
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+}
+}
+}
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE (sizeof(long) <= 4 ? 2 : 0)
+#define BOOST_ATOMIC_LLONG_LOCK_FREE (sizeof(long long) <= 4 ? 2 : 0)
+#define BOOST_ATOMIC_POINTER_LOCK_FREE (sizeof(void *) <= 4 ? 2 : 0)
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+#include <boost/atomic/detail/cas32strong.hpp>
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+#endif
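
platform_cmpxchg32_strong above is a thin wrapper over __sync_val_compare_and_swap, which returns the value found in memory. A small self-contained demonstration, assuming a GCC-compatible compiler that provides the builtin:

    #include <cassert>

    int main()
    {
        volatile int value = 5;

        // Expected value matches: 7 is stored, the old value (5) comes back.
        int found = __sync_val_compare_and_swap(&value, 5, 7);
        assert(found == 5 && value == 7);

        // Expected value does not match: nothing is stored and the actual
        // value comes back -- exactly what platform_cmpxchg32_strong copies
        // into 'expected' on failure.
        found = __sync_val_compare_and_swap(&value, 5, 9);
        assert(found == 7 && value == 7);
        return 0;
    }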

Added: branches/release/boost/atomic/detail/gcc-ppc.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/gcc-ppc.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,2757 @@
+#ifndef BOOST_ATOMIC_DETAIL_GCC_PPC_HPP
+#define BOOST_ATOMIC_DETAIL_GCC_PPC_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+/*
+ Refer to: Motorola: "Programming Environments Manual for 32-Bit
+ Implementations of the PowerPC Architecture", Appendix E:
+ "Synchronization Programming Examples" for an explanation of what is
+ going on here (can be found on the web at various places by the
+ name "MPCFPE32B.pdf", Google is your friend...)
+
+ Most of the atomic operations map to instructions in a relatively
+ straight-forward fashion, but "load"s may at first glance appear
+ a bit strange as they map to:
+
+ lwz %rX, addr
+ cmpw %rX, %rX
+ bne- 1f
+ 1:
+
+ That is, the CPU is forced to perform a branch that "formally" depends
+ on the value retrieved from memory. This scheme has an overhead of
+ about 1-2 clock cycles per load, but it allows mapping "acquire" to
+ the "isync" instruction instead of "sync" uniformly and for all types
+ of atomic operations. Since "isync" has a cost of about 15 clock
+ cycles, while "sync" has a cost of about 50 clock cycles, the small
+ penalty to atomic loads more than compensates for this.
+
+ Byte- and halfword-sized atomic values are realized by encoding the
+ value to be represented into a word, performing sign/zero extension
+ as appropriate. This means that after add/sub operations the value
+ needs fixing up to accurately preserve the wrap-around semantics of
+ the smaller type. (Nothing special needs to be done for the bit-wise
+ and the "exchange type" operators as the compiler already sees to
+ it that values carried in registers are extended appropriately and
+ everything falls into place naturally).
+
+ The register constraint "b" instructs gcc to use any register
+ except r0; this is sometimes required because the encoding for
+ r0 is used to signify "constant zero" in a number of instructions,
+ making r0 unusable in this place. For simplicity this constraint
+ is used everywhere since I am too lazy to look this up on a
+ per-instruction basis, and ppc has enough registers for this not
+ to pose a problem.
+*/
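+
+/*
+ For illustration: with the scheme described above, an acquire load of a
+ 32-bit value expands to
+
+ lwz %rX, addr ; load the value
+ cmpw %rX, %rX ; compare the loaded register with itself
+ bne- 1f ; never-taken branch, creates the formal dependency
+ 1: isync ; emitted by ppc_fence_after for memory_order_acquire
+
+ while release and seq_cst stores are preceded by "lwsync"/"sync",
+ see ppc_fence_before below.
+*/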
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+inline void
+ppc_fence_before(memory_order order)
+{
+ switch(order) {
+ case memory_order_release:
+ case memory_order_acq_rel:
+#if defined(__powerpc64__)
+ __asm__ __volatile__ ("lwsync" ::: "memory");
+ break;
+#endif
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("sync" ::: "memory");
+ default:;
+ }
+}
+
+inline void
+ppc_fence_after(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("isync");
+ case memory_order_consume:
+ __asm__ __volatile__ ("" ::: "memory");
+ default:;
+ }
+}
+
+inline void
+ppc_fence_after_store(memory_order order)
+{
+ switch(order) {
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("sync");
+ default:;
+ }
+}
+
+}
+}
+
+class atomic_flag {
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ uint32_t v_;
+public:
+ atomic_flag(void) : v_(false) {}
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::ppc_fence_before(order);
+ const_cast<volatile uint32_t &>(v_) = 0;
+ atomics::detail::ppc_fence_after_store(order);
+ }
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ uint32_t original;
+ atomics::detail::ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (1)
+ : "cr0"
+ );
+ atomics::detail::ppc_fence_after(order);
+ return original;
+ }
+};
+
+} /* namespace boost */
+
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR16_T_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR32_T_LOCK_FREE 2
+#define BOOST_ATOMIC_WCHAR_T_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE 2
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 2
+#if defined(__powerpc64__)
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 2
+#else
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 0
+#endif
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+/* We would like to move the slow path of a failed compare_exchange
+(the part that clears the "success" bit) out-of-line. gcc can in
+principle do that using ".subsection"/".previous", but Apple's
+binutils seemingly does not understand that. Therefore we wrap
+the "clear" of the flag in a macro and let it remain
+in-line for Apple.
+*/
+
+#if !defined(__APPLE__)
+
+#define BOOST_ATOMIC_ASM_SLOWPATH_CLEAR \
+ "9:\n" \
+ ".subsection 2\n" \
+ "2: addi %1,0,0\n" \
+ "b 9b\n" \
+ ".previous\n" \
+
+#else
+
+#define BOOST_ATOMIC_ASM_SLOWPATH_CLEAR \
+ "b 9f\n" \
+ "2: addi %1,0,0\n" \
+ "9:\n" \
+
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+/* integral types */
+
+template<typename T>
+class base_atomic<T, int, 1, true> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef int32_t storage_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m"(v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=&r" (v)
+ : "m" (v_)
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "extsb %1, %1\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "extsb %1, %1\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "and %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "or %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "xor %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T>
+class base_atomic<T, int, 1, false> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m"(v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=&r" (v)
+ : "m" (v_)
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "rlwinm %1, %1, 0, 0xff\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "rlwinm %1, %1, 0, 0xff\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "and %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "or %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "xor %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
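The only difference between the signed and unsigned 1-byte specializations is the normalization step after add/sub: the signed variant sign-extends the low byte with extsb, while the unsigned variant masks it with rlwinm ..., 0xff (the 2-byte variants below do the same with extsh and a 0xffff mask). A plain C++ sketch of those two steps, using hypothetical helper names:

    #include <cstdint>

    // Sketch only (hypothetical helpers): the post-arithmetic normalization
    // performed by the two 1-byte specializations.
    inline std::uint32_t truncate_u8(std::uint32_t x)   // rlwinm ..., 0xff
    {
        return x & 0xffu;
    }

    inline std::int32_t sign_extend_s8(std::uint32_t x) // extsb
    {
        const std::uint32_t b = x & 0xffu;                   // keep the low byte
        return static_cast<std::int32_t>(b ^ 0x80u) - 0x80;  // portable sign extension
    }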
+template<typename T>
+class base_atomic<T, int, 2, true> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef int32_t storage_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m"(v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=&r" (v)
+ : "m" (v_)
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "extsh %1, %1\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "extsh %1, %1\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "and %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "or %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "xor %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T>
+class base_atomic<T, int, 2, false> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m"(v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=&r" (v)
+ : "m" (v_)
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "rlwinm %1, %1, 0, 0xffff\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "rlwinm %1, %1, 0, 0xffff\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "and %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "or %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "xor %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ __asm__ __volatile__ (
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "+b"(v)
+ :
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "and %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "or %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "xor %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
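In the load members above, the cmpw %0,%0 / bne- 1f sequence compares the loaded value against itself only to create a control dependency, so that the isync issued by ppc_fence_after for acquire and seq_cst orders later accesses after the load. A minimal sketch of the same guarantee in portable C++11 terms (an illustration, not the committed code):

    #include <atomic>

    // Sketch only: the portable spelling of the load-acquire guarantee that the
    // compare/branch plus isync sequence provides on PowerPC.
    inline int load_acquire_sketch(const std::atomic<int>& a)
    {
        int v = a.load(std::memory_order_relaxed);
        std::atomic_thread_fence(std::memory_order_acquire); // plays the bne-/isync role
        return v;
    }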
+#if defined(__powerpc64__)
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ __asm__ __volatile__ (
+ "cmpd %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "+b"(v)
+ :
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y1\n"
+ "stdcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "and %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "or %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "xor %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+#endif
+
+/* pointer types */
+
+#if !defined(__powerpc64__)
+
+template<bool Sign>
+class base_atomic<void *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m" (v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(v)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m" (v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(v)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "stwcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
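For the pointer specializations, fetch_add and fetch_sub first scale the operand by the pointee size (v = v * sizeof(*v_)), so the atomic arithmetic advances by elements, just like built-in pointer arithmetic. A trivial sketch of that scaling, with a hypothetical helper name:

    #include <cstddef>

    // Sketch only (hypothetical helper): the scaling applied before the
    // lwarx/stwcx. loop in the pointer fetch_add/fetch_sub members.
    template<typename T>
    std::ptrdiff_t byte_offset(std::ptrdiff_t n_elements)
    {
        return n_elements * static_cast<std::ptrdiff_t>(sizeof(T)); // v = v * sizeof(*v_)
    }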
+#else
+
+template<bool Sign>
+class base_atomic<void *, void *, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "std %1, %0\n"
+ : "+m" (v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ (
+ "ld %0, %1\n"
+ "cmpd %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(v)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y1\n"
+ "stdcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ ppc_fence_before(order);
+ __asm__ (
+ "std %1, %0\n"
+ : "+m" (v_)
+ : "r" (v)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v;
+ __asm__ (
+ "ld %0, %1\n"
+ "cmpd %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(v)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type original;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y1\n"
+ "stdcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (v)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected), "=&b" (success), "+Z"(v_)
+ : "b" (expected), "b" (desired)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ return success;
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "add %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ value_type original, tmp;
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y2\n"
+ "sub %1,%0,%3\n"
+ "stdcx. %1,%y2\n"
+ "bne- 1b\n"
+ : "=&b" (original), "=&b" (tmp), "+Z"(v_)
+ : "b" (v)
+ : "cc");
+ ppc_fence_after(order);
+ return original;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+#endif
+
+/* generic */
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m" (v_)
+ : "r" (tmp)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(tmp)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0, original;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (tmp)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ value_type res;
+ memcpy(&res, &original, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
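The generic specializations (this one and the 2-, 4- and, on ppc64, 8-byte ones that follow) shuttle an arbitrary small trivially-copyable T through an integer storage_type with memcpy, zero-filling the unused bytes so that the word compared by compare_exchange is deterministic. A rough sketch of that round-trip, using hypothetical helper names:

    #include <cstdint>
    #include <cstring>

    // Sketch only (hypothetical helpers): packing a small trivially-copyable T
    // into the integer word that the lwarx/stwcx. sequences operate on.
    template<typename T>
    std::uint32_t to_storage(const T& v)
    {
        std::uint32_t s = 0;                 // zero-fill the unused bytes
        std::memcpy(&s, &v, sizeof(T));
        return s;
    }

    template<typename T>
    T from_storage(std::uint32_t s)
    {
        T v;
        std::memcpy(&v, &s, sizeof(T));
        return v;
    }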
+template<typename T, bool Sign>
+class base_atomic<T, void, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m" (v_)
+ : "r" (tmp)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(tmp)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0, original;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (tmp)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ value_type res;
+ memcpy(&res, &original, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "stw %1, %0\n"
+ : "+m" (v_)
+ : "r" (tmp)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp;
+ __asm__ __volatile__ (
+ "lwz %0, %1\n"
+ "cmpw %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(tmp)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0, original;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "lwarx %0,%y1\n"
+ "stwcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (tmp)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ value_type res;
+ memcpy(&res, &original, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: lwarx %0,%y2\n"
+ "cmpw %0, %3\n"
+ "bne- 2f\n"
+ "stwcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#if defined(__powerpc64__)
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint64_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "std %1, %0\n"
+ : "+m" (v_)
+ : "r" (tmp)
+ );
+ ppc_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp;
+ __asm__ __volatile__ (
+ "ld %0, %1\n"
+ "cmpd %0, %0\n"
+ "bne- 1f\n"
+ "1:\n"
+ : "=r"(tmp)
+ : "m"(v_)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0, original;
+ memcpy(&tmp, &v, sizeof(value_type));
+ ppc_fence_before(order);
+ __asm__ (
+ "1:\n"
+ "ldarx %0,%y1\n"
+ "stdcx. %2,%y1\n"
+ "bne- 1b\n"
+ : "=&b" (original), "+Z"(v_)
+ : "b" (tmp)
+ : "cr0"
+ );
+ ppc_fence_after(order);
+ value_type res;
+ memcpy(&res, &original, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s, desired_s;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 2f\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s, desired_s;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+
+ int success;
+ ppc_fence_before(success_order);
+ __asm__(
+ "0: ldarx %0,%y2\n"
+ "cmpd %0, %3\n"
+ "bne- 2f\n"
+ "stdcx. %4,%y2\n"
+ "bne- 0b\n"
+ "addi %1,0,1\n"
+ "1:"
+
+ BOOST_ATOMIC_ASM_SLOWPATH_CLEAR
+ : "=&b" (expected_s), "=&b" (success), "+Z"(v_)
+ : "b" (expected_s), "b" (desired_s)
+ : "cr0"
+ );
+ if (success)
+ ppc_fence_after(success_order);
+ else
+ ppc_fence_after(failure_order);
+ memcpy(&expected, &expected_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+#endif
+
+}
+}
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+inline void
+atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ __asm__ __volatile__ ("isync" ::: "memory");
+ break;
+ case memory_order_release:
+#if defined(__powerpc64__)
+ __asm__ __volatile__ ("lwsync" ::: "memory");
+ break;
+#endif
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("sync" ::: "memory");
+ default:;
+ }
+}
+
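The fence mapping above is the usual PowerPC one: isync for acquire, lwsync for release on ppc64 (falling through to sync on 32-bit), and sync for acq_rel and seq_cst. A short usage sketch of pairing these fences with relaxed operations, using the boost::atomic and boost::atomic_thread_fence names this commit provides (illustrative only):

    #include <boost/atomic.hpp>

    // Usage sketch only: a release fence before publishing a flag pairs with an
    // acquire fence after observing it (lwsync/sync and isync on this target).
    boost::atomic<int> flag(0);
    int payload = 0;

    void producer()
    {
        payload = 42;
        boost::atomic_thread_fence(boost::memory_order_release);
        flag.store(1, boost::memory_order_relaxed);
    }

    void consumer()
    {
        while (flag.load(boost::memory_order_relaxed) == 0)
            ;
        boost::atomic_thread_fence(boost::memory_order_acquire);
        // payload is now guaranteed to read as 42
    }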
+#define BOOST_ATOMIC_SIGNAL_FENCE 2
+inline void
+atomic_signal_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("" ::: "memory");
+ break;
+ default:;
+ }
+}
+
+}
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+#endif

Added: branches/release/boost/atomic/detail/gcc-sparcv9.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/gcc-sparcv9.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,1229 @@
+#ifndef BOOST_ATOMIC_DETAIL_GCC_SPARC_HPP
+#define BOOST_ATOMIC_DETAIL_GCC_SPARC_HPP
+
+// Copyright (c) 2010 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+inline void
+platform_fence_before(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_acquire:
+ case memory_order_consume:
+ break;
+ case memory_order_release:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("membar #StoreStore | #LoadStore" ::: "memory");
+ /* release */
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("membar #Sync" ::: "memory");
+ /* seq */
+ break;
+ }
+}
+
+inline void
+platform_fence_after(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_release:
+ break;
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("membar #LoadLoad | #LoadStore" ::: "memory");
+ /* acquire */
+ break;
+ case memory_order_consume:
+ /* consume */
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("membar #Sync" ::: "memory");
+ /* seq */
+ break;
+ default:;
+ }
+}
+
+inline void
+platform_fence_after_store(memory_order order)
+{
+ switch(order) {
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("membar #Sync" ::: "memory");
+ default:;
+ }
+}
+
+
+inline void
+platform_fence_after_load(memory_order order)
+{
+ platform_fence_after(order);
+}
+
+}
+}
+
+class atomic_flag {
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ uint32_t v_;
+public:
+ atomic_flag(void) : v_(false) {}
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before(order);
+ const_cast<volatile uint32_t &>(v_) = 0;
+ atomics::detail::platform_fence_after_store(order);
+ }
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before(order);
+ uint32_t tmp = 1;
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (tmp)
+ : "r" (&v_), "r" (0)
+ : "memory"
+ );
+ atomics::detail::platform_fence_after(order);
+ return tmp;
+ }
+};
+
+} /* namespace boost */
+
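test_and_set above is built on the SPARCv9 cas instruction: it attempts to replace a zero flag with one and returns the previous value. A minimal usage sketch of atomic_flag as a spinlock (illustrative only; names are hypothetical):

    #include <boost/atomic.hpp>

    // Usage sketch only: atomic_flag as a minimal spinlock; test_and_set maps to
    // the cas sequence shown above on SPARCv9.
    boost::atomic_flag spinlock;   // default-constructed clear

    void locked_region()
    {
        while (spinlock.test_and_set(boost::memory_order_acquire))
            ;                                    // spin until the flag is acquired
        /* ... critical section ... */
        spinlock.clear(boost::memory_order_release);
    }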
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR16_T_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR32_T_LOCK_FREE 2
+#define BOOST_ATOMIC_WCHAR_T_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE 2
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 0
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 2
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+namespace boost {
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+inline void
+atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ break;
+ case memory_order_release:
+ __asm__ __volatile__ ("membar #StoreStore | #LoadStore" ::: "memory");
+ break;
+ case memory_order_acquire:
+ __asm__ __volatile__ ("membar #LoadLoad | #LoadStore" ::: "memory");
+ break;
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("membar #LoadLoad | #LoadStore | #StoreStore" ::: "memory");
+ break;
+ case memory_order_consume:
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("membar #Sync" ::: "memory");
+ break;
+ default:;
+ }
+}
+
+#define BOOST_ATOMIC_SIGNAL_FENCE 2
+inline void
+atomic_signal_fence(memory_order)
+{
+ __asm__ __volatile__ ("" ::: "memory");
+}
+
+namespace atomics {
+namespace detail {
+
+/* integral types */
+
+template<typename T>
+class base_atomic<T, int, 1, true> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef int32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp + v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp - v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ storage_type desired_s = desired;
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" ((storage_type)expected)
+ : "memory"
+ );
+ desired = desired_s;
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
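On SPARCv9 every operation is derived from the strong cas-based compare_exchange_strong above: cas writes the previous memory contents back into the desired register, success is detected by comparing that value with expected, and the fetch_xxx members simply retry a compare-exchange loop. A rough equivalent using the GCC __sync builtin (an assumption about the toolchain, not the committed code; the helper name is hypothetical):

    #include <cstdint>

    // Sketch only: a strong 32-bit compare-and-swap expressed with the GCC
    // __sync builtin, mirroring the cas-based member above.
    inline bool cas32_strong(volatile std::uint32_t* p,
                             std::uint32_t& expected, std::uint32_t desired)
    {
        const std::uint32_t previous = __sync_val_compare_and_swap(p, expected, desired);
        const bool success = (previous == expected);  // matches 'desired == expected' above
        expected = previous;                          // report the observed value back
        return success;
    }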
+template<typename T>
+class base_atomic<T, int, 1, false> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp + v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp - v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ storage_type desired_s = desired;
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" ((storage_type)expected)
+ : "memory"
+ );
+ desired = desired_s;
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T>
+class base_atomic<T, int, 2, true> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef int32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp + v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp - v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ storage_type desired_s = desired;
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" ((storage_type)expected)
+ : "memory"
+ );
+ desired = desired_s;
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T>
+class base_atomic<T, int, 2, false> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp + v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp - v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ storage_type desired_s = desired;
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" ((storage_type)expected)
+ : "memory"
+ );
+ desired = desired_s;
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp + v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp - v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired)
+ : "r" (&v_), "r" (expected)
+ : "memory"
+ );
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* pointer types */
+
+template<bool Sign>
+class base_atomic<void *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired)
+ : "r" (&v_), "r" (expected)
+ : "memory"
+ );
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+
+ bool compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ platform_fence_before(success_order);
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired)
+ : "r" (&v_), "r" (expected)
+ : "memory"
+ );
+ bool success = (desired == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = desired;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp + v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp - v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+/* generic types */
+
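+/* Small non-integral types are kept in 32-bit storage because the SPARC "cas"
+instruction operates on whole words; values are copied in and out with memcpy
+so arbitrary small trivially-copyable payloads (including padding) are handled
+safely. */
+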
+template<typename T, bool Sign>
+class base_atomic<T, void, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" (expected_s)
+ : "memory"
+ );
+ bool success = (desired_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &desired_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" (expected_s)
+ : "memory"
+ );
+ bool success = (desired_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &desired_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ platform_fence_after_store(order);
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+ __asm__ (
+ "cas [%1], %2, %0"
+ : "+r" (desired_s)
+ : "r" (&v_), "r" (expected_s)
+ : "memory"
+ );
+ bool success = (desired_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &desired_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+}
+}
+}
+
+#endif

Added: branches/release/boost/atomic/detail/gcc-x86.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/gcc-x86.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,1600 @@
+#ifndef BOOST_ATOMIC_DETAIL_GCC_X86_HPP
+#define BOOST_ATOMIC_DETAIL_GCC_X86_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+// Copyright (c) 2012 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// (See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+#if defined(__x86_64__)
+# define BOOST_ATOMIC_X86_FENCE_INSTR "mfence\n"
+#else
+# define BOOST_ATOMIC_X86_FENCE_INSTR "lock ; addl $0, (%%esp)\n"
+#endif
+
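+/* On x86 ordinary loads already have acquire semantics and ordinary stores
+already have release semantics, so most of the fences below reduce to compiler
+barriers (an empty asm with a "memory" clobber). A real fence instruction is
+only needed to order a seq_cst load with respect to earlier stores; seq_cst
+stores are instead performed with the implicitly locked xchg. */
+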
+inline void
+platform_fence_before(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_acquire:
+ case memory_order_consume:
+ break;
+ case memory_order_release:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* release */
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* seq */
+ break;
+ }
+}
+
+inline void
+platform_fence_after(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_release:
+ break;
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* acquire */
+ break;
+ case memory_order_consume:
+ /* consume */
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* seq */
+ break;
+ default:;
+ }
+}
+
+inline void
+platform_fence_after_load(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_release:
+ break;
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("" ::: "memory");
+ break;
+ case memory_order_consume:
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ (BOOST_ATOMIC_X86_FENCE_INSTR ::: "memory");
+ break;
+ default:;
+ }
+}
+
+inline void
+platform_fence_before_store(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_acquire:
+ case memory_order_consume:
+ break;
+ case memory_order_release:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* release */
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* seq */
+ break;
+ }
+}
+
+inline void
+platform_fence_after_store(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ case memory_order_release:
+ break;
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* acquire */
+ break;
+ case memory_order_consume:
+ /* consume */
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ ("" ::: "memory");
+ /* seq */
+ break;
+ default:;
+ }
+}
+
+}
+}
+
+class atomic_flag {
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ uint32_t v_;
+public:
+ atomic_flag(void) : v_(false) {}
+
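+ /* xchg with a memory operand is implicitly locked, so it is both atomic and
+ a full barrier; no "lock" prefix is required */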
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ uint32_t v = 1;
+ atomics::detail::platform_fence_before(order);
+ __asm__ __volatile__ (
+ "xchgl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ atomics::detail::platform_fence_after(order);
+ return v;
+ }
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order == memory_order_seq_cst) {
+ uint32_t v = 0;
+ __asm__ __volatile__ (
+ "xchgl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ } else {
+ atomics::detail::platform_fence_before(order);
+ v_ = 0;
+ }
+ }
+};
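+
+/* Usage sketch (illustrative only): atomic_flag is enough for a minimal spin
+lock; given some flag object f,
+
+    while (f.test_and_set(boost::memory_order_acquire)) ; // acquire the lock
+    f.clear(boost::memory_order_release);                 // release the lock
+*/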
+
+} /* namespace boost */
+
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
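+/* Lock-free property macros follow the C++11 convention: 2 = always lock-free,
+1 = sometimes lock-free, 0 = never. long long is only "sometimes" lock-free on
+32-bit x86 because it depends on cmpxchg8b being available. */
+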
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR16_T_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR32_T_LOCK_FREE 2
+#define BOOST_ATOMIC_WCHAR_T_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE 2
+
+#if defined(__x86_64__)
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 2
+#else
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 1
+#endif
+
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 2
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+namespace boost {
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+inline void
+atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_relaxed:
+ break;
+ case memory_order_release:
+ __asm__ __volatile__ ("" ::: "memory");
+ break;
+ case memory_order_acquire:
+ __asm__ __volatile__ ("" ::: "memory");
+ break;
+ case memory_order_acq_rel:
+ __asm__ __volatile__ ("" ::: "memory");
+ break;
+ case memory_order_consume:
+ break;
+ case memory_order_seq_cst:
+ __asm__ __volatile__ (BOOST_ATOMIC_X86_FENCE_INSTR ::: "memory");
+ break;
+ default:;
+ }
+}
+
+#define BOOST_ATOMIC_SIGNAL_FENCE 2
+inline void
+atomic_signal_fence(memory_order)
+{
+ __asm__ __volatile__ ("" ::: "memory");
+}
+
+namespace atomics {
+namespace detail {
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
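+ /* plain stores are release on x86 but may be reordered with later loads, so
+ seq_cst stores go through xchg (implicitly locked) instead */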
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "lock ; xaddb %0, %1"
+ : "+q" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgb %0, %1"
+ : "+q" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
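+ /* cmpxchg compares the accumulator (loaded with expected) against the memory
+ operand; on a match it stores desired, otherwise it loads the current value
+ into the accumulator, so "previous" holds the old value either way */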
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgb %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "q" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
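+ /* x86 has no single instruction that performs and/or/xor and returns the
+ previous value, so these are emulated with a compare_exchange loop */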
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "lock ; xaddw %0, %1"
+ : "+q" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgw %0, %1"
+ : "+q" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgw %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "q" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "lock ; xaddl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgl %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "r" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+#if defined(__x86_64__)
+template<typename T, bool Sign>
+class base_atomic<T, int, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "lock ; xaddq %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgq %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgq %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "r" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+#endif
+
+/* pointers */
+
+#if !defined(__x86_64__)
+
+template<bool Sign>
+class base_atomic<void *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool compare_exchange_strong(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgl %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "r" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgl %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "r" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
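+ /* scale the increment by the element size so fetch_add(1) advances by one T;
+ xadd leaves the previous contents of v_ in v, which is converted back to a
+ pointer for the return value */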
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ platform_fence_before(order);
+ __asm__ (
+ "lock ; xaddl %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return reinterpret_cast<value_type>(v);
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+#else
+
+template<bool Sign>
+class base_atomic<void *, void *, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef void * value_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgq %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool compare_exchange_strong(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgq %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "r" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T *, void *, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T * value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ __asm__ (
+ "xchgq %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgq %2, %1"
+ : "+a" (previous), "+m" (v_)
+ : "r" (desired)
+ );
+ bool success = (previous == expected);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = previous;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ platform_fence_before(order);
+ __asm__ (
+ "lock ; xaddq %0, %1"
+ : "+r" (v), "+m" (v_)
+ );
+ platform_fence_after(order);
+ return reinterpret_cast<value_type>(v);
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+#endif
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 1, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint8_t storage_type;
+public:
+ explicit base_atomic(value_type const& v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ __asm__ (
+ "xchgb %0, %1"
+ : "+q" (tmp), "+m" (v_)
+ );
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s, desired_s;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ storage_type previous_s = expected_s;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgb %2, %1"
+ : "+a" (previous_s), "+m" (v_)
+ : "q" (desired_s)
+ );
+ bool success = (previous_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &previous_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 2, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint16_t storage_type;
+public:
+ explicit base_atomic(value_type const& v)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ __asm__ (
+ "xchgw %0, %1"
+ : "+q" (tmp), "+m" (v_)
+ );
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s, desired_s;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ storage_type previous_s = expected_s;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgw %2, %1"
+ : "+a" (previous_s), "+m" (v_)
+ : "q" (desired_s)
+ );
+ bool success = (previous_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &previous_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 4, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ __asm__ (
+ "xchgl %0, %1"
+ : "+q" (tmp), "+m" (v_)
+ );
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ storage_type previous_s = expected_s;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgl %2, %1"
+ : "+a" (previous_s), "+m" (v_)
+ : "q" (desired_s)
+ );
+ bool success = (previous_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &previous_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#if defined(__x86_64__)
+template<typename T, bool Sign>
+class base_atomic<T, void, 8, Sign> {
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint64_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ __asm__ (
+ "xchgq %0, %1"
+ : "+q" (tmp), "+m" (v_)
+ );
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ storage_type previous_s = expected_s;
+ platform_fence_before(success_order);
+ __asm__ (
+ "lock ; cmpxchgq %2, %1"
+ : "+a" (previous_s), "+m" (v_)
+ : "q" (desired_s)
+ );
+ bool success = (previous_s == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &previous_s, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+#endif
+
+#if !defined(__x86_64__) && (defined(__i686__) || defined (__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8))
+
+template<typename T>
+inline bool
+platform_cmpxchg64_strong(T & expected, T desired, volatile T * ptr)
+{
+#ifdef __GCC_HAVE_SYNC_COMPARE_AND_SWAP_8
+ const T oldval = __sync_val_compare_and_swap(ptr, expected, desired);
+ const bool result = (oldval == expected);
+ expected = oldval;
+ return result;
+#else
+ int scratch;
+ T prev = expected;
+ /* Make sure ebx is saved and restored properly in case
+ this object is compiled as "position independent". Since
+ programmers on x86 tend to forget specifying -DPIC or
+ similar, always assume PIC.
+
+ To make this work uniformly even in the non-PIC case,
+ setup register constraints such that ebx can not be
+ used by accident e.g. as base address for the variable
+ to be modified. Accessing "scratch" should always be okay,
+ as it can only be placed on the stack (and therefore
+ accessed through ebp or esp only).
+
+ In theory, could push/pop ebx onto/off the stack, but movs
+ to a prepared stack slot turn out to be faster. */
+ __asm__ __volatile__ (
+ "movl %%ebx, %1\n"
+ "movl %2, %%ebx\n"
+ "lock; cmpxchg8b 0(%4)\n"
+ "movl %1, %%ebx\n"
+ : "=A" (prev), "=m" (scratch)
+ : "D" ((int)desired), "c" ((int)(desired >> 32)), "S" (ptr), "0" (prev)
+ : "memory");
+ bool success = (prev == expected);
+ expected = prev;
+ return success;
+#endif
+}
+
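+/* 64-bit load and store on 32-bit x86 are synthesized from the strong CAS
+above: store retries the CAS until it succeeds, and load performs a CAS of the
+current value against itself to read the location atomically. */
+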
+template<typename T>
+inline void
+platform_store64(T value, volatile T * ptr)
+{
+ T expected = *ptr;
+ do {
+ } while (!platform_cmpxchg64_strong(expected, value, ptr));
+}
+
+template<typename T>
+inline T
+platform_load64(const volatile T * ptr)
+{
+ T expected = *ptr;
+ do {
+ } while (!platform_cmpxchg64_strong(expected, expected, const_cast<volatile T*>(ptr)));
+ return expected;
+}
+
+#endif
+
+}
+}
+}
+
+/* pull in 64-bit atomic type using cmpxchg8b above */
+#if !defined(__x86_64__) && (defined(__i686__) || defined (__GCC_HAVE_SYNC_COMPARE_AND_SWAP_8))
+#include <boost/atomic/detail/cas64strong.hpp>
+#endif
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+#endif

Added: branches/release/boost/atomic/detail/generic-cas.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/generic-cas.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,199 @@
+#ifndef BOOST_ATOMIC_DETAIL_GENERIC_CAS_HPP
+#define BOOST_ATOMIC_DETAIL_GENERIC_CAS_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// (See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/memory_order.hpp>
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/base.hpp>
+#include <boost/atomic/detail/builder.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+/* fallback implementation for various compilation targets;
+this is *not* efficient, particularly because all operations
+are fully fenced (full memory barriers before and after
+each operation) */
+
+#if defined(__GNUC__)
+ namespace boost { namespace atomics { namespace detail {
+ inline int32_t
+ fenced_compare_exchange_strong_32(volatile int32_t *ptr, int32_t expected, int32_t desired)
+ {
+ return __sync_val_compare_and_swap_4(ptr, expected, desired);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS32 1
+
+ #if defined(__amd64__) || defined(__i686__)
+ inline int64_t
+ fenced_compare_exchange_strong_64(int64_t *ptr, int64_t expected, int64_t desired)
+ {
+ return __sync_val_compare_and_swap_8(ptr, expected, desired);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS64 1
+ #endif
+ }}}
+
+#elif defined(__ICL) || defined(_MSC_VER)
+
+ #if defined(_MSC_VER)
+ #include <Windows.h>
+ #include <intrin.h>
+ #endif
+
+ namespace boost { namespace atomics { namespace detail {
+ inline int32_t
+ fenced_compare_exchange_strong_32(int32_t *ptr, int32_t expected, int32_t desired)
+ {
+ return _InterlockedCompareExchange(reinterpret_cast<volatile long*>(ptr), desired, expected);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS32 1
+ #if defined(_WIN64)
+ inline int64_t
+ fenced_compare_exchange_strong(int64_t *ptr, int64_t expected, int64_t desired)
+ {
+ return _InterlockedCompareExchange64(ptr, desired, expected);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS64 1
+ #endif
+ }}}
+
+#elif (defined(__ICC) || defined(__ECC))
+ namespace boost { namespace atomics { namespace detail {
+ inline int32_t
+ fenced_compare_exchange_strong_32(int32_t *ptr, int32_t expected, int32_t desired)
+ {
+ return _InterlockedCompareExchange((void*)ptr, desired, expected);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS32 1
+ #if defined(__x86_64)
+ inline int64_t
+ fenced_compare_exchange_strong(int64_t *ptr, int64_t expected, int64_t desired)
+ {
+ return cas64<int>(ptr, expected, desired);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS64 1
+ #elif defined(__ECC) //IA-64 version
+ inline int64_t
+ fenced_compare_exchange_strong(int64_t *ptr, int64_t expected, int64_t desired)
+ {
+ return _InterlockedCompareExchange64((void*)ptr, desired, expected);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS64 1
+ #endif
+ }}}
+
+#elif (defined(__SUNPRO_CC) && defined(__sparc))
+ #include <sys/atomic.h>
+ namespace boost { namespace atomics { namespace detail {
+ inline int32_t
+ fenced_compare_exchange_strong_32(int32_t *ptr, int32_t expected, int32_t desired)
+ {
+ return atomic_cas_32((volatile unsigned int*)ptr, expected, desired);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS32 1
+
+ /* FIXME: check for 64 bit mode */
+ inline int64_t
+ fenced_compare_exchange_strong_64(int64_t *ptr, int64_t expected, int64_t desired)
+ {
+ return atomic_cas_64((volatile unsigned long long*)ptr, expected, desired);
+ }
+ #define BOOST_ATOMIC_HAVE_CAS64 1
+ }}}
+#endif
+
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+#ifdef BOOST_ATOMIC_HAVE_CAS32
+template<typename T>
+class atomic_generic_cas32 {
+private:
+ typedef atomic_generic_cas32 this_type;
+public:
+ explicit atomic_generic_cas32(T v) : i((int32_t)v) {}
+ atomic_generic_cas32() {}
+ T load(memory_order order=memory_order_seq_cst) const volatile
+ {
+ T expected=(T)i;
+ do { } while(!const_cast<this_type *>(this)->compare_exchange_weak(expected, expected, order, memory_order_relaxed));
+ return expected;
+ }
+ void store(T v, memory_order order=memory_order_seq_cst) volatile
+ {
+ exchange(v);
+ }
+ bool compare_exchange_strong(
+ T &expected,
+ T desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ T found;
+ found=(T)fenced_compare_exchange_strong_32(&i, (int32_t)expected, (int32_t)desired);
+ bool success=(found==expected);
+ expected=found;
+ return success;
+ }
+ bool compare_exchange_weak(
+ T &expected,
+ T desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+ T exchange(T r, memory_order order=memory_order_seq_cst) volatile
+ {
+ T expected=(T)i;
+ do { } while(!compare_exchange_weak(expected, r, order, memory_order_relaxed));
+ return expected;
+ }
+
+ bool is_lock_free(void) const volatile {return true;}
+ typedef T integral_type;
+private:
+ mutable int32_t i;
+};
+
+template<typename T>
+class platform_atomic_integral<T, 4> : public build_atomic_from_exchange<atomic_generic_cas32<T> > {
+public:
+ typedef build_atomic_from_exchange<atomic_generic_cas32<T> > super;
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+
+template<typename T>
+class platform_atomic_integral<T, 1>: public build_atomic_from_larger_type<atomic_generic_cas32<int32_t>, T> {
+public:
+ typedef build_atomic_from_larger_type<atomic_generic_cas32<int32_t>, T> super;
+
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+
+template<typename T>
+class platform_atomic_integral<T, 2>: public build_atomic_from_larger_type<atomic_generic_cas32<int32_t>, T> {
+public:
+ typedef build_atomic_from_larger_type<atomic_generic_cas32<int32_t>, T> super;
+
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+};
+#endif
+
+} } }
+
+#endif

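The fallback above derives every atomic operation from a single fully fenced 32-bit CAS: load is a CAS of the current value against itself, exchange is a CAS retry loop, and store is an exchange whose result is discarded. A stand-alone sketch of that reduction (GCC-only, using the same __sync builtin as the header; the helper names are illustrative):

    #include <boost/cstdint.hpp>

    // Same primitive the generic fallback wraps: returns the value found at *p,
    // storing 'desired' only when the found value equals 'expected'.
    static inline boost::int32_t cas32(volatile boost::int32_t* p,
                                       boost::int32_t expected,
                                       boost::int32_t desired)
    {
        return __sync_val_compare_and_swap(p, expected, desired);
    }

    // Load expressed as a CAS that never changes the stored value.
    boost::int32_t load_via_cas(volatile boost::int32_t* p)
    {
        boost::int32_t expected = *p;   // initial guess, corrected by the loop
        for (;;)
        {
            boost::int32_t found = cas32(p, expected, expected);
            if (found == expected)
                return found;
            expected = found;
        }
    }

    // Exchange expressed as a CAS retry loop; store() is exchange with the
    // previous value ignored.
    boost::int32_t exchange_via_cas(volatile boost::int32_t* p, boost::int32_t desired)
    {
        boost::int32_t expected = *p;
        for (;;)
        {
            boost::int32_t found = cas32(p, expected, desired);
            if (found == expected)
                return found;
            expected = found;
        }
    }
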
Added: branches/release/boost/atomic/detail/interlocked.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/interlocked.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,206 @@
+#ifndef BOOST_ATOMIC_DETAIL_INTERLOCKED_HPP
+#define BOOST_ATOMIC_DETAIL_INTERLOCKED_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+// Copyright (c) 2012 Andrey Semashev
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+#if defined(_WIN32_WCE)
+
+#include <boost/detail/interlocked.hpp>
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare) BOOST_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare)
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE(dest, newval) BOOST_INTERLOCKED_EXCHANGE(dest, newval)
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(dest, addend) BOOST_INTERLOCKED_EXCHANGE_ADD(dest, addend)
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) BOOST_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare)
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) BOOST_INTERLOCKED_EXCHANGE_POINTER(dest, newval)
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_INTERLOCKED_EXCHANGE_ADD((long*)(dest), byte_offset))
+
+#elif defined(_MSC_VER)
+
+#include <intrin.h>
+
+#pragma intrinsic(_InterlockedCompareExchange)
+#pragma intrinsic(_InterlockedExchangeAdd)
+#pragma intrinsic(_InterlockedExchange)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare) _InterlockedCompareExchange((long*)(dest), (long)(exchange), (long)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(dest, addend) _InterlockedExchangeAdd((long*)(dest), (long)(addend))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE(dest, newval) _InterlockedExchange((long*)(dest), (long)(newval))
+
+#if _MSC_VER >= 1400
+
+#pragma intrinsic(_InterlockedAnd)
+#pragma intrinsic(_InterlockedOr)
+#pragma intrinsic(_InterlockedXor)
+
+#define BOOST_ATOMIC_INTERLOCKED_AND(dest, arg) _InterlockedAnd((long*)(dest), (long)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_OR(dest, arg) _InterlockedOr((long*)(dest), (long)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_XOR(dest, arg) _InterlockedXor((long*)(dest), (long)(arg))
+
+#endif // _MSC_VER >= 1400
+
+#if _MSC_VER >= 1600
+
+// MSVC 2010 and later provide intrinsics for 8- and 16-bit integers.
+// Note that for each bit count these macros must be either all defined or all not defined.
+// Otherwise atomic<> operations will be implemented inconsistently.
+
+#pragma intrinsic(_InterlockedCompareExchange8)
+#pragma intrinsic(_InterlockedExchangeAdd8)
+#pragma intrinsic(_InterlockedExchange8)
+#pragma intrinsic(_InterlockedAnd8)
+#pragma intrinsic(_InterlockedOr8)
+#pragma intrinsic(_InterlockedXor8)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8(dest, exchange, compare) _InterlockedCompareExchange8((char*)(dest), (char)(exchange), (char)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD8(dest, addend) _InterlockedExchangeAdd8((char*)(dest), (char)(addend))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE8(dest, newval) _InterlockedExchange8((char*)(dest), (char)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_AND8(dest, arg) _InterlockedAnd8((char*)(dest), (char)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_OR8(dest, arg) _InterlockedOr8((char*)(dest), (char)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_XOR8(dest, arg) _InterlockedXor8((char*)(dest), (char)(arg))
+
+#pragma intrinsic(_InterlockedCompareExchange16)
+#pragma intrinsic(_InterlockedExchangeAdd16)
+#pragma intrinsic(_InterlockedExchange16)
+#pragma intrinsic(_InterlockedAnd16)
+#pragma intrinsic(_InterlockedOr16)
+#pragma intrinsic(_InterlockedXor16)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16(dest, exchange, compare) _InterlockedCompareExchange16((short*)(dest), (short)(exchange), (short)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD16(dest, addend) _InterlockedExchangeAdd16((short*)(dest), (short)(addend))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE16(dest, newval) _InterlockedExchange16((short*)(dest), (short)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_AND16(dest, arg) _InterlockedAnd16((short*)(dest), (short)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_OR16(dest, arg) _InterlockedOr16((short*)(dest), (short)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_XOR16(dest, arg) _InterlockedXor16((short*)(dest), (short)(arg))
+
+#endif // _MSC_VER >= 1600
+
+#if defined(_M_AMD64) || defined(_M_IA64)
+
+#pragma intrinsic(_InterlockedCompareExchange64)
+#pragma intrinsic(_InterlockedExchangeAdd64)
+#pragma intrinsic(_InterlockedExchange64)
+#pragma intrinsic(_InterlockedAnd64)
+#pragma intrinsic(_InterlockedOr64)
+#pragma intrinsic(_InterlockedXor64)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64(dest, exchange, compare) _InterlockedCompareExchange64((__int64*)(dest), (__int64)(exchange), (__int64)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64(dest, addend) _InterlockedExchangeAdd64((__int64*)(dest), (__int64)(addend))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE64(dest, newval) _InterlockedExchange64((__int64*)(dest), (__int64)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_AND64(dest, arg) _InterlockedAnd64((__int64*)(dest), (__int64)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_OR64(dest, arg) _InterlockedOr64((__int64*)(dest), (__int64)(arg))
+#define BOOST_ATOMIC_INTERLOCKED_XOR64(dest, arg) _InterlockedXor64((__int64*)(dest), (__int64)(arg))
+
+#pragma intrinsic(_InterlockedCompareExchangePointer)
+#pragma intrinsic(_InterlockedExchangePointer)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) _InterlockedCompareExchangePointer((void**)(dest), (void*)(exchange), (void*)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) _InterlockedExchangePointer((void**)(dest), (void*)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64((long*)(dest), byte_offset))
+
+#else // defined(_M_AMD64)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) ((void*)_InterlockedCompareExchange((long*)(dest), (long)(exchange), (long)(compare)))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) ((void*)_InterlockedExchange((long*)(dest), (long)(newval)))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD((long*)(dest), byte_offset))
+
+#endif // defined(_M_AMD64)
+
+#else // defined(_MSC_VER)
+
+#if defined(BOOST_USE_WINDOWS_H)
+
+#include <windows.h>
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare) InterlockedCompareExchange((long*)(dest), (long)(exchange), (long)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE(dest, newval) InterlockedExchange((long*)(dest), (long)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(dest, addend) InterlockedExchangeAdd((long*)(dest), (long)(addend))
+
+#if defined(_WIN64)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64(dest, exchange, compare) InterlockedCompareExchange64((__int64*)(dest), (__int64)(exchange), (__int64)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE64(dest, newval) InterlockedExchange64((__int64*)(dest), (__int64)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64(dest, addend) InterlockedExchangeAdd64((__int64*)(dest), (__int64)(addend))
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) InterlockedCompareExchangePointer((void**)(dest), (void*)(exchange), (void*)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) InterlockedExchangePointer((void**)(dest), (void*)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64(dest, byte_offset))
+
+#else // defined(_WIN64)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) ((void*)BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE(dest, newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(dest, byte_offset))
+
+#endif // defined(_WIN64)
+
+#else // defined(BOOST_USE_WINDOWS_H)
+
+#if defined(__MINGW64__)
+#define BOOST_ATOMIC_INTERLOCKED_IMPORT
+#else
+#define BOOST_ATOMIC_INTERLOCKED_IMPORT __declspec(dllimport)
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+extern "C" {
+
+BOOST_ATOMIC_INTERLOCKED_IMPORT long __stdcall InterlockedCompareExchange(long volatile*, long, long);
+BOOST_ATOMIC_INTERLOCKED_IMPORT long __stdcall InterlockedExchange(long volatile*, long);
+BOOST_ATOMIC_INTERLOCKED_IMPORT long __stdcall InterlockedExchangeAdd(long volatile*, long);
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare) boost::atomics::detail::InterlockedCompareExchange((long*)(dest), (long)(exchange), (long)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE(dest, newval) boost::atomics::detail::InterlockedExchange((long*)(dest), (long)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(dest, addend) boost::atomics::detail::InterlockedExchangeAdd((long*)(dest), (long)(addend))
+
+#if defined(_WIN64)
+
+BOOST_ATOMIC_INTERLOCKED_IMPORT __int64 __stdcall InterlockedCompareExchange64(__int64 volatile*, __int64, __int64);
+BOOST_ATOMIC_INTERLOCKED_IMPORT __int64 __stdcall InterlockedExchange64(__int64 volatile*, __int64);
+BOOST_ATOMIC_INTERLOCKED_IMPORT __int64 __stdcall InterlockedExchangeAdd64(__int64 volatile*, __int64);
+
+BOOST_ATOMIC_INTERLOCKED_IMPORT void* __stdcall InterlockedCompareExchangePointer(void* volatile *, void*, void*);
+BOOST_ATOMIC_INTERLOCKED_IMPORT void* __stdcall InterlockedExchangePointer(void* volatile *, void*);
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64(dest, exchange, compare) boost::atomics::detail::InterlockedCompareExchange64((__int64*)(dest), (__int64)(exchange), (__int64)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE64(dest, newval) boost::atomics::detail::InterlockedExchange64((__int64*)(dest), (__int64)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64(dest, addend) boost::atomics::detail::InterlockedExchangeAdd64((__int64*)(dest), (__int64)(addend))
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) boost::atomics::detail::InterlockedCompareExchangePointer((void**)(dest), (void*)(exchange), (void*)(compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) boost::atomics::detail::InterlockedExchangePointer((void**)(dest), (void*)(newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64(dest, byte_offset))
+
+#else // defined(_WIN64)
+
+#define BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(dest, exchange, compare) ((void*)BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(dest, exchange, compare))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(dest, newval) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE(dest, newval))
+#define BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(dest, byte_offset) ((void*)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(dest, byte_offset))
+
+#endif // defined(_WIN64)
+
+} // extern "C"
+
+} // namespace detail
+} // namespace atomics
+} // namespace boost
+
+#undef BOOST_ATOMIC_INTERLOCKED_IMPORT
+
+#endif // defined(BOOST_USE_WINDOWS_H)
+
+#endif // defined(_MSC_VER)
+
+#endif

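These macros let the same portable code call either the compiler intrinsics or the Win32 API functions declared above. For orientation, the kind of primitive they make possible is a simple test-and-set spin flag, roughly what the atomic_flag in windows.hpp further below does. The sketch here uses the plain Win32 API directly and is not part of the library:

    #include <windows.h>

    class spin_flag
    {
        long v_;
    public:
        spin_flag() : v_(0) {}

        void lock()
        {
            // InterlockedExchange acts as a full memory barrier, so no
            // additional fence is required around the critical section.
            while (InterlockedExchange(&v_, 1) != 0)
                YieldProcessor();
        }

        void unlock()
        {
            InterlockedExchange(&v_, 0);
        }
    };
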
Added: branches/release/boost/atomic/detail/linux-arm.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/linux-arm.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,187 @@
+#ifndef BOOST_ATOMIC_DETAIL_LINUX_ARM_HPP
+#define BOOST_ATOMIC_DETAIL_LINUX_ARM_HPP
+
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+//
+// Copyright (c) 2009, 2011 Helge Bahmann
+// Copyright (c) 2009 Phil Endecott
+// Linux-specific code by Phil Endecott
+
+// Different ARM processors have different atomic instructions. In particular,
+// architecture versions before v6 (which are still in widespread use, e.g. the
+// Intel/Marvell XScale chips like the one in the NSLU2) have only atomic swap.
+// On Linux the kernel provides some support that lets us abstract away from
+// these differences: it provides emulated CAS and barrier functions at special
+// addresses that are guaranteed not to be interrupted by the kernel. Using
+// this facility is slightly slower than inline assembler would be, but much
+// faster than a system call.
+//
+// While this emulated CAS is "strong" in the sense that it does not fail
+// "spuriously" (i.e.: it never fails to perform the exchange when the value
+// found equals the value expected), it does not return the found value on
+// failure. To satisfy the atomic API, compare_exchange_{weak|strong} must
+// return the found value on failure, and we have to manually load this value
+// after the emulated CAS reports failure. This in turn introduces a race
+// between the CAS failing (due to the "wrong" value being found) and subsequently
+// loading (which might turn up the "right" value). From an application's
+// point of view this looks like "spurious failure", and therefore the
+// emulated CAS is only good enough to provide compare_exchange_weak
+// semantics.
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/memory_order.hpp>
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+inline void
+arm_barrier(void)
+{
+ void (*kernel_dmb)(void) = (void (*)(void)) 0xffff0fa0;
+ kernel_dmb();
+}
+
+inline void
+platform_fence_before(memory_order order)
+{
+ switch(order) {
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ arm_barrier();
+ case memory_order_consume:
+ default:;
+ }
+}
+
+inline void
+platform_fence_after(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ arm_barrier();
+ default:;
+ }
+}
+
+inline void
+platform_fence_before_store(memory_order order)
+{
+ platform_fence_before(order);
+}
+
+inline void
+platform_fence_after_store(memory_order order)
+{
+ if (order == memory_order_seq_cst)
+ arm_barrier();
+}
+
+inline void
+platform_fence_after_load(memory_order order)
+{
+ platform_fence_after(order);
+}
+
+template<typename T>
+inline bool
+platform_cmpxchg32(T & expected, T desired, volatile T * ptr)
+{
+ typedef T (*kernel_cmpxchg32_t)(T oldval, T newval, volatile T * ptr);
+
+ if (((kernel_cmpxchg32_t) 0xffff0fc0)(expected, desired, ptr) == 0) {
+ return true;
+ } else {
+ expected = *ptr;
+ return false;
+ }
+}
+
+}
+}
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+inline void
+atomic_thread_fence(memory_order order)
+{
+ switch(order) {
+ case memory_order_acquire:
+ case memory_order_release:
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ atomics::detail::arm_barrier();
+ default:;
+ }
+}
+
+#define BOOST_ATOMIC_SIGNAL_FENCE 2
+inline void
+atomic_signal_fence(memory_order)
+{
+ __asm__ __volatile__ ("" ::: "memory");
+}
+
+class atomic_flag {
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+ uint32_t v_;
+public:
+ atomic_flag(void) : v_(false) {}
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before_store(order);
+ const_cast<volatile uint32_t &>(v_) = 0;
+ atomics::detail::platform_fence_after_store(order);
+ }
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before(order);
+ uint32_t expected = v_;
+ do {
+ if (expected == 1)
+ break;
+ } while (!atomics::detail::platform_cmpxchg32(expected, (uint32_t)1, &v_));
+ atomics::detail::platform_fence_after(order);
+ return expected;
+ }
+};
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+}
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR16_T_LOCK_FREE 2
+#define BOOST_ATOMIC_CHAR32_T_LOCK_FREE 2
+#define BOOST_ATOMIC_WCHAR_T_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE 2
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 0
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 2
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+#include <boost/atomic/detail/cas32weak.hpp>
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+#endif

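As the comment at the top of this file explains, the kernel-assisted CAS can only deliver compare_exchange_weak semantics, so user code on this backend must be prepared for spurious failures. That is harmless as long as the CAS sits in a retry loop, which is the idiomatic way to use compare_exchange_weak anyway; a small sketch (illustrative only, not part of the patch):

    #include <boost/atomic.hpp>

    // Saturating increment: add 1 unless 'counter' has already reached 'limit'.
    // A spurious failure simply causes another iteration of the loop;
    // compare_exchange_weak reloads 'expected' with the value actually found.
    bool increment_below_limit(boost::atomic<int>& counter, int limit)
    {
        int expected = counter.load(boost::memory_order_relaxed);
        do
        {
            if (expected >= limit)
                return false;
        }
        while (!counter.compare_exchange_weak(expected, expected + 1,
                                              boost::memory_order_acq_rel,
                                              boost::memory_order_relaxed));
        return true;
    }
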
Added: branches/release/boost/atomic/detail/lockpool.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/lockpool.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,92 @@
+#ifndef BOOST_ATOMIC_DETAIL_LOCKPOOL_HPP
+#define BOOST_ATOMIC_DETAIL_LOCKPOOL_HPP
+
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/atomic/detail/config.hpp>
+#ifndef BOOST_ATOMIC_FLAG_LOCK_FREE
+#include <boost/thread/mutex.hpp>
+#endif
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+#ifndef BOOST_ATOMIC_FLAG_LOCK_FREE
+
+class lockpool
+{
+public:
+ typedef mutex lock_type;
+ class scoped_lock
+ {
+ private:
+ lock_type& mtx_;
+
+ scoped_lock(scoped_lock const&) /* = delete */;
+ scoped_lock& operator=(scoped_lock const&) /* = delete */;
+
+ public:
+ explicit
+ scoped_lock(const volatile void * addr) : mtx_(get_lock_for(addr))
+ {
+ mtx_.lock();
+ }
+ ~scoped_lock()
+ {
+ mtx_.unlock();
+ }
+ };
+
+private:
+ static BOOST_ATOMIC_DECL lock_type& get_lock_for(const volatile void * addr);
+};
+
+#else
+
+class lockpool
+{
+public:
+ typedef atomic_flag lock_type;
+
+ class scoped_lock
+ {
+ private:
+ atomic_flag& flag_;
+
+ scoped_lock(const scoped_lock &) /* = delete */;
+ scoped_lock& operator=(const scoped_lock &) /* = delete */;
+
+ public:
+ explicit
+ scoped_lock(const volatile void * addr) : flag_(get_lock_for(addr))
+ {
+ do {
+ } while (flag_.test_and_set(memory_order_acquire));
+ }
+
+ ~scoped_lock(void)
+ {
+ flag_.clear(memory_order_release);
+ }
+ };
+
+private:
+ static BOOST_ATOMIC_DECL lock_type& get_lock_for(const volatile void * addr);
+};
+
+#endif
+
+}
+}
+}
+
+#endif

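The pool backs the non-lock-free fallback: each operation hashes the address of the atomic object to one of a fixed set of locks and holds that lock for the duration of the operation, so unrelated atomics rarely contend. A stand-alone illustration of the idea (the class name, pool size and hash below are hypothetical; the real ones live in the compiled library, and the example requires linking against Boost.Thread):

    #include <cstddef>
    #include <boost/thread/mutex.hpp>

    class demo_lockpool
    {
    public:
        enum { pool_size = 41 };        // size chosen for illustration only

        static boost::mutex& get_lock_for(const volatile void* addr)
        {
            // Hash the address so distinct objects usually map to distinct locks.
            std::size_t index = reinterpret_cast<std::size_t>(addr) % pool_size;
            return pool_[index];
        }

    private:
        static boost::mutex pool_[pool_size];
    };

    boost::mutex demo_lockpool::pool_[demo_lockpool::pool_size];

    // A fallback store guarded by the pooled lock, mirroring scoped_lock above.
    template<typename T>
    void locked_store(volatile T* addr, T value)
    {
        boost::mutex::scoped_lock guard(demo_lockpool::get_lock_for(addr));
        *addr = value;
    }
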
Added: branches/release/boost/atomic/detail/platform.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/platform.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,62 @@
+#ifndef BOOST_ATOMIC_DETAIL_PLATFORM_HPP
+#define BOOST_ATOMIC_DETAIL_PLATFORM_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// Platform selection file
+
+#include <boost/atomic/detail/config.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
+
+ #include <boost/atomic/detail/gcc-x86.hpp>
+
+#elif 0 && defined(__GNUC__) && defined(__alpha__) /* currently does not work correctly */
+
+ #include <boost/atomic/detail/base.hpp>
+ #include <boost/atomic/detail/gcc-alpha.hpp>
+
+#elif defined(__GNUC__) && (defined(__POWERPC__) || defined(__PPC__))
+
+ #include <boost/atomic/detail/gcc-ppc.hpp>
+
+// This list of ARM architecture versions comes from Apple's arm/arch.h header.
+// I don't know how complete it is.
+#elif defined(__GNUC__) && (defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) \
+ || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) \
+ || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_7A__))
+
+ #include <boost/atomic/detail/gcc-armv6plus.hpp>
+
+#elif defined(__linux__) && defined(__arm__)
+
+ #include <boost/atomic/detail/linux-arm.hpp>
+
+#elif defined(__GNUC__) && defined(__sparc_v9__)
+
+ #include <boost/atomic/detail/gcc-sparcv9.hpp>
+
+#elif defined(BOOST_WINDOWS) || defined(_WIN32_CE)
+
+ #include <boost/atomic/detail/windows.hpp>
+
+#elif 0 && defined(__GNUC__) /* currently does not work correctly */
+
+ #include <boost/atomic/detail/base.hpp>
+ #include <boost/atomic/detail/gcc-cas.hpp>
+
+#else
+
+#include <boost/atomic/detail/base.hpp>
+
+#endif
+
+#endif

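Whichever header the dispatcher selects defines the BOOST_ATOMIC_*_LOCK_FREE macros seen in the backends above (2 = always lock-free, 1 = sometimes, 0 = never, i.e. lock-pool fallback); boost/atomic.hpp is expected to default any macro the platform header leaves unset to 0. A quick, illustrative way to see what was selected for a given toolchain:

    #include <iostream>
    #include <boost/atomic.hpp>

    int main()
    {
        std::cout << "int:     " << BOOST_ATOMIC_INT_LOCK_FREE     << "\n"
                  << "llong:   " << BOOST_ATOMIC_LLONG_LOCK_FREE   << "\n"
                  << "pointer: " << BOOST_ATOMIC_POINTER_LOCK_FREE << std::endl;
        return 0;
    }
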
Added: branches/release/boost/atomic/detail/type-classification.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/type-classification.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,45 @@
+#ifndef BOOST_ATOMIC_DETAIL_TYPE_CLASSIFICATION_HPP
+#define BOOST_ATOMIC_DETAIL_TYPE_CLASSIFICATION_HPP
+
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/atomic/detail/config.hpp>
+#include <boost/type_traits/is_integral.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+template<typename T, bool IsInt = boost::is_integral<T>::value>
+struct classify
+{
+ typedef void type;
+};
+
+template<typename T>
+struct classify<T, true> {typedef int type;};
+
+template<typename T>
+struct classify<T*, false> {typedef void* type;};
+
+template<typename T>
+struct storage_size_of
+{
+ enum _
+ {
+ size = sizeof(T),
+ value = (size == 3 ? 4 : (size == 5 || size == 6 || size == 7 ? 8 : size))
+ };
+};
+
+}}}
+
+#endif

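storage_size_of rounds an object's size up to the nearest size for which an atomic backend exists: 3 bytes become 4, and 5-7 bytes become 8, so such types can be handled by the next larger lock-free backend instead of falling through to the lock pool. A small compile-time check (the struct is hypothetical):

    #include <boost/static_assert.hpp>
    #include <boost/atomic/detail/type-classification.hpp>

    // Three unsigned chars, no padding expected: sizeof == 3.
    struct rgb_color
    {
        unsigned char r, g, b;
    };

    BOOST_STATIC_ASSERT(sizeof(rgb_color) == 3);
    BOOST_STATIC_ASSERT(boost::atomics::detail::storage_size_of<rgb_color>::value == 4);
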
Added: branches/release/boost/atomic/detail/windows.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/atomic/detail/windows.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,1585 @@
+#ifndef BOOST_ATOMIC_DETAIL_WINDOWS_HPP
+#define BOOST_ATOMIC_DETAIL_WINDOWS_HPP
+
+// Copyright (c) 2009 Helge Bahmann
+// Copyright (c) 2012 Andrey Semashev
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstddef>
+#include <boost/cstdint.hpp>
+#include <boost/type_traits/make_signed.hpp>
+#include <boost/atomic/detail/config.hpp>
+#include <boost/atomic/detail/interlocked.hpp>
+
+#ifdef BOOST_ATOMIC_HAS_PRAGMA_ONCE
+#pragma once
+#endif
+
+#ifdef _MSC_VER
+#pragma warning(push)
+// 'order' : unreferenced formal parameter
+#pragma warning(disable: 4100)
+#endif
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+// Define hardware barriers
+#if defined(_MSC_VER) && (defined(_M_AMD64) || (defined(_M_IX86) && defined(_M_IX86_FP) && _M_IX86_FP >= 2))
+extern "C" void _mm_mfence(void);
+#pragma intrinsic(_mm_mfence)
+#endif
+
+BOOST_FORCEINLINE void x86_full_fence(void)
+{
+#if defined(_MSC_VER) && (defined(_M_AMD64) || (defined(_M_IX86) && defined(_M_IX86_FP) && _M_IX86_FP >= 2))
+ // Use mfence only if SSE2 is available
+ _mm_mfence();
+#else
+ long tmp;
+ BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&tmp, 0);
+#endif
+}
+
+// Define compiler barriers
+#if defined(_MSC_VER) && _MSC_VER >= 1310
+
+extern "C" void _ReadWriteBarrier();
+#pragma intrinsic(_ReadWriteBarrier)
+
+#define BOOST_ATOMIC_READ_WRITE_BARRIER() _ReadWriteBarrier()
+
+#if _MSC_VER >= 1400
+
+extern "C" void _ReadBarrier();
+#pragma intrinsic(_ReadBarrier)
+extern "C" void _WriteBarrier();
+#pragma intrinsic(_WriteBarrier)
+
+#define BOOST_ATOMIC_READ_BARRIER() _ReadBarrier()
+#define BOOST_ATOMIC_WRITE_BARRIER() _WriteBarrier()
+
+#endif
+#endif
+
+#ifndef BOOST_ATOMIC_READ_WRITE_BARRIER
+#define BOOST_ATOMIC_READ_WRITE_BARRIER()
+#endif
+#ifndef BOOST_ATOMIC_READ_BARRIER
+#define BOOST_ATOMIC_READ_BARRIER() BOOST_ATOMIC_READ_WRITE_BARRIER()
+#endif
+#ifndef BOOST_ATOMIC_WRITE_BARRIER
+#define BOOST_ATOMIC_WRITE_BARRIER() BOOST_ATOMIC_READ_WRITE_BARRIER()
+#endif
+
+// The MSVC optimizer (up to and including MSVC 2012) generates very poor code for the switch-case in the fence functions.
+// Issuing unconditional compiler barriers generates better code. We may re-enable the main branch if the MSVC optimizer improves.
+#ifdef BOOST_MSVC
+#define BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+#endif
+
+BOOST_FORCEINLINE void
+platform_fence_before(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+
+#else
+
+ switch(order)
+ {
+ case memory_order_relaxed:
+ case memory_order_consume:
+ case memory_order_acquire:
+ break;
+ case memory_order_release:
+ case memory_order_acq_rel:
+ BOOST_ATOMIC_WRITE_BARRIER();
+ /* release */
+ break;
+ case memory_order_seq_cst:
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+ /* seq */
+ break;
+ }
+
+#endif
+}
+
+BOOST_FORCEINLINE void
+platform_fence_after(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+
+#else
+
+ switch(order)
+ {
+ case memory_order_relaxed:
+ case memory_order_release:
+ break;
+ case memory_order_consume:
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ BOOST_ATOMIC_READ_BARRIER();
+ break;
+ case memory_order_seq_cst:
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+ /* seq */
+ break;
+ }
+
+#endif
+}
+
+BOOST_FORCEINLINE void
+platform_fence_before_store(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_WRITE_BARRIER();
+
+#else
+
+ switch(order)
+ {
+ case memory_order_relaxed:
+ case memory_order_acquire:
+ case memory_order_consume:
+ break;
+ case memory_order_acq_rel:
+ case memory_order_release:
+ case memory_order_seq_cst:
+ BOOST_ATOMIC_WRITE_BARRIER();
+ break;
+ }
+
+#endif
+}
+
+BOOST_FORCEINLINE void
+platform_fence_after_store(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_WRITE_BARRIER();
+ if (order == memory_order_seq_cst)
+ x86_full_fence();
+
+#else
+
+ switch(order)
+ {
+ case memory_order_relaxed:
+ case memory_order_acquire:
+ case memory_order_consume:
+ break;
+ case memory_order_acq_rel:
+ case memory_order_release:
+ BOOST_ATOMIC_WRITE_BARRIER();
+ break;
+ case memory_order_seq_cst:
+ x86_full_fence();
+ break;
+ }
+
+#endif
+}
+
+BOOST_FORCEINLINE void
+platform_fence_after_load(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_READ_BARRIER();
+ if (order == memory_order_seq_cst)
+ x86_full_fence();
+
+#else
+
+ switch(order)
+ {
+ case memory_order_relaxed:
+ case memory_order_consume:
+ break;
+ case memory_order_acquire:
+ case memory_order_acq_rel:
+ BOOST_ATOMIC_READ_BARRIER();
+ break;
+ case memory_order_release:
+ break;
+ case memory_order_seq_cst:
+ x86_full_fence();
+ break;
+ }
+
+#endif
+}
+
+} // namespace detail
+} // namespace atomics
+
+#define BOOST_ATOMIC_THREAD_FENCE 2
+BOOST_FORCEINLINE void
+atomic_thread_fence(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+ if (order == memory_order_seq_cst)
+ atomics::detail::x86_full_fence();
+
+#else
+
+ switch (order)
+ {
+ case memory_order_relaxed:
+ break;
+ case memory_order_consume:
+ case memory_order_acquire:
+ BOOST_ATOMIC_READ_BARRIER();
+ break;
+ case memory_order_release:
+ BOOST_ATOMIC_WRITE_BARRIER();
+ break;
+ case memory_order_acq_rel:
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+ break;
+ case memory_order_seq_cst:
+ atomics::detail::x86_full_fence();
+ break;
+ }
+
+#endif
+}
+
+#define BOOST_ATOMIC_SIGNAL_FENCE 2
+BOOST_FORCEINLINE void
+atomic_signal_fence(memory_order order)
+{
+#ifdef BOOST_ATOMIC_DETAIL_BAD_SWITCH_CASE_OPTIMIZER
+
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+
+#else
+
+ switch (order)
+ {
+ case memory_order_relaxed:
+ break;
+ case memory_order_consume:
+ case memory_order_acquire:
+ BOOST_ATOMIC_READ_BARRIER();
+ break;
+ case memory_order_release:
+ BOOST_ATOMIC_WRITE_BARRIER();
+ break;
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ BOOST_ATOMIC_READ_WRITE_BARRIER();
+ break;
+ }
+
+#endif
+}
+
+#undef BOOST_ATOMIC_READ_WRITE_BARRIER
+#undef BOOST_ATOMIC_READ_BARRIER
+#undef BOOST_ATOMIC_WRITE_BARRIER
+
+class atomic_flag
+{
+private:
+ atomic_flag(const atomic_flag &) /* = delete */ ;
+ atomic_flag & operator=(const atomic_flag &) /* = delete */ ;
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE8
+ char v_;
+#else
+ long v_;
+#endif
+public:
+ atomic_flag(void) : v_(0) {}
+
+ void
+ clear(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before_store(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE8
+ BOOST_ATOMIC_INTERLOCKED_EXCHANGE8(&v_, 0);
+#else
+ BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, 0);
+#endif
+ atomics::detail::platform_fence_after_store(order);
+ }
+
+ bool
+ test_and_set(memory_order order = memory_order_seq_cst) volatile
+ {
+ atomics::detail::platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE8
+ const char old = BOOST_ATOMIC_INTERLOCKED_EXCHANGE8(&v_, 1);
+#else
+ const long old = BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, 1);
+#endif
+ atomics::detail::platform_fence_after(order);
+ return old != 0;
+ }
+};
+
+} // namespace boost
+
+#define BOOST_ATOMIC_FLAG_LOCK_FREE 2
+
+#include <boost/atomic/detail/base.hpp>
+
+#if !defined(BOOST_ATOMIC_FORCE_FALLBACK)
+
+#define BOOST_ATOMIC_CHAR_LOCK_FREE 2
+#define BOOST_ATOMIC_SHORT_LOCK_FREE 2
+#define BOOST_ATOMIC_INT_LOCK_FREE 2
+#define BOOST_ATOMIC_LONG_LOCK_FREE 2
+#if defined(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64)
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 2
+#else
+#define BOOST_ATOMIC_LLONG_LOCK_FREE 0
+#endif
+#define BOOST_ATOMIC_POINTER_LOCK_FREE 2
+#define BOOST_ATOMIC_BOOL_LOCK_FREE 2
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+#if defined(_MSC_VER)
+#pragma warning(push)
+// 'char' : forcing value to bool 'true' or 'false' (performance warning)
+#pragma warning(disable: 4800)
+#endif
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 1, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8
+ typedef value_type storage_type;
+#else
+ typedef uint32_t storage_type;
+#endif
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ v_ = static_cast< storage_type >(v);
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = static_cast< value_type >(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD8
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD8(&v_, v));
+#else
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(&v_, v));
+#endif
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ typedef typename make_signed< value_type >::type signed_value_type;
+ return fetch_add(static_cast< value_type >(-static_cast< signed_value_type >(v)), order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE8
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE8(&v_, v));
+#else
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, v));
+#endif
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8
+ value_type oldval = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8(&v_, desired, previous));
+#else
+ value_type oldval = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(&v_, desired, previous));
+#endif
+ bool success = (previous == oldval);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = oldval;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#ifdef BOOST_ATOMIC_INTERLOCKED_AND8
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_AND8(&v_, v));
+ platform_fence_after(order);
+ return v;
+#elif defined(BOOST_ATOMIC_INTERLOCKED_AND)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_AND(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#ifdef BOOST_ATOMIC_INTERLOCKED_OR8
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_OR8(&v_, v));
+ platform_fence_after(order);
+ return v;
+#elif defined(BOOST_ATOMIC_INTERLOCKED_OR)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_OR(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#ifdef BOOST_ATOMIC_INTERLOCKED_XOR8
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_XOR8(&v_, v));
+ platform_fence_after(order);
+ return v;
+#elif defined(BOOST_ATOMIC_INTERLOCKED_XOR)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_XOR(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#if defined(_MSC_VER)
+#pragma warning(pop)
+#endif
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 2, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16
+ typedef value_type storage_type;
+#else
+ typedef uint32_t storage_type;
+#endif
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ v_ = static_cast< storage_type >(v);
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = static_cast< value_type >(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD16
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD16(&v_, v));
+#else
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(&v_, v));
+#endif
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ typedef typename make_signed< value_type >::type signed_value_type;
+ return fetch_add(static_cast< value_type >(-static_cast< signed_value_type >(v)), order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE16
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE16(&v_, v));
+#else
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, v));
+#endif
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16
+ value_type oldval = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16(&v_, desired, previous));
+#else
+ value_type oldval = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(&v_, desired, previous));
+#endif
+ bool success = (previous == oldval);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = oldval;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#ifdef BOOST_ATOMIC_INTERLOCKED_AND16
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_AND16(&v_, v));
+ platform_fence_after(order);
+ return v;
+#elif defined(BOOST_ATOMIC_INTERLOCKED_AND)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_AND(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#ifdef BOOST_ATOMIC_INTERLOCKED_OR16
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_OR16(&v_, v));
+ platform_fence_after(order);
+ return v;
+#elif defined(BOOST_ATOMIC_INTERLOCKED_OR)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_OR(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#ifdef BOOST_ATOMIC_INTERLOCKED_XOR16
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_XOR16(&v_, v));
+ platform_fence_after(order);
+ return v;
+#elif defined(BOOST_ATOMIC_INTERLOCKED_XOR)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_XOR(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 4, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef value_type storage_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ v_ = static_cast< storage_type >(v);
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = static_cast< value_type >(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD(&v_, v));
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ typedef typename make_signed< value_type >::type signed_value_type;
+ return fetch_add(static_cast< value_type >(-static_cast< signed_value_type >(v)), order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, v));
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ value_type oldval = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(&v_, desired, previous));
+ bool success = (previous == oldval);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = oldval;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#if defined(BOOST_ATOMIC_INTERLOCKED_AND)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_AND(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#if defined(BOOST_ATOMIC_INTERLOCKED_OR)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_OR(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#if defined(BOOST_ATOMIC_INTERLOCKED_XOR)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_XOR(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#if defined(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64)
+
+template<typename T, bool Sign>
+class base_atomic<T, int, 8, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef value_type storage_type;
+ typedef T difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ v_ = static_cast< storage_type >(v);
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = static_cast< value_type >(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ fetch_add(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD64(&v_, v));
+ platform_fence_after(order);
+ return v;
+ }
+
+ value_type
+ fetch_sub(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ typedef typename make_signed< value_type >::type signed_value_type;
+ return fetch_add(static_cast< value_type >(-static_cast< signed_value_type >(v)), order);
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE64(&v_, v));
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ value_type oldval = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64(&v_, desired, previous));
+ bool success = (previous == oldval);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = oldval;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_and(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#if defined(BOOST_ATOMIC_INTERLOCKED_AND64)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_AND64(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp & v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_or(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#if defined(BOOST_ATOMIC_INTERLOCKED_OR64)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_OR64(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp | v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ value_type
+ fetch_xor(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+#if defined(BOOST_ATOMIC_INTERLOCKED_XOR64)
+ platform_fence_before(order);
+ v = static_cast< value_type >(BOOST_ATOMIC_INTERLOCKED_XOR64(&v_, v));
+ platform_fence_after(order);
+ return v;
+#else
+ value_type tmp = load(memory_order_relaxed);
+ do {} while(!compare_exchange_weak(tmp, tmp ^ v, order, memory_order_relaxed));
+ return tmp;
+#endif
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_INTEGRAL_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#endif // defined(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64)
+
+// MSVC 2012 fails to recognize sizeof(T) as a constant expression in template specializations
+enum msvc_sizeof_pointer_workaround { sizeof_pointer = sizeof(void*) };
+
+template<bool Sign>
+class base_atomic<void*, void*, sizeof_pointer, Sign>
+{
+ typedef base_atomic this_type;
+ typedef void* value_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ v = (value_type)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(&v_, v);
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool compare_exchange_strong(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ value_type oldval = (value_type)BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(&v_, desired, previous);
+ bool success = (previous == oldval);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = oldval;
+ return success;
+ }
+
+ bool compare_exchange_weak(value_type & expected, value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T*, void*, sizeof_pointer, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T* value_type;
+ typedef ptrdiff_t difference_type;
+public:
+ explicit base_atomic(value_type v) : v_(v) {}
+ base_atomic(void) {}
+
+ void
+ store(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ platform_fence_before(order);
+ const_cast<volatile value_type &>(v_) = v;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ value_type v = const_cast<const volatile value_type &>(v_);
+ platform_fence_after_load(order);
+ return v;
+ }
+
+ value_type
+ exchange(value_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ platform_fence_before(order);
+ v = (value_type)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_POINTER(&v_, v);
+ platform_fence_after(order);
+ return v;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ value_type previous = expected;
+ platform_fence_before(success_order);
+ value_type oldval = (value_type)BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE_POINTER(&v_, desired, previous);
+ bool success = (previous == oldval);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ expected = oldval;
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ value_type
+ fetch_add(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ v = v * sizeof(*v_);
+ platform_fence_before(order);
+ value_type res = (value_type)BOOST_ATOMIC_INTERLOCKED_EXCHANGE_ADD_POINTER(&v_, v);
+ platform_fence_after(order);
+ return res;
+ }
+
+ value_type
+ fetch_sub(difference_type v, memory_order order = memory_order_seq_cst) volatile
+ {
+ return fetch_add(-v, order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_POINTER_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ value_type v_;
+};
+
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 1, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8
+ typedef uint8_t storage_type;
+#else
+ typedef uint32_t storage_type;
+#endif
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE8
+ tmp = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE8(&v_, tmp));
+#else
+ tmp = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, tmp));
+#endif
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8
+ storage_type oldval = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE8(&v_, desired_s, expected_s));
+#else
+ storage_type oldval = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(&v_, desired_s, expected_s));
+#endif
+ bool success = (oldval == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &oldval, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 2, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16
+ typedef uint16_t storage_type;
+#else
+ typedef uint32_t storage_type;
+#endif
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_EXCHANGE16
+ tmp = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE16(&v_, tmp));
+#else
+ tmp = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, tmp));
+#endif
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+#ifdef BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16
+ storage_type oldval = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE16(&v_, desired_s, expected_s));
+#else
+ storage_type oldval = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(&v_, desired_s, expected_s));
+#endif
+ bool success = (oldval == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &oldval, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 4, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint32_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ tmp = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE(&v_, tmp));
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+ storage_type oldval = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE(&v_, desired_s, expected_s));
+ bool success = (oldval == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &oldval, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#if defined(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64)
+
+template<typename T, bool Sign>
+class base_atomic<T, void, 8, Sign>
+{
+ typedef base_atomic this_type;
+ typedef T value_type;
+ typedef uint64_t storage_type;
+public:
+ explicit base_atomic(value_type const& v) : v_(0)
+ {
+ memcpy(&v_, &v, sizeof(value_type));
+ }
+ base_atomic(void) {}
+
+ void
+ store(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ if (order != memory_order_seq_cst) {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ const_cast<volatile storage_type &>(v_) = tmp;
+ } else {
+ exchange(v, order);
+ }
+ }
+
+ value_type
+ load(memory_order order = memory_order_seq_cst) const volatile
+ {
+ storage_type tmp = const_cast<volatile storage_type &>(v_);
+ platform_fence_after_load(order);
+ value_type v;
+ memcpy(&v, &tmp, sizeof(value_type));
+ return v;
+ }
+
+ value_type
+ exchange(value_type const& v, memory_order order = memory_order_seq_cst) volatile
+ {
+ storage_type tmp = 0;
+ memcpy(&tmp, &v, sizeof(value_type));
+ platform_fence_before(order);
+ tmp = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_EXCHANGE64(&v_, tmp));
+ platform_fence_after(order);
+ value_type res;
+ memcpy(&res, &tmp, sizeof(value_type));
+ return res;
+ }
+
+ bool
+ compare_exchange_strong(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ storage_type expected_s = 0, desired_s = 0;
+ memcpy(&expected_s, &expected, sizeof(value_type));
+ memcpy(&desired_s, &desired, sizeof(value_type));
+ platform_fence_before(success_order);
+ storage_type oldval = static_cast< storage_type >(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64(&v_, desired_s, expected_s));
+ bool success = (oldval == expected_s);
+ if (success)
+ platform_fence_after(success_order);
+ else
+ platform_fence_after(failure_order);
+ memcpy(&expected, &oldval, sizeof(value_type));
+ return success;
+ }
+
+ bool
+ compare_exchange_weak(
+ value_type & expected,
+ value_type const& desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ return compare_exchange_strong(expected, desired, success_order, failure_order);
+ }
+
+ bool
+ is_lock_free(void) const volatile
+ {
+ return true;
+ }
+
+ BOOST_ATOMIC_DECLARE_BASE_OPERATORS
+private:
+ base_atomic(const base_atomic &) /* = delete */ ;
+ void operator=(const base_atomic &) /* = delete */ ;
+ storage_type v_;
+};
+
+#endif // defined(BOOST_ATOMIC_INTERLOCKED_COMPARE_EXCHANGE64)
+
+} // namespace detail
+} // namespace atomics
+} // namespace boost
+
+#endif /* !defined(BOOST_ATOMIC_FORCE_FALLBACK) */
+
+#ifdef _MSC_VER
+#pragma warning(pop)
+#endif
+
+#endif
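
For reference, a minimal sketch of how the boost::atomic<> front end backed by the Interlocked
specializations above is typically used. Only the boost::atomic names come from the library; the
functions and variables are illustrative.

    #include <boost/atomic.hpp>

    boost::atomic<int> counter(0);

    int bump()
    {
        // maps to an InterlockedExchangeAdd-style intrinsic on this backend
        return counter.fetch_add(1, boost::memory_order_relaxed);
    }

    bool publish(boost::atomic<void*> & slot, void * payload)
    {
        void * expected = 0;
        // maps to InterlockedCompareExchangePointer on this backend
        return slot.compare_exchange_strong(expected, payload,
                                            boost::memory_order_release,
                                            boost::memory_order_relaxed);
    }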

Added: branches/release/boost/lockfree/detail/atomic.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/atomic.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,71 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_DETAIL_ATOMIC_HPP
+#define BOOST_LOCKFREE_DETAIL_ATOMIC_HPP
+
+#include <boost/config.hpp>
+
+// at this time, few compilers completely implement atomic<>
+#define BOOST_LOCKFREE_NO_HDR_ATOMIC
+
+// MSVC supports atomic<> from version 2012 onwards.
+#if defined(BOOST_MSVC) && (BOOST_MSVC >= 1700)
+#undef BOOST_LOCKFREE_NO_HDR_ATOMIC
+#endif
+
+// GCC supports atomic<> from version 4.8 onwards.
+#if defined(__GNUC__)
+# if defined(__GNUC_PATCHLEVEL__)
+# define __GNUC_VERSION__ (__GNUC__ * 10000 \
+ + __GNUC_MINOR__ * 100 \
+ + __GNUC_PATCHLEVEL__)
+# else
+# define __GNUC_VERSION__ (__GNUC__ * 10000 \
+ + __GNUC_MINOR__ * 100)
+# endif
+#endif
+
+#if (__GNUC_VERSION__ >= 40800) && (__cplusplus >= 201103L)
+#undef BOOST_LOCKFREE_NO_HDR_ATOMIC
+#endif
+
+#undef __GNUC_VERSION__
+
+#if defined(BOOST_LOCKFREE_NO_HDR_ATOMIC)
+#include <boost/atomic.hpp>
+#else
+#include <atomic>
+#endif
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+#if defined(BOOST_LOCKFREE_NO_HDR_ATOMIC)
+using boost::atomic;
+using boost::memory_order_acquire;
+using boost::memory_order_consume;
+using boost::memory_order_relaxed;
+using boost::memory_order_release;
+#else
+using std::atomic;
+using std::memory_order_acquire;
+using std::memory_order_consume;
+using std::memory_order_relaxed;
+using std::memory_order_release;
+#endif
+
+}
+using detail::atomic;
+using detail::memory_order_acquire;
+using detail::memory_order_consume;
+using detail::memory_order_relaxed;
+using detail::memory_order_release;
+
+}}
+
+#endif /* BOOST_LOCKFREE_DETAIL_ATOMIC_HPP */
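
The header above only selects between std::atomic<> and boost::atomic<> and re-exports the names
into boost::lockfree::detail. A minimal sketch of how the rest of the library consumes it; the
ticket counter is illustrative.

    #include <boost/lockfree/detail/atomic.hpp>

    // compiles against either std::atomic or boost::atomic, whichever branch was selected
    boost::lockfree::detail::atomic<unsigned int> ticket(0);

    unsigned int take_ticket(void)
    {
        return ticket.fetch_add(1, boost::lockfree::detail::memory_order_relaxed);
    }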

Added: branches/release/boost/lockfree/detail/branch_hints.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/branch_hints.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,38 @@
+// branch hints
+// Copyright (C) 2007, 2008 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_BRANCH_HINTS_HPP_INCLUDED
+#define BOOST_LOCKFREE_BRANCH_HINTS_HPP_INCLUDED
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+/** \brief hint for the branch prediction */
+inline bool likely(bool expr)
+{
+#ifdef __GNUC__
+ return __builtin_expect(expr, true);
+#else
+ return expr;
+#endif
+}
+
+/** \brief hint for the branch prediction */
+inline bool unlikely(bool expr)
+{
+#ifdef __GNUC__
+ return __builtin_expect(expr, false);
+#else
+ return expr;
+#endif
+}
+
+} /* namespace detail */
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_BRANCH_HINTS_HPP_INCLUDED */
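
A minimal sketch of how these hints are meant to be used on a hot path; the function is
illustrative.

    #include <boost/lockfree/detail/branch_hints.hpp>

    int checked_divide(int num, int den)
    {
        using boost::lockfree::detail::unlikely;
        if (unlikely(den == 0))
            return 0;      // rare error path
        return num / den;  // expected path
    }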

Added: branches/release/boost/lockfree/detail/copy_payload.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/copy_payload.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,49 @@
+// boost lockfree: copy_payload helper
+//
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_DETAIL_COPY_PAYLOAD_HPP_INCLUDED
+#define BOOST_LOCKFREE_DETAIL_COPY_PAYLOAD_HPP_INCLUDED
+
+#include <boost/mpl/if.hpp>
+#include <boost/type_traits/is_convertible.hpp>
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+struct copy_convertible
+{
+ template <typename T, typename U>
+ static void copy(T & t, U & u)
+ {
+ u = t;
+ }
+};
+
+struct copy_constructible_and_copyable
+{
+ template <typename T, typename U>
+ static void copy(T & t, U & u)
+ {
+ u = U(t);
+ }
+};
+
+template <typename T, typename U>
+void copy_payload(T & t, U & u)
+{
+ typedef typename boost::mpl::if_<typename boost::is_convertible<T, U>::type,
+ copy_convertible,
+ copy_constructible_and_copyable
+ >::type copy_type;
+ copy_type::copy(t, u);
+}
+
+}}}
+
+#endif /* BOOST_LOCKFREE_DETAIL_COPY_PAYLOAD_HPP_INCLUDED */
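
copy_payload dispatches at compile time: when T is convertible to U it assigns directly, otherwise
it constructs a U from T and assigns the result. A minimal sketch; the wrapped type is illustrative.

    #include <boost/lockfree/detail/copy_payload.hpp>

    struct wrapped
    {
        explicit wrapped(int i) : value(i) {}
        int value;
    };

    void example(void)
    {
        int source = 42;

        long widened = 0;
        boost::lockfree::detail::copy_payload(source, widened);   // int -> long: copy_convertible

        wrapped w(0);
        boost::lockfree::detail::copy_payload(source, w);         // int -> wrapped: copy_constructible_and_copyable
    }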

Added: branches/release/boost/lockfree/detail/freelist.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/freelist.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,625 @@
+// lock-free freelist
+//
+// Copyright (C) 2008, 2009, 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_FREELIST_HPP_INCLUDED
+#define BOOST_LOCKFREE_FREELIST_HPP_INCLUDED
+
+#include <memory>
+#include <stdexcept>
+
+#include <boost/array.hpp>
+#include <boost/config.hpp>
+#include <boost/cstdint.hpp>
+#include <boost/noncopyable.hpp>
+#include <boost/static_assert.hpp>
+#include <boost/throw_exception.hpp>
+
+#include <boost/lockfree/detail/atomic.hpp>
+#include <boost/lockfree/detail/parameter.hpp>
+#include <boost/lockfree/detail/tagged_ptr.hpp>
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+template <typename T,
+ typename Alloc = std::allocator<T>
+ >
+class freelist_stack:
+ Alloc
+{
+ struct freelist_node
+ {
+ tagged_ptr<freelist_node> next;
+ };
+
+ typedef tagged_ptr<freelist_node> tagged_node_ptr;
+
+public:
+ typedef tagged_ptr<T> tagged_node_handle;
+
+ template <typename Allocator>
+ freelist_stack (Allocator const & alloc, std::size_t n = 0):
+ Alloc(alloc),
+ pool_(tagged_node_ptr(NULL))
+ {
+ for (std::size_t i = 0; i != n; ++i) {
+ T * node = Alloc::allocate(1);
+#ifdef BOOST_LOCKFREE_FREELIST_INIT_RUNS_DTOR
+ destruct<false>(node);
+#else
+ deallocate<false>(node);
+#endif
+ }
+ }
+
+ template <bool ThreadSafe>
+ void reserve (std::size_t count)
+ {
+ for (std::size_t i = 0; i != count; ++i) {
+ T * node = Alloc::allocate(1);
+ deallocate<ThreadSafe>(node);
+ }
+ }
+
+ template <bool ThreadSafe, bool Bounded>
+ T * construct (void)
+ {
+ T * node = allocate<ThreadSafe, Bounded>();
+ if (node)
+ new(node) T();
+ return node;
+ }
+
+ template <bool ThreadSafe, bool Bounded, typename ArgumentType>
+ T * construct (ArgumentType const & arg)
+ {
+ T * node = allocate<ThreadSafe, Bounded>();
+ if (node)
+ new(node) T(arg);
+ return node;
+ }
+
+ template <bool ThreadSafe, bool Bounded, typename ArgumentType1, typename ArgumentType2>
+ T * construct (ArgumentType1 const & arg1, ArgumentType2 const & arg2)
+ {
+ T * node = allocate<ThreadSafe, Bounded>();
+ if (node)
+ new(node) T(arg1, arg2);
+ return node;
+ }
+
+ template <bool ThreadSafe>
+ void destruct (tagged_node_handle tagged_ptr)
+ {
+ T * n = tagged_ptr.get_ptr();
+ n->~T();
+ deallocate<ThreadSafe>(n);
+ }
+
+ template <bool ThreadSafe>
+ void destruct (T * n)
+ {
+ n->~T();
+ deallocate<ThreadSafe>(n);
+ }
+
+ ~freelist_stack(void)
+ {
+ tagged_node_ptr current (pool_);
+
+ while (current) {
+ freelist_node * current_ptr = current.get_ptr();
+ if (current_ptr)
+ current = current_ptr->next;
+ Alloc::deallocate((T*)current_ptr, 1);
+ }
+ }
+
+ bool is_lock_free(void) const
+ {
+ return pool_.is_lock_free();
+ }
+
+ T * get_handle(T * pointer) const
+ {
+ return pointer;
+ }
+
+ T * get_handle(tagged_node_handle const & handle) const
+ {
+ return get_pointer(handle);
+ }
+
+ T * get_pointer(tagged_node_handle const & tptr) const
+ {
+ return tptr.get_ptr();
+ }
+
+ T * get_pointer(T * pointer) const
+ {
+ return pointer;
+ }
+
+ T * null_handle(void) const
+ {
+ return NULL;
+ }
+
+protected: // allow use from subclasses
+ template <bool ThreadSafe, bool Bounded>
+ T * allocate (void)
+ {
+ if (ThreadSafe)
+ return allocate_impl<Bounded>();
+ else
+ return allocate_impl_unsafe<Bounded>();
+ }
+
+private:
+ template <bool Bounded>
+ T * allocate_impl (void)
+ {
+ tagged_node_ptr old_pool = pool_.load(memory_order_consume);
+
+ for(;;) {
+ if (!old_pool.get_ptr()) {
+ if (!Bounded)
+ return Alloc::allocate(1);
+ else
+ return 0;
+ }
+
+ freelist_node * new_pool_ptr = old_pool->next.get_ptr();
+ tagged_node_ptr new_pool (new_pool_ptr, old_pool.get_tag() + 1);
+
+ if (pool_.compare_exchange_weak(old_pool, new_pool)) {
+ void * ptr = old_pool.get_ptr();
+ return reinterpret_cast<T*>(ptr);
+ }
+ }
+ }
+
+ template <bool Bounded>
+ T * allocate_impl_unsafe (void)
+ {
+ tagged_node_ptr old_pool = pool_.load(memory_order_relaxed);
+
+ if (!old_pool.get_ptr()) {
+ if (!Bounded)
+ return Alloc::allocate(1);
+ else
+ return 0;
+ }
+
+ freelist_node * new_pool_ptr = old_pool->next.get_ptr();
+ tagged_node_ptr new_pool (new_pool_ptr, old_pool.get_tag() + 1);
+
+ pool_.store(new_pool, memory_order_relaxed);
+ void * ptr = old_pool.get_ptr();
+ return reinterpret_cast<T*>(ptr);
+ }
+
+protected:
+ template <bool ThreadSafe>
+ void deallocate (T * n)
+ {
+ if (ThreadSafe)
+ deallocate_impl(n);
+ else
+ deallocate_impl_unsafe(n);
+ }
+
+private:
+ void deallocate_impl (T * n)
+ {
+ void * node = n;
+ tagged_node_ptr old_pool = pool_.load(memory_order_consume);
+ freelist_node * new_pool_ptr = reinterpret_cast<freelist_node*>(node);
+
+ for(;;) {
+ tagged_node_ptr new_pool (new_pool_ptr, old_pool.get_tag());
+ new_pool->next.set_ptr(old_pool.get_ptr());
+
+ if (pool_.compare_exchange_weak(old_pool, new_pool))
+ return;
+ }
+ }
+
+ void deallocate_impl_unsafe (T * n)
+ {
+ void * node = n;
+ tagged_node_ptr old_pool = pool_.load(memory_order_relaxed);
+ freelist_node * new_pool_ptr = reinterpret_cast<freelist_node*>(node);
+
+ tagged_node_ptr new_pool (new_pool_ptr, old_pool.get_tag());
+ new_pool->next.set_ptr(old_pool.get_ptr());
+
+ pool_.store(new_pool, memory_order_relaxed);
+ }
+
+ atomic<tagged_node_ptr> pool_;
+};
+
+class tagged_index
+{
+public:
+ typedef boost::uint16_t tag_t;
+ typedef boost::uint16_t index_t;
+
+ /** uninitialized constructor */
+ tagged_index(void) BOOST_NOEXCEPT //: index(0), tag(0)
+ {}
+
+ /** copy constructor */
+#ifdef BOOST_NO_CXX11_DEFAULTED_FUNCTIONS
+ tagged_index(tagged_index const & rhs):
+ index(rhs.index), tag(rhs.tag)
+ {}
+#else
+ tagged_index(tagged_index const & rhs) = default;
+#endif
+
+ explicit tagged_index(index_t i, tag_t t = 0):
+ index(i), tag(t)
+ {}
+
+ /** index access */
+ /* @{ */
+ index_t get_index() const
+ {
+ return index;
+ }
+
+ void set_index(index_t i)
+ {
+ index = i;
+ }
+ /* @} */
+
+ /** tag access */
+ /* @{ */
+ tag_t get_tag() const
+ {
+ return tag;
+ }
+
+ void set_tag(tag_t t)
+ {
+ tag = t;
+ }
+ /* @} */
+
+ bool operator==(tagged_index const & rhs) const
+ {
+ return (index == rhs.index) && (tag == rhs.tag);
+ }
+
+protected:
+ index_t index;
+ tag_t tag;
+};
+
+template <typename T,
+ std::size_t size>
+struct compiletime_sized_freelist_storage
+{
+ // array-based freelists only support a 16bit address space.
+ BOOST_STATIC_ASSERT(size < 65536);
+
+ boost::array<char, size * sizeof(T)> data;
+
+ // unused ... only for API purposes
+ template <typename Allocator>
+ compiletime_sized_freelist_storage(Allocator const & alloc, std::size_t count)
+ {}
+
+ T * nodes(void) const
+ {
+ return reinterpret_cast<T*>(const_cast<char*>(data.data()));
+ }
+
+ std::size_t node_count(void) const
+ {
+ return size;
+ }
+};
+
+template <typename T,
+ typename Alloc = std::allocator<T> >
+struct runtime_sized_freelist_storage:
+ Alloc
+{
+ T * nodes_;
+ std::size_t node_count_;
+
+ template <typename Allocator>
+ runtime_sized_freelist_storage(Allocator const & alloc, std::size_t count):
+ Alloc(alloc), node_count_(count)
+ {
+ if (count > 65535)
+ boost::throw_exception(std::runtime_error("boost.lockfree: freelist size is limited to a maximum of 65535 objects"));
+ nodes_ = Alloc::allocate(count);
+ }
+
+ ~runtime_sized_freelist_storage(void)
+ {
+ Alloc::deallocate(nodes_, node_count_);
+ }
+
+ T * nodes(void) const
+ {
+ return nodes_;
+ }
+
+ std::size_t node_count(void) const
+ {
+ return node_count_;
+ }
+};
+
+
+template <typename T,
+ typename NodeStorage = runtime_sized_freelist_storage<T>
+ >
+class fixed_size_freelist:
+ NodeStorage
+{
+ struct freelist_node
+ {
+ tagged_index next;
+ };
+
+ typedef tagged_index::index_t index_t;
+
+ void initialize(void)
+ {
+ T * nodes = NodeStorage::nodes();
+ for (std::size_t i = 0; i != NodeStorage::node_count(); ++i) {
+ tagged_index * next_index = reinterpret_cast<tagged_index*>(nodes + i);
+ next_index->set_index(null_handle());
+
+#ifdef BOOST_LOCKFREE_FREELIST_INIT_RUNS_DTOR
+ destruct<false>(nodes + i);
+#else
+ deallocate<false>(static_cast<index_t>(i));
+#endif
+ }
+ }
+
+public:
+ typedef tagged_index tagged_node_handle;
+
+ template <typename Allocator>
+ fixed_size_freelist (Allocator const & alloc, std::size_t count):
+ NodeStorage(alloc, count),
+ pool_(tagged_index(static_cast<index_t>(count), 0))
+ {
+ initialize();
+ }
+
+ fixed_size_freelist (void):
+ pool_(tagged_index(NodeStorage::node_count(), 0))
+ {
+ initialize();
+ }
+
+ template <bool ThreadSafe, bool Bounded>
+ T * construct (void)
+ {
+ index_t node_index = allocate<ThreadSafe>();
+ if (node_index == null_handle())
+ return NULL;
+
+ T * node = NodeStorage::nodes() + node_index;
+ new(node) T();
+ return node;
+ }
+
+ template <bool ThreadSafe, bool Bounded, typename ArgumentType>
+ T * construct (ArgumentType const & arg)
+ {
+ index_t node_index = allocate<ThreadSafe>();
+ if (node_index == null_handle())
+ return NULL;
+
+ T * node = NodeStorage::nodes() + node_index;
+ new(node) T(arg);
+ return node;
+ }
+
+ template <bool ThreadSafe, bool Bounded, typename ArgumentType1, typename ArgumentType2>
+ T * construct (ArgumentType1 const & arg1, ArgumentType2 const & arg2)
+ {
+ index_t node_index = allocate<ThreadSafe>();
+ if (node_index == null_handle())
+ return NULL;
+
+ T * node = NodeStorage::nodes() + node_index;
+ new(node) T(arg1, arg2);
+ return node;
+ }
+
+ template <bool ThreadSafe>
+ void destruct (tagged_node_handle tagged_index)
+ {
+ index_t index = tagged_index.get_index();
+ T * n = NodeStorage::nodes() + index;
+ n->~T();
+ deallocate<ThreadSafe>(index);
+ }
+
+ template <bool ThreadSafe>
+ void destruct (T * n)
+ {
+ n->~T();
+ deallocate<ThreadSafe>(n - NodeStorage::nodes());
+ }
+
+ bool is_lock_free(void) const
+ {
+ return pool_.is_lock_free();
+ }
+
+ index_t null_handle(void) const
+ {
+ return static_cast<index_t>(NodeStorage::node_count());
+ }
+
+ index_t get_handle(T * pointer) const
+ {
+ if (pointer == NULL)
+ return null_handle();
+ else
+ return static_cast<index_t>(pointer - NodeStorage::nodes());
+ }
+
+ index_t get_handle(tagged_node_handle const & handle) const
+ {
+ return handle.get_index();
+ }
+
+ T * get_pointer(tagged_node_handle const & tptr) const
+ {
+ return get_pointer(tptr.get_index());
+ }
+
+ T * get_pointer(index_t index) const
+ {
+ if (index == null_handle())
+ return 0;
+ else
+ return NodeStorage::nodes() + index;
+ }
+
+ T * get_pointer(T * ptr) const
+ {
+ return ptr;
+ }
+
+protected: // allow use from subclasses
+ template <bool ThreadSafe>
+ index_t allocate (void)
+ {
+ if (ThreadSafe)
+ return allocate_impl();
+ else
+ return allocate_impl_unsafe();
+ }
+
+private:
+ index_t allocate_impl (void)
+ {
+ tagged_index old_pool = pool_.load(memory_order_consume);
+
+ for(;;) {
+ index_t index = old_pool.get_index();
+ if (index == null_handle())
+ return index;
+
+ T * old_node = NodeStorage::nodes() + index;
+ tagged_index * next_index = reinterpret_cast<tagged_index*>(old_node);
+
+ tagged_index new_pool(next_index->get_index(), old_pool.get_tag() + 1);
+
+ if (pool_.compare_exchange_weak(old_pool, new_pool))
+ return old_pool.get_index();
+ }
+ }
+
+ index_t allocate_impl_unsafe (void)
+ {
+ tagged_index old_pool = pool_.load(memory_order_consume);
+
+ index_t index = old_pool.get_index();
+ if (index == null_handle())
+ return index;
+
+ T * old_node = NodeStorage::nodes() + index;
+ tagged_index * next_index = reinterpret_cast<tagged_index*>(old_node);
+
+ tagged_index new_pool(next_index->get_index(), old_pool.get_tag() + 1);
+
+ pool_.store(new_pool, memory_order_relaxed);
+ return old_pool.get_index();
+ }
+
+ template <bool ThreadSafe>
+ void deallocate (index_t index)
+ {
+ if (ThreadSafe)
+ deallocate_impl(index);
+ else
+ deallocate_impl_unsafe(index);
+ }
+
+ void deallocate_impl (index_t index)
+ {
+ freelist_node * new_pool_node = reinterpret_cast<freelist_node*>(NodeStorage::nodes() + index);
+ tagged_index old_pool = pool_.load(memory_order_consume);
+
+ for(;;) {
+ tagged_index new_pool (index, old_pool.get_tag());
+ new_pool_node->next.set_index(old_pool.get_index());
+
+ if (pool_.compare_exchange_weak(old_pool, new_pool))
+ return;
+ }
+ }
+
+ void deallocate_impl_unsafe (index_t index)
+ {
+ freelist_node * new_pool_node = reinterpret_cast<freelist_node*>(NodeStorage::nodes() + index);
+ tagged_index old_pool = pool_.load(memory_order_consume);
+
+ tagged_index new_pool (index, old_pool.get_tag());
+ new_pool_node->next.set_index(old_pool.get_index());
+
+ pool_.store(new_pool);
+ }
+
+ atomic<tagged_index> pool_;
+};
+
+template <typename T,
+ typename Alloc,
+ bool IsCompileTimeSized,
+ bool IsFixedSize,
+ std::size_t Capacity
+ >
+struct select_freelist
+{
+ typedef typename mpl::if_c<IsCompileTimeSized,
+ compiletime_sized_freelist_storage<T, Capacity>,
+ runtime_sized_freelist_storage<T, Alloc>
+ >::type fixed_sized_storage_type;
+
+ typedef typename mpl::if_c<IsCompileTimeSized || IsFixedSize,
+ fixed_size_freelist<T, fixed_sized_storage_type>,
+ freelist_stack<T, Alloc>
+ >::type type;
+};
+
+template <typename T, bool IsNodeBased>
+struct select_tagged_handle
+{
+ typedef typename mpl::if_c<IsNodeBased,
+ tagged_ptr<T>,
+ tagged_index
+ >::type tagged_handle_type;
+
+ typedef typename mpl::if_c<IsNodeBased,
+ T*,
+ typename tagged_index::index_t
+ >::type handle_type;
+};
+
+
+} /* namespace detail */
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_FREELIST_HPP_INCLUDED */
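
The freelists are internal building blocks, but their interface is small: construct<ThreadSafe,
Bounded>() takes a node from the pool (or allocates one when unbounded) and runs the constructor,
and destruct<ThreadSafe>() runs the destructor and returns the node to the pool. A minimal sketch,
assuming an illustrative payload type.

    #include <memory>
    #include <boost/lockfree/detail/freelist.hpp>

    struct job { int id; };

    void example(void)
    {
        // pre-allocate 16 nodes; further nodes may still be allocated on demand
        boost::lockfree::detail::freelist_stack<job> pool(std::allocator<job>(), 16);

        job * j = pool.construct<true, false>();   // thread-safe, unbounded allocation
        if (j) {
            j->id = 1;
            pool.destruct<true>(j);                // destroy and push the node back to the freelist
        }
    }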

Added: branches/release/boost/lockfree/detail/parameter.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/parameter.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,73 @@
+// boost lockfree
+//
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_DETAIL_PARAMETER_HPP
+#define BOOST_LOCKFREE_DETAIL_PARAMETER_HPP
+
+#include <memory>
+
+#include <boost/lockfree/policies.hpp>
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+namespace mpl = boost::mpl;
+
+template <typename bound_args, typename tag_type>
+struct has_arg
+{
+ typedef typename parameter::binding<bound_args, tag_type, mpl::void_>::type type;
+ static const bool value = mpl::is_not_void_<type>::type::value;
+};
+
+
+template <typename bound_args>
+struct extract_capacity
+{
+ static const bool has_capacity = has_arg<bound_args, tag::capacity>::value;
+
+ typedef typename mpl::if_c<has_capacity,
+ typename has_arg<bound_args, tag::capacity>::type,
+ mpl::size_t< 0 >
+ >::type capacity_t;
+
+ static const std::size_t capacity = capacity_t::value;
+};
+
+
+template <typename bound_args, typename T>
+struct extract_allocator
+{
+ static const bool has_allocator = has_arg<bound_args, tag::allocator>::value;
+
+ typedef typename mpl::if_c<has_allocator,
+ typename has_arg<bound_args, tag::allocator>::type,
+ std::allocator<T>
+ >::type allocator_arg;
+
+ typedef typename allocator_arg::template rebind<T>::other type;
+};
+
+template <typename bound_args, bool default_ = false>
+struct extract_fixed_sized
+{
+ static const bool has_fixed_sized = has_arg<bound_args, tag::fixed_sized>::value;
+
+ typedef typename mpl::if_c<has_fixed_sized,
+ typename has_arg<bound_args, tag::fixed_sized>::type,
+ mpl::bool_<default_>
+ >::type type;
+
+ static const bool value = type::value;
+};
+
+
+} /* namespace detail */
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_DETAIL_PARAMETER_HPP */

Added: branches/release/boost/lockfree/detail/prefix.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/prefix.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,56 @@
+// Copyright (C) 2009 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_PREFIX_HPP_INCLUDED
+#define BOOST_LOCKFREE_PREFIX_HPP_INCLUDED
+
+/* this file defines the following macros:
+   BOOST_LOCKFREE_CACHELINE_BYTES: size of a cache line in bytes
+   BOOST_LOCKFREE_CACHELINE_ALIGNMENT: attribute used for aligning structs to cache line
+                                       boundaries
+   BOOST_LOCKFREE_PTR_COMPRESSION: use tag/pointer compression to utilize parts
+                                   of the virtual address space as tag (at least 16 bit)
+   BOOST_LOCKFREE_DCAS_ALIGNMENT: attribute used for aligning structs so that they can be
+                                  updated by a double-width compare-and-swap
+*/
+
+#define BOOST_LOCKFREE_CACHELINE_BYTES 64
+
+#ifdef _MSC_VER
+
+#define BOOST_LOCKFREE_CACHELINE_ALIGNMENT __declspec(align(BOOST_LOCKFREE_CACHELINE_BYTES))
+
+#if defined(_M_IX86)
+ #define BOOST_LOCKFREE_DCAS_ALIGNMENT
+#elif defined(_M_X64) || defined(_M_IA64)
+ #define BOOST_LOCKFREE_PTR_COMPRESSION 1
+ #define BOOST_LOCKFREE_DCAS_ALIGNMENT __declspec(align(16))
+#endif
+
+#endif /* _MSC_VER */
+
+#ifdef __GNUC__
+
+#define BOOST_LOCKFREE_CACHELINE_ALIGNMENT __attribute__((aligned(BOOST_LOCKFREE_CACHELINE_BYTES)))
+
+#if defined(__i386__) || defined(__ppc__)
+ #define BOOST_LOCKFREE_DCAS_ALIGNMENT
+#elif defined(__x86_64__)
+ #define BOOST_LOCKFREE_PTR_COMPRESSION 1
+ #define BOOST_LOCKFREE_DCAS_ALIGNMENT __attribute__((aligned(16)))
+#elif defined(__alpha__)
+ // LATER: alpha may benefit from pointer compression. but what is the maximum size of the address space?
+ #define BOOST_LOCKFREE_DCAS_ALIGNMENT
+#endif
+#endif /* __GNUC__ */
+
+#ifndef BOOST_LOCKFREE_DCAS_ALIGNMENT
+#define BOOST_LOCKFREE_DCAS_ALIGNMENT /*BOOST_LOCKFREE_DCAS_ALIGNMENT*/
+#endif
+
+#ifndef BOOST_LOCKFREE_CACHELINE_ALIGNMENT
+#define BOOST_LOCKFREE_CACHELINE_ALIGNMENT /*BOOST_LOCKFREE_CACHELINE_ALIGNMENT*/
+#endif
+
+#endif /* BOOST_LOCKFREE_PREFIX_HPP_INCLUDED */
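
These macros are consumed by the data structures to keep hot members on separate cache lines and
to align two-word structures for double-width compare-and-swap. A minimal sketch of the intended
use; both structs are illustrative.

    #include <cstddef>
    #include <boost/lockfree/detail/prefix.hpp>

    // aligned so that a 16-byte compare-and-swap can operate on it (where available)
    struct BOOST_LOCKFREE_DCAS_ALIGNMENT two_words
    {
        void * ptr;
        std::size_t tag;
    };

    // keep two frequently updated members on different cache lines (same technique as queue<>)
    struct BOOST_LOCKFREE_CACHELINE_ALIGNMENT indices
    {
        std::size_t write_index;
        char padding[BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(std::size_t)];
        std::size_t read_index;
    };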

Added: branches/release/boost/lockfree/detail/tagged_ptr.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/tagged_ptr.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,21 @@
+// tagged pointer, for aba prevention
+//
+// Copyright (C) 2008 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_TAGGED_PTR_HPP_INCLUDED
+#define BOOST_LOCKFREE_TAGGED_PTR_HPP_INCLUDED
+
+#include <boost/config.hpp>
+#include <boost/lockfree/detail/prefix.hpp>
+
+#ifndef BOOST_LOCKFREE_PTR_COMPRESSION
+#include <boost/lockfree/detail/tagged_ptr_dcas.hpp>
+#else
+#include <boost/lockfree/detail/tagged_ptr_ptrcompression.hpp>
+#endif
+
+#endif /* BOOST_LOCKFREE_TAGGED_PTR_HPP_INCLUDED */

Added: branches/release/boost/lockfree/detail/tagged_ptr_dcas.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/tagged_ptr_dcas.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,127 @@
+// tagged pointer, for aba prevention
+//
+// Copyright (C) 2008 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_TAGGED_PTR_DCAS_HPP_INCLUDED
+#define BOOST_LOCKFREE_TAGGED_PTR_DCAS_HPP_INCLUDED
+
+#include <boost/lockfree/detail/branch_hints.hpp>
+
+#include <cstddef> /* for std::size_t */
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+template <class T>
+class BOOST_LOCKFREE_DCAS_ALIGNMENT tagged_ptr
+{
+public:
+ typedef std::size_t tag_t;
+
+ /** uninitialized constructor */
+ tagged_ptr(void) BOOST_NOEXCEPT//: ptr(0), tag(0)
+ {}
+
+#ifdef BOOST_NO_CXX11_DEFAULTED_FUNCTIONS
+ tagged_ptr(tagged_ptr const & p):
+ ptr(p.ptr), tag(p.tag)
+ {}
+#else
+ tagged_ptr(tagged_ptr const & p) = default;
+#endif
+
+ explicit tagged_ptr(T * p, tag_t t = 0):
+ ptr(p), tag(t)
+ {}
+
+ /** unsafe set operation */
+ /* @{ */
+#ifdef BOOST_NO_CXX11_DEFAULTED_FUNCTIONS
+ tagged_ptr & operator= (tagged_ptr const & p)
+ {
+ set(p.ptr, p.tag);
+ return *this;
+ }
+#else
+ tagged_ptr & operator= (tagged_ptr const & p) = default;
+#endif
+
+ void set(T * p, tag_t t)
+ {
+ ptr = p;
+ tag = t;
+ }
+ /* @} */
+
+ /** comparing semantics */
+ /* @{ */
+ bool operator== (volatile tagged_ptr const & p) const
+ {
+ return (ptr == p.ptr) && (tag == p.tag);
+ }
+
+ bool operator!= (volatile tagged_ptr const & p) const
+ {
+ return !operator==(p);
+ }
+ /* @} */
+
+ /** pointer access */
+ /* @{ */
+ T * get_ptr(void) const volatile
+ {
+ return ptr;
+ }
+
+ void set_ptr(T * p) volatile
+ {
+ ptr = p;
+ }
+ /* @} */
+
+ /** tag access */
+ /* @{ */
+ tag_t get_tag() const volatile
+ {
+ return tag;
+ }
+
+ void set_tag(tag_t t) volatile
+ {
+ tag = t;
+ }
+ /* @} */
+
+ /** smart pointer support */
+ /* @{ */
+ T & operator*() const
+ {
+ return *ptr;
+ }
+
+ T * operator->() const
+ {
+ return ptr;
+ }
+
+ operator bool(void) const
+ {
+ return ptr != 0;
+ }
+ /* @} */
+
+protected:
+ T * ptr;
+ tag_t tag;
+};
+
+} /* namespace detail */
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_TAGGED_PTR_DCAS_HPP_INCLUDED */
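
The tag is what prevents the ABA problem: a node that is freed and re-allocated comes back with a
different tag, so a stale compare-and-swap fails even though the pointer value repeats. A minimal
sketch mirroring how the freelist uses it; the node type and function are illustrative.

    #include <boost/lockfree/detail/atomic.hpp>
    #include <boost/lockfree/detail/tagged_ptr.hpp>

    struct node { node * next; };

    typedef boost::lockfree::detail::tagged_ptr<node> tagged_node_ptr;

    bool replace_head(boost::lockfree::detail::atomic<tagged_node_ptr> & head, node * desired)
    {
        tagged_node_ptr old_head = head.load(boost::lockfree::detail::memory_order_acquire);
        // bump the tag on every successful update
        tagged_node_ptr new_head(desired, old_head.get_tag() + 1);
        return head.compare_exchange_weak(old_head, new_head);
    }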

Added: branches/release/boost/lockfree/detail/tagged_ptr_ptrcompression.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/detail/tagged_ptr_ptrcompression.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,168 @@
+// tagged pointer, for aba prevention
+//
+// Copyright (C) 2008, 2009 Tim Blechmann, based on code by Cory Nelson
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_TAGGED_PTR_PTRCOMPRESSION_HPP_INCLUDED
+#define BOOST_LOCKFREE_TAGGED_PTR_PTRCOMPRESSION_HPP_INCLUDED
+
+#include <boost/lockfree/detail/branch_hints.hpp>
+
+#include <cstddef> /* for std::size_t */
+
+#include <boost/cstdint.hpp>
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+#if defined (__x86_64__) || defined (_M_X64)
+
+template <class T>
+class tagged_ptr
+{
+ typedef boost::uint64_t compressed_ptr_t;
+
+public:
+ typedef boost::uint16_t tag_t;
+
+private:
+ union cast_unit
+ {
+ compressed_ptr_t value;
+ tag_t tag[4];
+ };
+
+ static const int tag_index = 3;
+ static const compressed_ptr_t ptr_mask = 0xffffffffffffUL; //(1L<<48L)-1;
+
+ static T* extract_ptr(volatile compressed_ptr_t const & i)
+ {
+ return (T*)(i & ptr_mask);
+ }
+
+ static tag_t extract_tag(volatile compressed_ptr_t const & i)
+ {
+ cast_unit cu;
+ cu.value = i;
+ return cu.tag[tag_index];
+ }
+
+ static compressed_ptr_t pack_ptr(T * ptr, int tag)
+ {
+ cast_unit ret;
+ ret.value = compressed_ptr_t(ptr);
+ ret.tag[tag_index] = tag;
+ return ret.value;
+ }
+
+public:
+ /** uninitialized constructor */
+ tagged_ptr(void) BOOST_NOEXCEPT//: ptr(0), tag(0)
+ {}
+
+ /** copy constructor */
+#ifdef BOOST_NO_CXX11_DEFAULTED_FUNCTIONS
+ tagged_ptr(tagged_ptr const & p):
+ ptr(p.ptr)
+ {}
+#else
+ tagged_ptr(tagged_ptr const & p) = default;
+#endif
+
+ explicit tagged_ptr(T * p, tag_t t = 0):
+ ptr(pack_ptr(p, t))
+ {}
+
+ /** unsafe set operation */
+ /* @{ */
+#ifdef BOOST_NO_CXX11_DEFAULTED_FUNCTIONS
+ tagged_ptr & operator= (tagged_ptr const & p)
+ {
+ ptr = p.ptr;
+ return *this;
+ }
+#else
+ tagged_ptr & operator= (tagged_ptr const & p) = default;
+#endif
+
+ void set(T * p, tag_t t)
+ {
+ ptr = pack_ptr(p, t);
+ }
+ /* @} */
+
+ /** comparing semantics */
+ /* @{ */
+ bool operator== (volatile tagged_ptr const & p) const
+ {
+ return (ptr == p.ptr);
+ }
+
+ bool operator!= (volatile tagged_ptr const & p) const
+ {
+ return !operator==(p);
+ }
+ /* @} */
+
+ /** pointer access */
+ /* @{ */
+ T * get_ptr() const volatile
+ {
+ return extract_ptr(ptr);
+ }
+
+ void set_ptr(T * p) volatile
+ {
+ tag_t tag = get_tag();
+ ptr = pack_ptr(p, tag);
+ }
+ /* @} */
+
+ /** tag access */
+ /* @{ */
+ tag_t get_tag() const volatile
+ {
+ return extract_tag(ptr);
+ }
+
+ void set_tag(tag_t t) volatile
+ {
+ T * p = get_ptr();
+ ptr = pack_ptr(p, t);
+ }
+ /* @} */
+
+ /** smart pointer support */
+ /* @{ */
+ T & operator*() const
+ {
+ return *get_ptr();
+ }
+
+ T * operator->() const
+ {
+ return get_ptr();
+ }
+
+ operator bool(void) const
+ {
+ return get_ptr() != 0;
+ }
+ /* @} */
+
+protected:
+ compressed_ptr_t ptr;
+};
+#else
+#error unsupported platform
+#endif
+
+} /* namespace detail */
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_TAGGED_PTR_PTRCOMPRESSION_HPP_INCLUDED */
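
On x86_64 only the lower 48 bits of a pointer are significant, so this variant packs the pointer
and a 16-bit tag into a single 64-bit word that can be updated with an ordinary compare-and-swap.
A minimal sketch of that property; the assertion and function are illustrative.

    #include <boost/cstdint.hpp>
    #include <boost/static_assert.hpp>
    #include <boost/lockfree/detail/tagged_ptr_ptrcompression.hpp>

    BOOST_STATIC_ASSERT(sizeof(boost::lockfree::detail::tagged_ptr<int>) == sizeof(boost::uint64_t));

    void example(int * p)
    {
        boost::lockfree::detail::tagged_ptr<int> t(p, 5);
        int * q = t.get_ptr();             // == p
        boost::uint16_t s = t.get_tag();   // == 5
        (void)q; (void)s;
    }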

Added: branches/release/boost/lockfree/policies.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/policies.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,59 @@
+// boost lockfree
+//
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_POLICIES_HPP_INCLUDED
+#define BOOST_LOCKFREE_POLICIES_HPP_INCLUDED
+
+#include <boost/parameter.hpp>
+#include <boost/mpl/bool.hpp>
+#include <boost/mpl/size_t.hpp>
+#include <boost/mpl/void.hpp>
+
+namespace boost {
+namespace lockfree {
+
+#ifndef BOOST_DOXYGEN_INVOKED
+namespace tag { struct allocator ; }
+namespace tag { struct fixed_sized; }
+namespace tag { struct capacity; }
+
+#endif
+
+/** Configures a data structure as \b fixed-sized.
+ *
+ * The internal nodes are stored inside an array and they are addressed by array indexing. This limits the possible size of the
+ * queue to the number of elements that can be addressed by the index type (usually 2**16-2), but on platforms that lack
+ * double-width compare-and-exchange instructions, this is the best way to achieve lock-freedom.
+ * This implies that a data structure is bounded.
+ * */
+template <bool IsFixedSized>
+struct fixed_sized:
+ boost::parameter::template_keyword<tag::fixed_sized, boost::mpl::bool_<IsFixedSized> >
+{};
+
+/** Sets the \b capacity of a data structure at compile-time.
+ *
+ * This implies that a data structure is bounded and fixed-sized.
+ * */
+template <size_t Size>
+struct capacity:
+ boost::parameter::template_keyword<tag::capacity, boost::mpl::size_t<Size> >
+{};
+
+/** Defines the \b allocator type of a data structure.
+ * */
+template <class Alloc>
+struct allocator:
+ boost::parameter::template_keyword<tag::allocator, Alloc>
+{};
+
+}
+}
+
+#endif /* BOOST_LOCKFREE_POLICIES_HPP_INCLUDED */
+
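
The policies compose via Boost.Parameter, so they can be passed in any order. A minimal sketch of
queue declarations using them; the element type and typedef names are illustrative.

    #include <memory>
    #include <boost/lockfree/queue.hpp>
    #include <boost/lockfree/policies.hpp>

    // bounded, fixed-size queue of 1024 ints: no dynamic allocation after construction
    typedef boost::lockfree::queue<int, boost::lockfree::capacity<1024> > bounded_queue;

    // node-based queue with an explicitly chosen allocator
    typedef boost::lockfree::queue<int,
                                   boost::lockfree::fixed_sized<false>,
                                   boost::lockfree::allocator<std::allocator<int> > > dynamic_queue;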

Added: branches/release/boost/lockfree/queue.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/queue.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,467 @@
+// lock-free queue from
+// Michael, M. M. and Scott, M. L.,
+// "simple, fast and practical non-blocking and blocking concurrent queue algorithms"
+//
+// Copyright (C) 2008, 2009, 2010, 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_FIFO_HPP_INCLUDED
+#define BOOST_LOCKFREE_FIFO_HPP_INCLUDED
+
+#include <memory> /* std::auto_ptr */
+
+#include <boost/assert.hpp>
+#include <boost/noncopyable.hpp>
+#include <boost/static_assert.hpp>
+#include <boost/type_traits/has_trivial_assign.hpp>
+#include <boost/type_traits/has_trivial_destructor.hpp>
+
+#include <boost/lockfree/detail/atomic.hpp>
+#include <boost/lockfree/detail/copy_payload.hpp>
+#include <boost/lockfree/detail/freelist.hpp>
+#include <boost/lockfree/detail/parameter.hpp>
+#include <boost/lockfree/detail/tagged_ptr.hpp>
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+typedef parameter::parameters<boost::parameter::optional<tag::allocator>,
+ boost::parameter::optional<tag::capacity>
+ > queue_signature;
+
+} /* namespace detail */
+
+
+/** The queue class provides a multi-writer/multi-reader queue. Pushing and popping is lock-free;
+ * construction and destruction have to be synchronized. It uses a freelist for memory management:
+ * freed nodes are pushed to the freelist and not returned to the OS before the queue is destroyed.
+ *
+ * \b Policies:
+ * - \ref boost::lockfree::fixed_sized, defaults to \c boost::lockfree::fixed_sized<false> \n
+ * Can be used to completely disable dynamic memory allocations during push in order to ensure lockfree behavior. \n
+ * If the data structure is configured as fixed-sized, the internal nodes are stored inside an array and they are addressed
+ * by array indexing. This limits the possible size of the queue to the number of elements that can be addressed by the index
+ * type (usually 2**16-2), but on platforms that lack double-width compare-and-exchange instructions, this is the best way
+ * to achieve lock-freedom.
+ *
+ * - \ref boost::lockfree::capacity, optional \n
+ * If this template argument is passed to the options, the size of the queue is set at compile-time.\n
+ * This option implies \c fixed_sized<true>.
+ *
+ * - \ref boost::lockfree::allocator, defaults to \c boost::lockfree::allocator<std::allocator<void>> \n
+ * Specifies the allocator that is used for the internal freelist
+ *
+ * \b Requirements:
+ * - T must have a copy constructor
+ * - T must have a trivial assignment operator
+ * - T must have a trivial destructor
+ *
+ * */
+#ifndef BOOST_DOXYGEN_INVOKED
+template <typename T,
+ class A0 = boost::parameter::void_,
+ class A1 = boost::parameter::void_,
+ class A2 = boost::parameter::void_>
+#else
+template <typename T, ...Options>
+#endif
+class queue:
+ boost::noncopyable
+{
+private:
+#ifndef BOOST_DOXYGEN_INVOKED
+ typedef typename detail::queue_signature::bind<A0, A1, A2>::type bound_args;
+
+ static const bool has_capacity = detail::extract_capacity<bound_args>::has_capacity;
+ static const size_t capacity = detail::extract_capacity<bound_args>::capacity;
+ static const bool fixed_sized = detail::extract_fixed_sized<bound_args>::value;
+ static const bool node_based = !(has_capacity || fixed_sized);
+ static const bool compile_time_sized = has_capacity;
+
+ struct BOOST_LOCKFREE_CACHELINE_ALIGNMENT node
+ {
+ typedef typename detail::select_tagged_handle<node, node_based>::tagged_handle_type tagged_node_handle;
+ typedef typename detail::select_tagged_handle<node, node_based>::handle_type handle_type;
+
+ node(T const & v, handle_type null_handle):
+ data(v)//, next(tagged_node_handle(0, 0))
+ {
+ /* increment tag to avoid ABA problem */
+ tagged_node_handle old_next = next.load(memory_order_relaxed);
+ tagged_node_handle new_next (null_handle, old_next.get_tag()+1);
+ next.store(new_next, memory_order_release);
+ }
+
+ node (handle_type null_handle):
+ next(tagged_node_handle(null_handle, 0))
+ {}
+
+ node(void)
+ {}
+
+ atomic<tagged_node_handle> next;
+ T data;
+ };
+
+ typedef typename detail::extract_allocator<bound_args, node>::type node_allocator;
+ typedef typename detail::select_freelist<node, node_allocator, compile_time_sized, fixed_sized, capacity>::type pool_t;
+ typedef typename pool_t::tagged_node_handle tagged_node_handle;
+ typedef typename detail::select_tagged_handle<node, node_based>::handle_type handle_type;
+
+ void initialize(void)
+ {
+ node * n = pool.template construct<true, false>(pool.null_handle());
+ tagged_node_handle dummy_node(pool.get_handle(n), 0);
+ head_.store(dummy_node, memory_order_relaxed);
+ tail_.store(dummy_node, memory_order_release);
+ }
+
+ struct implementation_defined
+ {
+ typedef node_allocator allocator;
+ typedef std::size_t size_type;
+ };
+
+#endif
+
+public:
+ typedef T value_type;
+ typedef typename implementation_defined::allocator allocator;
+ typedef typename implementation_defined::size_type size_type;
+
+ /**
+ * \return true, if implementation is lock-free.
+ *
+ * \warning It only checks whether the queue head and tail nodes and the freelist can be modified in a lock-free manner.
+ * On most platforms, the whole implementation is lock-free if this is true. Using C++0x-style atomics, there is
+ * no way to provide a completely accurate check, because one would need to test every internal
+ * node, which is impossible if further nodes can be allocated from the operating system.
+ * */
+ bool is_lock_free (void) const
+ {
+ return head_.is_lock_free() && tail_.is_lock_free() && pool.is_lock_free();
+ }
+
+ //! Construct queue
+ // @{
+ queue(void):
+ head_(tagged_node_handle(0, 0)),
+ tail_(tagged_node_handle(0, 0)),
+ pool(node_allocator(), capacity)
+ {
+ BOOST_ASSERT(has_capacity);
+ initialize();
+ }
+
+ template <typename U>
+ explicit queue(typename node_allocator::template rebind<U>::other const & alloc):
+ head_(tagged_node_handle(0, 0)),
+ tail_(tagged_node_handle(0, 0)),
+ pool(alloc, capacity)
+ {
+ BOOST_STATIC_ASSERT(has_capacity);
+ initialize();
+ }
+
+ explicit queue(allocator const & alloc):
+ head_(tagged_node_handle(0, 0)),
+ tail_(tagged_node_handle(0, 0)),
+ pool(alloc, capacity)
+ {
+ BOOST_ASSERT(has_capacity);
+ initialize();
+ }
+ // @}
+
+ //! Construct queue, allocate n nodes for the freelist.
+ // @{
+ explicit queue(size_type n):
+ head_(tagged_node_handle(0, 0)),
+ tail_(tagged_node_handle(0, 0)),
+ pool(node_allocator(), n + 1)
+ {
+ BOOST_ASSERT(!has_capacity);
+ initialize();
+ }
+
+ template <typename U>
+ queue(size_type n, typename node_allocator::template rebind<U>::other const & alloc):
+ head_(tagged_node_handle(0, 0)),
+ tail_(tagged_node_handle(0, 0)),
+ pool(alloc, n + 1)
+ {
+ BOOST_STATIC_ASSERT(!has_capacity);
+ initialize();
+ }
+ // @}
+
+ /** \copydoc boost::lockfree::stack::reserve
+ * */
+ void reserve(size_type n)
+ {
+ pool.template reserve<true>(n);
+ }
+
+ /** \copydoc boost::lockfree::stack::reserve_unsafe
+ * */
+ void reserve_unsafe(size_type n)
+ {
+ pool.template reserve<false>(n);
+ }
+
+ /** Destroys queue, free all nodes from freelist.
+ * */
+ ~queue(void)
+ {
+ T dummy;
+ while(unsynchronized_pop(dummy))
+ {}
+
+ pool.template destruct<false>(head_.load(memory_order_relaxed));
+ }
+
+ /** Check if the queue is empty
+ *
+ * \return true, if the queue is empty, false otherwise
+ * \note The result is only accurate if no other thread modifies the queue. Therefore it is rarely practical to use this
+ * value in program logic.
+ * */
+ bool empty(void)
+ {
+ return pool.get_handle(head_.load()) == pool.get_handle(tail_.load());
+ }
+
+ /** Pushes object t to the queue.
+ *
+ * \post object will be pushed to the queue, if internal node can be allocated
+ * \returns true, if the push operation is successful.
+ *
+ * \note Thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
+ * from the OS. This may not be lock-free.
+ * */
+ bool push(T const & t)
+ {
+ return do_push<false>(t);
+ }
+
+ /** Pushes object t to the queue.
+ *
+ * \post object will be pushed to the queue, if internal node can be allocated
+ * \returns true, if the push operation is successful.
+ *
+ * \note Thread-safe and non-blocking. If internal memory pool is exhausted, operation will fail
+ * \throws if memory allocator throws
+ * */
+ bool bounded_push(T const & t)
+ {
+ return do_push<true>(t);
+ }
+
+
+private:
+#ifndef BOOST_DOXYGEN_INVOKED
+ template <bool Bounded>
+ bool do_push(T const & t)
+ {
+ using detail::likely;
+
+ node * n = pool.template construct<true, Bounded>(t, pool.null_handle());
+ handle_type node_handle = pool.get_handle(n);
+
+ if (n == NULL)
+ return false;
+
+ for (;;) {
+ tagged_node_handle tail = tail_.load(memory_order_acquire);
+ node * tail_node = pool.get_pointer(tail);
+ tagged_node_handle next = tail_node->next.load(memory_order_acquire);
+ node * next_ptr = pool.get_pointer(next);
+
+ tagged_node_handle tail2 = tail_.load(memory_order_acquire);
+ if (likely(tail == tail2)) {
+ if (next_ptr == 0) {
+ tagged_node_handle new_tail_next(node_handle, next.get_tag() + 1);
+ if ( tail_node->next.compare_exchange_weak(next, new_tail_next) ) {
+ tagged_node_handle new_tail(node_handle, tail.get_tag() + 1);
+ tail_.compare_exchange_strong(tail, new_tail);
+ return true;
+ }
+ }
+ else {
+ tagged_node_handle new_tail(pool.get_handle(next_ptr), tail.get_tag() + 1);
+ tail_.compare_exchange_strong(tail, new_tail);
+ }
+ }
+ }
+ }
+#endif
+
+public:
+
+ /** Pushes object t to the queue.
+ *
+ * \post object will be pushed to the queue, if internal node can be allocated
+ * \returns true, if the push operation is successful.
+ *
+ * \note Not thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
+ * from the OS. This may not be lock-free.
+ * \throws if memory allocator throws
+ * */
+ bool unsynchronized_push(T const & t)
+ {
+ node * n = pool.template construct<false, false>(t, pool.null_handle());
+
+ if (n == NULL)
+ return false;
+
+ for (;;) {
+ tagged_node_handle tail = tail_.load(memory_order_relaxed);
+ node * tail_node = pool.get_pointer(tail);
+ tagged_node_handle next = tail_node->next.load(memory_order_relaxed);
+ node * next_ptr = pool.get_pointer(next);
+
+ if (next_ptr == 0) {
+ /* use pool handles instead of raw pointers so this also works for the fixed-sized, index-based configuration */
+ tail_node->next.store(tagged_node_handle(pool.get_handle(n), next.get_tag() + 1), memory_order_relaxed);
+ tail_.store(tagged_node_handle(pool.get_handle(n), tail.get_tag() + 1), memory_order_relaxed);
+ return true;
+ }
+ else
+ tail_.store(tagged_node_handle(pool.get_handle(next_ptr), tail.get_tag() + 1), memory_order_relaxed);
+ }
+ }
+
+ /** Pops object from queue.
+ *
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if queue was empty.
+ *
+ * \note Thread-safe and non-blocking
+ * */
+ bool pop (T & ret)
+ {
+ return pop<T>(ret);
+ }
+
+ /** Pops object from queue.
+ *
+ * \pre type U must be constructible by T and copyable, or T must be convertible to U
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if queue was empty.
+ *
+ * \note Thread-safe and non-blocking
+ * */
+ template <typename U>
+ bool pop (U & ret)
+ {
+ using detail::likely;
+ for (;;) {
+ tagged_node_handle head = head_.load(memory_order_acquire);
+ node * head_ptr = pool.get_pointer(head);
+
+ tagged_node_handle tail = tail_.load(memory_order_acquire);
+ tagged_node_handle next = head_ptr->next.load(memory_order_acquire);
+ node * next_ptr = pool.get_pointer(next);
+
+ tagged_node_handle head2 = head_.load(memory_order_acquire);
+ if (likely(head == head2)) {
+ if (pool.get_handle(head) == pool.get_handle(tail)) {
+ if (next_ptr == 0)
+ return false;
+
+ tagged_node_handle new_tail(pool.get_handle(next), tail.get_tag() + 1);
+ tail_.compare_exchange_strong(tail, new_tail);
+
+ } else {
+ if (next_ptr == 0)
+ /* this check is not part of the original algorithm as published by Michael and Scott
+ *
+ * however we reuse the tagged_ptr part for the freelist and clear the next part during node
+ * allocation. we can observe a null-pointer here.
+ * */
+ continue;
+ detail::copy_payload(next_ptr->data, ret);
+
+ tagged_node_handle new_head(pool.get_handle(next), head.get_tag() + 1);
+ if (head_.compare_exchange_weak(head, new_head)) {
+ pool.template destruct<true>(head);
+ return true;
+ }
+ }
+ }
+ }
+ }
+
+ /** Pops object from queue.
+ *
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if queue was empty.
+ *
+ * \note Not thread-safe, but non-blocking
+ *
+ * */
+ bool unsynchronized_pop (T & ret)
+ {
+ return unsynchronized_pop<T>(ret);
+ }
+
+ /** Pops object from queue.
+ *
+ * \pre type U must be constructible by T and copyable, or T must be convertible to U
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if queue was empty.
+ *
+ * \note Not thread-safe, but non-blocking
+ *
+ * */
+ template <typename U>
+ bool unsynchronized_pop (U & ret)
+ {
+ for (;;) {
+ tagged_node_handle head = head_.load(memory_order_relaxed);
+ node * head_ptr = pool.get_pointer(head);
+ tagged_node_handle tail = tail_.load(memory_order_relaxed);
+ tagged_node_handle next = head_ptr->next.load(memory_order_relaxed);
+ node * next_ptr = pool.get_pointer(next);
+
+ if (pool.get_handle(head) == pool.get_handle(tail)) {
+ if (next_ptr == 0)
+ return false;
+
+ tagged_node_handle new_tail(pool.get_handle(next), tail.get_tag() + 1);
+ tail_.store(new_tail);
+ } else {
+ if (next_ptr == 0)
+ /* This check is not part of the original algorithm as published by Michael and Scott.
+ *
+ * However, we reuse the tagged_ptr part for the freelist and clear the next part during node
+ * allocation, so we can observe a null pointer here.
+ * */
+ continue;
+ detail::copy_payload(next_ptr->data, ret);
+ tagged_node_handle new_head(pool.get_handle(next), head.get_tag() + 1);
+ head_.store(new_head);
+ pool.template destruct<false>(head);
+ return true;
+ }
+ }
+ }
+
+private:
+#ifndef BOOST_DOXYGEN_INVOKED
+ atomic<tagged_node_handle> head_;
+ static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(tagged_node_handle);
+ char padding1[padding_size];
+ atomic<tagged_node_handle> tail_;
+ char padding2[padding_size];
+
+ pool_t pool;
+#endif
+};
+
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_FIFO_HPP_INCLUDED */
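
A minimal usage sketch of the queue defined above (for illustration only; the
element type int, the freelist size of 128, the element count of 1000 and the
helper names producer/consumer are arbitrary assumptions, not part of the
library):

    #include <boost/lockfree/queue.hpp>
    #include <boost/thread/thread.hpp>

    // shared queue; 128 freelist nodes are preallocated, push() may still
    // allocate further nodes from the OS if they are exhausted
    boost::lockfree::queue<int> queue(128);

    void producer(void)
    {
        for (int i = 0; i != 1000; ++i)
            while (!queue.push(i))   // push() only fails if no node can be allocated
                ;
    }

    void consumer(void)
    {
        int value;
        int received = 0;
        while (received != 1000)
            if (queue.pop(value))    // pop() returns false while the queue is empty
                ++received;
    }

    int main(void)
    {
        boost::thread t1(producer);
        boost::thread t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }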

Added: branches/release/boost/lockfree/spsc_queue.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/spsc_queue.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,650 @@
+// lock-free single-producer/single-consumer ringbuffer
+// this algorithm is implemented in various projects (linux kernel)
+//
+// Copyright (C) 2009, 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_SPSC_QUEUE_HPP_INCLUDED
+#define BOOST_LOCKFREE_SPSC_QUEUE_HPP_INCLUDED
+
+#include <algorithm>
+
+#include <boost/array.hpp>
+#include <boost/assert.hpp>
+#include <boost/noncopyable.hpp>
+#include <boost/static_assert.hpp>
+
+#include <boost/lockfree/detail/atomic.hpp>
+#include <boost/lockfree/detail/branch_hints.hpp>
+#include <boost/lockfree/detail/parameter.hpp>
+#include <boost/lockfree/detail/prefix.hpp>
+
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+typedef parameter::parameters<boost::parameter::optional<tag::capacity>,
+ boost::parameter::optional<tag::allocator>
+ > ringbuffer_signature;
+
+template <typename T>
+class ringbuffer_base:
+ boost::noncopyable
+{
+#ifndef BOOST_DOXYGEN_INVOKED
+ typedef std::size_t size_t;
+ static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(size_t);
+ atomic<size_t> write_index_;
+ char padding1[padding_size]; /* force read_index and write_index to different cache lines */
+ atomic<size_t> read_index_;
+
+protected:
+ ringbuffer_base(void):
+ write_index_(0), read_index_(0)
+ {}
+
+ static size_t next_index(size_t arg, size_t max_size)
+ {
+ size_t ret = arg + 1;
+ while (unlikely(ret >= max_size))
+ ret -= max_size;
+ return ret;
+ }
+
+ static size_t read_available(size_t write_index, size_t read_index, size_t max_size)
+ {
+ if (write_index >= read_index)
+ return write_index - read_index;
+
+ size_t ret = write_index + max_size - read_index;
+ return ret;
+ }
+
+ static size_t write_available(size_t write_index, size_t read_index, size_t max_size)
+ {
+ size_t ret = read_index - write_index - 1;
+ if (write_index >= read_index)
+ ret += max_size;
+ return ret;
+ }
+
+ bool push(T const & t, T * buffer, size_t max_size)
+ {
+ size_t write_index = write_index_.load(memory_order_relaxed); // only written from push thread
+ size_t next = next_index(write_index, max_size);
+
+ if (next == read_index_.load(memory_order_acquire))
+ return false; /* ringbuffer is full */
+
+ buffer[write_index] = t;
+
+ write_index_.store(next, memory_order_release);
+
+ return true;
+ }
+
+ size_t push(const T * input_buffer, size_t input_count, T * internal_buffer, size_t max_size)
+ {
+ size_t write_index = write_index_.load(memory_order_relaxed); // only written from push thread
+ const size_t read_index = read_index_.load(memory_order_acquire);
+ const size_t avail = write_available(write_index, read_index, max_size);
+
+ if (avail == 0)
+ return 0;
+
+ input_count = (std::min)(input_count, avail);
+
+ size_t new_write_index = write_index + input_count;
+
+ if (write_index + input_count > max_size) {
+ /* copy data in two sections */
+ size_t count0 = max_size - write_index;
+
+ std::copy(input_buffer, input_buffer + count0, internal_buffer + write_index);
+ std::copy(input_buffer + count0, input_buffer + input_count, internal_buffer);
+ new_write_index -= max_size;
+ } else {
+ std::copy(input_buffer, input_buffer + input_count, internal_buffer + write_index);
+
+ if (new_write_index == max_size)
+ new_write_index = 0;
+ }
+
+ write_index_.store(new_write_index, memory_order_release);
+ return input_count;
+ }
+
+ template <typename ConstIterator>
+ ConstIterator push(ConstIterator begin, ConstIterator end, T * internal_buffer, size_t max_size)
+ {
+ // FIXME: avoid std::distance and std::advance
+
+ size_t write_index = write_index_.load(memory_order_relaxed); // only written from push thread
+ const size_t read_index = read_index_.load(memory_order_acquire);
+ const size_t avail = write_available(write_index, read_index, max_size);
+
+ if (avail == 0)
+ return begin;
+
+ size_t input_count = std::distance(begin, end);
+ input_count = (std::min)(input_count, avail);
+
+ size_t new_write_index = write_index + input_count;
+
+ ConstIterator last = begin;
+ std::advance(last, input_count);
+
+ if (write_index + input_count > max_size) {
+ /* copy data in two sections */
+ size_t count0 = max_size - write_index;
+ ConstIterator midpoint = begin;
+ std::advance(midpoint, count0);
+
+ std::copy(begin, midpoint, internal_buffer + write_index);
+ std::copy(midpoint, last, internal_buffer);
+ new_write_index -= max_size;
+ } else {
+ std::copy(begin, last, internal_buffer + write_index);
+
+ if (new_write_index == max_size)
+ new_write_index = 0;
+ }
+
+ write_index_.store(new_write_index, memory_order_release);
+ return last;
+ }
+
+ bool pop (T & ret, T * buffer, size_t max_size)
+ {
+ size_t write_index = write_index_.load(memory_order_acquire);
+ size_t read_index = read_index_.load(memory_order_relaxed); // only written from pop thread
+ if (empty(write_index, read_index))
+ return false;
+
+ ret = buffer[read_index];
+ size_t next = next_index(read_index, max_size);
+ read_index_.store(next, memory_order_release);
+ return true;
+ }
+
+ size_t pop (T * output_buffer, size_t output_count, const T * internal_buffer, size_t max_size)
+ {
+ const size_t write_index = write_index_.load(memory_order_acquire);
+ size_t read_index = read_index_.load(memory_order_relaxed); // only written from pop thread
+
+ const size_t avail = read_available(write_index, read_index, max_size);
+
+ if (avail == 0)
+ return 0;
+
+ output_count = (std::min)(output_count, avail);
+
+ size_t new_read_index = read_index + output_count;
+
+ if (read_index + output_count > max_size) {
+ /* copy data in two sections */
+ size_t count0 = max_size - read_index;
+ size_t count1 = output_count - count0;
+
+ std::copy(internal_buffer + read_index, internal_buffer + max_size, output_buffer);
+ std::copy(internal_buffer, internal_buffer + count1, output_buffer + count0);
+
+ new_read_index -= max_size;
+ } else {
+ std::copy(internal_buffer + read_index, internal_buffer + read_index + output_count, output_buffer);
+ if (new_read_index == max_size)
+ new_read_index = 0;
+ }
+
+ read_index_.store(new_read_index, memory_order_release);
+ return output_count;
+ }
+
+ template <typename OutputIterator>
+ size_t pop (OutputIterator it, const T * internal_buffer, size_t max_size)
+ {
+ const size_t write_index = write_index_.load(memory_order_acquire);
+ size_t read_index = read_index_.load(memory_order_relaxed); // only written from pop thread
+
+ const size_t avail = read_available(write_index, read_index, max_size);
+ if (avail == 0)
+ return 0;
+
+ size_t new_read_index = read_index + avail;
+
+ if (read_index + avail > max_size) {
+ /* copy data in two sections */
+ size_t count0 = max_size - read_index;
+ size_t count1 = avail - count0;
+
+ std::copy(internal_buffer + read_index, internal_buffer + max_size, it);
+ std::copy(internal_buffer, internal_buffer + count1, it);
+
+ new_read_index -= max_size;
+ } else {
+ std::copy(internal_buffer + read_index, internal_buffer + read_index + avail, it);
+ if (new_read_index == max_size)
+ new_read_index = 0;
+ }
+
+ read_index_.store(new_read_index, memory_order_release);
+ return avail;
+ }
+#endif
+
+
+public:
+ /** Reset the ringbuffer
+ *
+ * \note Not thread-safe
+ * */
+ void reset(void)
+ {
+ write_index_.store(0, memory_order_relaxed);
+ read_index_.store(0, memory_order_release);
+ }
+
+ /** Check if the ringbuffer is empty
+ *
+ * \return true, if the ringbuffer is empty, false otherwise
+ * \note Due to the concurrent nature of the ringbuffer the result may be inaccurate.
+ * */
+ bool empty(void)
+ {
+ return empty(write_index_.load(memory_order_relaxed), read_index_.load(memory_order_relaxed));
+ }
+
+ /**
+ * \return true, if implementation is lock-free.
+ *
+ * */
+ bool is_lock_free(void) const
+ {
+ return write_index_.is_lock_free() && read_index_.is_lock_free();
+ }
+
+private:
+ bool empty(size_t write_index, size_t read_index)
+ {
+ return write_index == read_index;
+ }
+};
+
+template <typename T, std::size_t max_size>
+class compile_time_sized_ringbuffer:
+ public ringbuffer_base<T>
+{
+ typedef std::size_t size_t;
+ boost::array<T, max_size> array_;
+
+public:
+ bool push(T const & t)
+ {
+ return ringbuffer_base<T>::push(t, array_.c_array(), max_size);
+ }
+
+ bool pop(T & ret)
+ {
+ return ringbuffer_base<T>::pop(ret, array_.c_array(), max_size);
+ }
+
+ size_t push(T const * t, size_t size)
+ {
+ return ringbuffer_base<T>::push(t, size, array_.c_array(), max_size);
+ }
+
+ template <size_t size>
+ size_t push(T const (&t)[size])
+ {
+ return push(t, size);
+ }
+
+ template <typename ConstIterator>
+ ConstIterator push(ConstIterator begin, ConstIterator end)
+ {
+ return ringbuffer_base<T>::push(begin, end, array_.c_array(), max_size);
+ }
+
+ size_t pop(T * ret, size_t size)
+ {
+ return ringbuffer_base<T>::pop(ret, size, array_.c_array(), max_size);
+ }
+
+ template <size_t size>
+ size_t pop(T (&ret)[size])
+ {
+ return pop(ret, size);
+ }
+
+ template <typename OutputIterator>
+ size_t pop(OutputIterator it)
+ {
+ return ringbuffer_base<T>::pop(it, array_.c_array(), max_size);
+ }
+};
+
+template <typename T, typename Alloc>
+class runtime_sized_ringbuffer:
+ public ringbuffer_base<T>,
+ private Alloc
+{
+ typedef std::size_t size_t;
+ size_t max_elements_;
+ typedef typename Alloc::pointer pointer;
+ pointer array_;
+
+public:
+ explicit runtime_sized_ringbuffer(size_t max_elements):
+ max_elements_(max_elements)
+ {
+ // TODO: we don't necessarily need to construct all elements
+ array_ = Alloc::allocate(max_elements);
+ for (size_t i = 0; i != max_elements; ++i)
+ Alloc::construct(array_ + i, T());
+ }
+
+ template <typename U>
+ runtime_sized_ringbuffer(typename Alloc::template rebind<U>::other const & alloc, size_t max_elements):
+ Alloc(alloc), max_elements_(max_elements)
+ {
+ // TODO: we don't necessarily need to construct all elements
+ array_ = Alloc::allocate(max_elements);
+ for (size_t i = 0; i != max_elements; ++i)
+ Alloc::construct(array_ + i, T());
+ }
+
+ runtime_sized_ringbuffer(Alloc const & alloc, size_t max_elements):
+ Alloc(alloc), max_elements_(max_elements)
+ {
+ // TODO: we don't necessarily need to construct all elements
+ array_ = Alloc::allocate(max_elements);
+ for (size_t i = 0; i != max_elements; ++i)
+ Alloc::construct(array_ + i, T());
+ }
+
+ ~runtime_sized_ringbuffer(void)
+ {
+ for (size_t i = 0; i != max_elements_; ++i)
+ Alloc::destroy(array_ + i);
+ Alloc::deallocate(array_, max_elements_);
+ }
+
+ bool push(T const & t)
+ {
+ return ringbuffer_base<T>::push(t, &*array_, max_elements_);
+ }
+
+ bool pop(T & ret)
+ {
+ return ringbuffer_base<T>::pop(ret, &*array_, max_elements_);
+ }
+
+ size_t push(T const * t, size_t size)
+ {
+ return ringbuffer_base<T>::push(t, size, &*array_, max_elements_);
+ }
+
+ template <size_t size>
+ size_t push(T const (&t)[size])
+ {
+ return push(t, size);
+ }
+
+ template <typename ConstIterator>
+ ConstIterator push(ConstIterator begin, ConstIterator end)
+ {
+ return ringbuffer_base<T>::push(begin, end, array_, max_elements_);
+ }
+
+ size_t pop(T * ret, size_t size)
+ {
+ return ringbuffer_base<T>::pop(ret, size, array_, max_elements_);
+ }
+
+ template <size_t size>
+ size_t pop(T (&ret)[size])
+ {
+ return pop(ret, size);
+ }
+
+ template <typename OutputIterator>
+ size_t pop(OutputIterator it)
+ {
+ return ringbuffer_base<T>::pop(it, array_, max_elements_);
+ }
+};
+
+template <typename T, typename A0, typename A1>
+struct make_ringbuffer
+{
+ typedef typename ringbuffer_signature::bind<A0, A1>::type bound_args;
+
+ typedef extract_capacity<bound_args> extract_capacity_t;
+
+ static const bool runtime_sized = !extract_capacity_t::has_capacity;
+ static const size_t capacity = extract_capacity_t::capacity;
+
+ typedef extract_allocator<bound_args, T> extract_allocator_t;
+ typedef typename extract_allocator_t::type allocator;
+
+ // the allocator argument is only sane for run-time sized ringbuffers
+ BOOST_STATIC_ASSERT((mpl::if_<mpl::bool_<!runtime_sized>,
+ mpl::bool_<!extract_allocator_t::has_allocator>,
+ mpl::true_
+ >::type::value));
+
+ typedef typename mpl::if_c<runtime_sized,
+ runtime_sized_ringbuffer<T, allocator>,
+ compile_time_sized_ringbuffer<T, capacity>
+ >::type ringbuffer_type;
+};
+
+
+} /* namespace detail */
+
+
+/** The spsc_queue class provides a single-writer/single-reader FIFO queue; pushing and popping is wait-free.
+ *
+ * \b Policies:
+ * - \c boost::lockfree::capacity<>, optional <br>
+ * If this template argument is passed to the options, the size of the ringbuffer is set at compile-time.
+ *
+ * - \c boost::lockfree::allocator<>, defaults to \c boost::lockfree::allocator<std::allocator<T>> <br>
+ * Specifies the allocator that is used to allocate the ringbuffer. This option is only valid if the ringbuffer is configured
+ * to be sized at run-time.
+ *
+ * \b Requirements:
+ * - T must have a default constructor
+ * - T must be copyable
+ * */
+#ifndef BOOST_DOXYGEN_INVOKED
+template <typename T,
+ class A0 = boost::parameter::void_,
+ class A1 = boost::parameter::void_>
+#else
+template <typename T, ...Options>
+#endif
+class spsc_queue:
+ public detail::make_ringbuffer<T, A0, A1>::ringbuffer_type
+{
+private:
+
+#ifndef BOOST_DOXYGEN_INVOKED
+ typedef typename detail::make_ringbuffer<T, A0, A1>::ringbuffer_type base_type;
+ static const bool runtime_sized = detail::make_ringbuffer<T, A0, A1>::runtime_sized;
+ typedef typename detail::make_ringbuffer<T, A0, A1>::allocator allocator_arg;
+
+ struct implementation_defined
+ {
+ typedef allocator_arg allocator;
+ typedef std::size_t size_type;
+ };
+#endif
+
+public:
+ typedef T value_type;
+ typedef typename implementation_defined::allocator allocator;
+ typedef typename implementation_defined::size_type size_type;
+
+ /** Constructs a spsc_queue
+ *
+ * \pre spsc_queue must be configured to be sized at compile-time
+ */
+ // @{
+ spsc_queue(void)
+ {
+ BOOST_ASSERT(!runtime_sized);
+ }
+
+ template <typename U>
+ explicit spsc_queue(typename allocator::template rebind<U>::other const & alloc)
+ {
+ // just for API compatibility: we don't actually need an allocator
+ BOOST_STATIC_ASSERT(!runtime_sized);
+ }
+
+ explicit spsc_queue(allocator const & alloc)
+ {
+ // just for API compatibility: we don't actually need an allocator
+ BOOST_ASSERT(!runtime_sized);
+ }
+ // @}
+
+
+ /** Constructs a spsc_queue for element_count elements
+ *
+ * \pre spsc_queue must be configured to be sized at run-time
+ */
+ // @{
+ explicit spsc_queue(size_type element_count):
+ base_type(element_count)
+ {
+ BOOST_ASSERT(runtime_sized);
+ }
+
+ template <typename U>
+ spsc_queue(size_type element_count, typename allocator::template rebind<U>::other const & alloc):
+ base_type(alloc, element_count)
+ {
+ BOOST_STATIC_ASSERT(runtime_sized);
+ }
+
+ spsc_queue(size_type element_count, allocator_arg const & alloc):
+ base_type(alloc, element_count)
+ {
+ BOOST_ASSERT(runtime_sized);
+ }
+ // @}
+
+ /** Pushes object t to the ringbuffer.
+ *
+ * \pre only one thread is allowed to push data to the spsc_queue
+ * \post object will be pushed to the spsc_queue, unless it is full.
+ * \return true, if the push operation is successful.
+ *
+ * \note Thread-safe and wait-free
+ * */
+ bool push(T const & t)
+ {
+ return base_type::push(t);
+ }
+
+ /** Pops one object from ringbuffer.
+ *
+ * \pre only one thread is allowed to pop data from the spsc_queue
+ * \post if ringbuffer is not empty, object will be copied to ret.
+ * \return true, if the pop operation is successful, false if ringbuffer was empty.
+ *
+ * \note Thread-safe and wait-free
+ */
+ bool pop(T & ret)
+ {
+ return base_type::pop(ret);
+ }
+
+ /** Pushes as many objects from the array t as there is space.
+ *
+ * \pre only one thread is allowed to push data to the spsc_queue
+ * \return number of pushed items
+ *
+ * \note Thread-safe and wait-free
+ */
+ size_type push(T const * t, size_type size)
+ {
+ return base_type::push(t, size);
+ }
+
+ /** Pushes as many objects from the array t as there is space available.
+ *
+ * \pre only one thread is allowed to push data to the spsc_queue
+ * \return number of pushed items
+ *
+ * \note Thread-safe and wait-free
+ */
+ template <size_type size>
+ size_type push(T const (&t)[size])
+ {
+ return push(t, size);
+ }
+
+ /** Pushes as many objects from the range [begin, end) as there is space available.
+ *
+ * \pre only one thread is allowed to push data to the spsc_queue
+ * \return iterator to the first element that has not been pushed
+ *
+ * \note Thread-safe and wait-free
+ */
+ template <typename ConstIterator>
+ ConstIterator push(ConstIterator begin, ConstIterator end)
+ {
+ return base_type::push(begin, end);
+ }
+
+ /** Pops a maximum of size objects from ringbuffer.
+ *
+ * \pre only one thread is allowed to pop data from the spsc_queue
+ * \return number of popped items
+ *
+ * \note Thread-safe and wait-free
+ * */
+ size_type pop(T * ret, size_type size)
+ {
+ return base_type::pop(ret, size);
+ }
+
+ /** Pops a maximum of size objects from spsc_queue.
+ *
+ * \pre only one thread is allowed to pop data from the spsc_queue
+ * \return number of popped items
+ *
+ * \note Thread-safe and wait-free
+ * */
+ template <size_type size>
+ size_type pop(T (&ret)[size])
+ {
+ return pop(ret, size);
+ }
+
+ /** Pops objects to the output iterator it
+ *
+ * \pre only one thread is allowed to pop data from the spsc_queue
+ * \return number of popped items
+ *
+ * \note Thread-safe and wait-free
+ * */
+ template <typename OutputIterator>
+ size_type pop(OutputIterator it)
+ {
+ return base_type::pop(it);
+ }
+};
+
+} /* namespace lockfree */
+} /* namespace boost */
+
+
+#endif /* BOOST_LOCKFREE_SPSC_QUEUE_HPP_INCLUDED */
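
A minimal sketch of the compile-time sized spsc_queue described above (the
capacity of 1024, the element count and the helper names are assumptions chosen
for illustration; a run-time sized instance would instead be constructed as
spsc_queue<int> q(1024)):

    #include <boost/lockfree/spsc_queue.hpp>
    #include <boost/lockfree/policies.hpp>
    #include <boost/thread/thread.hpp>

    // ringbuffer sized at compile time to hold up to 1024 elements
    boost::lockfree::spsc_queue<int, boost::lockfree::capacity<1024> > ringbuffer;

    void producer(void)            // the single producer thread
    {
        for (int i = 0; i != 10000; ++i)
            while (!ringbuffer.push(i))   // false while the ringbuffer is full
                ;
    }

    void consumer(void)            // the single consumer thread
    {
        int value;
        int received = 0;
        while (received != 10000)
            if (ringbuffer.pop(value))    // false while the ringbuffer is empty
                ++received;
    }

    int main(void)
    {
        boost::thread writer(producer);
        boost::thread reader(consumer);
        writer.join();
        reader.join();
        return 0;
    }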

Added: branches/release/boost/lockfree/stack.hpp
==============================================================================
--- (empty file)
+++ branches/release/boost/lockfree/stack.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,512 @@
+// Copyright (C) 2008, 2009, 2010, 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_STACK_HPP_INCLUDED
+#define BOOST_LOCKFREE_STACK_HPP_INCLUDED
+
+#include <boost/assert.hpp>
+#include <boost/checked_delete.hpp>
+#include <boost/integer_traits.hpp>
+#include <boost/noncopyable.hpp>
+#include <boost/static_assert.hpp>
+#include <boost/tuple/tuple.hpp>
+#include <boost/type_traits/has_trivial_assign.hpp>
+#include <boost/type_traits/has_trivial_destructor.hpp>
+
+#include <boost/lockfree/detail/atomic.hpp>
+#include <boost/lockfree/detail/copy_payload.hpp>
+#include <boost/lockfree/detail/freelist.hpp>
+#include <boost/lockfree/detail/parameter.hpp>
+#include <boost/lockfree/detail/tagged_ptr.hpp>
+
+namespace boost {
+namespace lockfree {
+namespace detail {
+
+typedef parameter::parameters<boost::parameter::optional<tag::allocator>,
+ boost::parameter::optional<tag::capacity>
+ > stack_signature;
+
+}
+
+/** The stack class provides a multi-writer/multi-reader stack; pushing and popping is lock-free,
+ * but construction/destruction has to be synchronized. It uses a freelist for memory management:
+ * freed nodes are pushed to the freelist and are not returned to the OS before the stack is destroyed.
+ *
+ * \b Policies:
+ *
+ * - \c boost::lockfree::fixed_sized<>, defaults to \c boost::lockfree::fixed_sized<false> <br>
+ * Can be used to completely disable dynamic memory allocations during push in order to ensure lockfree behavior.<br>
+ * If the data structure is configured as fixed-sized, the internal nodes are stored inside an array and they are addressed
+ * by array indexing. This limits the possible size of the stack to the number of elements that can be addressed by the index
+ * type (usually 2**16-2), but on platforms that lack double-width compare-and-exchange instructions, this is the best way
+ * to achieve lock-freedom.
+ *
+ * - \c boost::lockfree::capacity<>, optional <br>
+ * If this template argument is passed to the options, the size of the stack is set at compile-time. <br>
+ * This option implies \c fixed_sized<true>.
+ *
+ * - \c boost::lockfree::allocator<>, defaults to \c boost::lockfree::allocator<std::allocator<void>> <br>
+ * Specifies the allocator that is used for the internal freelist
+ *
+ * \b Requirements:
+ * - T must have a copy constructor
+ * */
+#ifndef BOOST_DOXYGEN_INVOKED
+template <typename T,
+ class A0 = boost::parameter::void_,
+ class A1 = boost::parameter::void_,
+ class A2 = boost::parameter::void_>
+#else
+template <typename T, ...Options>
+#endif
+class stack:
+ boost::noncopyable
+{
+private:
+#ifndef BOOST_DOXYGEN_INVOKED
+ BOOST_STATIC_ASSERT(boost::has_trivial_assign<T>::value);
+ BOOST_STATIC_ASSERT(boost::has_trivial_destructor<T>::value);
+
+ typedef typename detail::stack_signature::bind<A0, A1, A2>::type bound_args;
+
+ static const bool has_capacity = detail::extract_capacity<bound_args>::has_capacity;
+ static const size_t capacity = detail::extract_capacity<bound_args>::capacity;
+ static const bool fixed_sized = detail::extract_fixed_sized<bound_args>::value;
+ static const bool node_based = !(has_capacity || fixed_sized);
+ static const bool compile_time_sized = has_capacity;
+
+ struct node
+ {
+ node(T const & val):
+ v(val)
+ {}
+
+ typedef typename detail::select_tagged_handle<node, node_based>::handle_type handle_t;
+ handle_t next;
+ const T v;
+ };
+
+ typedef typename detail::extract_allocator<bound_args, node>::type node_allocator;
+ typedef typename detail::select_freelist<node, node_allocator, compile_time_sized, fixed_sized, capacity>::type pool_t;
+ typedef typename pool_t::tagged_node_handle tagged_node_handle;
+
+ // check compile-time capacity
+ BOOST_STATIC_ASSERT((mpl::if_c<has_capacity,
+ mpl::bool_<capacity - 1 < boost::integer_traits<boost::uint16_t>::const_max>,
+ mpl::true_
+ >::type::value));
+
+ struct implementation_defined
+ {
+ typedef node_allocator allocator;
+ typedef std::size_t size_type;
+ };
+
+#endif
+
+public:
+ typedef T value_type;
+ typedef typename implementation_defined::allocator allocator;
+ typedef typename implementation_defined::size_type size_type;
+
+ /**
+ * \return true, if implementation is lock-free.
+ *
+ * \warning It only checks whether the top stack node and the freelist can be modified in a lock-free manner.
+ * On most platforms, the whole implementation is lock-free if this is true. Using C++0x-style atomics,
+ * there is no way to provide a completely accurate answer, because one would need to test
+ * every internal node, which is impossible if further nodes may be allocated from the operating system.
+ *
+ * */
+ bool is_lock_free (void) const
+ {
+ return tos.is_lock_free() && pool.is_lock_free();
+ }
+
+ //! Construct stack
+ // @{
+ stack(void):
+ pool(node_allocator(), capacity)
+ {
+ BOOST_ASSERT(has_capacity);
+ initialize();
+ }
+
+ template <typename U>
+ explicit stack(typename node_allocator::template rebind<U>::other const & alloc):
+ pool(alloc, capacity)
+ {
+ BOOST_STATIC_ASSERT(has_capacity);
+ initialize();
+ }
+
+ explicit stack(allocator const & alloc):
+ pool(alloc, capacity)
+ {
+ BOOST_ASSERT(has_capacity);
+ initialize();
+ }
+ // @}
+
+ //! Construct stack, allocate n nodes for the freelist.
+ // @{
+ explicit stack(size_type n):
+ pool(node_allocator(), n)
+ {
+ BOOST_ASSERT(!has_capacity);
+ initialize();
+ }
+
+ template <typename U>
+ stack(size_type n, typename node_allocator::template rebind<U>::other const & alloc):
+ pool(alloc, n)
+ {
+ BOOST_STATIC_ASSERT(!has_capacity);
+ initialize();
+ }
+ // @}
+
+ /** Allocate n nodes for freelist
+ *
+ * \pre only valid if no capacity<> argument given
+ * \note thread-safe, may block if memory allocator blocks
+ *
+ * */
+ void reserve(size_type n)
+ {
+ BOOST_STATIC_ASSERT(!has_capacity);
+ pool.reserve(n);
+ }
+
+ /** Allocate n nodes for freelist
+ *
+ * \pre only valid if no capacity<> argument given
+ * \note not thread-safe, may block if memory allocator blocks
+ *
+ * */
+ void reserve_unsafe(size_type n)
+ {
+ BOOST_STATIC_ASSERT(!has_capacity);
+ pool.reserve_unsafe(n);
+ }
+
+ /** Destroys the stack and frees all nodes from the freelist.
+ *
+ * \note not thread-safe
+ *
+ * */
+ ~stack(void)
+ {
+ T dummy;
+ while(unsynchronized_pop(dummy))
+ {}
+ }
+
+private:
+#ifndef BOOST_DOXYGEN_INVOKED
+ void initialize(void)
+ {
+ tos.store(tagged_node_handle(pool.null_handle(), 0));
+ }
+
+ void link_nodes_atomic(node * new_top_node, node * end_node)
+ {
+ tagged_node_handle old_tos = tos.load(detail::memory_order_relaxed);
+ for (;;) {
+ tagged_node_handle new_tos (pool.get_handle(new_top_node), old_tos.get_tag());
+ end_node->next = pool.get_handle(old_tos);
+
+ if (tos.compare_exchange_weak(old_tos, new_tos))
+ break;
+ }
+ }
+
+ void link_nodes_unsafe(node * new_top_node, node * end_node)
+ {
+ tagged_node_handle old_tos = tos.load(detail::memory_order_relaxed);
+
+ tagged_node_handle new_tos (pool.get_handle(new_top_node), old_tos.get_tag());
+ end_node->next = pool.get_pointer(old_tos);
+
+ tos.store(new_tos, memory_order_relaxed);
+ }
+
+ template <bool Threadsafe, bool Bounded, typename ConstIterator>
+ tuple<node*, node*> prepare_node_list(ConstIterator begin, ConstIterator end, ConstIterator & ret)
+ {
+ ConstIterator it = begin;
+ node * end_node = pool.template construct<Threadsafe, Bounded>(*it++);
+ if (end_node == NULL) {
+ ret = begin;
+ return make_tuple<node*, node*>(NULL, NULL);
+ }
+
+ node * new_top_node = end_node;
+ end_node->next = NULL;
+
+ try {
+ /* link nodes */
+ for (; it != end; ++it) {
+ node * newnode = pool.template construct<Threadsafe, Bounded>(*it);
+ if (newnode == NULL)
+ break;
+ newnode->next = new_top_node;
+ new_top_node = newnode;
+ }
+ } catch (...) {
+ for (node * current_node = new_top_node; current_node != NULL;) {
+ node * next = current_node->next;
+ pool.template destruct<Threadsafe>(current_node);
+ current_node = next;
+ }
+ throw;
+ }
+ ret = it;
+ return make_tuple(new_top_node, end_node);
+ }
+#endif
+
+public:
+ /** Pushes object t to the stack.
+ *
+ * \post object will be pushed to the stack, if internal node can be allocated
+ * \returns true, if the push operation is successful.
+ *
+ * \note Thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
+ * from the OS. This may not be lock-free.
+ * \throws if memory allocator throws
+ * */
+ bool push(T const & v)
+ {
+ return do_push<false>(v);
+ }
+
+ /** Pushes object t to the stack.
+ *
+ * \post object will be pushed to the stack, if internal node can be allocated
+ * \returns true, if the push operation is successful.
+ *
+ * \note Thread-safe and non-blocking. If internal memory pool is exhausted, the push operation will fail
+ * */
+ bool bounded_push(T const & v)
+ {
+ return do_push<true>(v);
+ }
+
+#ifndef BOOST_DOXYGEN_INVOKED
+private:
+ template <bool Bounded>
+ bool do_push(T const & v)
+ {
+ node * newnode = pool.template construct<true, Bounded>(v);
+ if (newnode == 0)
+ return false;
+
+ link_nodes_atomic(newnode, newnode);
+ return true;
+ }
+
+ template <bool Bounded, typename ConstIterator>
+ ConstIterator do_push(ConstIterator begin, ConstIterator end)
+ {
+ node * new_top_node;
+ node * end_node;
+ ConstIterator ret;
+
+ tie(new_top_node, end_node) = prepare_node_list<true, Bounded>(begin, end, ret);
+ if (new_top_node)
+ link_nodes_atomic(new_top_node, end_node);
+
+ return ret;
+ }
+
+public:
+#endif
+
+ /** Pushes as many objects from the range [begin, end) as freelist nodes can be allocated.
+ *
+ * \return iterator to the first element that has not been pushed
+ *
+ * \note Operation is applied atomically
+ * \note Thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
+ * from the OS. This may not be lock-free.
+ * \throws if memory allocator throws
+ */
+ template <typename ConstIterator>
+ ConstIterator push(ConstIterator begin, ConstIterator end)
+ {
+ return do_push<false, ConstIterator>(begin, end);
+ }
+
+ /** Pushes as many objects from the range [begin, end) as freelist nodes can be allocated.
+ *
+ * \return iterator to the first element that has not been pushed
+ *
+ * \note Operation is applied atomically
+ * \note Thread-safe and non-blocking. If internal memory pool is exhausted, the push operation will fail
+ * \throws if memory allocator throws
+ */
+ template <typename ConstIterator>
+ ConstIterator bounded_push(ConstIterator begin, ConstIterator end)
+ {
+ return do_push<true, ConstIterator>(begin, end);
+ }
+
+
+ /** Pushes object t to the stack.
+ *
+ * \post object will be pushed to the stack, if internal node can be allocated
+ * \returns true, if the push operation is successful.
+ *
+ * \note Not thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
+ * from the OS. This may not be lock-free.
+ * \throws if memory allocator throws
+ * */
+ bool unsynchronized_push(T const & v)
+ {
+ node * newnode = pool.template construct<false, false>(v);
+ if (newnode == 0)
+ return false;
+
+ link_nodes_unsafe(newnode, newnode);
+ return true;
+ }
+
+ /** Pushes as many objects from the range [begin, end) as freelist nodes can be allocated.
+ *
+ * \return iterator to the first element that has not been pushed
+ *
+ * \note Not thread-safe. If internal memory pool is exhausted and the memory pool is not fixed-sized, a new node will be allocated
+ * from the OS. This may not be lock-free.
+ * \throws if memory allocator throws
+ */
+ template <typename ConstIterator>
+ ConstIterator unsynchronized_push(ConstIterator begin, ConstIterator end)
+ {
+ node * new_top_node;
+ node * end_node;
+ ConstIterator ret;
+
+ tie(new_top_node, end_node) = prepare_node_list<false, false>(begin, end, ret);
+ if (new_top_node)
+ link_nodes_unsafe(new_top_node, end_node);
+
+ return ret;
+ }
+
+
+ /** Pops object from stack.
+ *
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if stack was empty.
+ *
+ * \note Thread-safe and non-blocking
+ *
+ * */
+ bool pop(T & ret)
+ {
+ return pop<T>(ret);
+ }
+
+ /** Pops object from stack.
+ *
+ * \pre type T must be convertible to U
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if stack was empty.
+ *
+ * \note Thread-safe and non-blocking
+ *
+ * */
+ template <typename U>
+ bool pop(U & ret)
+ {
+ BOOST_STATIC_ASSERT((boost::is_convertible<T, U>::value));
+ tagged_node_handle old_tos = tos.load(detail::memory_order_consume);
+
+ for (;;) {
+ node * old_tos_pointer = pool.get_pointer(old_tos);
+ if (!old_tos_pointer)
+ return false;
+
+ tagged_node_handle new_tos(old_tos_pointer->next, old_tos.get_tag() + 1);
+
+ if (tos.compare_exchange_weak(old_tos, new_tos)) {
+ detail::copy_payload(old_tos_pointer->v, ret);
+ pool.template destruct<true>(old_tos);
+ return true;
+ }
+ }
+ }
+
+
+ /** Pops object from stack.
+ *
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if stack was empty.
+ *
+ * \note Not thread-safe, but non-blocking
+ *
+ * */
+ bool unsynchronized_pop(T & ret)
+ {
+ return unsynchronized_pop<T>(ret);
+ }
+
+ /** Pops object from stack.
+ *
+ * \pre type T must be convertible to U
+ * \post if pop operation is successful, object will be copied to ret.
+ * \returns true, if the pop operation is successful, false if stack was empty.
+ *
+ * \note Not thread-safe, but non-blocking
+ *
+ * */
+ template <typename U>
+ bool unsynchronized_pop(U & ret)
+ {
+ BOOST_STATIC_ASSERT((boost::is_convertible<T, U>::value));
+ tagged_node_handle old_tos = tos.load(detail::memory_order_relaxed);
+ node * old_tos_pointer = pool.get_pointer(old_tos);
+
+ if (!pool.get_pointer(old_tos))
+ return false;
+
+ node * new_tos_ptr = pool.get_pointer(old_tos_pointer->next);
+ tagged_node_handle new_tos(pool.get_handle(new_tos_ptr), old_tos.get_tag() + 1);
+
+ tos.store(new_tos, memory_order_relaxed);
+ detail::copy_payload(old_tos_pointer->v, ret);
+ pool.template destruct<false>(old_tos);
+ return true;
+ }
+
+ /**
+ * \return true, if stack is empty.
+ *
+ * \note It only guarantees that at some point during the execution of the function the stack has been empty.
+ * It is rarely practical to use this value in program logic, because the stack can be modified by other threads.
+ * */
+ bool empty(void) const
+ {
+ return pool.get_pointer(tos.load()) == NULL;
+ }
+
+private:
+#ifndef BOOST_DOXYGEN_INVOKED
+ detail::atomic<tagged_node_handle> tos;
+
+ static const int padding_size = BOOST_LOCKFREE_CACHELINE_BYTES - sizeof(tagged_node_handle);
+ char padding[padding_size];
+
+ pool_t pool;
+#endif
+};
+
+} /* namespace lockfree */
+} /* namespace boost */
+
+#endif /* BOOST_LOCKFREE_STACK_HPP_INCLUDED */
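
A minimal sketch of the stack interface defined above, contrasting
bounded_push() (which never allocates beyond the freelist) with pop(); the
freelist size of 512, the element count and the helper names are arbitrary
assumptions:

    #include <boost/lockfree/stack.hpp>
    #include <boost/thread/thread.hpp>

    // stack with 512 freelist nodes preallocated; bounded_push() will fail
    // rather than allocate additional nodes from the OS
    boost::lockfree::stack<int> stack(512);

    void producer(void)
    {
        for (int i = 0; i != 1000; ++i)
            while (!stack.bounded_push(i))   // fails while the freelist is exhausted
                ;
    }

    void consumer(void)
    {
        int value;
        int received = 0;
        while (received != 1000)
            if (stack.pop(value))            // false while the stack is empty
                ++received;
    }

    int main(void)
    {
        boost::thread t1(producer);
        boost::thread t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }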

Modified: branches/release/doc/Jamfile.v2
==============================================================================
--- branches/release/doc/Jamfile.v2 (original)
+++ branches/release/doc/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -60,6 +60,9 @@
     #<dependency>../libs/spirit/doc//spirit
     <dependency>../libs/heap/doc//autodoc.xml
     <dependency>../libs/heap/doc//heap
+ <dependency>../libs/lockfree/doc//autodoc.xml
+ <dependency>../libs/lockfree/doc//lockfree
+ <dependency>../libs/atomic/doc//atomic
 
     ## Add path references to the QuickBook generated docs...
 
@@ -90,6 +93,8 @@
     <implicit-dependency>../libs/random/doc//random
     #<implicit-dependency>../libs/spirit/doc//spirit
     <implicit-dependency>../libs/heap/doc//heap
+ <implicit-dependency>../libs/lockfree/doc//lockfree
+ <implicit-dependency>../libs/atomic/doc//atomic
 
     <xsl:param>boost.libraries=../../libs/libraries.htm
 

Added: branches/release/doc/html/atomic.html
==============================================================================
--- (empty file)
+++ branches/release/doc/html/atomic.html 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,16 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html>
+ <head>
+ <!-- Copyright (C) 2002 Douglas Gregor <doug.gregor -at- gmail.com>
+
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt) -->
+ <title>Redirect to generated documentation</title>
+ <meta http-equiv="refresh" content="0; URL=http://boost-sandbox.sourceforge.net/doc/html/atomic.html">
+ </head>
+ <body>
+ Automatic redirection failed, please go to
+ http://boost-sandbox.sourceforge.net/doc/html/atomic.html
+ </body>
+</html>

Added: branches/release/doc/html/lockfree.html
==============================================================================
--- (empty file)
+++ branches/release/doc/html/lockfree.html 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,16 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html>
+ <head>
+ <!-- Copyright (C) 2002 Douglas Gregor <doug.gregor -at- gmail.com>
+
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt) -->
+ <title>Redirect to generated documentation</title>
+ <meta http-equiv="refresh" content="0; URL=http://boost-sandbox.sourceforge.net/doc/html/lockfree.html">
+ </head>
+ <body>
+ Automatic redirection failed, please go to
+ http://boost-sandbox.sourceforge.net/doc/html/lockfree.html
+ </body>
+</html>

Modified: branches/release/doc/src/boost.xml
==============================================================================
--- branches/release/doc/src/boost.xml (original)
+++ branches/release/doc/src/boost.xml 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -55,6 +55,8 @@
      </libraryinfo>
    </library>
 
+ <xi:include href="atomic.xml"/>
+
    <library name="Bind" dirname="bind" html-only="1">
      <libraryinfo>
        <author>
@@ -372,6 +374,8 @@
 
    <xi:include href="lexical_cast.xml"/>
 
+ <xi:include href="lockfree.xml"/>
+
    <library name="Math" dirname="math" html-only="1">
      <libraryinfo>
        <author>

Added: branches/release/libs/atomic/build/Jamfile.v2
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/build/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,32 @@
+# Boost.Atomic Library Jamfile
+#
+# Copyright Helge Bahmann 2011.
+# Distributed under the Boost Software License, Version 1.0.
+# (See accompanying file LICENSE_1_0.txt or copy at
+# http://www.boost.org/LICENSE_1_0.txt)
+
+import common ;
+
+project boost/atomic
+ : requirements
+ <threading>multi
+ <link>shared:<define>BOOST_ATOMIC_DYN_LINK=1
+ <define>BOOST_ATOMIC_SOURCE
+ : usage-requirements
+ <link>shared:<define>BOOST_ATOMIC_DYN_LINK=1
+ : source-location ../src
+ ;
+
+alias atomic_sources
+ : lockpool.cpp
+ ;
+
+explicit atomic_sources ;
+
+
+lib boost_atomic
+ : atomic_sources
+ ;
+
+
+boost-install boost_atomic ;

Added: branches/release/libs/atomic/doc/Jamfile.v2
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/doc/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,26 @@
+# Boost.Atomic library documentation Jamfile
+#
+# Copyright Helge Bahmann 2011.
+# Copyright Tim Blechmann 2012.
+# Distributed under the Boost Software License, Version 1.0.
+# (See accompanying file LICENSE_1_0.txt or copy at
+# http://www.boost.org/LICENSE_1_0.txt)
+
+import quickbook ;
+import boostbook : boostbook ;
+
+xml atomic : atomic.qbk ;
+
+boostbook standalone
+ : atomic
+ : <xsl:param>boost.root=../../../..
+ <xsl:param>boost.libraries=../../../libraries.htm
+ <format>pdf:<xsl:param>boost.url.prefix=http://www.boost.org/doc/libs/release/libs/atomic/doc/html
+ ;
+
+install css : [ glob $(BOOST_ROOT)/doc/src/*.css ]
+ : <location>html ;
+install images : [ glob $(BOOST_ROOT)/doc/src/images/*.png ]
+ : <location>html/images ;
+explicit css ;
+explicit images ;

Added: branches/release/libs/atomic/doc/atomic.hpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/doc/atomic.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,526 @@
+/** \file boost/atomic.hpp */
+
+// Copyright (c) 2009 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+/* this is just a pseudo-header file fed to doxygen
+to more easily generate the class documentation; will
+be replaced by proper documentation down the road */
+
+namespace boost {
+
+/**
+ \brief Memory ordering constraints
+
+ This defines the relative order of one atomic operation
+ and other memory operations (loads, stores, other atomic operations)
+ executed by the same thread.
+
+ The order of operations specified by the programmer in the
+ source code ("program order") does not necessarily match
+ the order in which they are actually executed on the target system:
+ Both the compiler and the processor may reorder operations
+ quite arbitrarily. <B>Specifying the wrong ordering
+ constraint will therefore generally result in an incorrect program.</B>
+*/
+enum memory_order {
+ /**
+ \brief No constraint
+ Atomic operation and other memory operations may be reordered freely.
+ */
+ memory_order_relaxed,
+ /**
+ \brief Data dependence constraint
+ Atomic operation must strictly precede any memory operation that
+ computationally depends on the outcome of the atomic operation.
+ */
+ memory_order_consume,
+ /**
+ \brief Acquire memory
+ Atomic operation must strictly precede all memory operations that
+ follow in program order.
+ */
+ memory_order_acquire,
+ /**
+ \brief Release memory
+ Atomic operation must strictly follow all memory operations that precede
+ in program order.
+ */
+ memory_order_release,
+ /**
+ \brief Acquire and release memory
+ Combines the effects of \ref memory_order_acquire and \ref memory_order_release
+ */
+ memory_order_acq_rel,
+ /**
+ \brief Sequentially consistent
+ Produces the same result as \ref memory_order_acq_rel, but additionally
+ enforces globally sequentially consistent execution
+ */
+ memory_order_seq_cst
+};
+
+/**
+ \brief Atomic datatype
+
+ An atomic variable. Provides methods to modify this variable atomically.
+ Valid template parameters are:
+
+ - integral data types (char, short, int, ...)
+ - pointer data types
+ - any other data type that has a non-throwing default
+ constructor and that can be copied via <TT>memcpy</TT>
+
+ Unless specified otherwise, any memory ordering constraint can be used
+ with any of the atomic operations.
+*/
+template<typename Type>
+class atomic {
+public:
+ /**
+ \brief Create uninitialized atomic variable
+ Creates an atomic variable. Its initial value is undefined.
+ */
+ atomic();
+ /**
+ \brief Create and initialize atomic variable
+ \param value Initial value
+ Creates and initializes an atomic variable.
+ */
+ atomic(Type value);
+
+ /**
+ \brief Read the current value of the atomic variable
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Current value of the variable
+
+ Valid memory ordering constraints are:
+ - @c memory_order_relaxed
+ - @c memory_order_consume
+ - @c memory_order_acquire
+ - @c memory_order_seq_cst
+ */
+ Type load(memory_order order=memory_order_seq_cst) const;
+
+ /**
+ \brief Write new value to atomic variable
+ \param value New value
+ \param order Memory ordering constraint, see \ref memory_order
+
+ Valid memory ordering constraints are:
+ - @c memory_order_relaxed
+ - @c memory_order_release
+ - @c memory_order_seq_cst
+ */
+ void store(Type value, memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically compare and exchange variable
+ \param expected Expected old value
+ \param desired Desired new value
+ \param order Memory ordering constraint, see \ref memory_order
+ \return @c true if value was changed
+
+ Atomically performs the following operation
+
+ \code
+ if (variable==expected) {
+ variable=desired;
+ return true;
+ } else {
+ expected=variable;
+ return false;
+ }
+ \endcode
+
+ This operation may fail "spuriously", i.e. the state of the variable
+ is unchanged even though the expected value was found (this is the
+ case on architectures using "load-linked"/"store conditional" to
+ implement the operation).
+
+ The established memory order will be @c order if the operation
+ is successful. If the operation is unsuccessful, the
+ memory order will be
+
+ - @c memory_order_relaxed if @c order is @c memory_order_acquire ,
+ @c memory_order_relaxed or @c memory_order_consume
+ - @c memory_order_release if @c order is @c memory_order_acq_rel
+ or @c memory_order_release
+ - @c memory_order_seq_cst if @c order is @c memory_order_seq_cst
+ */
+ bool compare_exchange_weak(
+ Type &expected,
+ Type desired,
+ memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically compare and exchange variable
+ \param expected Expected old value
+ \param desired Desired new value
+ \param success_order Memory ordering constraint if operation
+ is successful
+ \param failure_order Memory ordering constraint if operation is unsuccessful
+ \return @c true if value was changed
+
+ Atomically performs the following operation
+
+ \code
+ if (variable==expected) {
+ variable=desired;
+ return true;
+ } else {
+ expected=variable;
+ return false;
+ }
+ \endcode
+
+ This operation may fail "spuriously", i.e. the state of the variable
+ is unchanged even though the expected value was found (this is the
+ case on architectures using "load-linked"/"store conditional" to
+ implement the operation).
+
+ The constraint imposed by @c success_order may not be
+ weaker than the constraint imposed by @c failure_order.
+ */
+ bool compare_exchange_weak(
+ Type &expected,
+ Type desired,
+ memory_order success_order,
+ memory_order failure_order);
+ /**
+ \brief Atomically compare and exchange variable
+ \param expected Expected old value
+ \param desired Desired new value
+ \param order Memory ordering constraint, see \ref memory_order
+ \return @c true if value was changed
+
+ Atomically performs the following operation
+
+ \code
+ if (variable==expected) {
+ variable=desired;
+ return true;
+ } else {
+ expected=variable;
+ return false;
+ }
+ \endcode
+
+ In contrast to \ref compare_exchange_weak, this operation will never
+ fail spuriously. Since compare-and-swap must generally be retried
+ in a loop, implementors are advised to prefer \ref compare_exchange_weak
+ where feasible.
+
+ The established memory order will be @c order if the operation
+ is successful. If the operation is unsuccessful, the
+ memory order will be
+
+ - @c memory_order_relaxed if @c order is @c memory_order_acquire ,
+ @c memory_order_relaxed or @c memory_order_consume
+ - @c memory_order_release if @c order is @c memory_order_acq_rel
+ or @c memory_order_release
+ - @c memory_order_seq_cst if @c order is @c memory_order_seq_cst
+ */
+ bool compare_exchange_strong(
+ Type &expected,
+ Type desired,
+ memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically compare and exchange variable
+ \param expected Expected old value
+ \param desired Desired new value
+ \param success_order Memory ordering constraint if operation
+ is successful
+ \param failure_order Memory ordering constraint if operation is unsuccessful
+ \return @c true if value was changed
+
+ Atomically performs the following operation
+
+ \code
+ if (variable==expected) {
+ variable=desired;
+ return true;
+ } else {
+ expected=variable;
+ return false;
+ }
+ \endcode
+
+ In contrast to \ref compare_exchange_weak, this operation will never
+ fail spuriously. Since compare-and-swap must generally be retried
+ in a loop, implementors are advised to prefer \ref compare_exchange_weak
+ where feasible.
+
+ The constraint imposed by @c success_order may not be
+ weaker than the constraint imposed by @c failure_order.
+ */
+ bool compare_exchange_strong(
+ Type &expected,
+ Type desired,
+ memory_order success_order,
+ memory_order failure_order);
+ /**
+ \brief Atomically exchange variable
+ \param value New value
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Old value of the variable
+
+ Atomically exchanges the value of the variable with the new
+ value and returns its old value.
+ */
+ Type exchange(Type value, memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically add and return old value
+ \param operand Operand
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Old value of the variable
+
+ Atomically adds operand to the variable and returns its
+ old value.
+ */
+ Type fetch_add(Type operand, memory_order order=memory_order_seq_cst);
+ /**
+ \brief Atomically subtract and return old value
+ \param operand Operand
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Old value of the variable
+
+ Atomically subtracts operand from the variable and returns its
+ old value.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ @c operand is of type @c ptrdiff_t and the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type fetch_sub(Type operand, memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically perform bitwise "AND" and return old value
+ \param operand Operand
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Old value of the variable
+
+ Atomically performs bitwise "AND" with the variable and returns its
+ old value.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ @c operand is of type @c ptrdiff_t and the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type fetch_and(Type operand, memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically perform bitwise "OR" and return old value
+ \param operand Operand
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Old value of the variable
+
+ Atomically performs bitwise "OR" with the variable and returns its
+ old value.
+
+ This method is available only if \c Type is an integral type.
+ */
+ Type fetch_or(Type operand, memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Atomically perform bitwise "XOR" and return old value
+ \param operand Operand
+ \param order Memory ordering constraint, see \ref memory_order
+ \return Old value of the variable
+
+ Atomically performs bitwise "XOR" with the variable and returns its
+ old value.
+
+ This method is available only if \c Type is an integral type.
+ */
+ Type fetch_xor(Type operand, memory_order order=memory_order_seq_cst);
+
+ /**
+ \brief Implicit load
+ \return Current value of the variable
+
+ The same as <tt>load(memory_order_seq_cst)</tt>. Avoid using
+ the implicit conversion operator, use \ref load with
+ an explicit memory ordering constraint.
+ */
+ operator Type(void) const;
+ /**
+ \brief Implicit store
+ \param value New value
+ \return Copy of @c value
+
+ The same as <tt>store(value, memory_order_seq_cst)</tt>. Avoid using
+ the implicit assignment operator, use \ref store with
+ an explicit memory ordering constraint.
+ */
+ Type operator=(Type v);
+
+ /**
+ \brief Atomically perform bitwise "AND" and return new value
+ \param operand Operand
+ \return New value of the variable
+
+ The same as <tt>fetch_and(operand, memory_order_seq_cst)&operand</tt>.
+ Avoid using the implicit bitwise "AND" operator, use \ref fetch_and
+ with an explicit memory ordering constraint.
+ */
+ Type operator&=(Type operand);
+
+ /**
+ \brief Atomically perform bitwise "OR" and return new value
+ \param operand Operand
+ \return New value of the variable
+
+ The same as <tt>fetch_or(operand, memory_order_seq_cst)|operand</tt>.
+ Avoid using the implicit bitwise "OR" operator, use \ref fetch_or
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type.
+ */
+ Type operator|=(Type operand);
+
+ /**
+ \brief Atomically perform bitwise "XOR" and return new value
+ \param operand Operand
+ \return New value of the variable
+
+ The same as <tt>fetch_xor(operand, memory_order_seq_cst)^operand</tt>.
+ Avoid using the implicit bitwise "XOR" operator, use \ref fetch_xor
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type.
+ */
+ Type operator^=(Type operand);
+
+ /**
+ \brief Atomically add and return new value
+ \param operand Operand
+ \return New value of the variable
+
+ The same as <tt>fetch_add(operand, memory_order_seq_cst)+operand</tt>.
+ Avoid using the implicit add operator, use \ref fetch_add
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ @c operand is of type @c ptrdiff_t and the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type operator+=(Type operand);
+
+ /**
+ \brief Atomically subtract and return new value
+ \param operand Operand
+ \return New value of the variable
+
+ The same as <tt>fetch_sub(operand, memory_order_seq_cst)-operand</tt>.
+ Avoid using the implicit subtract operator, use \ref fetch_sub
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ @c operand is of type @c ptrdiff_t and the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type operator-=(Type operand);
+
+ /**
+ \brief Atomically increment and return new value
+ \return New value of the variable
+
+ The same as <tt>fetch_add(1, memory_order_seq_cst)+1</tt>.
+ Avoid using the implicit increment operator, use \ref fetch_add
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type operator++(void);
+ /**
+ \brief Atomically increment and return old value
+ \return Old value of the variable
+
+ The same as <tt>fetch_add(1, memory_order_seq_cst)</tt>.
+ Avoid using the implicit increment operator, use \ref fetch_add
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type operator++(int);
+ /**
+ \brief Atomically decrement and return new value
+ \return New value of the variable
+
+ The same as <tt>fetch_sub(1, memory_order_seq_cst)-1</tt>.
+ Avoid using the implicit decrement operator, use \ref fetch_sub
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type operator--(void);
+ /**
+ \brief Atomically decrement and return old value
+ \return Old value of the variable
+
+ The same as <tt>fetch_sub(1, memory_order_seq_cst)</tt>.
+ Avoid using the implicit decrement operator, use \ref fetch_sub
+ with an explicit memory ordering constraint.
+
+ This method is available only if \c Type is an integral type
+ or a non-void pointer type. If it is a pointer type,
+ the operation
+ is performed following the rules for pointer arithmetic
+ in C++.
+ */
+ Type operator--(int);
+
+private:
+ /** \brief Deleted copy constructor */
+ atomic(const atomic &);
+ /** \brief Deleted copy assignment */
+ void operator=(const atomic &);
+};
+
+/**
+ \brief Insert explicit fence
+ \param order Memory ordering constraint
+
+ Inserts an explicit fence. The exact semantics depend on the
+ type of fence inserted:
+
+ - \c memory_order_relaxed: No operation
+ - \c memory_order_release: Performs a "release" operation
+ - \c memory_order_acquire or \c memory_order_consume: Performs an
+ "acquire" operation
+ - \c memory_order_acq_rel: Performs both an "acquire" and a "release"
+ operation
+ - \c memory_order_seq_cst: Performs both an "acquire" and a "release"
+ operation and in addition there exists a global total order of
+ all \c memory_order_seq_cst operations
+
+*/
+void atomic_thread_fence(memory_order order);
+
+}

Added: branches/release/libs/atomic/doc/atomic.qbk
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/doc/atomic.qbk 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,707 @@
+[/
+ / Copyright (c) 2009 Helge Bahmann
+ /
+ / Distributed under the Boost Software License, Version 1.0. (See accompanying
+ / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+ /]
+
+[library Boost.Atomic
+ [quickbook 1.4]
+ [authors [Bahmann, Helge]]
+ [copyright 2011 Helge Bahmann]
+ [copyright 2012 Tim Blechmann]
+ [id atomic]
+ [dirname atomic]
+ [purpose Atomic operations]
+ [license
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ [@http://www.boost.org/LICENSE_1_0.txt])
+ ]
+]
+
+[section:introduction Introduction]
+
+[section:introduction_presenting Presenting Boost.Atomic]
+
+[*Boost.Atomic] is a library that provides [^atomic]
+data types and operations on these data types, as well as memory
+ordering constraints required for coordinating multiple threads through
+atomic variables. It implements the interface as defined by the C++11
+standard, but makes this feature available for platforms lacking
+system/compiler support for this particular C++11 feature.
+
+Users of this library should already be familiar with concurrency
+in general, as well as elementary concepts such as "mutual exclusion".
+
+The implementation makes use of processor-specific instructions where
+possible (via inline assembler, platform libraries or compiler
+intrinsics), and falls back to "emulating" atomic operations through
+locking.
+
+[endsect]
+
+[section:introduction_purpose Purpose]
+
+Operations on "ordinary" variables are not guaranteed to be atomic.
+This means that with [^int n=0] initially, two threads concurrently
+executing
+
+[c++]
+
+ void function()
+ {
+ n ++;
+ }
+
+might result in [^n==1] instead of 2: Each thread will read the
+old value into a processor register, increment it and write the result
+back. Both threads may therefore write [^1], unaware that the other thread
+is doing likewise.
+
+Declaring [^atomic<int> n=0] instead, the same operation on
+this variable will always result in [^n==2] as each operation on this
+variable is ['atomic]: This means that each operation behaves as if it
+were strictly sequentialized with respect to the other.
+
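+For illustration, a minimal sketch (not part of the original text) of the
+same counter expressed with [*Boost.Atomic]; each increment is then atomic:
+
+[c++]
+
+    #include <boost/atomic.hpp>
+
+    boost::atomic<int> n(0);
+
+    void function()
+    {
+        // equivalent to n.fetch_add(1, boost::memory_order_seq_cst)
+        ++n;
+    }
+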
+Atomic variables are useful for two purposes:
+
+* as a means for coordinating multiple threads via custom
+ coordination protocols
+* as faster alternatives to "locked" access to simple variables
+
+Take a look at the [link atomic.usage_examples examples] section
+for common patterns.
+
+[endsect]
+
+[endsect]
+
+[section:thread_coordination Thread coordination using Boost.Atomic]
+
+The most common use of [*Boost.Atomic] is to realize custom
+thread synchronization protocols: The goal is to coordinate
+accesses of threads to shared variables in order to avoid
+"conflicts". The
+programmer must be aware that compilers, CPUs and cache
+hierarchies may generally reorder memory references at will.
+As a consequence a program such as:
+
+[c++]
+ int x = 0, y = 0;
+
+ thread1:
+ x = 1;
+ y = 1;
+
+ thread2
+ if (y == 1) {
+ assert(x == 1);
+ }
+
+might indeed fail as there is no guarantee that the read of `x`
+by thread2 "sees" the write by thread1.
+
+[*Boost.Atomic] uses a synchronisation concept based on the
+['happens-before] relation to describe the guarantees under
+which situations such as the above one cannot occur.
+
+The remainder of this section will discuss ['happens-before] in
+a "hands-on" way instead of giving a fully formalized definition.
+The reader is encouraged to additionally have a
+look at the discussion of the correctness of a few of the
+[link atomic.usage_examples examples] afterwards.
+
+[section:mutex Enforcing ['happens-before] through mutual exclusion]
+
+As an introductory example to understand how arguing using
+['happens-before] works, consider two threads synchronizing
+using a common mutex:
+
+[c++]
+
+ mutex m;
+
+ thread1:
+ m.lock();
+ ... /* A */
+ m.unlock();
+
+ thread2:
+ m.lock();
+ ... /* B */
+ m.unlock();
+
+The "lockset-based intuition" would be to argue that A and B
+cannot be executed concurrently as the code paths require a
+common lock to be held.
+
+One can however also arrive at the same conclusion using
+['happens-before]: Either thread1 or thread2 will succeed first
+at [^m.lock()]. If this is thread1, then as a consequence,
+thread2 cannot succeed at [^m.lock()] before thread1 has executed
+[^m.unlock()], consequently A ['happens-before] B in this case.
+By symmetry, if thread2 succeeds at [^m.lock()] first, we can
+conclude B ['happens-before] A.
+
+Since this already exhausts all options, we can conclude that
+either A ['happens-before] B or B ['happens-before] A must
+always hold. Obviously we cannot state ['which] of the two relationships
+holds, but either one is sufficient to conclude that A and B
+cannot conflict.
+
+Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
+implementation to see how the mutual exclusion concept can be
+mapped to [*Boost.Atomic].
+
+[endsect]
+
+[section:release_acquire ['happens-before] through [^release] and [^acquire]]
+
+The most basic pattern for coordinating threads via [*Boost.Atomic]
+uses [^release] and [^acquire] on an atomic variable for coordination: If ...
+
+* ... thread1 performs an operation A,
+* ... thread1 subsequently writes (or atomically
+ modifies) an atomic variable with [^release] semantic,
+* ... thread2 reads (or atomically reads-and-modifies)
+ this value from the same atomic variable with
+ [^acquire] semantic and
+* ... thread2 subsequently performs an operation B,
+
+... then A ['happens-before] B.
+
+Consider the following example
+
+[c++]
+
+ atomic<int> a(0);
+
+ thread1:
+ ... /* A */
+ a.fetch_add(1, memory_order_release);
+
+ thread2:
+ int tmp = a.load(memory_order_acquire);
+ if (tmp == 1) {
+ ... /* B */
+ } else {
+ ... /* C */
+ }
+
+In this example, two avenues for execution are possible:
+
+* The [^store] operation by thread1 precedes the [^load] by thread2:
+ In this case thread2 will execute B and "A ['happens-before] B"
+ holds as all of the criteria above are satisfied.
+* The [^load] operation by thread2 precedes the [^store] by thread1:
+ In this case, thread2 will execute C, but "A ['happens-before] C"
+ does ['not] hold: thread2 does not read the value written by
+ thread1 through [^a].
+
+Therefore, A and B cannot conflict, but A and C ['can] conflict.
+
+[endsect]
+
+[section:fences Fences]
+
+Ordering constraints are generally specified together with an access to
+an atomic variable. It is however also possible to issue "fence"
+operations in isolation; in this case the fence operates in
+conjunction with preceding (for `acquire`, `consume` or `seq_cst`
+fences) or succeeding (for `release` or `seq_cst` fences) atomic
+operations.
+
+The example from the previous section could also be written in
+the following way:
+
+[c++]
+
+ atomic<int> a(0);
+
+ thread1:
+ ... /* A */
+ atomic_thread_fence(memory_order_release);
+ a.fetch_add(1, memory_order_relaxed);
+
+ thread2:
+ int tmp = a.load(memory_order_relaxed);
+ if (tmp == 1) {
+ atomic_thread_fence(memory_order_acquire);
+ ... /* B */
+ } else {
+ ... /* C */
+ }
+
+This provides the same ordering guarantees as previously, but
+elides a (possibly expensive) memory ordering operation in
+the case that C is executed.
+
+[endsect]
+
+[section:release_consume ['happens-before] through [^release] and [^consume]]
+
+The second pattern for coordinating threads via [*Boost.Atomic]
+uses [^release] and [^consume] on an atomic variable for coordination: If ...
+
+* ... thread1 performs an operation A,
+* ... thread1 subsequently writes (or atomically modifies) an
+ atomic variable with [^release] semantic,
+* ... thread2 reads (or atomically reads-and-modifies)
+ this value from the same atomic variable with [^consume] semantic and
+* ... thread2 subsequently performs an operation B that is ['computationally
+ dependent on the value of the atomic variable],
+
+... then A ['happens-before] B.
+
+Consider the following example
+
+[c++]
+
+ atomic<int> a(0);
+ complex_data_structure data[2];
+
+ thread1:
+ data[1] = ...; /* A */
+ a.store(1, memory_order_release);
+
+ thread2:
+ int index = a.load(memory_order_consume);
+ complex_data_structure tmp = data[index]; /* B */
+
+In this example, two avenues for execution are possible:
+
+* The [^store] operation by thread1 precedes the [^load] by thread2:
+ In this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
+ holds as all of the criteria above are satisfied.
+* The [^load] operation by thread2 precedes the [^store] by thread1:
+ In this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
+ does ['not] hold: thread2 does not read the value written by
+ thread1 through [^a].
+
+Here, the ['happens-before] relationship helps ensure that any
+accesses (presumably writes) to [^data\[1\]] by thread1 happen
+before the accesses (presumably reads) to [^data\[1\]] by thread2:
+Lacking this relationship, thread2 might see stale/inconsistent
+data.
+
+Note that in this example it is essential that operation B is computationally
+dependent on the value of the atomic variable; therefore the following program
+would be erroneous:
+
+[c++]
+
+ atomic<int> a(0);
+ complex_data_structure data[2];
+
+ thread1:
+ data[1] = ...; /* A */
+ a.store(1, memory_order_release);
+
+ thread2:
+ int index = a.load(memory_order_consume);
+ complex_data_structure tmp;
+ if (index == 0)
+ tmp = data[0];
+ else
+ tmp = data[1];
+
+[^consume] is most commonly (and most safely! see
+[link atomic.limitations limitations]) used with
+pointers, compare for example the
+[link boost_atomic.usage_examples.singleton singleton with double-checked locking].
+
+[endsect]
+
+[section:seq_cst Sequential consistency]
+
+The third pattern for coordinating threads via [*Boost.Atomic]
+uses [^seq_cst] for coordination: If ...
+
+* ... thread1 performs an operation A,
+* ... thread1 subsequently performs any operation with [^seq_cst],
+* ... thread1 subsequently performs an operation B,
+* ... thread2 performs an operation C,
+* ... thread2 subsequently performs any operation with [^seq_cst],
+* ... thread2 subsequently performs an operation D,
+
+then either "A ['happens-before] D" or "C ['happens-before] B" holds.
+
+In this case it does not matter whether thread1 and thread2 operate
+on the same or different atomic variables, or use a "stand-alone"
+[^atomic_thread_fence] operation.
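+
+As an illustration (a sketch, not part of the original text), consider the
+classic store/load pattern where the two writes cannot both be missed:
+
+[c++]
+
+    atomic<int> x(0), y(0);
+
+    thread1:
+        x.store(1, memory_order_relaxed); /* A */
+        atomic_thread_fence(memory_order_seq_cst);
+        int r1 = y.load(memory_order_relaxed); /* B */
+
+    thread2:
+        y.store(1, memory_order_relaxed); /* C */
+        atomic_thread_fence(memory_order_seq_cst);
+        int r2 = x.load(memory_order_relaxed); /* D */
+
+Since either "A ['happens-before] D" or "C ['happens-before] B" must hold,
+at least one of [^r1] and [^r2] reads the value [^1]; with weaker orderings
+both could be [^0].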
+
+[endsect]
+
+[endsect]
+
+[section:interface Programming interfaces]
+
+[section:interface_memory_order Memory order]
+
+The enumeration [^boost::memory_order] defines the following
+values to represent memory ordering constraints:
+
+[table
+ [[Constant] [Description]]
+ [[`memory_order_relaxed`] [No ordering constraint.
+ Informally speaking, following operations may be reordered before,
+ preceding operations may be reordered after the atomic
+ operation. This constraint is suitable only when
+ either a) further operations do not depend on the outcome
+ of the atomic operation or b) ordering is enforced through
+ stand-alone `atomic_thread_fence` operations
+ ]]
+ [[`memory_order_release`] [
+ Perform `release` operation. Informally speaking,
+ prevents all preceding memory operations from being reordered
+ past this point.
+ ]]
+ [[`memory_order_acquire`] [
+ Perform `acquire` operation. Informally speaking,
+ prevents succeeding memory operations from being reordered
+ before this point.
+ ]]
+ [[`memory_order_consume`] [
+ Perform `consume` operation. A weaker (and
+ usually more efficient) form of `memory_order_acquire`
+ that only affects succeeding operations that are
+ computationally dependent on the value retrieved from
+ an atomic variable.
+ ]]
+ [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operation]]
+ [[`memory_order_seq_cst`] [
+ Enforce sequential consistency. Implies `memory_order_acq_rel`, but
+ additionally enforces a total order over all operations so qualified.
+ ]]
+]
+
+See section [link atomic.thread_coordination ['happens-before]] for explanation
+of the various ordering constraints.
+
+[endsect]
+
+[section:interface_atomic_object Atomic objects]
+
+[^boost::atomic<['T]>] provides methods for atomically accessing
+variables of a suitable type [^['T]]. The type is suitable if
+it satisfies one of the following constraints:
+
+* it is an integer, boolean, enum or pointer type
+* it is any other data-type ([^class] or [^struct]) that has
+ a non-throwing default constructor, is copyable via
+ [^memcpy] and is comparable via [^memcmp].
+
+Note that all classes having a trivial default constructor,
+no destructor and no virtual methods satisfy the second condition
+according to C++98. On a given platform, other data-types ['may]
+also satisfy this constraint, however you should exercise
+caution as the behaviour becomes implementation-defined. Also be warned
+that structures with "padding" between data members may compare
+non-equal via [^memcmp] even though all members are equal.
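+
+For example, a plain aggregate satisfying these requirements (a sketch with a
+hypothetical type name) can be used directly; the implementation may or may
+not be lock-free for it:
+
+[c++]
+
+    struct rgb_color
+    {
+        unsigned char r, g, b;  // trivially copyable, no padding between members
+    };
+
+    boost::atomic<rgb_color> color;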
+
+[section:interface_atomic_generic [^boost::atomic<['T]>] template class]
+
+All atomic objects support the following operations:
+
+[table
+ [[Syntax] [Description]]
+ [
+ [`atomic()`]
+ [Initialize to an unspecified value]
+ ]
+ [
+ [`atomic(T initial_value)`]
+ [Initialize to [^initial_value]]
+ ]
+ [
+ [`bool is_lock_free()`]
+ [Checks if the atomic object is lock-free]
+ ]
+ [
+ [`T load(memory_order order)`]
+ [Return current value]
+ ]
+ [
+ [`void store(T value, memory_order order)`]
+ [Write new value to atomic variable]
+ ]
+ [
+ [`T exchange(T new_value, memory_order order)`]
+ [Exchange current value with `new_value`, returning the previous value]
+ ]
+ [
+ [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
+ [Compare current value with `expected`, change it to `desired` if it matches.
+ Returns `true` if an exchange has been performed, and always writes the
+ previous value back in `expected`. May fail spuriously, so must generally be
+ retried in a loop.]
+ ]
+ [
+ [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
+ [Compare current value with `expected`, change it to `desired` if it matches.
+ Returns `true` if an exchange has been performed, and always writes the
+ previous value back in `expected`. May fail spuriously, so must generally be
+ retried in a loop.]
+ ]
+ [
+ [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
+ [Compare current value with `expected`, change it to `desired` if it matches.
+ Returns `true` if an exchange has been performed, and always writes the
+ previous value back in `expected`.]
+ ]
+ [
+ [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
+ [Compare current value with `expected`, change it to `desired` if it matches.
+ Returns `true` if an exchange has been performed, and always writes the
+ previous value back in `expected`.]
+ ]
+]
+
+The `order` argument always defaults to `memory_order_seq_cst`.
+
+The `compare_exchange_weak`/`compare_exchange_strong` variants
+taking four parameters differ from the three parameter variants
+in that they allow a different memory ordering constraint to
+be specified in case the operation fails.
+
+In addition to these explicit operations, each
+[^atomic<['T]>] object also supports
+implicit [^store] and [^load] through the use of "assignment"
+and "conversion to [^T]" operators. Avoid using these operators,
+as they do not allow explicit specification of a memory ordering
+constraint.
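+
+For example, a minimal sketch (not part of the original text) of the usual
+retry loop around [^compare_exchange_weak], here doubling the stored value:
+
+[c++]
+
+    boost::atomic<int> value(1);
+
+    int expected = value.load(boost::memory_order_relaxed);
+    while (!value.compare_exchange_weak(expected, expected * 2,
+                                        boost::memory_order_acq_rel,
+                                        boost::memory_order_relaxed))
+    {
+        // on failure, 'expected' has been updated to the current value
+    }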
+
+[endsect]
+
+[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]
+
+In addition to the operations listed in the previous section,
+[^boost::atomic<['I]>] for integral
+types [^['I]] supports the following operations:
+
+[table
+ [[Syntax] [Description]]
+ [
+ [`T fetch_add(T v, memory_order order)`]
+ [Add `v` to variable, returning previous value]
+ ]
+ [
+ [`T fetch_sub(T v, memory_order order)`]
+ [Subtract `v` from variable, returning previous value]
+ ]
+ [
+ [`T fetch_and(T v, memory_order order)`]
+ [Apply bit-wise "and" with `v` to variable, returning previous value]
+ ]
+ [
+ [`T fetch_or(T v, memory_order order)`]
+ [Apply bit-wise "or" with `v` to variable, returning previous value]
+ ]
+ [
+ [`T fetch_xor(T v, memory_order order)`]
+ [Apply bit-wise "xor" with `v` to variable, returning previous value]
+ ]
+]
+
+The `order` argument always defaults to `memory_order_seq_cst`.
+
+In addition to these explicit operations, each
+[^boost::atomic<['I]>] object also
+supports implicit pre-/post- increment/decrement, as well
+as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
+Avoid using these operators,
+as they do not allow explicit specification of a memory ordering
+constraint.
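+
+A small sketch (not part of the original text) of using the explicit
+operations to maintain a bit mask of flags:
+
+[c++]
+
+    boost::atomic<unsigned int> flags(0);
+
+    // set bit 3 and learn whether it was already set before
+    const unsigned int mask = 1u << 3;
+    bool was_set = (flags.fetch_or(mask, boost::memory_order_acq_rel) & mask) != 0;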
+
+[endsect]
+
+[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]
+
+In addition to the operations applicable to all atomic objects,
+[^boost::atomic<['P]>] for pointer
+types [^['P]] (other than [^void] pointers) supports the following operations:
+
+[table
+ [[Syntax] [Description]]
+ [
+ [`T fetch_add(ptrdiff_t v, memory_order order)`]
+ [Add `v` to variable, returning previous value]
+ ]
+ [
+ [`T fetch_sub(ptrdiff_t v, memory_order order)`]
+ [Subtract `v` from variable, returning previous value]
+ ]
+]
+
+The `order` argument always defaults to `memory_order_seq_cst`.
+
+In addition to these explicit operations, each
+[^boost::atomic<['P]>] object also
+supports implicit pre-/post- increment/decrement, as well
+as the operators `+=`, `-=`. Avoid using these operators,
+as they do not allow explicit specification of a memory ordering
+constraint.
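+
+A small sketch (not part of the original text) of pointer arithmetic through
+[^fetch_add], claiming slots from a shared buffer:
+
+[c++]
+
+    static int buffer[128];
+    boost::atomic<int *> cursor(buffer);
+
+    // atomically claim the next two ints; the pointer advances following
+    // the rules for pointer arithmetic, i.e. by 2 * sizeof(int) bytes
+    int * my_slots = cursor.fetch_add(2, boost::memory_order_relaxed);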
+
+[endsect]
+
+[endsect]
+
+[section:interface_fences Fences]
+
+[table
+ [[Syntax] [Description]]
+ [
+ [`void atomic_thread_fence(memory_order order)`]
+ [Issue fence for coordination with other threads.]
+ ]
+ [
+ [`void atomic_signal_fence(memory_order order)`]
+ [Issue fence for coordination with signal handler (only in same thread).]
+ ]
+]
+
+[endsect]
+
+[section:feature_macros Feature testing macros]
+
+[*Boost.Atomic] defines a number of macros to allow compile-time
+detection whether an atomic data type is implemented using
+"true" atomic operations, or whether an internal "lock" is
+used to provide atomicity. The following macros will be
+defined to `0` if operations on the data type always
+require a lock, to `1` if operations on the data type may
+sometimes require a lock, and to `2` if they are always lock-free:
+
+[table
+ [[Macro] [Description]]
+ [
+ [`BOOST_ATOMIC_CHAR_LOCK_FREE`]
+ [Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
+ ]
+ [
+ [`BOOST_ATOMIC_SHORT_LOCK_FREE`]
+ [Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
+ ]
+ [
+ [`BOOST_ATOMIC_INT_LOCK_FREE`]
+ [Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
+ ]
+ [
+ [`BOOST_ATOMIC_LONG_LOCK_FREE`]
+ [Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
+ ]
+ [
+ [`BOOST_ATOMIC_LLONG_LOCK_FREE`]
+ [Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
+ ]
+ [
+ [`BOOST_ATOMIC_ADDRESS_LOCK_FREE`]
+ [Indicate whether `atomic<T *>` is lock-free]
+ ]
+]
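+
+These macros can be used for conditional compilation; a minimal sketch
+(not part of the original text):
+
+[c++]
+
+    #include <boost/atomic.hpp>
+
+    #if BOOST_ATOMIC_INT_LOCK_FREE == 2
+        // atomic<int> is always lock-free on this platform
+    #else
+        // operations on atomic<int> may fall back to an internal lock
+    #endif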
+
+[endsect]
+
+[endsect]
+
+[section:usage_examples Usage examples]
+
+[include examples.qbk]
+
+[endsect]
+
+[/
+[section:platform_support Implementing support for additional platforms]
+
+[include platform.qbk]
+
+[endsect]
+]
+
+[/ [xinclude autodoc.xml] ]
+
+[section:limitations Limitations]
+
+While [*Boost.Atomic] strives to implement the atomic operations
+from C++11 as faithfully as possible, there are a few
+limitations that cannot be lifted without compiler support:
+
+* [*Using non-POD-classes as template parameter to `atomic<T>` results
+ in undefined behavior]: This means that any class containing a
+ constructor, destructor, virtual methods or access control
+ specifications is not a valid argument in C++98. C++11 relaxes
+ this slightly by allowing "trivial" classes containing only
+ empty constructors. [*Advice]: Use only POD types.
+* [*C++98 compilers may transform computation- to control-dependency]:
+ Crucially, `memory_order_consume` only affects computationally-dependent
+ operations, but in general there is nothing preventing a compiler
+ from transforming a computation dependency into a control dependency.
+ A C++11 compiler would be forbidden from such a transformation.
+ [*Advice]: Use `memory_order_consume` only in conjunction with
+ pointer values, as the compiler cannot speculate and transform
+ these into control dependencies.
+* [*Fence operations enforce "too strong" compiler ordering]:
+ Semantically, `memory_order_acquire`/`memory_order_consume`
+ and `memory_order_release` need to restrain reordering of
+ memory operations only in one direction. Since there is no
+ way to express this constraint to the compiler, these act
+ as "full compiler barriers" in this implementation. In corner
+ cases this may lead to worse code than a C++11 compiler
+ could generate.
+* [*No interprocess fallback]: using `atomic<T>` in shared memory only works
+ correctly if `atomic<T>::is_lock_free() == true`.
+
+[endsect]
+
+[section:porting Porting]
+
+[section:unit_tests Unit tests]
+
+[*Boost.Atomic] provides a unit test suite to verify that the
+implementation behaves as expected:
+
+* [*fallback_api.cpp] verifies that the fallback-to-locking aspect
+ of [*Boost.Atomic] compiles and has correct value semantics.
+* [*native_api.cpp] verifies that all atomic operations have correct
+ value semantics (e.g. "fetch_add" really adds the desired value,
+ returing the previous). It is a rough "smoke-test" to help weed
+ out the most obvious mistakes (for example with overflow,
+ signed/unsigned extension, ...).
+* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCKFREE] macros
+ are set properly according to the expectations for a given
+ platform, and that they match up with the [*is_lock_free] member
+ functions of the [*atomic] object instances.
+* [*atomicity.cpp] lets two threads race against each other modifying
+ a shared variable, verifying that the operations behave atomically.
+ By nature, this test is necessarily stochastic, and
+ the test self-calibrates to yield 99% confidence that a
+ positive result indicates absence of an error. This test is
+ already very useful on uni-processor systems with preemption.
+* [*ordering.cpp] lets two threads race against each other accessing
+ multiple shared variables, verifying that the operations
+ exhibit the expected ordering behavior. By nature, this test is
+ necessarily stochastic, and the test attempts to self-calibrate to
+ yield 99% confidence that a positive result indicates absence
+ of an error. This only works on true multi-processor (or multi-core)
+ systems. It does not yield any result on uni-processor systems
+ or emulators (due to there being no observable reordering even in
+ the order=relaxed case) and will report that fact.
+
+[endsect]
+
+[section:tested_compilers Tested compilers]
+
+[*Boost.Atomic] has been tested on and is known to work on
+the following compilers/platforms:
+
+* gcc 4.x: i386, x86_64, ppc32, ppc64, armv5, armv6, alpha
+* Visual Studio Express 2008/Windows XP, i386
+
+If you have an unsupported platform, contact me and I will
+work to add support for it.
+
+[endsect]
+
+[endsect]

Added: branches/release/libs/atomic/doc/examples.qbk
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/doc/examples.qbk 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,398 @@
+[/
+ / Copyright (c) 2009 Helge Bahmann
+ /
+ / Distributed under the Boost Software License, Version 1.0. (See accompanying
+ / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+ /]
+
+[section:example_reference_counters Reference counting]
+
+The purpose of a ['reference counter] is to count the number
+of pointers to an object. The object can be destroyed as
+soon as the reference counter reaches zero.
+
+[section Implementation]
+
+[c++]
+
+ #include <boost/intrusive_ptr.hpp>
+ #include <boost/atomic.hpp>
+
+ class X {
+ public:
+ typedef boost::intrusive_ptr<X> pointer;
+ X() : refcount_(0) {}
+
+ private:
+ mutable boost::atomic<int> refcount_;
+ friend void intrusive_ptr_add_ref(const X * x)
+ {
+ x->refcount_.fetch_add(1, boost::memory_order_relaxed);
+ }
+ friend void intrusive_ptr_release(const X * x)
+ {
+ if (x->refcount_.fetch_sub(1, boost::memory_order_release) == 1) {
+ boost::atomic_thread_fence(boost::memory_order_acquire);
+ delete x;
+ }
+ }
+ };
+
+[endsect]
+
+[section Usage]
+
+[c++]
+
+ X::pointer x = new X;
+
+[endsect]
+
+[section Discussion]
+
+Increasing the reference counter can always be done with
+[^memory_order_relaxed]: New references to an object can only
+be formed from an existing reference, and passing an existing
+reference from one thread to another must already provide any
+required synchronization.
+
+It is important to enforce any possible access to the object in
+one thread (through an existing reference) to ['happen before]
+deleting the object in a different thread. This is achieved
+by a "release" operation after dropping a reference (any
+access to the object through this reference must obviously
+have happened before), and an "acquire" operation before
+deleting the object.
+
+It would be possible to use [^memory_order_acq_rel] for the
+[^fetch_sub] operation, but this results in unneeded "acquire"
+operations when the reference counter has not yet reached zero
+and may impose a performance penalty.
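+
+For comparison, the alternative mentioned above would make [^intrusive_ptr_release]
+in the example look like this (a sketch, not part of the original example):
+
+[c++]
+
+    friend void intrusive_ptr_release(const X * x)
+    {
+        // pays for an "acquire" on every decrement, even when the object survives
+        if (x->refcount_.fetch_sub(1, boost::memory_order_acq_rel) == 1)
+            delete x;
+    }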
+
+[endsect]
+
+[endsect]
+
+[section:example_spinlock Spinlock]
+
+The purpose of a ['spin lock] is to prevent multiple threads
+from concurrently accessing a shared data structure. In contrast
+to a mutex, threads will busy-wait and waste CPU cycles instead
+of yielding the CPU to another thread. ['Do not use spinlocks
+unless you are certain that you understand the consequences.]
+
+[section Implementation]
+
+[c++]
+
+ #include <boost/atomic.hpp>
+
+ class spinlock {
+ private:
+ typedef enum {Locked, Unlocked} LockState;
+ boost::atomic<LockState> state_;
+
+ public:
+ spinlock() : state_(Unlocked) {}
+
+ void lock()
+ {
+ while (state_.exchange(Locked, boost::memory_order_acquire) == Locked) {
+ /* busy-wait */
+ }
+ }
+ void unlock()
+ {
+ state_.store(Unlocked, boost::memory_order_release);
+ }
+ };
+
+[endsect]
+
+[section Usage]
+
+[c++]
+
+ spinlock s;
+
+ s.lock();
+ // access data structure here
+ s.unlock();
+
+[endsect]
+
+[section Discussion]
+
+The purpose of the spinlock is to make sure that one access
+to the shared data structure always strictly "happens before"
+another. The usage of acquire/release in lock/unlock is required
+and sufficient to guarantee this ordering.
+
+It would be correct to write the "lock" operation in the following
+way:
+
+[c++]
+
+ void lock()
+ {
+ while (state_.exchange(Locked, boost::memory_order_relaxed) == Locked) {
+ /* busy-wait */
+ }
+ boost::atomic_thread_fence(boost::memory_order_acquire);
+ }
+
+This "optimization" is however a) useless and b) may in fact hurt:
+a) Since the thread will be busily spinning on a blocked spinlock,
+it does not matter whether it wastes CPU cycles on bare
+"exchange" operations or on both "exchange" and "acquire"
+operations. b) A tight "exchange" loop without any
+memory-synchronizing instruction introduced through an "acquire"
+operation will on some systems monopolize the memory subsystem
+and degrade the performance of other system components.
+
+[endsect]
+
+[endsect]
+
+[section:singleton Singleton with double-checked locking pattern]
+
+The purpose of the ['Singleton with double-checked locking pattern] is to ensure
+that at most one instance of a particular object is created.
+If one instance has been created already, access to the existing
+object should be as light-weight as possible.
+
+[section Implementation]
+
+[c++]
+
+ #include <boost/atomic.hpp>
+ #include <boost/thread/mutex.hpp>
+
+ class X {
+ public:
+ static X * instance()
+ {
+ X * tmp = instance_.load(boost::memory_order_consume);
+ if (!tmp) {
+ boost::mutex::scoped_lock guard(instantiation_mutex);
+ tmp = instance_.load(boost::memory_order_consume);
+ if (!tmp) {
+ tmp = new X;
+ instance_.store(tmp, boost::memory_order_release);
+ }
+ }
+ return tmp;
+ }
+ private:
+ static boost::atomic<X *> instance_;
+ static boost::mutex instantiation_mutex;
+ };
+
+ boost::atomic<X *> X::instance_(0);
+ boost::mutex X::instantiation_mutex;
+
+[endsect]
+
+[section Usage]
+
+[c++]
+
+ X * x = X::instance();
+ // dereference x
+
+[endsect]
+
+[section Discussion]
+
+The mutex makes sure that only one instance of the object is
+ever created. The [^instance] method must make sure that any
+dereference of the object strictly "happens after" creating
+the instance in another thread. The use of [^memory_order_release]
+after creating and initializing the object and [^memory_order_consume]
+before dereferencing the object provides this guarantee.
+
+It would be permissible to use [^memory_order_acquire] instead of
+[^memory_order_consume], but this provides a stronger guarantee
+than is required since only operations depending on the value of
+the pointer need to be ordered.
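+
+That stronger variant would simply read the pointer with [^acquire] instead
+(a sketch, not part of the original example):
+
+[c++]
+
+    X * tmp = instance_.load(boost::memory_order_acquire);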
+
+[endsect]
+
+[endsect]
+
+[section:example_ringbuffer Wait-free ring buffer]
+
+A ['wait-free ring buffer] provides a mechanism for relaying objects
+from one single "producer" thread to one single "consumer" thread without
+any locks. The operations on this data structure are "wait-free" which
+means that each operation finishes within a constant number of steps.
+This makes this data structure suitable for use in hard real-time systems
+or for communication with interrupt/signal handlers.
+
+[section Implementation]
+
+[c++]
+
+ #include <boost/atomic.hpp>
+
+ template<typename T, size_t Size>
+ class ringbuffer {
+ public:
+ ringbuffer() : head_(0), tail_(0) {}
+
+ bool push(const T & value)
+ {
+ size_t head = head_.load(boost::memory_order_relaxed);
+ size_t next_head = next(head);
+ if (next_head == tail_.load(boost::memory_order_acquire))
+ return false;
+ ring_[head] = value;
+ head_.store(next_head, boost::memory_order_release);
+ return true;
+ }
+ bool pop(T & value)
+ {
+ size_t tail = tail_.load(boost::memory_order_relaxed);
+ if (tail == head_.load(boost::memory_order_acquire))
+ return false;
+ value = ring_[tail];
+ tail_.store(next(tail), boost::memory_order_release);
+ return true;
+ }
+ private:
+ size_t next(size_t current)
+ {
+ return (current + 1) % Size;
+ }
+ T ring_[Size];
+ boost::atomic<size_t> head_, tail_;
+ };
+
+[endsect]
+
+[section Usage]
+
+[c++]
+
+ ringbuffer<int, 32> r;
+
+ // try to insert an element
+ if (r.push(42)) { /* succeeded */ }
+ else { /* buffer full */ }
+
+ // try to retrieve an element
+ int value;
+ if (r.pop(value)) { /* succeeded */ }
+ else { /* buffer empty */ }
+
+[endsect]
+
+[section Discussion]
+
+The implementation makes sure that the ring indices do
+not "lap-around" each other to ensure that no elements
+are either lost or read twice.
+
+Furthermore it must guarantee that read-access to a
+particular object in [^pop] "happens after" it has been
+written in [^push]. This is achieved by writing [^head_ ]
+with "release" and reading it with "acquire". Conversely
+the implementation also ensures that read access to
+a particular ring element "happens before"
+rewriting this element with a new value by accessing [^tail_]
+with appropriate ordering constraints.
+
+[endsect]
+
+[endsect]
+
+[section:mp_queue Wait-free multi-producer queue]
+
+The purpose of the ['wait-free multi-producer queue] is to allow
+an arbitrary number of producers to enqueue objects which are
+retrieved and processed in FIFO order by a single consumer.
+
+[section Implementation]
+
+[c++]
+
+    #include <boost/atomic.hpp>
+
+    template<typename T>
+    class waitfree_queue {
+    public:
+        struct node {
+            T data;
+            node * next;
+        };
+
+        void push(const T &data)
+        {
+            node * n = new node;
+            n->data = data;
+            node * stale_head = head_.load(boost::memory_order_relaxed);
+            do {
+                n->next = stale_head;
+            } while (!head_.compare_exchange_weak(stale_head, n, boost::memory_order_release));
+        }
+
+        node * pop_all(void)
+        {
+            node * last = pop_all_reverse(), * first = 0;
+            while(last) {
+                node * tmp = last;
+                last = last->next;
+                tmp->next = first;
+                first = tmp;
+            }
+            return first;
+        }
+
+        waitfree_queue() : head_(0) {}
+
+        // alternative interface if ordering is of no importance
+        node * pop_all_reverse(void)
+        {
+            return head_.exchange(0, boost::memory_order_consume);
+        }
+    private:
+        boost::atomic<node *> head_;
+    };
+
+[endsect]
+
+[section Usage]
+
+[c++]
+
+ waitfree_queue<int> q;
+
+ // insert elements
+ q.push(42);
+ q.push(2);
+
+ // pop elements
+ waitfree_queue<int>::node * x = q.pop_all();
+ while(x) {
+ waitfree_queue<int>::node * tmp = x;
+ x = x->next;
+ // process tmp->data, probably delete it afterwards
+ delete tmp;
+ }
+
+[endsect]
+
+[section Discussion]
+
+The implementation guarantees that all objects enqueued are
+processed in the order they were enqueued by building a singly-linked
+list of object in reverse processing order. The queue is atomically
+emptied by the consumer and brought into correct order.
+
+It must be guaranteed that any access to an object to be enqueued
+by the producer "happens before" any access by the consumer. This
+is assured by inserting objects into the list with ['release] and
+dequeuing them with ['consume] memory order. It is not
+necessary to use ['acquire] memory order in [^waitfree_queue::pop_all]
+because all operations involved depend on the value of
+the atomic pointer through a dereference.
+
+[endsect]
+
+[endsect]

Added: branches/release/libs/atomic/doc/platform.qbk
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/doc/platform.qbk 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,312 @@
+[/
+ / Copyright (c) 2009 Helge Bahmann
+ /
+ / Distributed under the Boost Software License, Version 1.0. (See accompanying
+ / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+ /]
+
+[section:template_organization Organization of class template layers]
+
+The implementation uses multiple layers of template classes, each
+inheriting from the next lower level and refining or adapting the
+underlying class:
+
+* [^boost::atomic<T>] is the topmost-level, providing
+ the external interface. Implementation-wise, it does not add anything
+ (except for hiding copy constructor and assignment operator).
+
+* [^boost::detail::atomic::internal_atomic<T,S=sizeof(T),I=is_integral_type<T> >]:
+ This layer is mainly responsible for providing the overloaded operators
+ mapping to API member functions (e.g. [^+=] to [^fetch_add]).
+ The defaulted template parameter [^I] makes it possible
+ to expose the correct API functions (via partial template
+ specialization): For non-integral types, it only
+ publishes the various [^exchange] functions
+ as well as [^load] and [^store]; for integral types it
+ additionally exports arithmetic and logic operations.
+ [br]
+ Depending on whether the given type is integral, it
+ inherits from either [^boost::detail::atomic::platform_atomic<T,S=sizeof(T)>]
+ or [^boost::detail::atomic::platform_atomic_integral<T,S=sizeof(T)>].
+ There is however some special-casing: for non-integral types
+ of size 1, 2, 4 or 8, it will coerce the datatype into an integer representation
+ and delegate to [^boost::detail::atomic::platform_atomic_integral<T,S=sizeof(T)>]
+ -- the rationale is that platform implementors only need to provide
+ integer-type operations.
+
+* [^boost::detail::atomic::platform_atomic_integral<T,S=sizeof(T)>]
+ must provide the full set of operations for an integral type T
+ (i.e. [^load], [^store], [^exchange],
+ [^compare_exchange_weak], [^compare_exchange_strong],
+ [^fetch_add], [^fetch_sub], [^fetch_and],
+ [^fetch_or], [^fetch_xor], [^is_lock_free]).
+ The default implementation uses locking to emulate atomic operations, so
+ this is the level at which implementors should provide template specializations
+ to add support for platform-specific atomic operations.
+ [br]
+ The two separate template parameters allow separate specialization
+ on size and type (which, with fixed size, cannot
+ specify more than signedness/unsignedness). The rationale is that
+ most platform-specific atomic operations usually depend only on the
+ operand size, so that common implementations for signed/unsigned
+ types are possible. Signedness allows properly choosing sign-extending
+ instructions for the [^load] operation, avoiding later
+ conversion. The expectation is that in most implementations this will
+ be a normal assignment in C, possibly accompanied by memory
+ fences, so that the compiler can automatically choose the correct
+ instruction.
+
+* At the lowest level, [^boost::detail::atomic::platform_atomic<T,S=sizeof(T)>]
+ provides the most basic atomic operations ([^load], [^store],
+ [^exchange], [^compare_exchange_weak],
+ [^compare_exchange_strong]) for arbitrarily generic data types.
+ The default implementation uses locking as a fallback mechanism.
+ Implementors generally do not have to specialize at this level
+ (since these will not be used for the common integral type sizes
+ of 1, 2, 4 and 8 bytes), but they may do so if they wish to
+ provide truly atomic operations for "odd" data type sizes.
+ Some amount of care must be taken as the "raw" data type
+ passed in from the user through [^boost::atomic<T>]
+ is visible here -- it thus needs to be type-punned or otherwise
+ manipulated byte-by-byte to avoid using overloaded assignment,
+ comparison operators and copy constructors.
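+
+The layering can be summarized as follows (a simplified sketch, not the
+actual declarations):
+
+[c++]
+
+    // boost::atomic<T>                                          -- user-visible interface
+    //   -> detail::atomic::internal_atomic<T, sizeof(T), I>     -- operator overloads, API selection
+    //        -> detail::atomic::platform_atomic_integral<T, S>  -- integral (or coerced) types
+    //        -> detail::atomic::platform_atomic<T, S>           -- generic fallback, lock-based by default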
+
+[endsect]
+
+
+[section:platform_atomic_implementation Implementing platform-specific atomic operations]
+
+In principle implementors are responsible for providing the
+full range of named member functions of an atomic object
+(i.e. [^load], [^store], [^exchange],
+[^compare_exchange_weak], [^compare_exchange_strong],
+[^fetch_add], [^fetch_sub], [^fetch_and],
+[^fetch_or], [^fetch_xor], [^is_lock_free]).
+These must be implemented as partial template specializations for
+[^boost::detail::atomic::platform_atomic_integral<T,S=sizeof(T)>]:
+
+[c++]
+
+ template<typename T>
+ class platform_atomic_integral<T, 4>
+ {
+ public:
+ explicit platform_atomic_integral(T v) : i(v) {}
+ platform_atomic_integral(void) {}
+
+ T load(memory_order order=memory_order_seq_cst) const volatile
+ {
+ // platform-specific code
+ }
+ void store(T v, memory_order order=memory_order_seq_cst) volatile
+ {
+ // platform-specific code
+ }
+
+ private:
+ volatile T i;
+ };
+
+As noted above, it will usually suffice to specialize on the second
+template argument, indicating the size of the data type in bytes.
+
+[section:automatic_buildup Templates for automatic build-up]
+
+Often only a portion of the required operations can be
+usefully mapped to machine instructions. Several helper template
+classes are provided that can automatically synthesize missing methods to
+complete an implementation.
+
+At the minimum, an implementor must provide the
+[^load], [^store],
+[^compare_exchange_weak] and
+[^is_lock_free] methods:
+
+[c++]
+
+ template<typename T>
+ class my_atomic_32 {
+ public:
+ my_atomic_32() {}
+ my_atomic_32(T initial_value) : value(initial_value) {}
+
+ T load(memory_order order=memory_order_seq_cst) volatile const
+ {
+ // platform-specific code
+ }
+ void store(T new_value, memory_order order=memory_order_seq_cst) volatile
+ {
+ // platform-specific code
+ }
+ bool compare_exchange_weak(T &expected, T desired,
+ memory_order success_order,
+ memory_order failure_order) volatile
+ {
+ // platform-specific code
+ }
+ bool is_lock_free() const volatile {return true;}
+ protected:
+ // typedef is required for classes inheriting from this
+ typedef T integral_type;
+ private:
+ T value;
+ };
+
+The template [^boost::detail::atomic::build_atomic_from_minimal]
+can then take care of the rest:
+
+[c++]
+
+ template<typename T>
+ class platform_atomic_integral<T, 4>
+ : public boost::detail::atomic::build_atomic_from_minimal<my_atomic_32<T> >
+ {
+ public:
+ typedef build_atomic_from_minimal<my_atomic_32<T> > super;
+
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+ };
+
+There are several helper classes to assist in building "complete"
+atomic implementations from different starting points:
+
+* [^build_atomic_from_minimal] requires
+ * [^load]
+ * [^store]
+ * [^compare_exchange_weak] (4-operand version)
+
+* [^build_atomic_from_exchange] requires
+ * [^load]
+ * [^store]
+ * [^compare_exchange_weak] (4-operand version)
+ * [^compare_exchange_strong] (4-operand version)
+ * [^exchange]
+
+* [^build_atomic_from_add] requires
+ * [^load]
+ * [^store]
+ * [^compare_exchange_weak] (4-operand version)
+ * [^compare_exchange_strong] (4-operand version)
+ * [^exchange]
+ * [^fetch_add]
+
+* [^build_atomic_from_typical] (['supported on gcc only]) requires
+ * [^load]
+ * [^store]
+ * [^compare_exchange_weak] (4-operand version)
+ * [^compare_exchange_strong] (4-operand version)
+ * [^exchange]
+ * [^fetch_add_var] (protected method)
+ * [^fetch_inc] (protected method)
+ * [^fetch_dec] (protected method)
+
+ This will generate a [^fetch_add] method
+ that calls [^fetch_inc]/[^fetch_dec]
+ when the given parameter is a compile-time constant
+ equal to +1 or -1 respectively, and [^fetch_add_var]
+ in all other cases. This provides a mechanism for
+ optimizing the extremely common case of an atomic
+ variable being used as a counter.
+
+ The prototypes of the methods to be implemented are:
+ [c++]
+
+ template<typename T>
+ class my_atomic {
+ public:
+ T fetch_inc(memory_order order) volatile;
+ T fetch_dec(memory_order order) volatile;
+ T fetch_add_var(T counter, memory_order order) volatile;
+ };
+
+These helper templates are defined in [^boost/atomic/detail/builder.hpp].
+
+[endsect]
+
+[section:automatic_buildup_small Build sub-word-sized atomic data types]
+
+There is one other helper template that can build sub-word-sized
+atomic data types even though the underlying architecture allows
+only word-sized atomic operations:
+
+[c++]
+
+ template<typename T>
+ class platform_atomic_integral<T, 1> :
+ public build_atomic_from_larger_type<my_atomic_32<uint32_t>, T>
+ {
+ public:
+ typedef build_atomic_from_larger_type<my_atomic_32<uint32_t>, T> super;
+
+ explicit platform_atomic_integral(T v) : super(v) {}
+ platform_atomic_integral(void) {}
+ };
+
+The above would create an atomic data type of 1 byte size, and
+use masking and shifts to map it to 32-bit atomic operations.
+The base type must implement [^load], [^store]
+and [^compare_exchange_weak] for this to work.
+
+[endsect]
+
+[section:other_sizes Atomic data types for unusual object sizes]
+
+In unusual circumstances, an implementor may also opt to specialize
+[^boost::detail::atomic::platform_atomic<T,S=sizeof(T)>]
+to provide support for atomic objects not fitting an integral size.
+If you do that, keep the following things in mind:
+
+* There is no reason to ever do this for object sizes
+ of 1, 2, 4 and 8
+* Only the following methods need to be implemented:
+ * [^load]
+ * [^store]
+ * [^compare_exchange_weak] (4-operand version)
+ * [^compare_exchange_strong] (4-operand version)
+ * [^exchange]
+
+The type of the data to be stored in the atomic
+variable (template parameter [^T])
+is exposed to this class, and the type may have
+overloaded assignment and comparison operators --
+using these overloaded operators however will result
+in an error. The implementor is responsible for
+accessing the objects in a way that does not
+invoke either of these operators (using e.g.
+[^memcpy] or type-casts).
+
+[endsect]
+
+[endsect]
+
+[section:platform_atomic_fences Fences]
+
+Platform implementors need to provide a function performing
+the action required for [funcref boost::atomic_thread_fence atomic_thread_fence]
+(the fallback implementation will just perform an atomic operation
+on an integer object). This is achieved by specializing the
+[^boost::detail::atomic::platform_atomic_thread_fence] template
+function in the following way:
+
+[c++]
+
+ template<>
+ void platform_atomic_thread_fence(memory_order order)
+ {
+ // platform-specific code here
+ }
+
+[endsect]
+
+[section:platform_atomic_puttogether Putting it all together]
+
+The template specializations should be put into a header file
+in the [^boost/atomic/detail] directory, preferably
+specifying supported compiler and architecture in its name.
+
+The file [^boost/atomic/detail/platform.hpp] must
+subsequently be modified to conditionally include the new
+header.
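+
+A hypothetical fragment of such a dispatch (the header and macro names below
+are made up for illustration):
+
+[c++]
+
+    // in boost/atomic/detail/platform.hpp (hypothetical names)
+    #if defined(__GNUC__) && defined(__MY_NEW_ARCH__)
+        #include <boost/atomic/detail/gcc-mynewarch.hpp>
+    #else
+        // ... existing platform checks ...
+    #endif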
+
+[endsect]

Added: branches/release/libs/atomic/index.html
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/index.html 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,13 @@
+<html>
+<head>
+<meta http-equiv="refresh" content="0; URL=../../doc/html/atomic.html">
+</head>
+<body>
+Automatic redirection failed, please go to
+../../doc/html/atomic.html &nbsp;<hr>
+<p>&copy; Copyright Beman Dawes, 2001</p>
+<p>Distributed under the Boost Software License, Version 1.0. (See accompanying
+file LICENSE_1_0.txt or copy
+at www.boost.org/LICENSE_1_0.txt)</p>
+</body>
+</html>

Added: branches/release/libs/atomic/src/lockpool.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/src/lockpool.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,24 @@
+#include <boost/atomic.hpp>
+
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+namespace boost {
+namespace atomics {
+namespace detail {
+
+static lockpool::lock_type lock_pool_[41];
+
+// NOTE: This function must NOT be inline. Otherwise MSVC 9 will sometimes generate broken code for the modulus operation, which results in crashes.
+BOOST_ATOMIC_DECL lockpool::lock_type& lockpool::get_lock_for(const volatile void* addr)
+{
+ std::size_t index = reinterpret_cast<std::size_t>(addr) % (sizeof(lock_pool_) / sizeof(*lock_pool_));
+ return lock_pool_[index];
+}
+
+}
+}
+}

Added: branches/release/libs/atomic/test/Jamfile.v2
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,25 @@
+# Boost.Atomic Library test Jamfile
+#
+# Copyright (c) 2011 Helge Bahmann
+# Copyright (c) 2012 Tim Blechmann
+#
+# Distributed under the Boost Software License, Version 1.0. (See
+# accompanying file LICENSE_1_0.txt or copy at
+# http://www.boost.org/LICENSE_1_0.txt)
+
+import testing ;
+
+project boost/atomic/test
+ : requirements
+ <threading>multi
+ <library>../../thread/build//boost_thread
+ <library>/boost/atomic//boost_atomic/<link>static
+ ;
+
+test-suite atomic
+ : [ run native_api.cpp ]
+ [ run fallback_api.cpp ]
+ [ run atomicity.cpp ]
+ [ run ordering.cpp ]
+ [ run lockfree.cpp ]
+ ;

Added: branches/release/libs/atomic/test/api_test_helpers.hpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/api_test_helpers.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,325 @@
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_ATOMIC_API_TEST_HELPERS_HPP
+#define BOOST_ATOMIC_API_TEST_HELPERS_HPP
+
+/* provide helpers that exercise whether the API
+functions of "boost::atomic" provide the correct
+operational semantic in the case of sequential
+execution */
+
+static void
+test_flag_api(void)
+{
+ boost::atomic_flag f;
+
+ BOOST_CHECK( !f.test_and_set() );
+ BOOST_CHECK( f.test_and_set() );
+ f.clear();
+ BOOST_CHECK( !f.test_and_set() );
+}
+
+template<typename T>
+void test_base_operators(T value1, T value2, T value3)
+{
+ /* explicit load/store */
+ {
+ boost::atomic<T> a(value1);
+ BOOST_CHECK( a.load() == value1 );
+ }
+
+ {
+ boost::atomic<T> a(value1);
+ a.store(value2);
+ BOOST_CHECK( a.load() == value2 );
+ }
+
+ /* overloaded assignment/conversion */
+ {
+ boost::atomic<T> a(value1);
+ BOOST_CHECK( value1 == a );
+ }
+
+ {
+ boost::atomic<T> a;
+ a = value2;
+ BOOST_CHECK( value2 == a );
+ }
+
+ /* exchange-type operators */
+ {
+ boost::atomic<T> a(value1);
+ T n = a.exchange(value2);
+ BOOST_CHECK( a.load() == value2 && n == value1 );
+ }
+
+ {
+ boost::atomic<T> a(value1);
+ T expected = value1;
+ bool success = a.compare_exchange_strong(expected, value3);
+ BOOST_CHECK( success );
+ BOOST_CHECK( a.load() == value3 && expected == value1 );
+ }
+
+ {
+ boost::atomic<T> a(value1);
+ T expected = value2;
+ bool success = a.compare_exchange_strong(expected, value3);
+ BOOST_CHECK( !success );
+ BOOST_CHECK( a.load() == value1 && expected == value1 );
+ }
+
+ {
+ boost::atomic<T> a(value1);
+ T expected;
+ bool success;
+ do {
+ expected = value1;
+ success = a.compare_exchange_weak(expected, value3);
+ } while(!success);
+ BOOST_CHECK( success );
+ BOOST_CHECK( a.load() == value3 && expected == value1 );
+ }
+
+ {
+ boost::atomic<T> a(value1);
+ T expected;
+ bool success;
+ do {
+ expected = value2;
+ success = a.compare_exchange_weak(expected, value3);
+ if (expected != value2)
+ break;
+ } while(!success);
+ BOOST_CHECK( !success );
+ BOOST_CHECK( a.load() == value1 && expected == value1 );
+ }
+}
+
+template<typename T, typename D>
+void test_additive_operators(T value, D delta)
+{
+ /* note: the tests explicitly cast the result of any addition
+ to the type to be tested to force truncation of the result to
+ the correct range in case of overflow */
+
+ /* explicit add/sub */
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_add(delta);
+ BOOST_CHECK( a.load() == T(value + delta) );
+ BOOST_CHECK( n == value );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_sub(delta);
+ BOOST_CHECK( a.load() == T(value - delta) );
+ BOOST_CHECK( n == value );
+ }
+
+ /* overloaded modify/assign*/
+ {
+ boost::atomic<T> a(value);
+ T n = (a += delta);
+ BOOST_CHECK( a.load() == T(value + delta) );
+ BOOST_CHECK( n == T(value + delta) );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = (a -= delta);
+ BOOST_CHECK( a.load() == T(value - delta) );
+ BOOST_CHECK( n == T(value - delta) );
+ }
+
+ /* overloaded increment/decrement */
+ {
+ boost::atomic<T> a(value);
+ T n = a++;
+ BOOST_CHECK( a.load() == T(value + 1) );
+ BOOST_CHECK( n == value );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = ++a;
+ BOOST_CHECK( a.load() == T(value + 1) );
+ BOOST_CHECK( n == T(value + 1) );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = a--;
+ BOOST_CHECK( a.load() == T(value - 1) );
+ BOOST_CHECK( n == value );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = --a;
+ BOOST_CHECK( a.load() == T(value - 1) );
+ BOOST_CHECK( n == T(value - 1) );
+ }
+}
+
+template<typename T>
+void test_additive_wrap(T value)
+{
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_add(1) + 1;
+ BOOST_CHECK( a.compare_exchange_strong(n, n) );
+ }
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_sub(1) - 1;
+ BOOST_CHECK( a.compare_exchange_strong(n, n) );
+ }
+}
+
+template<typename T>
+void test_bit_operators(T value, T delta)
+{
+ /* explicit and/or/xor */
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_and(delta);
+ BOOST_CHECK( a.load() == T(value & delta) );
+ BOOST_CHECK( n == value );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_or(delta);
+ BOOST_CHECK( a.load() == T(value | delta) );
+ BOOST_CHECK( n == value );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = a.fetch_xor(delta);
+ BOOST_CHECK( a.load() == T(value ^ delta) );
+ BOOST_CHECK( n == value );
+ }
+
+ /* overloaded modify/assign */
+ {
+ boost::atomic<T> a(value);
+ T n = (a &= delta);
+ BOOST_CHECK( a.load() == T(value & delta) );
+ BOOST_CHECK( n == T(value & delta) );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = (a |= delta);
+ BOOST_CHECK( a.load() == T(value | delta) );
+ BOOST_CHECK( n == T(value | delta) );
+ }
+
+ {
+ boost::atomic<T> a(value);
+ T n = (a ^= delta);
+ BOOST_CHECK( a.load() == T(value ^ delta) );
+ BOOST_CHECK( n == T(value ^ delta) );
+ }
+}
+
+template<typename T>
+void test_integral_api(void)
+{
+ BOOST_CHECK( sizeof(boost::atomic<T>) >= sizeof(T));
+
+ test_base_operators<T>(42, 43, 44);
+ test_additive_operators<T, T>(42, 17);
+ test_bit_operators<T>((T)0x5f5f5f5f5f5f5f5fULL, (T)0xf5f5f5f5f5f5f5f5ULL);
+
+ /* test for unsigned overflow/underflow */
+ test_additive_operators<T, T>((T)-1, 1);
+ test_additive_operators<T, T>(0, 1);
+ /* test for signed overflow/underflow */
+ test_additive_operators<T, T>(((T)-1) >> (sizeof(T) * 8 - 1), 1);
+ test_additive_operators<T, T>(1 + (((T)-1) >> (sizeof(T) * 8 - 1)), 1);
+
+ test_additive_wrap<T>(0);
+ test_additive_wrap<T>((T) -1);
+ test_additive_wrap<T>(-1LL << (sizeof(T) * 8 - 1));
+ test_additive_wrap<T>(~ (-1LL << (sizeof(T) * 8 - 1)));
+}
+
+template<typename T>
+void test_pointer_api(void)
+{
+ BOOST_CHECK( sizeof(boost::atomic<T *>) >= sizeof(T *));
+ BOOST_CHECK( sizeof(boost::atomic<void *>) >= sizeof(T *));
+
+ T values[3];
+
+ test_base_operators<T*>(&values[0], &values[1], &values[2]);
+ test_additive_operators<T*>(&values[1], 1);
+
+ test_base_operators<void*>(&values[0], &values[1], &values[2]);
+
+ boost::atomic<void *> ptr;
+ boost::atomic<intptr_t> integral;
+ BOOST_CHECK( ptr.is_lock_free() == integral.is_lock_free() );
+}
+
+enum test_enum {
+ foo, bar, baz
+};
+
+static void
+test_enum_api(void)
+{
+ test_base_operators(foo, bar, baz);
+}
+
+template<typename T>
+struct test_struct {
+ typedef T value_type;
+ value_type i;
+ inline bool operator==(const test_struct & c) const {return i == c.i;}
+ inline bool operator!=(const test_struct & c) const {return i != c.i;}
+};
+
+template<typename T>
+void
+test_struct_api(void)
+{
+ T a = {1}, b = {2}, c = {3};
+
+ test_base_operators(a, b, c);
+
+ {
+ boost::atomic<T> sa;
+ boost::atomic<typename T::value_type> si;
+ BOOST_CHECK( sa.is_lock_free() == si.is_lock_free() );
+ }
+}
+struct large_struct {
+ long data[64];
+
+ inline bool operator==(const large_struct & c) const
+ {
+ return memcmp(data, &c.data, sizeof(data)) == 0;
+ }
+ inline bool operator!=(const large_struct & c) const
+ {
+ return memcmp(data, &c.data, sizeof(data)) != 0;
+ }
+};
+
+static void
+test_large_struct_api(void)
+{
+ large_struct a = {{1}}, b = {{2}}, c = {{3}};
+ test_base_operators(a, b, c);
+}
+
+#endif

Added: branches/release/libs/atomic/test/atomicity.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/atomicity.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,275 @@
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// Attempt to determine whether the operations on atomic variables
+// do in fact behave atomically: Let multiple threads race modifying
+// a shared atomic variable and verify that it behaves as expected.
+//
+// We assume that "observable race condition" events are exponentially
+// distributed, with unknown "average time between observable races"
+// (which is just the reciprocal of exp distribution parameter lambda).
+// Use a non-atomic implementation that intentionally exhibits a
+// (hopefully tight) race to compute the maximum-likelihood estimate
+// for this time. From this, compute an estimate that covers the
+// unknown value with 0.995 confidence (using chi square quantile).
+//
+// Use this estimate to pick a timeout for the race tests of the
+// atomic implementations such that under the assumed distribution
+// we get 0.995 probability to detect a race (if there is one).
+//
+// Overall this yields 0.995 * 0.995 > 0.99 confidence that the
+// operations truly behave atomically if this test program does not
+// report an error.
+
+#include <algorithm>
+
+#include <boost/atomic.hpp>
+#include <boost/bind.hpp>
+#include <boost/date_time/posix_time/time_formatters.hpp>
+#include <boost/test/test_tools.hpp>
+#include <boost/test/included/test_exec_monitor.hpp>
+#include <boost/thread.hpp>
+
+/* helper class to let two instances of a function race against each
+other, with configurable timeout and early abort on detection of error */
+class concurrent_runner {
+public:
+ /* concurrently run the function in two threads, until either timeout
+ or one of the functions returns "false"; returns true if timeout
+ was reached, or false if early abort and updates timeout accordingly */
+ static bool
+ execute(
+ const boost::function<bool(size_t)> & fn,
+ boost::posix_time::time_duration & timeout)
+ {
+ concurrent_runner runner(fn);
+ runner.wait_finish(timeout);
+ return !runner.failure();
+ }
+
+
+ concurrent_runner(
+ const boost::function<bool(size_t)> & fn)
+ : finished_(false), failure_(false),
+ first_thread_(boost::bind(&concurrent_runner::thread_function, this, fn, 0)),
+ second_thread_(boost::bind(&concurrent_runner::thread_function, this, fn, 1))
+ {
+ }
+
+ void
+ wait_finish(boost::posix_time::time_duration & timeout)
+ {
+ boost::system_time start = boost::get_system_time();
+ boost::system_time end = start + timeout;
+
+ {
+ boost::mutex::scoped_lock guard(m_);
+ while (boost::get_system_time() < end && !finished())
+ c_.timed_wait(guard, end);
+ }
+
+ finished_.store(true, boost::memory_order_relaxed);
+
+ first_thread_.join();
+ second_thread_.join();
+
+ boost::posix_time::time_duration duration = boost::get_system_time() - start;
+ if (duration < timeout)
+ timeout = duration;
+ }
+
+ bool
+ finished(void) const throw() {
+ return finished_.load(boost::memory_order_relaxed);
+ }
+
+ bool
+ failure(void) const throw() {
+ return failure_;
+ }
+private:
+ void
+ thread_function(boost::function<bool(size_t)> function, size_t instance)
+ {
+ while (!finished()) {
+ if (!function(instance)) {
+ boost::mutex::scoped_lock guard(m_);
+ failure_ = true;
+ finished_.store(true, boost::memory_order_relaxed);
+ c_.notify_all();
+ break;
+ }
+ }
+ }
+
+
+ boost::mutex m_;
+ boost::condition_variable c_;
+
+ boost::atomic<bool> finished_;
+ bool failure_;
+
+ boost::thread first_thread_;
+ boost::thread second_thread_;
+};
+
+bool
+racy_add(volatile unsigned int & value, size_t instance)
+{
+ size_t shift = instance * 8;
+ unsigned int mask = 0xff << shift;
+ for (size_t n = 0; n < 255; n++) {
+ unsigned int tmp = value;
+ value = tmp + (1 << shift);
+
+ if ((tmp & mask) != (n << shift))
+ return false;
+ }
+
+ unsigned int tmp = value;
+ value = tmp & ~mask;
+ if ((tmp & mask) != mask)
+ return false;
+
+ return true;
+}
+
+/* compute estimate for average time between races being observable, in usecs */
+static double
+estimate_avg_race_time(void)
+{
+ double sum = 0.0;
+
+ /* take 10 samples */
+ for (size_t n = 0; n < 10; n++) {
+ boost::posix_time::time_duration timeout(0, 0, 10);
+
+ volatile unsigned int value(0);
+ bool success = concurrent_runner::execute(
+ boost::bind(racy_add, boost::ref(value), _1),
+ timeout
+ );
+
+ if (success) {
+ BOOST_FAIL("Failed to establish baseline time for reproducing race condition");
+ }
+
+ sum = sum + timeout.total_microseconds();
+ }
+
+ /* determine maximum likelihood estimate for average time between
+ race observations */
+ double avg_race_time_mle = (sum / 10);
+
+ /* 0.995 upper confidence bound on the mean (7.44 = 0.005 quantile of chi-square with 2*10 = 20 degrees of freedom) */
+ double avg_race_time_995 = avg_race_time_mle * 2 * 10 / 7.44;
+
+ return avg_race_time_995;
+}
+
+template<typename value_type, size_t shift_>
+bool
+test_arithmetic(boost::atomic<value_type> & shared_value, size_t instance)
+{
+ size_t shift = instance * 8;
+ value_type mask = 0xff << shift;
+ value_type increment = 1 << shift;
+
+ value_type expected = 0;
+
+ for (size_t n = 0; n < 255; n++) {
+ value_type tmp = shared_value.fetch_add(increment, boost::memory_order_relaxed);
+ if ( (tmp & mask) != (expected << shift) )
+ return false;
+ expected ++;
+ }
+ for (size_t n = 0; n < 255; n++) {
+ value_type tmp = shared_value.fetch_sub(increment, boost::memory_order_relaxed);
+ if ( (tmp & mask) != (expected << shift) )
+ return false;
+ expected --;
+ }
+
+ return true;
+}
+
+template<typename value_type, size_t shift_>
+bool
+test_bitops(boost::atomic<value_type> & shared_value, size_t instance)
+{
+ size_t shift = instance * 8;
+ value_type mask = 0xff << shift;
+
+ value_type expected = 0;
+
+ for (size_t k = 0; k < 8; k++) {
+ value_type mod = 1 << k;
+ value_type tmp = shared_value.fetch_or(mod << shift, boost::memory_order_relaxed);
+ if ( (tmp & mask) != (expected << shift))
+ return false;
+ expected = expected | mod;
+ }
+ for (size_t k = 0; k < 8; k++) {
+ value_type tmp = shared_value.fetch_and( ~ (1 << (shift + k)), boost::memory_order_relaxed);
+ if ( (tmp & mask) != (expected << shift))
+ return false;
+ expected = expected & ~(1<<k);
+ }
+ for (size_t k = 0; k < 8; k++) {
+ value_type mod = 255 ^ (1 << k);
+ value_type tmp = shared_value.fetch_xor(mod << shift, boost::memory_order_relaxed);
+ if ( (tmp & mask) != (expected << shift))
+ return false;
+ expected = expected ^ mod;
+ }
+
+ value_type tmp = shared_value.fetch_and( ~mask, boost::memory_order_relaxed);
+ if ( (tmp & mask) != (expected << shift) )
+ return false;
+
+ return true;
+}
+
+int test_main(int, char *[])
+{
+ boost::posix_time::time_duration reciprocal_lambda;
+
+ double avg_race_time = estimate_avg_race_time();
+
+ /* 5.298 = 0.995 quantile of exponential distribution */
+ const boost::posix_time::time_duration timeout = boost::posix_time::microseconds((long)(5.298 * avg_race_time));
+
+ {
+ boost::atomic<unsigned int> value(0);
+
+ /* testing two different operations in this loop, therefore
+ enlarge timeout */
+ boost::posix_time::time_duration tmp(timeout * 2);
+
+ bool success = concurrent_runner::execute(
+ boost::bind(test_arithmetic<unsigned int, 0>, boost::ref(value), _1),
+ tmp
+ );
+
+ BOOST_CHECK_MESSAGE(success, "concurrent arithmetic");
+ }
+
+ {
+ boost::atomic<unsigned int> value(0);
+
+ /* testing three different operations in this loop, therefore
+ enlarge timeout */
+ boost::posix_time::time_duration tmp(timeout * 3);
+
+ bool success = concurrent_runner::execute(
+ boost::bind(test_bitops<unsigned int, 0>, boost::ref(value), _1),
+ tmp
+ );
+
+ BOOST_CHECK_MESSAGE(success, "concurrent bitops");
+ }
+ return 0;
+}
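
(For reference, the timeout derivation used above -- and again in ordering.cpp below -- can be
condensed into one small helper. This is only an illustrative sketch of the statistics, with
hypothetical names; the constant 7.44 is specific to the 10-sample case used by these tests.)

    #include <cmath>
    #include <numeric>
    #include <vector>

    // Given 10 sample times (in usec) from runs that provoke an intentional race,
    // return a timeout that detects a real race with ~0.995 probability.
    static double race_test_timeout_usec(const std::vector<double> & samples)
    {
        const double n = static_cast<double>(samples.size());      // 10 in the tests
        const double mle = std::accumulate(samples.begin(), samples.end(), 0.0) / n;
        const double chi2_lower = 7.44;   // 0.005 quantile of chi-square with 2*n = 20 dof
        const double upper_bound = mle * 2.0 * n / chi2_lower;     // 0.995 upper bound on the mean
        return -std::log(0.005) * upper_bound;                     // 5.298 * upper_bound
    }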

Added: branches/release/libs/atomic/test/fallback_api.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/fallback_api.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,52 @@
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+/* force fallback implementation using locks */
+#define BOOST_ATOMIC_FORCE_FALLBACK 1
+
+#include <boost/atomic.hpp>
+#include <boost/cstdint.hpp>
+#include <boost/test/minimal.hpp>
+
+#include "api_test_helpers.hpp"
+
+int test_main(int, char *[])
+{
+ test_flag_api();
+
+ test_integral_api<char>();
+ test_integral_api<signed char>();
+ test_integral_api<unsigned char>();
+ test_integral_api<boost::uint8_t>();
+ test_integral_api<boost::int8_t>();
+ test_integral_api<short>();
+ test_integral_api<unsigned short>();
+ test_integral_api<boost::uint16_t>();
+ test_integral_api<boost::int16_t>();
+ test_integral_api<int>();
+ test_integral_api<unsigned int>();
+ test_integral_api<boost::uint32_t>();
+ test_integral_api<boost::int32_t>();
+ test_integral_api<long>();
+ test_integral_api<unsigned long>();
+ test_integral_api<boost::uint64_t>();
+ test_integral_api<boost::int64_t>();
+ test_integral_api<long long>();
+ test_integral_api<unsigned long long>();
+
+ test_pointer_api<int>();
+
+ test_enum_api();
+
+ test_struct_api<test_struct<boost::uint8_t> >();
+ test_struct_api<test_struct<boost::uint16_t> >();
+ test_struct_api<test_struct<boost::uint32_t> >();
+ test_struct_api<test_struct<boost::uint64_t> >();
+
+ test_large_struct_api();
+
+ return 0;
+}

Added: branches/release/libs/atomic/test/lockfree.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/lockfree.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,181 @@
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// Verify that the definitions of the "LOCK_FREE" macros and the
+// "is_lock_free" members are consistent and match expectations.
+// Also, if any operation is lock-free, then the platform
+// implementation must provide overridden fence implementations.
+
+#include <iostream>
+
+#include <boost/atomic.hpp>
+#include <boost/test/minimal.hpp>
+
+static const char * lock_free_level[] = {
+ "never",
+ "sometimes",
+ "always"
+};
+
+template<typename T>
+void
+verify_lock_free(const char * type_name, int lock_free_macro_val, int lock_free_expect)
+{
+ BOOST_CHECK(lock_free_macro_val >= 0 && lock_free_macro_val <= 2);
+ BOOST_CHECK(lock_free_macro_val == lock_free_expect);
+
+ boost::atomic<T> value;
+
+ if (lock_free_macro_val == 0)
+ BOOST_CHECK(!value.is_lock_free());
+ if (lock_free_macro_val == 2)
+ BOOST_CHECK(value.is_lock_free());
+
+ std::cout << "atomic<" << type_name << "> is " << lock_free_level[lock_free_macro_val] << " lock free\n";
+}
+
+#if defined(__GNUC__) && defined(__i386__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#define EXPECT_LLONG_LOCK_FREE 1
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(__GNUC__) && defined(__x86_64__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#define EXPECT_LLONG_LOCK_FREE 2
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(__GNUC__) && (defined(__POWERPC__) || defined(__PPC__))
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_CHAR16_T_LOCK_FREE 2
+#define EXPECT_CHAR32_T_LOCK_FREE 2
+#define EXPECT_WCHAR_T_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#if defined(__powerpc64__)
+#define EXPECT_LLONG_LOCK_FREE 2
+#else
+#define EXPECT_LLONG_LOCK_FREE 0
+#endif
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(__GNUC__) && defined(__alpha__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_CHAR16_T_LOCK_FREE 2
+#define EXPECT_CHAR32_T_LOCK_FREE 2
+#define EXPECT_WCHAR_T_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#define EXPECT_LLONG_LOCK_FREE 2
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(__GNUC__) && (defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) \
+ || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) \
+ || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_7A__))
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#define EXPECT_LLONG_LOCK_FREE 0
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(__linux__) && defined(__arm__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#define EXPECT_LLONG_LOCK_FREE 0
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(__GNUC__) && defined(__sparc_v9__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#define EXPECT_LLONG_LOCK_FREE 0
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif defined(BOOST_USE_WINDOWS_H) || defined(_WIN32_CE) || defined(BOOST_MSVC) || defined(BOOST_INTEL_WIN) || defined(WIN32) || defined(_WIN32) || defined(__WIN32__) || defined(__CYGWIN__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE 2
+#if defined(_WIN64)
+#define EXPECT_LLONG_LOCK_FREE 2
+#else
+#define EXPECT_LLONG_LOCK_FREE 0
+#endif
+#define EXPECT_POINTER_LOCK_FREE 2
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#elif 0 && defined(__GNUC__)
+
+#define EXPECT_CHAR_LOCK_FREE 2
+#define EXPECT_SHORT_LOCK_FREE 2
+#define EXPECT_INT_LOCK_FREE 2
+#define EXPECT_LONG_LOCK_FREE (sizeof(long) <= 4 ? 2 : 0)
+#define EXPECT_LLONG_LOCK_FREE (sizeof(long long) <= 4 ? 2 : 0)
+#define EXPECT_POINTER_LOCK_FREE (sizeof(void *) <= 4 ? 2 : 0)
+#define EXPECT_BOOL_LOCK_FREE 2
+
+#else
+
+#define EXPECT_CHAR_LOCK_FREE 0
+#define EXPECT_SHORT_LOCK_FREE 0
+#define EXPECT_INT_LOCK_FREE 0
+#define EXPECT_LONG_LOCK_FREE 0
+#define EXPECT_LLONG_LOCK_FREE 0
+#define EXPECT_POINTER_LOCK_FREE 0
+#define EXPECT_BOOL_LOCK_FREE 0
+
+#endif
+
+int test_main(int, char *[])
+{
+ verify_lock_free<char>("char", BOOST_ATOMIC_CHAR_LOCK_FREE, EXPECT_CHAR_LOCK_FREE);
+ verify_lock_free<short>("short", BOOST_ATOMIC_SHORT_LOCK_FREE, EXPECT_SHORT_LOCK_FREE);
+ verify_lock_free<int>("int", BOOST_ATOMIC_INT_LOCK_FREE, EXPECT_INT_LOCK_FREE);
+ verify_lock_free<long>("long", BOOST_ATOMIC_LONG_LOCK_FREE, EXPECT_LONG_LOCK_FREE);
+#ifdef BOOST_HAS_LONG_LONG
+ verify_lock_free<long long>("long long", BOOST_ATOMIC_LLONG_LOCK_FREE, EXPECT_LLONG_LOCK_FREE);
+#endif
+ verify_lock_free<void *>("void *", BOOST_ATOMIC_POINTER_LOCK_FREE, EXPECT_POINTER_LOCK_FREE);
+ verify_lock_free<bool>("bool", BOOST_ATOMIC_BOOL_LOCK_FREE, EXPECT_BOOL_LOCK_FREE);
+
+ bool any_lock_free =
+ BOOST_ATOMIC_CHAR_LOCK_FREE ||
+ BOOST_ATOMIC_SHORT_LOCK_FREE ||
+ BOOST_ATOMIC_INT_LOCK_FREE ||
+ BOOST_ATOMIC_LONG_LOCK_FREE ||
+ BOOST_ATOMIC_LLONG_LOCK_FREE ||
+ BOOST_ATOMIC_BOOL_LOCK_FREE;
+
+ BOOST_CHECK(!any_lock_free || BOOST_ATOMIC_THREAD_FENCE);
+
+ return 0;
+}

Added: branches/release/libs/atomic/test/native_api.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/native_api.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,49 @@
+// Copyright (c) 2011 Helge Bahmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/atomic.hpp>
+#include <boost/cstdint.hpp>
+#include <boost/test/minimal.hpp>
+
+#include "api_test_helpers.hpp"
+
+int test_main(int, char *[])
+{
+ test_flag_api();
+
+ test_integral_api<char>();
+ test_integral_api<signed char>();
+ test_integral_api<unsigned char>();
+ test_integral_api<boost::uint8_t>();
+ test_integral_api<boost::int8_t>();
+ test_integral_api<short>();
+ test_integral_api<unsigned short>();
+ test_integral_api<boost::uint16_t>();
+ test_integral_api<boost::int16_t>();
+ test_integral_api<int>();
+ test_integral_api<unsigned int>();
+ test_integral_api<boost::uint32_t>();
+ test_integral_api<boost::int32_t>();
+ test_integral_api<long>();
+ test_integral_api<unsigned long>();
+ test_integral_api<boost::uint64_t>();
+ test_integral_api<boost::int64_t>();
+ test_integral_api<long long>();
+ test_integral_api<unsigned long long>();
+
+ test_pointer_api<int>();
+
+ test_enum_api();
+
+ test_struct_api<test_struct<boost::uint8_t> >();
+ test_struct_api<test_struct<boost::uint16_t> >();
+ test_struct_api<test_struct<boost::uint32_t> >();
+ test_struct_api<test_struct<boost::uint64_t> >();
+
+ test_large_struct_api();
+
+ return 0;
+}

Added: branches/release/libs/atomic/test/ordering.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/atomic/test/ordering.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,252 @@
+// Copyright (c) 2011 Helge Bahmann
+// Copyright (c) 2012 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0.
+// See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// Attempt to determine whether the memory ordering/fence operations
+// work as expected:
+// Let two threads race accessing multiple shared variables and
+// verify that "observable" order of operations matches with the
+// ordering constraints specified.
+//
+// We assume that "memory ordering violation" events are exponentially
+// distributed, with unknown "average time between violations"
+// (which is just the reciprocal of exp distribution parameter lambda).
+// Use a "relaxed ordering" implementation that intentionally exhibits
+// a (hopefully observable) such violation to compute the maximum-likelihood
+// estimate for this time. From this, compute an estimate that covers the
+// unknown value with 0.995 confidence (using chi square quantile).
+//
+// Use this estimate to pick a timeout for the race tests of the
+// atomic implementations such that under the assumed distribution
+// we get 0.995 probability to detect a race (if there is one).
+//
+// Overall this yields 0.995 * 0.995 > 0.99 confidence that the
+// fences work as expected if this test program does not
+// report an error.
+#include <boost/atomic.hpp>
+#include <boost/date_time/posix_time/time_formatters.hpp>
+#include <boost/test/test_tools.hpp>
+#include <boost/test/included/test_exec_monitor.hpp>
+#include <boost/thread.hpp>
+
+// Two threads perform the following operations:
+//
+// thread # 1 thread # 2
+// store(a, 1) store(b, 1)
+// read(a) read(b)
+// x = read(b) y = read(a)
+//
+// Under relaxed memory ordering, the case (x, y) == (0, 0) is
+// possible. Under sequential consistency, this case is impossible.
+//
+// This "problem" is reproducible on all platforms, even x86.
+template<boost::memory_order store_order, boost::memory_order load_order>
+class total_store_order_test {
+public:
+ total_store_order_test(void);
+
+ void run(boost::posix_time::time_duration & timeout);
+ bool detected_conflict(void) const { return detected_conflict_; }
+private:
+ void thread1fn(void);
+ void thread2fn(void);
+ void check_conflict(void);
+
+ boost::atomic<int> a_;
+ /* insert a bit of padding to push the two variables into
+ different cache lines and increase the likelihood of detecting
+ a conflict */
+ char pad_[512];
+ boost::atomic<int> b_;
+
+ boost::barrier barrier_;
+
+ int vrfya1_, vrfyb1_, vrfya2_, vrfyb2_;
+
+ boost::atomic<bool> terminate_threads_;
+ boost::atomic<int> termination_consensus_;
+
+ bool detected_conflict_;
+ boost::mutex m_;
+ boost::condition_variable c_;
+};
+
+template<boost::memory_order store_order, boost::memory_order load_order>
+total_store_order_test<store_order, load_order>::total_store_order_test(void)
+ : a_(0), b_(0), barrier_(2),
+ terminate_threads_(false), termination_consensus_(0),
+ detected_conflict_(false)
+{
+}
+
+template<boost::memory_order store_order, boost::memory_order load_order>
+void
+total_store_order_test<store_order, load_order>::run(boost::posix_time::time_duration & timeout)
+{
+ boost::system_time start = boost::get_system_time();
+ boost::system_time end = start + timeout;
+
+ boost::thread t1(boost::bind(&total_store_order_test::thread1fn, this));
+ boost::thread t2(boost::bind(&total_store_order_test::thread2fn, this));
+
+ {
+ boost::mutex::scoped_lock guard(m_);
+ while (boost::get_system_time() < end && !detected_conflict_)
+ c_.timed_wait(guard, end);
+ }
+
+ terminate_threads_.store(true, boost::memory_order_relaxed);
+
+ t2.join();
+ t1.join();
+
+ boost::posix_time::time_duration duration = boost::get_system_time() - start;
+ if (duration < timeout)
+ timeout = duration;
+}
+
+volatile int backoff_dummy;
+
+template<boost::memory_order store_order, boost::memory_order load_order>
+void
+total_store_order_test<store_order, load_order>::thread1fn(void)
+{
+ for (;;) {
+ a_.store(1, store_order);
+ int a = a_.load(load_order);
+ int b = b_.load(load_order);
+
+ barrier_.wait();
+
+ vrfya1_ = a;
+ vrfyb1_ = b;
+
+ barrier_.wait();
+
+ check_conflict();
+
+ /* both threads synchronize via barriers, so either
+ both threads must exit here, or they must both do
+ another round, otherwise one of them will wait forever */
+ if (terminate_threads_.load(boost::memory_order_relaxed)) for (;;) {
+ int tmp = termination_consensus_.fetch_or(1, boost::memory_order_relaxed);
+
+ if (tmp == 3)
+ return;
+ if (tmp & 4)
+ break;
+ }
+
+ termination_consensus_.fetch_xor(4, boost::memory_order_relaxed);
+
+ unsigned int delay = rand() % 10000;
+ a_.store(0, boost::memory_order_relaxed);
+
+ barrier_.wait();
+
+ while(delay--) { backoff_dummy = delay; }
+ }
+}
+
+template<boost::memory_order store_order, boost::memory_order load_order>
+void
+total_store_order_test<store_order, load_order>::thread2fn(void)
+{
+ for (;;) {
+ b_.store(1, store_order);
+ int b = b_.load(load_order);
+ int a = a_.load(load_order);
+
+ barrier_.wait();
+
+ vrfya2_ = a;
+ vrfyb2_ = b;
+
+ barrier_.wait();
+
+ check_conflict();
+
+ /* both threads synchronize via barriers, so either
+ both threads must exit here, or they must both do
+ another round, otherwise one of them will wait forever */
+ if (terminate_threads_.load(boost::memory_order_relaxed)) for (;;) {
+ int tmp = termination_consensus_.fetch_or(2, boost::memory_order_relaxed);
+
+ if (tmp == 3)
+ return;
+ if (tmp & 4)
+ break;
+ }
+
+ termination_consensus_.fetch_xor(4, boost::memory_order_relaxed);
+
+
+ unsigned int delay = rand() % 10000;
+ b_.store(0, boost::memory_order_relaxed);
+
+ barrier_.wait();
+
+ while(delay--) { backoff_dummy = delay; }
+ }
+}
+
+template<boost::memory_order store_order, boost::memory_order load_order>
+void
+total_store_order_test<store_order, load_order>::check_conflict(void)
+{
+ if (vrfyb1_ == 0 && vrfya2_ == 0) {
+ boost::mutex::scoped_lock guard(m_);
+ detected_conflict_ = true;
+ terminate_threads_.store(true, boost::memory_order_relaxed);
+ c_.notify_all();
+ }
+}
+
+void
+test_seq_cst(void)
+{
+ double sum = 0.0;
+
+ /* take 10 samples */
+ for (size_t n = 0; n < 10; n++) {
+ boost::posix_time::time_duration timeout(0, 0, 10);
+
+ total_store_order_test<boost::memory_order_relaxed, boost::memory_order_relaxed> test;
+ test.run(timeout);
+ if (!test.detected_conflict()) {
+ BOOST_WARN_MESSAGE(false, "Failed to detect seq_cst violation with order=relaxed -- intrinsic ordering too strong for this test");
+ return;
+ }
+
+ std::cout << "seq_cst violation with order=relaxed after " << boost::posix_time::to_simple_string(timeout) << "\n";
+
+ sum = sum + timeout.total_microseconds();
+ }
+
+ /* determine maximum likelihood estimate for average time between
+ race observations */
+ double avg_race_time_mle = (sum / 10);
+
+ /* 0.995 upper confidence bound on the mean (7.44 = 0.005 quantile of chi-square with 2*10 = 20 degrees of freedom) */
+ double avg_race_time_995 = avg_race_time_mle * 2 * 10 / 7.44;
+
+ /* 5.298 = 0.995 quantile of exponential distribution */
+ boost::posix_time::time_duration timeout = boost::posix_time::microseconds((long)(5.298 * avg_race_time_995));
+
+ std::cout << "run seq_cst for " << boost::posix_time::to_simple_string(timeout) << "\n";
+
+ total_store_order_test<boost::memory_order_seq_cst, boost::memory_order_relaxed> test;
+ test.run(timeout);
+
+ BOOST_CHECK_MESSAGE(!test.detected_conflict(), "sequential consistency");
+}
+
+int test_main(int, char *[])
+{
+ test_seq_cst();
+
+ return 0;
+}
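
(The store-buffering pattern that ordering.cpp races on can be written down in a few lines. This
is only an illustrative sketch -- a single run, no statistics or termination handling -- of the
guarantee the test checks: under seq_cst the two loads can never both observe 0, while under
memory_order_relaxed they can.)

    #include <boost/atomic.hpp>
    #include <boost/thread/thread.hpp>

    boost::atomic<int> x(0), y(0);
    int r1, r2;

    void thread1() { x.store(1, boost::memory_order_seq_cst); r1 = y.load(boost::memory_order_seq_cst); }
    void thread2() { y.store(1, boost::memory_order_seq_cst); r2 = x.load(boost::memory_order_seq_cst); }

    int main()
    {
        boost::thread a(thread1), b(thread2);
        a.join(); b.join();
        // with seq_cst at least one load must see the other thread's store
        return (r1 + r2 > 0) ? 0 : 1;
    }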

Modified: branches/release/libs/libraries.htm
==============================================================================
--- branches/release/libs/libraries.htm (original)
+++ branches/release/libs/libraries.htm 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -94,6 +94,7 @@
         with constant or generated data has never been
         easier, from Thorsten Ottosen.
         </li>
+ <li>atomic - C++11-style atomic&lt;&gt;, from Helge Bahmann, maintained by Tim Blechmann</li>
     <li>bimap - Bidirectional maps, from Matias Capeletto.
         </li>
     <li>bind and mem_fn - Generalized binders for function/object/pointers and member functions, from Peter
@@ -194,6 +195,7 @@
     handling tools for C++, from Artyom Beilis</li>
     <li>lexical_cast -&nbsp; General literal text conversions, such as an <code>int</code> represented as
     a <code>string</code>, or vice-versa, from Kevlin Henney.</li>
+ <li>lockfree - Lockfree data structures, from Tim Blechmann</li>
     <li>math - Several contributions in the
     domain of mathematics, from various authors.</li>
     <li>math/complex number algorithms -
@@ -400,10 +402,12 @@
     <li>asio - Portable networking and other low-level
         I/O, including sockets, timers, hostname resolution, socket iostreams, serial
         ports, file descriptors and Windows HANDLEs, from Chris Kohlhoff.</li>
+ <li>atomic - C++11-style atomic&lt;&gt;, from Helge Bahmann, maintained by Tim Blechmann</li>
     <li>context - Context switching library, from Oliver Kowalke</li>
     <li>coroutine - Coroutine library, from Oliver Kowalke</li>
     <li>interprocess - Shared memory, memory mapped files,
     process-shared mutexes, condition variables, containers and allocators, from Ion Gazta&ntilde;aga</li>
+ <li>lockfree - Lockfree data structures, from Tim Blechmann</li>
     <li>MPI - Message Passing Interface library, for use in distributed-memory parallel application programming, from Douglas Gregor and Matthias Troyer.</li>
     <li>thread - Portable C++
       multi-threading, from William Kempf.</li>

Added: branches/release/libs/lockfree/doc/Jamfile.v2
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/doc/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,50 @@
+# Copyright 2010 Tim Blechmann
+# Distributed under the Boost Software License, Version 1.0. (See accompanying
+# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+
+import doxygen ;
+import quickbook ;
+
+doxygen autodoc
+ :
+ [ glob ../../../boost/lockfree/*.hpp ]
+ :
+ #<doxygen:param>EXTRACT_ALL=YES
+ <doxygen:param>"PREDEFINED=\"BOOST_DOXYGEN_INVOKED\" \\
+ \"BOOST_DEDUCED_TYPENAME=typename\" \\
+ \"BOOST_HAS_RVALUE_REFS\" \\
+ "
+ <doxygen:param>HIDE_UNDOC_MEMBERS=YES
+ <doxygen:param>HIDE_UNDOC_CLASSES=YES
+ <doxygen:param>INLINE_INHERITED_MEMB=YES
+ <doxygen:param>EXTRACT_PRIVATE=NO
+ <doxygen:param>ENABLE_PREPROCESSING=YES
+ <doxygen:param>MACRO_EXPANSION=YES
+ <doxygen:param>EXPAND_ONLY_PREDEF=YES
+ <doxygen:param>SEARCH_INCLUDES=YES
+ <doxygen:param>INCLUDE_PATH=$(BOOST_ROOT)
+ <doxygen:param>EXAMPLE_PATH=$(BOOST_ROOT)/libs/lockfree/examples
+ <doxygen:param>BRIEF_MEMBER_DESC=YES
+ <doxygen:param>REPEAT_BRIEF=YES
+ <doxygen:param>MULTILINE_CPP_IS_BRIEF=YES
+ ;
+
+xml lockfree : lockfree.qbk : ;
+
+boostbook standalone
+ : lockfree
+ : <xsl:param>html.stylesheet=../boostbook.css
+ <xsl:param>boost.root=../../../..
+ <xsl:param>boost.libraries=../../../libraries.htm
+ <xsl:param>toc.max.depth=2
+ <xsl:param>toc.section.depth=2
+ <dependency>autodoc
+ <format>pdf:<xsl:param>boost.url.prefix=http://www.boost.org/doc/libs/release/libs/lockfree/doc/html
+ ;
+
+install css : [ glob $(BOOST_ROOT)/doc/src/*.css ]
+ : <location>html ;
+install images : [ glob $(BOOST_ROOT)/doc/src/images/*.png ]
+ : <location>html/images ;
+explicit css ;
+explicit images ;

Added: branches/release/libs/lockfree/doc/lockfree.qbk
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/doc/lockfree.qbk 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,295 @@
+[library Boost.Lockfree
+ [quickbook 1.4]
+ [authors [Blechmann, Tim]]
+ [copyright 2008-2011 Tim Blechmann]
+ [category algorithms]
+ [purpose
+ lockfree concurrent data structures
+ ]
+ [id lockfree]
+ [dirname lockfree]
+ [license
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ [@http://www.boost.org/LICENSE_1_0.txt])
+ ]
+]
+
+[c++]
+
+
+[/ Images ]
+
+[def _note_ [$images/note.png]]
+[def _alert_ [$images/caution.png]]
+[def _detail_ [$images/note.png]]
+[def _tip_ [$images/tip.png]]
+
+[/ Links ]
+
+[def _lockfree_ [^boost.lockfree]]
+
+[section Introduction & Motivation]
+
+[h2 Introduction & Terminology]
+
+The term *non-blocking* denotes concurrent data structures that do not use traditional synchronization primitives like
+locks to ensure thread-safety. Maurice Herlihy and Nir Shavit (see [@http://books.google.com/books?id=pFSwuqtJgxYC
+"The Art of Multiprocessor Programming"]) distinguish between three types of non-blocking data structures, each having different
+properties:
+
+* data structures are *wait-free* if every concurrent operation is guaranteed to finish in a finite number of
+ steps. It is therefore possible to give worst-case guarantees for the number of steps an operation takes.
+
+* data structures are *lock-free* if some concurrent operations are guaranteed to finish in a finite number of
+ steps. While it is in theory possible that some operations never make any progress, it is very unlikely to happen in
+ practical applications.
+
+* data structures are *obstruction-free* if a concurrent operation is guaranteed to finish in a finite number of
+ steps, unless another concurrent operation interferes.
+
+
+Some data structures can only be implemented in a lock-free manner if they are used under certain restrictions. The
+relevant aspects for the implementation of _lockfree_ are the number of producer and consumer threads. *Single-producer*
+(*sp*) or *multiple-producer* (*mp*) means that only a single thread or multiple concurrent threads are allowed to add
+data to a data structure. *Single-consumer* (*sc*) or *multiple-consumer* (*mc*) denotes the equivalent for the removal
+of data from the data structure.
+
+
+[h2 Properties of Non-Blocking Data Structures]
+
+Non-blocking data structures do not rely on locks and mutexes to ensure thread-safety. The synchronization is done completely in
+user-space without any direct interaction with the operating system [footnote Spinlocks do not
+directly interact with the operating system either. However, it is possible that the owning thread is preempted by the
+operating system, which violates the lock-free property.]. This implies that they are not prone to issues like priority
+inversion (a high-priority thread being forced to wait for a low-priority thread that holds a resource it needs).
+
+Instead of relying on locks, non-blocking data structures require *atomic operations* (specific CPU instructions executed
+without interruption). This means that any thread either sees the state before or after the operation, but no
+intermediate state can be observed. Not all hardware supports the same set of atomic instructions. If an operation is not
+available in hardware, it can be emulated in software using locks. However, this has the obvious drawback of losing the
+lock-free property.
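+
+As an illustration, a typical atomic read-modify-write step looks as follows (a minimal sketch using
+=boost::atomic=; the retry loop is the usual pattern around =compare_exchange_weak=):
+
+    boost::atomic<int> counter(0);
+
+    void increment_if_below(int limit)
+    {
+        int expected = counter.load();
+        // retry until the update succeeds or the limit is reached; other threads
+        // only ever observe the counter value before or after a successful update
+        while (expected < limit &&
+               !counter.compare_exchange_weak(expected, expected + 1))
+            ;
+    }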
+
+
+[h2 Performance of Non-Blocking Data Structures]
+
+When discussing the performance of non-blocking data structures, one has to distinguish between *amortized* and
+*worst-case* costs. The definitions of 'lock-free' and 'wait-free' only bound the worst-case cost of an operation. Therefore
+lock-free data structures are not necessarily the best choice for every use case. In order to maximise the throughput of an
+application one should consider high-performance concurrent data structures [footnote
+[@http://threadingbuildingblocks.org/ Intel's Thread Building Blocks library] provides many efficient concurrent data structures,
+which are not necessarily lock-free.].
+
+Lock-free data structures will be a better choice in order to optimize the latency of a system or to avoid priority inversion,
+which may be necessary in real-time applications. In general we advise considering whether lock-free data structures are
+necessary or whether conventional concurrent data structures are sufficient. In any case we advise performing benchmarks with
+different data structures for a specific workload.
+
+
+[h2 Sources of Blocking Behavior]
+
+Apart from locks and mutexes (which we are not using in _lockfree_ anyway), there are three other aspects that could violate
+lock-freedom:
+
+[variablelist
+ [[Atomic Operations]
+ [Some architectures do not provide the necessary atomic operations natively in hardware. In that
+ case, they are emulated in software using spinlocks, which is itself blocking.
+ ]
+ ]
+
+ [[Memory Allocations]
+ [Allocating memory from the operating system is not lock-free. This makes it impossible to implement true
+ dynamically-sized non-blocking data structures. The node-based data structures of _lockfree_ use a memory pool to allocate the
+ internal nodes. If this memory pool is exhausted, memory for new nodes has to be allocated from the operating system. However
+ all data structures of _lockfree_ can be configured to avoid memory allocations (instead the specific calls will fail).
+ This is especially useful for real-time systems that cannot tolerate blocking memory allocations.
+ ]
+ ]
+
+ [[Exception Handling]
+ [The C++ exception handling does not give any guarantees about its real-time behavior. We therefore do
+ not encourage the use of exceptions and exception handling in lock-free code.]
+ ]
+]
+
+[h2 Data Structures]
+
+_lockfree_ implements three lock-free data structures:
+
+[variablelist
+ [[[classref boost::lockfree::queue]]
+ [a lock-free multi-producer/multi-consumer queue]
+ ]
+
+ [[[classref boost::lockfree::stack]]
+ [a lock-free multi-producer/multi-consumer stack]
+ ]
+
+ [[[classref boost::lockfree::spsc_queue]]
+ [a wait-free single-producer/single-consumer queue (commonly known as ringbuffer)]
+ ]
+]
+
+[h3 Data Structure Configuration]
+
+The data structures can be configured with [@boost:/libs/parameter/doc/html/index.html Boost.Parameter]-style templates:
+
+[variablelist
+ [[[classref boost::lockfree::fixed_sized]]
+ [Configures the data structure as *fixed sized*. The internal nodes are stored inside an array and they are addressed by
+ array indexing. This limits the possible size of the queue to the number of elements that can be addressed by the index
+ type (usually 2**16-2), but on platforms that lack double-width compare-and-exchange instructions, this is the best way
+ to achieve lock-freedom.
+ ]
+ ]
+
+ [[[classref boost::lockfree::capacity]]
+ [Sets the *capacity* of a data structure at compile-time. This implies that the data structure is fixed-sized
+ (a short configuration example follows this list).
+ ]
+ ]
+
+ [[[classref boost::lockfree::allocator]]
+ [Defines the allocator. _lockfree_ supports stateful allocators and is compatible with [@boost:/libs/interprocess/index.html Boost.Interprocess] allocators.]
+ ]
+]
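+
+As an illustration of these options (a minimal sketch; the policies can be combined freely and the
+exact choice depends on the application), a queue can be declared as follows:
+
+    #include <boost/lockfree/queue.hpp>
+    #include <boost/lockfree/policies.hpp>
+
+    // compile-time capacity: fixed-sized, array-based implementation
+    boost::lockfree::queue<int, boost::lockfree::capacity<1024> > compile_time_sized_queue;
+
+    // fixed-sized, with the capacity passed to the constructor at run-time
+    boost::lockfree::queue<int, boost::lockfree::fixed_sized<true> > run_time_sized_queue(1024);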
+
+
+[endsect]
+
+[section Examples]
+
+[h2 Queue]
+
+The [classref boost::lockfree::queue boost::lockfree::queue] class implements a multi-writer/multi-reader queue. The
+following example shows how integer values are produced and consumed by 4 threads each:
+
+[import ../examples/queue.cpp]
+[queue_example]
+
+The program output is:
+
+[pre
+produced 40000000 objects.
+consumed 40000000 objects.
+]
+
+
+[h2 Stack]
+
+The [classref boost::lockfree::stack boost::lockfree::stack] class implements a multi-writer/multi-reader stack. The
+following example shows how integer values are produced and consumed by 4 threads each:
+
+[import ../examples/stack.cpp]
+[stack_example]
+
+
+The program output is:
+
+[pre
+produced 4000000 objects.
+consumed 4000000 objects.
+]
+
+[h2 Waitfree Single-Producer/Single-Consumer Queue]
+
+The [classref boost::lockfree::spsc_queue boost::lockfree::spsc_queue] class implements a wait-free single-producer/single-consumer queue. The
+following example shows how integer values are produced and consumed by 2 separate threads:
+
+[import ../examples/spsc_queue.cpp]
+[spsc_queue_example]
+
+
+The program output is:
+
+[pre
+produced 10000000 objects.
+consumed 10000000 objects.
+]
+
+[endsect]
+
+
+[section Rationale]
+
+[section Data Structures]
+
+The data structures are implementations of well-known algorithms. The queue is based on
+[@http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3574 Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms by Michael Scott and Maged Michael],
+the stack is based on [@http://books.google.com/books?id=YQg3HAAACAAJ Systems programming: coping with parallelism by R. K. Treiber]
+and the spsc_queue is considered 'folklore'; it is implemented in several open-source projects including the Linux kernel. All
+data structures are discussed in detail in [@http://books.google.com/books?id=pFSwuqtJgxYC "The Art of Multiprocessor Programming" by Herlihy & Shavit].
+
+[endsect]
+
+[section Memory Management]
+
+The lock-free [classref boost::lockfree::queue] and [classref boost::lockfree::stack] classes are node-based data structures,
+based on a linked list. Memory management of lock-free data structures is a non-trivial problem, because we need to ensure that
+no thread frees an internal node while another thread is still using it. _lockfree_ uses a simple approach: it does not return any
+memory to the operating system. Instead, freed nodes are kept on a *free-list* so that they can be reused later. This is done for
+two reasons: first, depending on the implementation of the memory allocator, freeing the memory may block (so the implementation
+would not be lock-free anymore), and second, most memory reclamation algorithms are patented.
+
+[endsect]
+
+[section ABA Prevention]
+
+The ABA problem is a common problem when implementing lock-free data structures. It occurs when updating an atomic
+variable using a =compare_exchange= operation: thread 1 reads the value A, computes a new value, say C, and uses
+=compare_exchange= to write C only if the variable still contains A. This is a problem if, in the meantime,
+thread 2 changes the value from A to B and back to A, because thread 1 does not observe this change of state. The common way to
+avoid the ABA problem is to associate a version counter with the value and change both atomically.
+
+_lockfree_ uses a =tagged_ptr= helper class which associates a pointer with an integer tag. This usually requires a double-width
+=compare_exchange=, which is not available on all platforms. IA32 did not provide the =cmpxchg8b= opcode before the Pentium
+processor, and it is also lacking on many RISC architectures like PPC. Early x86-64 processors also did not provide a =cmpxchg16b=
+instruction.
+On 64-bit platforms one can work around this issue, because often the full 64-bit address space is not used. On x86_64, for example,
+only 48 bits are used for the address, so the remaining 16 bits can be used for the ABA prevention tag. For details please consult
+the implementation of the =boost::lockfree::detail::tagged_ptr= class.
+
+For lock-free operations on 32-bit platforms without double-width =compare_exchange=, we support a third approach: by using a
+fixed-sized array to store the internal nodes we can avoid 32-bit pointers entirely; 16-bit indices into the array
+are sufficient. However, this is only possible for fixed-sized data structures that have an upper bound on the number of internal nodes.
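+
+As an illustration, the idea behind the tag can be sketched as follows (a simplified model, not the
+actual =tagged_ptr= implementation):
+
+    template <typename T>
+    struct tagged_node_ptr
+    {
+        T * ptr;          // node address
+        unsigned int tag; // incremented on every update (16 bits suffice on x86_64)
+    };
+
+    // Both members are compared and updated in one double-width (or packed 64-bit)
+    // compare_exchange. If a node is popped, reused and pushed back at the same address,
+    // the tag has changed in the meantime, so a stale compare_exchange correctly fails.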
+
+[endsect]
+
+[section Interprocess Support]
+
+The _lockfree_ data structures have basic support for [@boost:/libs/interprocess/index.html Boost.Interprocess]. The only
+problem is the lock-based emulation of atomic operations, which in the current implementation is not guaranteed to be interprocess-safe.
+
+[endsect]
+
+[endsect]
+
+[xinclude autodoc.xml]
+
+[section Appendices]
+
+[section Supported Platforms & Compilers]
+
+_lockfree_ has been tested on the following platforms:
+
+* g++ 4.4, 4.5 and 4.6, linux, x86 & x86_64
+* clang++ 3.0, linux, x86 & x86_64
+
+[endsect]
+
+[section Future Developments]
+
+* More data structures (set, hash table, deque)
+* Backoff schemes (exponential backoff or elimination)
+
+[endsect]
+
+[section References]
+
+# [@http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3574 Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms by Michael Scott and Maged Michael],
+In Symposium on Principles of Distributed Computing, pages 267–275, 1996.
+# [@http://books.google.com/books?id=pFSwuqtJgxYC M. Herlihy & Nir Shavit. The Art of Multiprocessor Programming], Morgan Kaufmann Publishers, 2008
+
+[endsect]
+
+[endsect]

Added: branches/release/libs/lockfree/examples/Jamfile.v2
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/examples/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,13 @@
+# (C) Copyright 2009: Tim Blechmann
+# Distributed under the Boost Software License, Version 1.0.
+# (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+
+project boost/lockfree/example
+ : requirements
+ <library>../../thread/build//boost_thread/
+ <library>../../atomic/build//boost_atomic
+ ;
+
+exe queue : queue.cpp ;
+exe stack : stack.cpp ;
+exe spsc_queue : spsc_queue.cpp ;

Added: branches/release/libs/lockfree/examples/queue.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/examples/queue.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,69 @@
+// Copyright (C) 2009 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+//[queue_example
+#include <boost/thread/thread.hpp>
+#include <boost/lockfree/queue.hpp>
+#include <iostream>
+
+#include <boost/atomic.hpp>
+
+boost::atomic_int producer_count(0);
+boost::atomic_int consumer_count(0);
+
+boost::lockfree::queue<int> queue(128);
+
+const int iterations = 10000000;
+const int producer_thread_count = 4;
+const int consumer_thread_count = 4;
+
+void producer(void)
+{
+ for (int i = 0; i != iterations; ++i) {
+ int value = ++producer_count;
+ while (!queue.push(value))
+ ;
+ }
+}
+
+boost::atomic<bool> done (false);
+void consumer(void)
+{
+ int value;
+ while (!done) {
+ while (queue.pop(value))
+ ++consumer_count;
+ }
+
+ while (queue.pop(value))
+ ++consumer_count;
+}
+
+int main(int argc, char* argv[])
+{
+ using namespace std;
+ cout << "boost::lockfree::queue is ";
+ if (!queue.is_lock_free())
+ cout << "not ";
+ cout << "lockfree" << endl;
+
+ boost::thread_group producer_threads, consumer_threads;
+
+ for (int i = 0; i != producer_thread_count; ++i)
+ producer_threads.create_thread(producer);
+
+ for (int i = 0; i != consumer_thread_count; ++i)
+ consumer_threads.create_thread(consumer);
+
+ producer_threads.join_all();
+ done = true;
+
+ consumer_threads.join_all();
+
+ cout << "produced " << producer_count << " objects." << endl;
+ cout << "consumed " << consumer_count << " objects." << endl;
+}
+//]

Added: branches/release/libs/lockfree/examples/spsc_queue.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/examples/spsc_queue.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,62 @@
+// Copyright (C) 2009 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+//[spsc_queue_example
+#include <boost/thread/thread.hpp>
+#include <boost/lockfree/spsc_queue.hpp>
+#include <iostream>
+
+#include <boost/atomic.hpp>
+
+int producer_count = 0;
+boost::atomic_int consumer_count (0);
+
+boost::lockfree::spsc_queue<int, boost::lockfree::capacity<1024> > spsc_queue;
+
+const int iterations = 10000000;
+
+void producer(void)
+{
+ for (int i = 0; i != iterations; ++i) {
+ int value = ++producer_count;
+ while (!spsc_queue.push(value))
+ ;
+ }
+}
+
+boost::atomic<bool> done (false);
+
+void consumer(void)
+{
+ int value;
+ while (!done) {
+ while (spsc_queue.pop(value))
+ ++consumer_count;
+ }
+
+ while (spsc_queue.pop(value))
+ ++consumer_count;
+}
+
+int main(int argc, char* argv[])
+{
+ using namespace std;
+ cout << "boost::lockfree::queue is ";
+ if (!spsc_queue.is_lock_free())
+ cout << "not ";
+ cout << "lockfree" << endl;
+
+ boost::thread producer_thread(producer);
+ boost::thread consumer_thread(consumer);
+
+ producer_thread.join();
+ done = true;
+ consumer_thread.join();
+
+ cout << "produced " << producer_count << " objects." << endl;
+ cout << "consumed " << consumer_count << " objects." << endl;
+}
+//]

Added: branches/release/libs/lockfree/examples/stack.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/examples/stack.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,70 @@
+// Copyright (C) 2009 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+//[stack_example
+#include <boost/thread/thread.hpp>
+#include <boost/lockfree/stack.hpp>
+#include <iostream>
+
+#include <boost/atomic.hpp>
+
+boost::atomic_int producer_count(0);
+boost::atomic_int consumer_count(0);
+
+boost::lockfree::stack<int> stack(128);
+
+const int iterations = 1000000;
+const int producer_thread_count = 4;
+const int consumer_thread_count = 4;
+
+void producer(void)
+{
+ for (int i = 0; i != iterations; ++i) {
+ int value = ++producer_count;
+ while (!stack.push(value))
+ ;
+ }
+}
+
+boost::atomic<bool> done (false);
+
+void consumer(void)
+{
+ int value;
+ while (!done) {
+ while (stack.pop(value))
+ ++consumer_count;
+ }
+
+ while (stack.pop(value))
+ ++consumer_count;
+}
+
+int main(int argc, char* argv[])
+{
+ using namespace std;
+ cout << "boost::lockfree::stack is ";
+ if (!stack.is_lock_free())
+ cout << "not ";
+ cout << "lockfree" << endl;
+
+ boost::thread_group producer_threads, consumer_threads;
+
+ for (int i = 0; i != producer_thread_count; ++i)
+ producer_threads.create_thread(producer);
+
+ for (int i = 0; i != consumer_thread_count; ++i)
+ consumer_threads.create_thread(consumer);
+
+ producer_threads.join_all();
+ done = true;
+
+ consumer_threads.join_all();
+
+ cout << "produced " << producer_count << " objects." << endl;
+ cout << "consumed " << consumer_count << " objects." << endl;
+}
+//]

Added: branches/release/libs/lockfree/index.html
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/index.html 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,13 @@
+<html>
+<head>
+<meta http-equiv="refresh" content="0; URL=../../doc/html/lockfree.html">
+</head>
+<body>
+Automatic redirection failed, please go to
+../../doc/html/lockfree.html &nbsp;<hr>
+<p>&copy; Copyright Beman Dawes, 2001</p>
+<p>Distributed under the Boost Software License, Version 1.0. (See accompanying
+file LICENSE_1_0.txt or copy
+at www.boost.org/LICENSE_1_0.txt)</p>
+</body>
+</html>

Added: branches/release/libs/lockfree/test/Jamfile.v2
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,42 @@
+# (C) Copyright 2010: Tim Blechmann
+# Distributed under the Boost Software License, Version 1.0.
+# (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+
+import testing ;
+
+lib boost_unit_test_framework ;
+lib boost_thread ;
+lib boost_system ;
+
+project
+ : source-location .
+ : requirements
+ <hardcode-dll-paths>true
+ <library>../../test/build//boost_test_exec_monitor
+ <library>../../atomic/build//boost_atomic
+ ;
+
+
+rule test_all
+{
+ local all_rules = ;
+
+ for local fileb in [ glob *.cpp ]
+ {
+ all_rules += [ run $(fileb)
+ : # additional args
+ : # test-files
+ : # requirements
+ <toolset>acc:<linkflags>-lrt
+ <toolset>acc-pa_risc:<linkflags>-lrt
+ <toolset>gcc-mingw:<linkflags>"-lole32 -loleaut32 -lpsapi -ladvapi32"
+ <host-os>hpux,<toolset>gcc:<linkflags>"-Wl,+as,mpas"
+ <library>../../thread/build//boost_thread/
+ <threading>multi
+ ] ;
+ }
+
+ return $(all_rules) ;
+}
+
+test-suite lockfree : [ test_all r ] : <threading>multi ;

Added: branches/release/libs/lockfree/test/freelist_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/freelist_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,230 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// enables error checks via dummy::~dtor
+#define BOOST_LOCKFREE_FREELIST_INIT_RUNS_DTOR
+
+#include <boost/lockfree/detail/freelist.hpp>
+#include <boost/lockfree/queue.hpp>
+
+#include <boost/foreach.hpp>
+#include <boost/thread.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include <boost/foreach.hpp>
+
+#include <set>
+
+#include "test_helpers.hpp"
+
+using boost::lockfree::detail::atomic;
+
+atomic<bool> test_running(false);
+
+struct dummy
+{
+ dummy(void)
+ {
+ if (test_running.load(boost::lockfree::detail::memory_order_relaxed))
+ assert(allocated == 0);
+ allocated = 1;
+ }
+
+ ~dummy(void)
+ {
+ if (test_running.load(boost::lockfree::detail::memory_order_relaxed))
+ assert(allocated == 1);
+ allocated = 0;
+ }
+
+ size_t padding[2]; // used for the freelist node
+ int allocated;
+};
+
+template <typename freelist_type,
+ bool threadsafe,
+ bool bounded>
+void run_test(void)
+{
+ freelist_type fl(std::allocator<int>(), 8);
+
+ std::set<dummy*> nodes;
+
+ dummy d;
+ if (bounded)
+ test_running.store(true);
+
+ for (int i = 0; i != 4; ++i) {
+ dummy * allocated = fl.template construct<threadsafe, bounded>();
+ BOOST_REQUIRE(nodes.find(allocated) == nodes.end());
+ nodes.insert(allocated);
+ }
+
+ BOOST_FOREACH(dummy * d, nodes)
+ fl.template destruct<threadsafe>(d);
+
+ nodes.clear();
+ for (int i = 0; i != 4; ++i)
+ nodes.insert(fl.template construct<threadsafe, bounded>());
+
+ BOOST_FOREACH(dummy * d, nodes)
+ fl.template destruct<threadsafe>(d);
+
+ for (int i = 0; i != 4; ++i)
+ nodes.insert(fl.template construct<threadsafe, bounded>());
+
+ if (bounded)
+ test_running.store(false);
+}
+
+template <bool bounded>
+void run_tests(void)
+{
+ run_test<boost::lockfree::detail::freelist_stack<dummy>, true, bounded>();
+ run_test<boost::lockfree::detail::freelist_stack<dummy>, false, bounded>();
+ run_test<boost::lockfree::detail::fixed_size_freelist<dummy>, true, bounded>();
+}
+
+BOOST_AUTO_TEST_CASE( freelist_tests )
+{
+ run_tests<false>();
+ run_tests<true>();
+}
+
+template <typename freelist_type, bool threadsafe>
+void oom_test(void)
+{
+ const bool bounded = true;
+ freelist_type fl(std::allocator<int>(), 8);
+
+ for (int i = 0; i != 8; ++i)
+ fl.template construct<threadsafe, bounded>();
+
+ dummy * allocated = fl.template construct<threadsafe, bounded>();
+ BOOST_REQUIRE(allocated == NULL);
+}
+
+BOOST_AUTO_TEST_CASE( oom_tests )
+{
+ oom_test<boost::lockfree::detail::freelist_stack<dummy>, true >();
+ oom_test<boost::lockfree::detail::freelist_stack<dummy>, false >();
+ oom_test<boost::lockfree::detail::fixed_size_freelist<dummy>, true >();
+ oom_test<boost::lockfree::detail::fixed_size_freelist<dummy>, false >();
+}
+
+
+template <typename freelist_type, bool bounded>
+struct freelist_tester
+{
+ static const int size = 128;
+ static const int thread_count = 4;
+#ifndef BOOST_LOCKFREE_STRESS_TEST
+ static const int operations_per_thread = 1000;
+#else
+ static const int operations_per_thread = 100000;
+#endif
+
+ freelist_type fl;
+ boost::lockfree::queue<dummy*> allocated_nodes;
+
+ atomic<bool> running;
+ static_hashed_set<dummy*, 1<<16 > working_set;
+
+
+ freelist_tester(void):
+ fl(std::allocator<int>(), size), allocated_nodes(256)
+ {}
+
+ void run()
+ {
+ running = true;
+
+ if (bounded)
+ test_running.store(true);
+ boost::thread_group alloc_threads;
+ boost::thread_group dealloc_threads;
+
+ for (int i = 0; i != thread_count; ++i)
+ dealloc_threads.create_thread(boost::bind(&freelist_tester::deallocate, this));
+
+ for (int i = 0; i != thread_count; ++i)
+ alloc_threads.create_thread(boost::bind(&freelist_tester::allocate, this));
+ alloc_threads.join_all();
+ test_running.store(false);
+ running = false;
+ dealloc_threads.join_all();
+ }
+
+ void allocate(void)
+ {
+ for (long i = 0; i != operations_per_thread; ++i) {
+ for (;;) {
+ dummy * node = fl.template construct<true, bounded>();
+ if (node) {
+ bool success = working_set.insert(node);
+ assert(success);
+ allocated_nodes.push(node);
+ break;
+ }
+ }
+ }
+ }
+
+ void deallocate(void)
+ {
+ for (;;) {
+ dummy * node;
+ if (allocated_nodes.pop(node)) {
+ bool success = working_set.erase(node);
+ assert(success);
+ fl.template destruct<true>(node);
+ }
+
+ if (running.load() == false)
+ break;
+ }
+
+ dummy * node;
+ while (allocated_nodes.pop(node)) {
+ bool success = working_set.erase(node);
+ assert(success);
+ fl.template destruct<true>(node);
+ }
+ }
+};
+
+template <typename Tester>
+void run_tester()
+{
+ boost::scoped_ptr<Tester> tester (new Tester);
+ tester->run();
+}
+
+
+BOOST_AUTO_TEST_CASE( unbounded_freelist_test )
+{
+ typedef freelist_tester<boost::lockfree::detail::freelist_stack<dummy>, false > test_type;
+ run_tester<test_type>();
+}
+
+
+BOOST_AUTO_TEST_CASE( bounded_freelist_test )
+{
+ typedef freelist_tester<boost::lockfree::detail::freelist_stack<dummy>, true > test_type;
+ run_tester<test_type>();
+}
+
+BOOST_AUTO_TEST_CASE( fixed_size_freelist_test )
+{
+ typedef freelist_tester<boost::lockfree::detail::fixed_size_freelist<dummy>, true > test_type;
+ run_tester<test_type>();
+}

Added: branches/release/libs/lockfree/test/queue_bounded_stress_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/queue_bounded_stress_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,25 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/queue.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include "test_common.hpp"
+
+BOOST_AUTO_TEST_CASE( queue_test_bounded )
+{
+ typedef queue_stress_tester<true> tester_type;
+ boost::scoped_ptr<tester_type> tester(new tester_type(4, 4) );
+
+ boost::lockfree::queue<long> q(128);
+ tester->run(q);
+}

Added: branches/release/libs/lockfree/test/queue_fixedsize_stress_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/queue_fixedsize_stress_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,26 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/queue.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include "test_common.hpp"
+
+
+BOOST_AUTO_TEST_CASE( queue_test_fixed_size )
+{
+ typedef queue_stress_tester<> tester_type;
+ boost::scoped_ptr<tester_type> tester(new tester_type(4, 4) );
+
+ boost::lockfree::queue<long, boost::lockfree::capacity<8> > q;
+ tester->run(q);
+}

Added: branches/release/libs/lockfree/test/queue_interprocess_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/queue_interprocess_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,57 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstdlib> //std::system
+#include <sstream>
+
+#include <boost/interprocess/managed_shared_memory.hpp>
+#include <boost/lockfree/queue.hpp>
+#include <boost/thread/thread.hpp>
+
+using namespace boost::interprocess;
+typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
+typedef boost::lockfree::queue<int,
+ boost::lockfree::allocator<ShmemAllocator>,
+ boost::lockfree::capacity<2048>
+ > queue;
+
+int main (int argc, char *argv[])
+{
+ if(argc == 1){
+ struct shm_remove
+ {
+ shm_remove() { shared_memory_object::remove("boost_queue_interprocess_test_shm"); }
+ ~shm_remove(){ shared_memory_object::remove("boost_queue_interprocess_test_shm"); }
+ } remover;
+
+ managed_shared_memory segment(create_only, "boost_queue_interprocess_test_shm", 262144);
+ ShmemAllocator alloc_inst (segment.get_segment_manager());
+
+ queue * q = segment.construct<queue>("queue")(alloc_inst);
+ for (int i = 0; i != 1024; ++i)
+ q->push(i);
+
+ std::string s(argv[0]); s += " child ";
+ if(0 != std::system(s.c_str()))
+ return 1;
+
+ while (!q->empty())
+ boost::thread::yield();
+ return 0;
+ } else {
+ managed_shared_memory segment(open_only, "boost_queue_interprocess_test_shm");
+ queue * q = segment.find<queue>("queue").first;
+
+ int from_queue;
+ for (int i = 0; i != 1024; ++i) {
+ bool success = q->pop(from_queue);
+ assert (success);
+ assert (from_queue == i);
+ }
+ segment.destroy<queue>("queue");
+ }
+ return 0;
+}
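
The interprocess test above changes only the allocator policy: queue nodes are placed in a managed_shared_memory segment instead of on the heap. A minimal in-process sketch of the same policy combination, using an ordinary std::allocator (the typedef name and sizes here are illustrative, not part of the commit):

    #include <boost/lockfree/queue.hpp>
    #include <memory>

    typedef boost::lockfree::queue<int,
        boost::lockfree::allocator<std::allocator<int> >,
        boost::lockfree::capacity<1024>
        > local_queue;

    int main(void)
    {
        std::allocator<int> alloc;       // heap allocator instead of the shared-memory one
        local_queue q(alloc);            // same constructor form as segment.construct<queue>("queue")(alloc_inst)

        for (int i = 0; i != 16; ++i)
            q.push(i);

        int out;
        while (q.pop(out))
            ;                            // drain the queue
        return 0;
    }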

Added: branches/release/libs/lockfree/test/queue_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/queue_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,125 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/queue.hpp>
+#include <boost/thread.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include <memory>
+
+using namespace boost;
+using namespace boost::lockfree;
+using namespace std;
+
+BOOST_AUTO_TEST_CASE( simple_queue_test )
+{
+ queue<int> f(64);
+
+ BOOST_WARN(f.is_lock_free());
+
+ BOOST_REQUIRE(f.empty());
+ f.push(1);
+ f.push(2);
+
+ int i1(0), i2(0);
+
+ BOOST_REQUIRE(f.pop(i1));
+ BOOST_REQUIRE_EQUAL(i1, 1);
+
+ BOOST_REQUIRE(f.pop(i2));
+ BOOST_REQUIRE_EQUAL(i2, 2);
+ BOOST_REQUIRE(f.empty());
+}
+
+BOOST_AUTO_TEST_CASE( simple_queue_test_capacity )
+{
+ queue<int, capacity<64> > f;
+
+ BOOST_WARN(f.is_lock_free());
+
+ BOOST_REQUIRE(f.empty());
+ f.push(1);
+ f.push(2);
+
+ int i1(0), i2(0);
+
+ BOOST_REQUIRE(f.pop(i1));
+ BOOST_REQUIRE_EQUAL(i1, 1);
+
+ BOOST_REQUIRE(f.pop(i2));
+ BOOST_REQUIRE_EQUAL(i2, 2);
+ BOOST_REQUIRE(f.empty());
+}
+
+
+BOOST_AUTO_TEST_CASE( unsafe_queue_test )
+{
+ queue<int> f(64);
+
+ BOOST_WARN(f.is_lock_free());
+ BOOST_REQUIRE(f.empty());
+
+ int i1(0), i2(0);
+
+ f.unsynchronized_push(1);
+ f.unsynchronized_push(2);
+
+ BOOST_REQUIRE(f.unsynchronized_pop(i1));
+ BOOST_REQUIRE_EQUAL(i1, 1);
+
+ BOOST_REQUIRE(f.unsynchronized_pop(i2));
+ BOOST_REQUIRE_EQUAL(i2, 2);
+ BOOST_REQUIRE(f.empty());
+}
+
+
+BOOST_AUTO_TEST_CASE( queue_convert_pop_test )
+{
+ queue<int*> f(128);
+ BOOST_REQUIRE(f.empty());
+ f.push(new int(1));
+ f.push(new int(2));
+ f.push(new int(3));
+ f.push(new int(4));
+
+ {
+ int * i1;
+
+ BOOST_REQUIRE(f.pop(i1));
+ BOOST_REQUIRE_EQUAL(*i1, 1);
+ delete i1;
+ }
+
+
+ {
+ boost::shared_ptr<int> i2;
+ BOOST_REQUIRE(f.pop(i2));
+ BOOST_REQUIRE_EQUAL(*i2, 2);
+ }
+
+ {
+ auto_ptr<int> i3;
+ BOOST_REQUIRE(f.pop(i3));
+
+ BOOST_REQUIRE_EQUAL(*i3, 3);
+ }
+
+ {
+ boost::shared_ptr<int> i4;
+ BOOST_REQUIRE(f.pop(i4));
+
+ BOOST_REQUIRE_EQUAL(*i4, 4);
+ }
+
+
+ BOOST_REQUIRE(f.empty());
+}
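
The unit tests above exercise the queue from a single thread; the concurrent use they imply looks roughly like the following sketch. One producer and one consumer are assumed, thread and element counts are arbitrary, and only queue<>::push/pop/empty plus Boost.Atomic and Boost.Thread (both used elsewhere in this commit) are relied on:

    #include <boost/lockfree/queue.hpp>
    #include <boost/thread/thread.hpp>
    #include <boost/atomic.hpp>

    boost::lockfree::queue<int> work(128);   // 128 nodes preallocated, grows on demand
    boost::atomic<bool> done(false);
    boost::atomic<int> consumed(0);

    void producer(void)
    {
        for (int i = 0; i != 1000; ++i)
            while (!work.push(i))
                /* retry; push only fails if no node can be allocated */;
    }

    void consumer(void)
    {
        int value;
        while (!done.load() || !work.empty())
            while (work.pop(value))
                ++consumed;
    }

    int main(void)
    {
        boost::thread reader(consumer);
        boost::thread writer(producer);

        writer.join();
        done.store(true);
        reader.join();

        return consumed.load() == 1000 ? 0 : 1;
    }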

Added: branches/release/libs/lockfree/test/queue_unbounded_stress_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/queue_unbounded_stress_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,25 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/queue.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include "test_common.hpp"
+
+BOOST_AUTO_TEST_CASE( queue_test_unbounded )
+{
+ typedef queue_stress_tester<false> tester_type;
+ boost::scoped_ptr<tester_type> tester(new tester_type(4, 4) );
+
+ boost::lockfree::queue<long> q(128);
+ tester->run(q);
+}

Added: branches/release/libs/lockfree/test/spsc_queue_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/spsc_queue_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,427 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/spsc_queue.hpp>
+
+#include <boost/thread.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include <iostream>
+#include <memory>
+
+#include "test_helpers.hpp"
+#include "test_common.hpp"
+
+using namespace boost;
+using namespace boost::lockfree;
+using namespace std;
+
+BOOST_AUTO_TEST_CASE( simple_spsc_queue_test )
+{
+ spsc_queue<int, capacity<64> > f;
+
+ BOOST_REQUIRE(f.empty());
+ f.push(1);
+ f.push(2);
+
+ int i1(0), i2(0);
+
+ BOOST_REQUIRE(f.pop(i1));
+ BOOST_REQUIRE_EQUAL(i1, 1);
+
+ BOOST_REQUIRE(f.pop(i2));
+ BOOST_REQUIRE_EQUAL(i2, 2);
+ BOOST_REQUIRE(f.empty());
+}
+
+BOOST_AUTO_TEST_CASE( simple_spsc_queue_test_compile_time_size )
+{
+ spsc_queue<int> f(64);
+
+ BOOST_REQUIRE(f.empty());
+ f.push(1);
+ f.push(2);
+
+ int i1(0), i2(0);
+
+ BOOST_REQUIRE(f.pop(i1));
+ BOOST_REQUIRE_EQUAL(i1, 1);
+
+ BOOST_REQUIRE(f.pop(i2));
+ BOOST_REQUIRE_EQUAL(i2, 2);
+ BOOST_REQUIRE(f.empty());
+}
+
+BOOST_AUTO_TEST_CASE( ranged_push_test )
+{
+ spsc_queue<int> stk(64);
+
+ int data[2] = {1, 2};
+
+ BOOST_REQUIRE_EQUAL(stk.push(data, data + 2), data + 2);
+
+ int out;
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(!stk.pop(out));
+}
+
+
+enum {
+ pointer_and_size,
+ reference_to_array,
+ iterator_pair,
+ output_iterator_
+};
+
+
+template <int EnqueueMode>
+void spsc_queue_buffer_push_return_value(void)
+{
+ const size_t xqueue_size = 64;
+ const size_t buffer_size = 100;
+ spsc_queue<int, capacity<100> > rb;
+
+ int data[xqueue_size];
+ for (size_t i = 0; i != xqueue_size; ++i)
+ data[i] = i*2;
+
+ switch (EnqueueMode) {
+ case pointer_and_size:
+ BOOST_REQUIRE_EQUAL(rb.push(data, xqueue_size), xqueue_size);
+ break;
+
+ case reference_to_array:
+ BOOST_REQUIRE_EQUAL(rb.push(data), xqueue_size);
+ break;
+
+ case iterator_pair:
+ BOOST_REQUIRE_EQUAL(rb.push(data, data + xqueue_size), data + xqueue_size);
+ break;
+
+ default:
+ assert(false);
+ }
+
+ switch (EnqueueMode) {
+ case pointer_and_size:
+ BOOST_REQUIRE_EQUAL(rb.push(data, xqueue_size), buffer_size - xqueue_size - 1);
+ break;
+
+ case reference_to_array:
+ BOOST_REQUIRE_EQUAL(rb.push(data), buffer_size - xqueue_size - 1);
+ break;
+
+ case iterator_pair:
+ BOOST_REQUIRE_EQUAL(rb.push(data, data + xqueue_size), data + buffer_size - xqueue_size - 1);
+ break;
+
+ default:
+ assert(false);
+ }
+}
+
+BOOST_AUTO_TEST_CASE( spsc_queue_buffer_push_return_value_test )
+{
+ spsc_queue_buffer_push_return_value<pointer_and_size>();
+ spsc_queue_buffer_push_return_value<reference_to_array>();
+ spsc_queue_buffer_push_return_value<iterator_pair>();
+}
+
+template <int EnqueueMode,
+ int ElementCount,
+ int BufferSize,
+ int NumberOfIterations
+ >
+void spsc_queue_buffer_push(void)
+{
+ const size_t xqueue_size = ElementCount;
+ spsc_queue<int, capacity<BufferSize> > rb;
+
+ int data[xqueue_size];
+ for (size_t i = 0; i != xqueue_size; ++i)
+ data[i] = i*2;
+
+ std::vector<int> vdata(data, data + xqueue_size);
+
+ for (int i = 0; i != NumberOfIterations; ++i) {
+ BOOST_REQUIRE(rb.empty());
+ switch (EnqueueMode) {
+ case pointer_and_size:
+ BOOST_REQUIRE_EQUAL(rb.push(data, xqueue_size), xqueue_size);
+ break;
+
+ case reference_to_array:
+ BOOST_REQUIRE_EQUAL(rb.push(data), xqueue_size);
+ break;
+
+ case iterator_pair:
+ BOOST_REQUIRE_EQUAL(rb.push(data, data + xqueue_size), data + xqueue_size);
+ break;
+
+ default:
+ assert(false);
+ }
+
+ int out[xqueue_size];
+ BOOST_REQUIRE_EQUAL(rb.pop(out, xqueue_size), xqueue_size);
+ for (size_t i = 0; i != xqueue_size; ++i)
+ BOOST_REQUIRE_EQUAL(data[i], out[i]);
+ }
+}
+
+BOOST_AUTO_TEST_CASE( spsc_queue_buffer_push_test )
+{
+ spsc_queue_buffer_push<pointer_and_size, 7, 16, 64>();
+ spsc_queue_buffer_push<reference_to_array, 7, 16, 64>();
+ spsc_queue_buffer_push<iterator_pair, 7, 16, 64>();
+}
+
+template <int DequeueMode,
+ int ElementCount,
+ int BufferSize,
+ int NumberOfIterations
+ >
+void spsc_queue_buffer_pop(void)
+{
+ const size_t xqueue_size = ElementCount;
+ spsc_queue<int, capacity<BufferSize> > rb;
+
+ int data[xqueue_size];
+ for (size_t i = 0; i != xqueue_size; ++i)
+ data[i] = i*2;
+
+ std::vector<int> vdata(data, data + xqueue_size);
+
+ for (int i = 0; i != NumberOfIterations; ++i) {
+ BOOST_REQUIRE(rb.empty());
+ BOOST_REQUIRE_EQUAL(rb.push(data), xqueue_size);
+
+ int out[xqueue_size];
+ vector<int> vout;
+
+ switch (DequeueMode) {
+ case pointer_and_size:
+ BOOST_REQUIRE_EQUAL(rb.pop(out, xqueue_size), xqueue_size);
+ break;
+
+ case reference_to_array:
+ BOOST_REQUIRE_EQUAL(rb.pop(out), xqueue_size);
+ break;
+
+ case output_iterator_:
+ BOOST_REQUIRE_EQUAL(rb.pop(std::back_inserter(vout)), xqueue_size);
+ break;
+
+ default:
+ assert(false);
+ }
+
+ if (DequeueMode == output_iterator_) {
+ BOOST_REQUIRE_EQUAL(vout.size(), xqueue_size);
+ for (size_t i = 0; i != xqueue_size; ++i)
+ BOOST_REQUIRE_EQUAL(data[i], vout[i]);
+ } else {
+ for (size_t i = 0; i != xqueue_size; ++i)
+ BOOST_REQUIRE_EQUAL(data[i], out[i]);
+ }
+ }
+}
+
+BOOST_AUTO_TEST_CASE( spsc_queue_buffer_pop_test )
+{
+ spsc_queue_buffer_pop<pointer_and_size, 7, 16, 64>();
+ spsc_queue_buffer_pop<reference_to_array, 7, 16, 64>();
+ spsc_queue_buffer_pop<output_iterator_, 7, 16, 64>();
+}
+
+
+#ifndef BOOST_LOCKFREE_STRESS_TEST
+static const boost::uint32_t nodes_per_thread = 100000;
+#else
+static const boost::uint32_t nodes_per_thread = 100000000;
+#endif
+
+struct spsc_queue_tester
+{
+ spsc_queue<int, capacity<128> > sf;
+
+ boost::lockfree::detail::atomic<long> spsc_queue_cnt, received_nodes;
+
+ static_hashed_set<int, 1<<16 > working_set;
+
+ spsc_queue_tester(void):
+ spsc_queue_cnt(0), received_nodes(0)
+ {}
+
+ void add(void)
+ {
+ for (boost::uint32_t i = 0; i != nodes_per_thread; ++i) {
+ int id = generate_id<int>();
+ working_set.insert(id);
+
+ while (sf.push(id) == false)
+ {}
+
+ ++spsc_queue_cnt;
+ }
+ }
+
+ bool get_element(void)
+ {
+ int data;
+ bool success = sf.pop(data);
+
+ if (success) {
+ ++received_nodes;
+ --spsc_queue_cnt;
+ bool erased = working_set.erase(data);
+ assert(erased);
+ return true;
+ } else
+ return false;
+ }
+
+ boost::lockfree::detail::atomic<bool> running;
+
+ void get(void)
+ {
+ for(;;) {
+ bool success = get_element();
+ if (!running && !success)
+ return;
+ }
+ }
+
+ void run(void)
+ {
+ running = true;
+
+ BOOST_REQUIRE(sf.empty());
+
+ thread reader(boost::bind(&spsc_queue_tester::get, this));
+ thread writer(boost::bind(&spsc_queue_tester::add, this));
+ cout << "reader and writer threads created" << endl;
+
+ writer.join();
+ cout << "writer threads joined. waiting for readers to finish" << endl;
+
+ running = false;
+ reader.join();
+
+ BOOST_REQUIRE_EQUAL(received_nodes, nodes_per_thread);
+ BOOST_REQUIRE_EQUAL(spsc_queue_cnt, 0);
+ BOOST_REQUIRE(sf.empty());
+ BOOST_REQUIRE(working_set.count_nodes() == 0);
+ }
+};
+
+BOOST_AUTO_TEST_CASE( spsc_queue_test_caching )
+{
+ boost::shared_ptr<spsc_queue_tester> test1(new spsc_queue_tester);
+ test1->run();
+}
+
+struct spsc_queue_tester_buffering
+{
+ spsc_queue<int, capacity<128> > sf;
+
+ boost::lockfree::detail::atomic<long> spsc_queue_cnt;
+
+ static_hashed_set<int, 1<<16 > working_set;
+ boost::lockfree::detail::atomic<long> received_nodes;
+
+ spsc_queue_tester_buffering(void):
+ spsc_queue_cnt(0), received_nodes(0)
+ {}
+
+ static const size_t buf_size = 5;
+
+ void add(void)
+ {
+ boost::array<int, buf_size> input_buffer;
+ for (boost::uint32_t i = 0; i != nodes_per_thread; i+=buf_size) {
+ for (size_t j = 0; j != buf_size; ++j) {
+ int id = generate_id<int>();
+ working_set.insert(id);
+ input_buffer[j] = id;
+ }
+
+ size_t pushed = 0;
+
+ do {
+ pushed += sf.push(input_buffer.c_array() + pushed,
+ input_buffer.size() - pushed);
+ } while (pushed != buf_size);
+
+ spsc_queue_cnt+=buf_size;
+ }
+ }
+
+ bool get_elements(void)
+ {
+ boost::array<int, buf_size> output_buffer;
+
+ size_t popd = sf.pop(output_buffer.c_array(), output_buffer.size());
+
+ if (popd) {
+ received_nodes += popd;
+ spsc_queue_cnt -= popd;
+
+ for (size_t i = 0; i != popd; ++i) {
+ bool erased = working_set.erase(output_buffer[i]);
+ assert(erased);
+ }
+
+ return true;
+ } else
+ return false;
+ }
+
+ boost::lockfree::detail::atomic<bool> running;
+
+ void get(void)
+ {
+ for(;;) {
+ bool success = get_elements();
+ if (!running && !success)
+ return;
+ }
+ }
+
+ void run(void)
+ {
+ running = true;
+
+ thread reader(boost::bind(&spsc_queue_tester_buffering::get, this));
+ thread writer(boost::bind(&spsc_queue_tester_buffering::add, this));
+ cout << "reader and writer threads created" << endl;
+
+ writer.join();
+ cout << "writer threads joined. waiting for readers to finish" << endl;
+
+ running = false;
+ reader.join();
+
+ BOOST_REQUIRE_EQUAL(received_nodes, nodes_per_thread);
+ BOOST_REQUIRE_EQUAL(spsc_queue_cnt, 0);
+ BOOST_REQUIRE(sf.empty());
+ BOOST_REQUIRE(working_set.count_nodes() == 0);
+ }
+};
+
+
+BOOST_AUTO_TEST_CASE( spsc_queue_test_buffering )
+{
+ boost::shared_ptr<spsc_queue_tester_buffering> test1(new spsc_queue_tester_buffering);
+ test1->run();
+}
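
Condensed from the buffer tests above, a single-threaded sketch of the pointer-plus-size interface. The exact element count asserted below is inferred from the buffer_size - xqueue_size - 1 expectation in spsc_queue_buffer_push_return_value (one slot is kept free internally); the capacity and array sizes are otherwise arbitrary:

    #include <boost/lockfree/spsc_queue.hpp>
    #include <cassert>
    #include <cstddef>

    int main(void)
    {
        boost::lockfree::spsc_queue<int, boost::lockfree::capacity<8> > rb;

        int in[16];
        for (int i = 0; i != 16; ++i)
            in[i] = i;

        // push(ptr, size) returns how many elements were actually enqueued
        std::size_t pushed = rb.push(in, 16);
        assert(pushed == 7);                    // capacity<8> minus the internally reserved slot

        int out[16];
        std::size_t popped = rb.pop(out, 16);   // pop(ptr, size) returns the dequeue count
        assert(popped == pushed);

        for (std::size_t i = 0; i != popped; ++i)
            assert(out[i] == in[i]);
        return 0;
    }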

Added: branches/release/libs/lockfree/test/stack_bounded_stress_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/stack_bounded_stress_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,25 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/stack.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include "test_common.hpp"
+
+BOOST_AUTO_TEST_CASE( stack_test_bounded )
+{
+ typedef queue_stress_tester<true> tester_type;
+ boost::scoped_ptr<tester_type> tester(new tester_type(4, 4) );
+
+ boost::lockfree::stack<long> q(128);
+ tester->run(q);
+}

Added: branches/release/libs/lockfree/test/stack_fixedsize_stress_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/stack_fixedsize_stress_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,26 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/stack.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include "test_common.hpp"
+
+
+BOOST_AUTO_TEST_CASE( stack_test_fixed_size )
+{
+ typedef queue_stress_tester<> tester_type;
+ boost::scoped_ptr<tester_type> tester(new tester_type(4, 4) );
+
+ boost::lockfree::stack<long, boost::lockfree::capacity<8> > q;
+ tester->run(q);
+}

Added: branches/release/libs/lockfree/test/stack_interprocess_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/stack_interprocess_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,57 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cstdlib> //std::system
+#include <sstream>
+
+#include <boost/interprocess/managed_shared_memory.hpp>
+#include <boost/lockfree/stack.hpp>
+#include <boost/thread/thread.hpp>
+
+using namespace boost::interprocess;
+typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
+typedef boost::lockfree::stack<int,
+ boost::lockfree::allocator<ShmemAllocator>,
+ boost::lockfree::capacity<2048>
+ > stack;
+
+int main (int argc, char *argv[])
+{
+ if(argc == 1){
+ struct shm_remove
+ {
+ shm_remove() { shared_memory_object::remove("boost_stack_interprocess_test_shm"); }
+ ~shm_remove(){ shared_memory_object::remove("boost_stack_interprocess_test_shm"); }
+ } remover;
+
+ managed_shared_memory segment(create_only, "boost_stack_interprocess_test_shm", 65536);
+ ShmemAllocator alloc_inst (segment.get_segment_manager());
+
+ stack * queue = segment.construct<stack>("stack")(alloc_inst);
+ for (int i = 0; i != 1024; ++i)
+ queue->push(i);
+
+ std::string s(argv[0]); s += " child ";
+ if(0 != std::system(s.c_str()))
+ return 1;
+
+ while (!queue->empty())
+ boost::thread::yield();
+ return 0;
+ } else {
+ managed_shared_memory segment(open_only, "boost_stack_interprocess_test_shm");
+ stack * queue = segment.find<stack>("stack").first;
+
+ int from_queue;
+ for (int i = 0; i != 1024; ++i) {
+ bool success = queue->pop(from_queue);
+ assert (success);
+ assert (from_queue == 1023 - i);
+ }
+ segment.destroy<stack>("stack");
+ }
+ return 0;
+}

Added: branches/release/libs/lockfree/test/stack_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/stack_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,109 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+
+#include <boost/thread.hpp>
+#include <boost/lockfree/stack.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+BOOST_AUTO_TEST_CASE( simple_stack_test )
+{
+ boost::lockfree::stack<long> stk(128);
+
+ stk.push(1);
+ stk.push(2);
+ long out;
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.pop(out));
+}
+
+BOOST_AUTO_TEST_CASE( unsafe_stack_test )
+{
+ boost::lockfree::stack<long> stk(128);
+
+ stk.unsynchronized_push(1);
+ stk.unsynchronized_push(2);
+ long out;
+ BOOST_REQUIRE(stk.unsynchronized_pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.unsynchronized_pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.unsynchronized_pop(out));
+}
+
+BOOST_AUTO_TEST_CASE( ranged_push_test )
+{
+ boost::lockfree::stack<long> stk(128);
+
+ long data[2] = {1, 2};
+
+ BOOST_REQUIRE_EQUAL(stk.push(data, data + 2), data + 2);
+
+ long out;
+ BOOST_REQUIRE(stk.unsynchronized_pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.unsynchronized_pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.unsynchronized_pop(out));
+}
+
+BOOST_AUTO_TEST_CASE( ranged_unsynchronized_push_test )
+{
+ boost::lockfree::stack<long> stk(128);
+
+ long data[2] = {1, 2};
+
+ BOOST_REQUIRE_EQUAL(stk.unsynchronized_push(data, data + 2), data + 2);
+
+ long out;
+ BOOST_REQUIRE(stk.unsynchronized_pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.unsynchronized_pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.unsynchronized_pop(out));
+}
+
+BOOST_AUTO_TEST_CASE( fixed_size_stack_test )
+{
+ boost::lockfree::stack<long, boost::lockfree::capacity<128> > stk;
+
+ stk.push(1);
+ stk.push(2);
+ long out;
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.pop(out));
+ BOOST_REQUIRE(stk.empty());
+}
+
+BOOST_AUTO_TEST_CASE( fixed_size_stack_test_exhausted )
+{
+ boost::lockfree::stack<long, boost::lockfree::capacity<2> > stk;
+
+ stk.push(1);
+ stk.push(2);
+ BOOST_REQUIRE(!stk.push(3));
+ long out;
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.pop(out));
+ BOOST_REQUIRE(stk.empty());
+}
+
+BOOST_AUTO_TEST_CASE( bounded_stack_test_exhausted )
+{
+ boost::lockfree::stack<long> stk(2);
+
+ stk.bounded_push(1);
+ stk.bounded_push(2);
+ BOOST_REQUIRE(!stk.bounded_push(3));
+ long out;
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 2);
+ BOOST_REQUIRE(stk.pop(out)); BOOST_REQUIRE_EQUAL(out, 1);
+ BOOST_REQUIRE(!stk.pop(out));
+ BOOST_REQUIRE(stk.empty());
+}
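
As a companion to bounded_stack_test_exhausted above: on a run-time sized stack, bounded_push draws only from the initially reserved pool, while plain push may allocate further nodes. A minimal sketch, assuming the default (non-fixed_sized) configuration used in these tests:

    #include <boost/lockfree/stack.hpp>
    #include <cassert>

    int main(void)
    {
        boost::lockfree::stack<long> stk(2);   // two nodes reserved, growth still allowed

        bool ok1 = stk.bounded_push(1);
        bool ok2 = stk.bounded_push(2);
        bool ok3 = stk.bounded_push(3);        // the reserved pool is exhausted
        assert(ok1 && ok2 && !ok3);

        bool grew = stk.push(3);               // plain push allocates a fresh node
        assert(grew);

        long out;
        while (stk.pop(out))
            ;                                  // pops 3, 2, 1
        assert(stk.empty());
        return 0;
    }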

Added: branches/release/libs/lockfree/test/stack_unbounded_stress_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/stack_unbounded_stress_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,26 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/stack.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+#include "test_common.hpp"
+
+
+BOOST_AUTO_TEST_CASE( stack_test_unbounded )
+{
+ typedef queue_stress_tester<false> tester_type;
+ boost::scoped_ptr<tester_type> tester(new tester_type(4, 4) );
+
+ boost::lockfree::stack<long> q(128);
+ tester->run(q);
+}

Added: branches/release/libs/lockfree/test/tagged_ptr_test.cpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/tagged_ptr_test.cpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,39 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <boost/lockfree/detail/tagged_ptr.hpp>
+
+#define BOOST_TEST_MAIN
+#ifdef BOOST_LOCKFREE_INCLUDE_TESTS
+#include <boost/test/included/unit_test.hpp>
+#else
+#include <boost/test/unit_test.hpp>
+#endif
+
+BOOST_AUTO_TEST_CASE( tagged_ptr_test )
+{
+ using namespace boost::lockfree::detail;
+ int a(1), b(2);
+
+ {
+ tagged_ptr<int> i (&a, 0);
+ tagged_ptr<int> j (&b, 1);
+
+ i = j;
+
+ BOOST_REQUIRE_EQUAL(i.get_ptr(), &b);
+ BOOST_REQUIRE_EQUAL(i.get_tag(), 1);
+ }
+
+ {
+ tagged_ptr<int> i (&a, 0);
+ tagged_ptr<int> j (i);
+
+ BOOST_REQUIRE_EQUAL(i.get_ptr(), j.get_ptr());
+ BOOST_REQUIRE_EQUAL(i.get_tag(), j.get_tag());
+ }
+
+}

Added: branches/release/libs/lockfree/test/test_common.hpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/test_common.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,124 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#include <cassert>
+#include "test_helpers.hpp"
+
+#include <boost/array.hpp>
+#include <boost/thread.hpp>
+
+namespace impl
+{
+
+using boost::array;
+using namespace boost;
+using namespace std;
+
+using boost::lockfree::detail::atomic;
+
+template <bool Bounded = false>
+struct queue_stress_tester
+{
+ static const unsigned int buckets = 1<<13;
+#ifndef BOOST_LOCKFREE_STRESS_TEST
+ static const long node_count = 5000;
+#else
+ static const long node_count = 500000;
+#endif
+ const int reader_threads;
+ const int writer_threads;
+
+ static_hashed_set<long, buckets> data;
+ static_hashed_set<long, buckets> dequeued;
+ array<std::set<long>, buckets> returned;
+
+ boost::lockfree::detail::atomic<int> push_count, pop_count;
+
+ queue_stress_tester(int reader, int writer):
+ reader_threads(reader), writer_threads(writer), push_count(0), pop_count(0)
+ {}
+
+ template <typename queue>
+ void add_items(queue & stk)
+ {
+ for (long i = 0; i != node_count; ++i) {
+ long id = generate_id<long>();
+
+ bool inserted = data.insert(id);
+ assert(inserted);
+
+ if (Bounded)
+ while(stk.bounded_push(id) == false)
+ /*thread::yield()*/;
+ else
+ while(stk.push(id) == false)
+ /*thread::yield()*/;
+ ++push_count;
+ }
+ }
+
+ boost::lockfree::detail::atomic<bool> running;
+
+ template <typename queue>
+ void get_items(queue & stk)
+ {
+ for (;;) {
+ long id;
+
+ bool got = stk.pop(id);
+ if (got) {
+ bool erased = data.erase(id);
+ bool inserted = dequeued.insert(id);
+ assert(erased);
+ assert(inserted);
+ ++pop_count;
+ } else
+ if (!running.load())
+ return;
+ }
+ }
+
+ template <typename queue>
+ void run(queue & stk)
+ {
+ BOOST_WARN(stk.is_lock_free());
+
+ running.store(true);
+
+ thread_group writer;
+ thread_group reader;
+
+ BOOST_REQUIRE(stk.empty());
+
+ for (int i = 0; i != reader_threads; ++i)
+ reader.create_thread(boost::bind(&queue_stress_tester::template get_items<queue>, this, boost::ref(stk)));
+
+ for (int i = 0; i != writer_threads; ++i)
+ writer.create_thread(boost::bind(&queue_stress_tester::template add_items<queue>, this, boost::ref(stk)));
+
+ using namespace std;
+ cout << "threads created" << endl;
+
+ writer.join_all();
+
+ cout << "writer threads joined, waiting for readers" << endl;
+
+ running = false;
+ reader.join_all();
+
+ cout << "reader threads joined" << endl;
+
+ BOOST_REQUIRE_EQUAL(data.count_nodes(), (size_t)0);
+ BOOST_REQUIRE(stk.empty());
+
+ BOOST_REQUIRE_EQUAL(push_count, pop_count);
+ BOOST_REQUIRE_EQUAL(push_count, writer_threads * node_count);
+ }
+};
+
+}
+
+using impl::queue_stress_tester;

Added: branches/release/libs/lockfree/test/test_helpers.hpp
==============================================================================
--- (empty file)
+++ branches/release/libs/lockfree/test/test_helpers.hpp 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -0,0 +1,88 @@
+// Copyright (C) 2011 Tim Blechmann
+//
+// Distributed under the Boost Software License, Version 1.0. (See
+// accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+#ifndef BOOST_LOCKFREE_TEST_HELPERS
+#define BOOST_LOCKFREE_TEST_HELPERS
+
+#include <set>
+#include <boost/array.hpp>
+#include <boost/lockfree/detail/atomic.hpp>
+#include <boost/thread.hpp>
+
+#include <boost/cstdint.hpp>
+
+template <typename int_type>
+int_type generate_id(void)
+{
+ static boost::lockfree::detail::atomic<int_type> generator(0);
+ return ++generator;
+}
+
+template <typename int_type, size_t buckets>
+class static_hashed_set
+{
+
+public:
+ int calc_index(int_type id)
+ {
+ // knuth hash ... does not need to be good, but has to be portable
+ size_t factor = size_t((float)buckets * 1.616f);
+
+ return ((size_t)id * factor) % buckets;
+ }
+
+ bool insert(int_type const & id)
+ {
+ std::size_t index = calc_index(id);
+
+ boost::mutex::scoped_lock lock (ref_mutex[index]);
+
+ std::pair<typename std::set<int_type>::iterator, bool> p;
+ p = data[index].insert(id);
+
+ return p.second;
+ }
+
+ bool find (int_type const & id)
+ {
+ std::size_t index = calc_index(id);
+
+ boost::mutex::scoped_lock lock (ref_mutex[index]);
+
+ return data[index].find(id) != data[index].end();
+ }
+
+ bool erase(int_type const & id)
+ {
+ std::size_t index = calc_index(id);
+
+ boost::mutex::scoped_lock lock (ref_mutex[index]);
+
+ if (data[index].find(id) != data[index].end()) {
+ data[index].erase(id);
+ assert(data[index].find(id) == data[index].end());
+ return true;
+ }
+ else
+ return false;
+ }
+
+ std::size_t count_nodes(void) const
+ {
+ std::size_t ret = 0;
+ for (int i = 0; i != buckets; ++i) {
+ boost::mutex::scoped_lock lock (ref_mutex[i]);
+ ret += data[i].size();
+ }
+ return ret;
+ }
+
+private:
+ boost::array<std::set<int_type>, buckets> data;
+ mutable boost::array<boost::mutex, buckets> ref_mutex;
+};
+
+#endif
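
For orientation, the two helpers above combine roughly as follows; the bucket and iteration counts in this standalone sketch are arbitrary:

    #include "test_helpers.hpp"

    #include <cassert>
    #include <cstddef>
    #include <vector>

    int main(void)
    {
        static_hashed_set<long, 1 << 8> seen;
        std::vector<long> ids;

        for (int i = 0; i != 100; ++i) {
            long id = generate_id<long>();   // thread-safe, strictly increasing ids
            ids.push_back(id);

            bool inserted = seen.insert(id); // false would indicate a duplicate
            assert(inserted);
        }
        assert(seen.count_nodes() == 100);

        for (std::size_t i = 0; i != ids.size(); ++i) {
            bool erased = seen.erase(ids[i]);
            assert(erased);
        }
        assert(seen.count_nodes() == 0);
        return 0;
    }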

Modified: branches/release/libs/maintainers.txt
==============================================================================
--- branches/release/libs/maintainers.txt (original)
+++ branches/release/libs/maintainers.txt 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -16,6 +16,7 @@
 array Marshall Clow <marshall -at- idio.com>
 asio Chris Kohlhoff <chris -at- kohlhoff.com>
 assign Thorsten Ottosen <nesotto -at- cs.auc.dk>
+atomic Helge Bahmann <hcb -at- chaoticmind.net>, Tim Blechmann <tim -at- klingt.org>
 bimap Matias Capeletto <matias.capeletto -at- gmail.com>
 bind Peter Dimov <pdimov -at- mmltd.net>
 chrono Vicente J. Botet Escriba <vicente.botet -at- wanadoo.fr>
@@ -59,6 +60,7 @@
 local_function Lorenzo Caminiti <lorcaminiti -at- gmail.com>
 locale Artyom Beilis <artyomtnk -at- yahoo.com>
 logic Douglas Gregor <dgregor -at- cs.indiana.edu>
+lockfree Tim Blechmann <tim -at- klingt.org>
 math Hubert Holin <Hubert.Holin -at- meteo.fr>, John Maddock <john -at- johnmaddock.co.uk>
 move Ion Gaztanaga <igaztanaga -at- gmail.com>
 mpl Aleksey Gurtovoy <agurtovoy -at- meta-comm.com>

Modified: branches/release/status/Jamfile.v2
==============================================================================
--- branches/release/status/Jamfile.v2 (original)
+++ branches/release/status/Jamfile.v2 2012-12-15 13:28:27 EST (Sat, 15 Dec 2012)
@@ -53,6 +53,7 @@
     array/test # test-suite array
     asio/test # test-suite asio
     assign/test # test-suite assign
+ atomic/test # test-suite atomic
     any/test # test-suite any
     bimap/test # test-suite bimap
     bind/test # test-suite bind
@@ -101,6 +102,7 @@
     local_function/test # test-suite local_function
     locale/test # test-suite locale
     logic/test # test-suite logic
+ lockfree/test # test-suite lockfree
     math/test # test-suite math
     multiprecision/test # test-suite multiprecision
     move/example # test-suite move_example

