Subject: [Boost-commit] svn:boost r60563 - in sandbox/stm/branches/vbe/libs/stm/doc: . html html/toward_boost_stm html/toward_boost_stm/appendices html/toward_boost_stm/overview html/toward_boost_stm/reference html/toward_boost_stm/users_guide reference
From: vicente.botet_at_[hidden]
Date: 2010-03-13 20:04:02
Author: viboes
Date: 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
New Revision: 60563
URL: http://svn.boost.org/trac/boost/changeset/60563
Log:
Boost.STM/vbe:
* Updated doc
Text files modified:
sandbox/stm/branches/vbe/libs/stm/doc/html/index.html | 8
sandbox/stm/branches/vbe/libs/stm/doc/html/standalone_HTML.manifest | 4
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices.html | 2
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/changes.html | 4
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/implementation.html | 10
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/rationale.html | 1112 +++++++++++++++++++++++++++++++++++++--
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/todo.html | 32
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/examples.html | 7
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/overview/intro.html | 835 -----------------------------
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference.html | 30
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/built_in_transactional_objects.html | 8
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/contention_managers.html | 13
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/core.html | 585 +++++++++++++++-----
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/ext_references.html | 4
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/getting_started.html | 12
sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/tutorial.html | 22
sandbox/stm/branches/vbe/libs/stm/doc/introduction.qbk | 287 ----------
sandbox/stm/branches/vbe/libs/stm/doc/rationale.qbk | 317 ++++++++++
sandbox/stm/branches/vbe/libs/stm/doc/reference.qbk | 25
sandbox/stm/branches/vbe/libs/stm/doc/reference/language_like.qbk | 313 +++++++++-
20 files changed, 2148 insertions(+), 1482 deletions(-)
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/index.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/index.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/index.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -58,8 +58,14 @@
Objects</a></span></dt>
<dt><span class="section"><a href="toward_boost_stm/reference/built_in_transactional_objects.html">Built-in
Transactional Objects</a></span></dt>
+<dt><span class="section"><a href="toward_boost_stm/reference/transaction_specific_memory_managers.html">Transaction
+ Specific Memory Managers</a></span></dt>
+<dt><span class="section"><a href="toward_boost_stm/reference/user_memory_managers.html">User
+ Memory Managers</a></span></dt>
+<dt><span class="section">Synchronization</span></dt>
<dt><span class="section"><a href="toward_boost_stm/reference/contention_managers.html">Contention
Managers</a></span></dt>
+<dt><span class="section">Utilities</span></dt>
</dl></dd>
<dt><span class="section">Examples</span></dt>
<dt><span class="section">Appendices</span></dt>
@@ -96,7 +102,7 @@
</p>
</div>
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
-<td align="left"><p><small>Last revised: March 07, 2010 at 09:47:10 GMT</small></p></td>
+<td align="left"><p><small>Last revised: March 07, 2010 at 16:59:18 GMT</small></p></td>
<td align="right"><div class="copyright-footer"></div></td>
</tr></table>
<hr>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/standalone_HTML.manifest
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/standalone_HTML.manifest (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/standalone_HTML.manifest 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -11,7 +11,11 @@
toward_boost_stm/reference/core.html
toward_boost_stm/reference/transactional_objects.html
toward_boost_stm/reference/built_in_transactional_objects.html
+toward_boost_stm/reference/transaction_specific_memory_managers.html
+toward_boost_stm/reference/user_memory_managers.html
+toward_boost_stm/reference/synchronization.html
toward_boost_stm/reference/contention_managers.html
+toward_boost_stm/reference/utilities.html
toward_boost_stm/examples.html
toward_boost_stm/appendices.html
toward_boost_stm/appendices/changes.html
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -30,6 +30,8 @@
<dt><span class="section"> Appendix A: History</span></dt>
<dt><span class="section"> Appendix B: Rationale</span></dt>
<dd><dl>
+<dt><span class="section"><a href="appendices/rationale.html#toward_boost_stm.appendices.rationale.intro"> Transactional
+ Memory versus Locking</a></span></dt>
<dt><span class="section"><a href="appendices/rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts">TM-Specific
Concepts</a></span></dt>
<dt><span class="section"><a href="appendices/rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts">C++
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/changes.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/changes.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/changes.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -27,7 +27,7 @@
<a name="toward_boost_stm.appendices.changes"></a> Appendix A: History
</h3></div></div></div>
<a name="toward_boost_stm.appendices.changes._emphasis_role__bold__version_0_1__xx_yy__2009__emphasis___emphasis_announcement_of_stm__emphasis_"></a><h4>
-<a name="id4859036"></a>
+<a name="id4860284"></a>
<a href="changes.html#toward_boost_stm.appendices.changes._emphasis_role__bold__version_0_1__xx_yy__2009__emphasis___emphasis_announcement_of_stm__emphasis_"><span class="bold"><strong>Version 0.1, XX YY, 2009</strong></span> <span class="emphasis"><em>Announcement of
STM</em></span></a>
</h4>
@@ -47,7 +47,7 @@
</li>
</ul></div>
<a name="toward_boost_stm.appendices.changes._emphasis_role__bold__tickets___emphasis_"></a><h4>
-<a name="id4859109"></a>
+<a name="id4860358"></a>
<a href="changes.html#toward_boost_stm.appendices.changes._emphasis_role__bold__tickets___emphasis_"><span class="bold"><strong>Tickets:</strong></span></a>
</h4>
<p>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/implementation.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/implementation.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/implementation.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -56,7 +56,7 @@
constructs are shown in Figures 6 and 8, respectively.
</p>
<a name="toward_boost_stm.appendices.implementation.language_like_macro_blocks.locking_macros"></a><h5>
-<a name="id4874692"></a>
+<a name="id4884046"></a>
<a href="implementation.html#toward_boost_stm.appendices.implementation.language_like_macro_blocks.locking_macros">Locking
Macros</a>
</h5>
@@ -79,7 +79,7 @@
are executed once and only once.
</p>
<a name="toward_boost_stm.appendices.implementation.language_like_macro_blocks.transaction_macros"></a><h5>
-<a name="id4874742"></a>
+<a name="id4884096"></a>
<a href="implementation.html#toward_boost_stm.appendices.implementation.language_like_macro_blocks.transaction_macros">Transaction
Macros</a>
</h5>
@@ -128,7 +128,7 @@
transaction is aborted.
</p>
<a name="toward_boost_stm.appendices.implementation.language_like_macro_blocks.correcting_non_compliant_compilers"></a><h5>
-<a name="id4874837"></a>
+<a name="id4884208"></a>
<a href="implementation.html#toward_boost_stm.appendices.implementation.language_like_macro_blocks.correcting_non_compliant_compilers">Correcting
Non-Compliant Compilers</a>
</h5>
@@ -161,11 +161,11 @@
<a name="toward_boost_stm.appendices.implementation.cache"></a>Cache
</h4></div></div></div>
<a name="toward_boost_stm.appendices.implementation.cache.dispersed"></a><h5>
-<a name="id4874976"></a>
+<a name="id4884347"></a>
<a href="implementation.html#toward_boost_stm.appendices.implementation.cache.dispersed">Dispersed</a>
</h5>
<a name="toward_boost_stm.appendices.implementation.cache.compact"></a><h5>
-<a name="id4874998"></a>
+<a name="id4884370"></a>
<a href="implementation.html#toward_boost_stm.appendices.implementation.cache.compact">Compact</a>
</h5>
</div>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/rationale.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/rationale.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/rationale.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -28,6 +28,8 @@
<a name="toward_boost_stm.appendices.rationale"></a> Appendix B: Rationale
</h3></div></div></div>
<div class="toc"><dl>
+<dt><span class="section"><a href="rationale.html#toward_boost_stm.appendices.rationale.intro"> Transactional
+ Memory versus Locking</a></span></dt>
<dt><span class="section"><a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts">TM-Specific
Concepts</a></span></dt>
<dd><dl>
@@ -82,6 +84,861 @@
</dl></div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h4 class="title">
+<a name="toward_boost_stm.appendices.rationale.intro"></a><a href="rationale.html#toward_boost_stm.appendices.rationale.intro" title=" Transactional
+ Memory versus Locking"> Transactional
+ Memory versus Locking</a>
+</h4></div></div></div>
+<a name="toward_boost_stm.appendices.rationale.intro.transactional_memory"></a><h5>
+<a name="id4860430"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.transactional_memory">Transactional
+ Memory</a>
+ </h5>
+<p>
+          Transactional memory (TM) is a modern type of concurrency control that uses
+ transactions as its synchronization mechanism. A transaction is a finite
+ sequence of operations that are executed in an <span class="bold"><strong>atomic</strong></span>,
+ <span class="bold"><strong>isolated</strong></span> and <span class="bold"><strong>consistent</strong></span>
+ manner. The atomicity, isolation and consistency (<span class="bold"><strong>ACI</strong></span>)
+          of transactions are derived from the ACID principle in the database community.
+ TM does not exhibit the D (durability) of ACID because unlike database
+ transactions, TM transactions are not saved to permanent storage (e.g.,
+ hard drives).
+ </p>
+<p>
+ Transactions are executed speculatively (optimistically) and are checked
+ for consistency at various points in the transaction's lifetime. Programmers
+ specify the starting and ending points of a transaction. All of the operations
+ between those points make up the transaction's execution body. Transactions
+ are commonly represented using the atomic structure shown in the figure
+ below.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="identifier">transaction</span>
+<span class="number">2</span> <span class="special">{</span>
+<span class="number">3</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
+<span class="number">4</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
+<span class="number">5</span> <span class="special">}</span>
+
+<span class="identifier">Figure</span> <span class="number">2.</span> <span class="identifier">Simple</span> <span class="identifier">Transaction</span> <span class="identifier">Using</span> <span class="identifier">the</span> <span class="identifier">transaction</span> <span class="identifier">Keyword</span><span class="special">.</span>
+</pre>
+<p>
+          Once a transaction has started, it either commits or aborts. A transaction's
+ operations are only seen once the transaction has committed, providing
+          the illusion that all of the operations occurred at a single instant in
+ time. The instructions of a committed transaction are viewed as if they
+ occurred as an indivisible event, not as a set of operations executed serially.
+ The operations of an aborted transaction are never seen by other threads,
+ even if such operations were executed within a transaction and then rolled
+ back.
+ </p>
+<p>
+          In the case of the above example, if the transaction committed, both
+ operations <code class="computeroutput"><span class="special">++</span><span class="identifier">x</span></code>
+ and <code class="computeroutput"><span class="special">--</span><span class="identifier">y</span></code>
+          would be made visible to other threads at the same instant in time. If
+          the transaction in the above example aborted, neither operation (<code class="computeroutput"><span class="special">++</span><span class="identifier">x</span></code> and
+ <code class="computeroutput"><span class="special">--</span><span class="identifier">y</span></code>)
+          would appear to have occurred, even if the local transaction executed one
+ or both operations.
+ </p>
+<p>
+ TM offers three distinct advantages over other parallel programming abstractions.
+ </p>
+<div class="orderedlist"><ol type="1">
+<li>
+ TM is simple; transactions, the synchronization mechanism of TM, are
+ easier to program than other synchronization mechanisms because they
+ move shared memory management into the underlying TM subsystem, removing
+ its complexity from the programmer's view. Moreover, TM exposes a simple
+ programmer interface, reducing (or in some cases, removing) the potential
+ for deadlock, livelock and priority inversion.
+ </li>
+<li>
+ TM is scalable; it achieves increased computational throughput when compared
+ to other parallel programming abstractions by allowing multiple threads
+ to speculatively execute the same critical section. When concurrently
+ executing threads do not exhibit shared data conflicts, they are guaranteed
+ to make forward progress.
+ </li>
+<li>
+ TM is modular; transactions can be nested to any depth and function as
+ a single unit. This behavior allows application programmers to extend
+ atomic library algorithms into atomic domain-specific algorithms without
+ requiring the application programmers to understand the implementation
+ details of the library algorithm.
+ </li>
+</ol></div>
+<p>
+ For these reasons, transactions are considered an important synchronization
+ mechanism and TM is viewed as an important type of concurrency control.
+ The remainder of this section presents TM from a viewpoint of (1) simplicity,
+ (2) scalability and (3) modularity.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.simplicity"></a><h5>
+<a name="id4860748"></a>
+ Simplicity
+ </h5>
+<p>
+ Synchronization problems, such as deadlocks, livelocks and priority inversion
+ are common in software systems using mutual exclusion. TM avoids many of
+ these problems by providing a synchronization mechanism that does not expose
+ any of its implementation details to the programmer. The only low level
+ interfaces the programmer needs to use for TM are:
+ </p>
+<div class="itemizedlist"><ul type="disc">
+<li>
+ begin_tx() - the signaled start of the transaction.
+ </li>
+<li>
+ read(loc) - reads the specified memory location, storing its location
+ in the transaction's read set and returning its current value.
+ </li>
+<li>
+ write(loc, val) - writes the specified memory location to the supplied
+ val, storing its location in the transaction's write set.
+ </li>
+<li>
+ end_tx() - the signaled end of the transaction. end_tx() returns true
+ if the transaction commits, otherwise it returns false.
+ </li>
+</ul></div>
+<p>
+ The above interfaces allow the programmer to create a transaction (using
+ begin_tx()), specify its memory operations (using read() and write()) and
+          terminate it (using end_tx()). Moreover, none of the interfaces specify
+ details of the TM subsystem's implementation. This leaves the TM system's
+ implementation disjoint from the interfaces it supplies, a key characteristic
+ for TM's simplicity.
+ </p>
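+<p>
+          A minimal, single-threaded sketch of these four interfaces is given below.
+          It is purely illustrative and is not how TBoost.STM is implemented: the
+          names and types are hypothetical, writes are simply buffered in a map keyed
+          by address, and there is no read-set validation, conflict detection or
+          thread safety. Its only purpose is to make the calling convention concrete.
+          </p>
+<pre class="programlisting">// Hypothetical single-threaded sketch of the four TM interfaces above.
+// A real STM would add read-set tracking, validation, locking or versioning,
+// and abort/retry logic; none of that is modelled here.
+#include &lt;map&gt;
+
+namespace toy_tm {
+
+std::map&lt;int*, int&gt; write_set;              // deferred updates: location -&gt; new value
+
+void begin_tx() { write_set.clear(); }
+
+int read(int* loc) {                        // read-your-own-writes, else global memory
+    auto it = write_set.find(loc);
+    return it != write_set.end() ? it-&gt;second : *loc;
+}
+
+void write(int* loc, int val) { write_set[loc] = val; }
+
+bool end_tx() {                             // always "commits" in this sketch
+    for (auto&amp; e : write_set) *e.first = e.second;
+    return true;
+}
+
+} // namespace toy_tm
+
+int x = 0, y = 0;
+
+int main() {
+    do {                                    // retry loop, as in Figure 3
+        toy_tm::begin_tx();
+        toy_tm::write(&amp;x, toy_tm::read(&amp;x) + 1);
+        toy_tm::write(&amp;y, toy_tm::read(&amp;y) - 1);
+    } while (!toy_tm::end_tx());
+}
+</pre>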
+<p>
+ All TM implementations use some combination of the above interfaces. TMs
+ implemented within compilers tend to implicitly annotate transactional
+ read() and write() operations, whereas those implemented within software
+          libraries tend to require that the programmer explicitly state which operations
+ are transactional reads and writes. An example of a transaction using the
+ above interfaces alongside an actual STM library implementation is shown
+ in Figure 3.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="comment">// TM interfaces // TBoost.STM
+</span><span class="number">2</span> <span class="keyword">do</span> <span class="special">{</span> <span class="identifier">BOOST_STM_TRANSACTION</span>
+<span class="number">3</span> <span class="identifier">begin_tx</span><span class="special">();</span> <span class="special">{</span>
+<span class="number">4</span> <span class="identifier">write</span><span class="special">(</span><span class="identifier">x</span><span class="special">,</span> <span class="identifier">read</span><span class="special">(</span><span class="identifier">x</span><span class="special">)+</span><span class="number">1</span><span class="special">);</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
+<span class="number">5</span> <span class="identifier">write</span><span class="special">(</span><span class="identifier">y</span><span class="special">,</span> <span class="identifier">read</span><span class="special">(</span><span class="identifier">y</span><span class="special">)-</span><span class="number">1</span><span class="special">);</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
+<span class="number">6</span> <span class="special">}</span> <span class="keyword">while</span> <span class="special">(!</span><span class="identifier">end_tx</span><span class="special">());</span> <span class="special">}</span> <span class="identifier">BOOST_STM_END_TRANSACTION</span>
+
+<span class="identifier">Figure</span> <span class="number">3.</span> <span class="identifier">Transaction</span> <span class="identifier">Using</span> <span class="special">(</span><span class="number">1</span><span class="special">)</span> <span class="identifier">Explicit</span> <span class="identifier">TM</span> <span class="identifier">Interfaces</span> <span class="keyword">and</span> <span class="special">(</span><span class="number">2</span><span class="special">)</span> TBoost.STM<span class="special">.</span>
+</pre>
+<p>
+ Figure 3 implements the same transaction as shown in Figure 2, except all
+ transactional memory accesses, including the transaction's retry behavior
+ (e.g., its loop), are demonstrated from a simple TM interface perspective
+ and an actual library implementation (TBoost.STM). While most TM systems
+ handle some portion of these interface calls implicitly, as is shown in
+ the TBoost.STM transaction, it is important to note that even when all
+ operations are made visible to the end programmer, transactions are still
+ devoid of many concurrency problems, such as data races and deadlocks (explained
+ below), that plague other types of concurrency control.
+ </p>
+<p>
+ For example, as long as the programmer properly annotates the access to
+ the shared variables x and y as shown in Figure 3, it is impossible for
+ race conditions or deadlocks to occur. Furthermore, the programmer does
+ not need any program-specific knowledge to use shared data; he or she simply
+ uses the TM interfaces supplied by the system and the resulting behavior
+ is guaranteed to be consistent. This is explained in greater detail in
+          Section 3.1.
+ </p>
+<p>
+ Other types of concurrency control, such as mutual exclusion, cannot achieve
+ the same interface simplicity, because part of their implementation is
+ associated with, or exposed through, their interface. To demonstrate this,
+ consider the fine-grained locking example of Figure 1 as shown below.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="comment">// fine-grained locking
+</span><span class="number">2</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span>
+<span class="number">3</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
+<span class="number">4</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
+<span class="number">5</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
+<span class="number">6</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span>
+<span class="number">7</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
+</pre>
+<p>
+ There is no universal interface that can be used to properly access the
+ shared data protected by the mutual exclusion in the above fine-grained
+ locking example. Instead, the programmer must be aware that mutexX and
+ mutexY protect shared data x and y and, therefore, the locks must be obtained
+ before accessing the shared data. In short, the programmer is responsible
+ for knowing not only that mutual exclusion is used, but also how it is
+ used (e.g., which locks protect which shared variables). In this case,
+ mutexX must be obtained before mutexY. If another section of code implements
+ the following, a deadlock scenario will eventually occur.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="comment">// fine-grained locking
+</span><span class="number">2</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
+<span class="number">3</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span> <span class="comment">// deadlock here
+</span><span class="number">4</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
+<span class="number">5</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
+<span class="number">6</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
+<span class="number">7</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span>
+</pre>
+<a name="toward_boost_stm.appendices.rationale.intro.understanding_concurrency_hazards"></a><h5>
+<a name="id4861526"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.understanding_concurrency_hazards">Understanding
+ Concurrency Hazards</a>
+ </h5>
+<p>
+ Informally, a concurrency hazard is a condition existing in a set of operations
+ that, if executed with a specific concurrent interleaving, results in one
+          or more unwanted side-effects. Most errors in parallel systems, such as
+ deadlocks and priority inversion, are the specific execution of concurrency
+ hazards resulting in the unwanted side-effect(s) they contain. If the concurrency
+ hazards are eliminated, the parallel system errors contained within the
+ concurrency hazards are also eliminated. Unfortunately, detecting existing
+ concurrency hazards is non-trivial and therefore eliminating them is also
+ non-trivial.
+ </p>
+<p>
+ Mutual exclusion exhibits more concurrency hazards than TM because its
+ implementation details (i.e., its locks) must be exposed and used by the
+          end programmer. While the locks used to enforce mutual exclusion are not
+          concurrency hazards by themselves, the way they must be used can introduce
+          a number of hazards.
+ </p>
+<p>
+ Because the mutual exclusion locking details are exposed to the programmer
+ and because the programmer must maintain a universal and informal contract
+ to use these locks, concurrency hazards can arise due to the number of
+ possible misuses that can be introduced by the programmer. In particular,
+ if the programmer accidentally deviates from the informal locking contract,
+ he or she may inadvertently introduce a concurrency hazard that can cause
+ the program to deadlock, invert priority or lead to inconsistent data.
+ </p>
+<p>
+          In contrast, TM has no universal or informal contract governing shared data
+          that the end programmer needs to understand and follow, as is required in
+          mutual exclusion. Due to this, TM can hide its implementation details, which
+          results in fewer concurrency hazards. In particular, each transaction
+ tracks the memory it uses in its read and write sets. When a transaction
+ begins its commit phase, it verifies its state is consistent and commits
+ its changes. If a transaction finds its state is inconsistent, it discards
+ its changes and restarts. All of this can be achieved using the basic TM
+ interfaces shown in Section 3 without exposing any implementation details.
+ In order to use TM, the end programmer only needs to know how to correctly
+ create a transaction. Once the transaction is executed, regardless of how
+ it is executed, it results in a program state that is guaranteed to be
+ consistent.
+ </p>
+<p>
+          Fundamentally, TM exhibits fewer concurrency hazards than mutual exclusion
+ because its implementation details are divorced from its interface and
+ can therefore be hidden within its subsystem. Any number of implementations
+ can be used in a TM subsystem using only the basic TM interfaces shown
+ in Section 3. The same is not true for mutual exclusion. Mutual exclusion,
+ regardless of how it is implemented, exposes details of its implementation
+ to the programmer. As demonstrated in Section 5, mutual exclusion does
+ not provide software modularity specifically because extending an existing
+ module requires an understanding and extension of that module's implementation.
+          When such locking implementations are hidden inside software libraries,
+ extending these modules can range from difficult to impossible.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.testing__race_conditions_and_interleavings"></a><h5>
+<a name="id4861646"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.testing__race_conditions_and_interleavings">Testing:
+ Race Conditions and Interleavings</a>
+ </h5>
+<p>
+ A race condition is a common concurrency hazard that exists in parallel
+ or distributed software. As with all concurrency hazards, race conditions
+ rely on a specific interleaving of concurrent execution to cause their
+ unwanted side-effect. In this section we demonstrate that race conditions
+          do not exist in TM and that, therefore, software testing is greatly simplified
+          because not all possible interleavings need to be tested to ensure correct
+ system behavior. In order to demonstrate that race conditions are absent
+ from TM, we must first show that they are present in other types of concurrency
+ control.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="comment">// Thread T1                // Thread T2
+</span><span class="number">2</span>                             <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L2</span><span class="special">);</span>
+<span class="number">3</span>                             <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L1</span><span class="special">);</span>
+<span class="number">4</span>                             <span class="special">...</span>
+<span class="number">5</span>                             <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">L1</span><span class="special">);</span>
+<span class="number">6</span>                             <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">L2</span><span class="special">);</span>
+<span class="number">7</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L1</span><span class="special">);</span>
+<span class="number">8</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L2</span><span class="special">);</span>
+<span class="number">9</span> <span class="special">...</span>
+
+<span class="identifier">Figure</span> <span class="number">4.</span> <span class="identifier">Mutual</span> <span class="identifier">Exclusion</span> <span class="identifier">Race</span> <span class="identifier">Condition</span><span class="special">.</span>
+</pre>
+<p>
+ Consider the race condition present in the mutual exclusion example shown
+ in Figure 4. The race condition present in the example results in a deadlock
+ if thread T1 executes line 7 followed by thread T2 executing line 2. However,
+ if the program executes the lines in order (e.g., line 1, then line 2,
+ then line 3, etc.), the system will execute properly. The fundamental problem
+ in Figure 4 is that it contains a concurrency hazard; in particular, it
+ contains a race condition. To further complicate matters, the race condition
+ can only be observed in two of many possible concurrent executions. Those
+ two executions are: T1 executes line 7 followed by T2 executing line 2
+ or T2 executes line 2 followed by T1 executing line 7. All other possible
+ concurrent interleavings of threads T1 and T2 avoid the deadlock race condition.
+ More specifically, as long as T1 executes lines 7-8 atomically or T2 executes
+          lines 2-3 atomically, all remaining concurrent interleavings are free of
+ the deadlock race condition.
+ </p>
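+<p>
+          The following self-contained sketch (hypothetical; the thread bodies simply
+          transcribe Figure 4 using std::mutex) makes this concrete. Whether the
+          program terminates or deadlocks depends entirely on the interleaving the
+          scheduler happens to choose on a given run.
+          </p>
+<pre class="programlisting">#include &lt;mutex&gt;
+#include &lt;thread&gt;
+
+// Sketch of the Figure 4 hazard: two threads acquire the same two locks
+// in opposite orders.
+std::mutex L1, L2;
+
+void thread_T1() {
+    L1.lock();                  // line 7 of Figure 4
+    L2.lock();                  // line 8
+    /* ... */
+    L2.unlock();
+    L1.unlock();
+}
+
+void thread_T2() {
+    L2.lock();                  // line 2 of Figure 4
+    L1.lock();                  // line 3
+    /* ... */
+    L1.unlock();
+    L2.unlock();
+}
+
+int main() {
+    std::thread a(thread_T1), b(thread_T2);
+    a.join();                   // on an unlucky interleaving the threads deadlock
+    b.join();                   // and these joins never return
+}
+</pre>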
+<p>
+ Because it is unlikely that the deadlock race condition will occur, the
+ programmer may never observe it, no matter how many times the program is
+ tested. Only exhaustive testing, which tests all possible concurrent interleavings,
+ is guaranteed to identify the presence of the deadlock. Regrettably, exhaustive
+ testing is an unrealistic solution for most programs due to the time it
+ would take to execute all possible concurrent interleavings of the program.
+ </p>
+<p>
+ An alternative to exhaustive testing is for programmers to use types of
+ concurrency control that are devoid of certain concurrency hazards. For
+          example, if mutual exclusion did not exhibit the race condition concurrency
+ hazard, it would be impossible for a program using it to deadlock. Therefore,
+ exhaustive testing would not be necessary. While this scenario is hypothetical,
+ it illustrates our larger argument: in order to avoid common parallel problems
+          in a practical fashion, programmers may need to use only types of concurrency
+ control that are devoid of certain concurrency hazards. By doing this,
+ the program using the specific type of concurrency control will be guaranteed
+ to be free of certain common parallel problems.
+ </p>
+<p>
+ TMs are required to be devoid of race conditions within their implementations
+ because they must enforce the ACI (atomic, consistent and isolated) principles.
+ Transactions must execute as atomic and isolated and, therefore, TMs are
+ not capable of supporting concurrent interleavings between multiple transactions
+ as that would violate the atomic and isolated principles of ACI. Due to
+ this, programs only using TM are guaranteed to be free of deadlocks (i.e.,
+          deadlock-freedom). Moreover, because TM implementations can guarantee freedom
+          from race condition concurrency hazards, programmers only need to verify
+ their transactional code is correct in a sequential (non-parallel) manner.
+ Once the sequential execution of the transactional code has been verified,
+ no more testing is required as the TM system is required to behave in a
+ consistent manner for all serial orders.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.development__mutual_exclusion_and_tm"></a><h5>
+<a name="id4861981"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.development__mutual_exclusion_and_tm">Development:
+ Mutual Exclusion and TM</a>
+ </h5>
+<p>
+ The development of fine-grained locking is notoriously difficult. Designing
+          such software is equally hard. The difficulty in developing and designing
+ fine-grained locking systems is rooted in conflicting heuristics. A primary
+          goal of software design is to identify the simplest software solution
+ that exists for a particular problem. A primary goal of fine-grained locking
+ is the most efficient concurrent implementation of a software system. The
+ goals of software design and fine-grained locking are conflicting because
+ the most efficient fine-grained locking solution usually requires some
+ of the most complex software design implementations to achieve such performance.
+ </p>
+<p>
+ TM achieves scalability by using optimistic concurrency that is implemented
+ within its subsystem (see Section 4). Since the TM subsystem is the efficiency
+ throttle for TM, which is unexposed to the programmer, the software architecture
+ and design never needs to be complicated (nor can it be) in order to achieve
+ increased parallelism when using transactions. As will be demonstrated
+ in the following section, transactions run efficiently using the interfaces
+ shown in this section and are never complicated in order to achieve improved
+ performance, as is commonly found in fine-grained mutual exclusion implementations.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.scalability"></a><h5>
+<a name="id4862037"></a>
+ Scalability
+ </h5>
+<p>
+ In this section we analyze the scalability of TM compared to mutual exclusion.
+ We measure scalability by two metrics: consistency and performance. A concurrency
+ control type has consistent scalability if it guarantees correct behavior
+          for an arbitrarily large number of concurrently executing processes. Performance
+ scalability is measured by the maximum number of consistent processes supported
+ by a concurrency control type while executing concurrently.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.pessimistic_and_optimistic_critical_sections"></a><h5>
+<a name="id4862072"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.pessimistic_and_optimistic_critical_sections">Pessimistic
+ and Optimistic Critical Sections</a>
+ </h5>
+<p>
+ Critical sections can be pessimistic or optimistic. Pessimistic critical
+ sections limit their critical section execution to a single thread. Locks
+          are an example of a synchronization mechanism that uses pessimistic critical
+ sections. Optimistic critical sections allow unlimited concurrent threaded
+ execution. Transactions are an example of a synchronization mechanism that
+          uses optimistic critical sections.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.truly_optimistic_critical_sections"></a><h5>
+<a name="id4862108"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.truly_optimistic_critical_sections">Truly
+ Optimistic Critical Sections</a>
+ </h5>
+<p>
+ Truly optimistic critical sections are those critical sections which allow
+ multiple conflicting threads to simultaneously execute the same critical
+ section. A deferred update (or lazy acquire) TM system supports truly optimistic
+          critical sections. A direct update (or eager acquire) TM system does not
+ support truly optimistic critical sections. More details on deferred and
+ direct update TM systems are presented in the subsequent sections.
+ </p>
+<p>
+ Truly optimistic critical sections are important because they allow simultaneous
+ conflicting critical section execution, as opposed to disallowing such
+ behavior. It is important to allow conflicting critical section execution
+ because prematurely preventing concurrently executing threads pessimistically
+ degrades performance. To demonstrate this, consider two transactions, called
+ T1 and T2, executing the same critical section. Transaction T1 starts first
+          and tentatively writes to memory location M. Transaction T2 then starts
+          and tries to write to memory location M. In a truly optimistic TM system,
+ T2 would be allowed to tentatively write to location M while T1 is also
+ writing to M. This behavior then allows T2 to commit before T1 in the event
+ T2 completes before T1.
+ </p>
+<p>
+ In comparison, if the TM system is not truly optimistic, once T1 writes
+ to M, T2 must stall until T1 completes. This pessimistically degrades the
+ performance of the system by prematurely deciding that T1's transactional
+ execution should have higher priority than T2's.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="comment">// global variables
+</span><span class="number">2</span> <span class="keyword">int</span> <span class="identifier">g1</span> <span class="special">=</span> <span class="number">0</span><span class="special">;</span> <span class="keyword">int</span> <span class="identifier">g2</span> <span class="special">=</span> <span class="number">0</span><span class="special">;</span>
+<span class="number">3</span>
+<span class="number">4</span> <span class="keyword">void</span> <span class="identifier">set1</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">val</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="identifier">g1</span> <span class="special">=</span> <span class="identifier">val</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+<span class="number">5</span> <span class="keyword">void</span> <span class="identifier">set2</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">val</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="identifier">g2</span> <span class="special">=</span> <span class="identifier">val</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+<span class="number">6</span> <span class="keyword">int</span> <span class="identifier">get1</span><span class="special">()</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="keyword">return</span> <span class="identifier">g1</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+<span class="number">7</span> <span class="keyword">int</span> <span class="identifier">get2</span><span class="special">()</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="keyword">return</span> <span class="identifier">g2</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+
+<span class="identifier">Figure</span> <span class="number">5.</span> <span class="identifier">Truly</span> <span class="identifier">Optimistic</span> <span class="identifier">Concurrency</span> <span class="identifier">Diagram</span><span class="special">.</span>
+</pre>
+<p>
+ Furthermore, and perhaps more importantly, truly optimistic critical sections
+ allow readers and writers of the same memory location to execute concurrently.
+ This behavior is important because in many cases, both the readers and
+ writers of the same memory can commit with consistent views of memory.
+ </p>
+<p>
+          An example of this is shown in Figure 5, where thread
+ 1 and thread 2, which we'll refer to as T1 and T2 respectively, operate
+ on the same memory locations (g1 and g2). Because the TM system supports
+ optimistic concurrency, T2 is allowed to execute concurrently alongside
+ T1 even though their memory accesses conflict. However, in this scenario,
+ because T2 completes its workload before T1, both transactions are allowed
+ to commit. T2 captures the state of g1=0,g2=0 while T1 sets the state of
+          g1=1,g2=1. As the example shows, both g1=0,g2=0 and g1=1,g2=1 are legal
+ states.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.direct_and_deferred_update"></a><h5>
+<a name="id4862609"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.direct_and_deferred_update">Direct
+ and Deferred Update</a>
+ </h5>
+<p>
+ Updating is the process of committing transactional writes to global memory
+ and is performed in either a direct or deferred manner. Figure 6 presents
+ a step-by-step analysis of direct and deferred updating.
+ </p>
+<p>
+ Deferred update creates a local copy of global memory, performs modifications
+ to the local copy, and then writes those changes to global memory if the
+ transaction commits. If the transaction aborts, no additional work is done.
+ Direct update makes an original backup copy of global memory and then writes
+          directly to global memory. If the transaction commits, no further work
+          is needed. If the transaction aborts, the transaction restores global
+ memory with its backup copy. Some TM systems favor direct update due to
+ its natural optimization of commits (BSTM, McRTSTM and LogTM). However,
+ other TM systems favor deferred update due to its support for truly optimistic
+ critical sections (TBoost.STM and RingSTM).
+ </p>
+<p>
+ Direct update enables greater TM throughput when aborts are relatively
+ low because it optimizes the common commit case. Deferred update enables
+ greater TM throughput when
+ </p>
+<div class="orderedlist"><ol type="1">
+<li>
+ aborts are relatively high or
+ </li>
+<li>
+ short running transactions (e.g., those that complete quickly) are executed
+ alongside long running transactions (e.g., those that do not complete
+ quickly) because long running transactions do not stall shorter running
+ ones as they would in direct update systems, and therefore the fastest
+ transactions can commit first.
+ </li>
+</ol></div>
+<p>
+ It is important to note that deferred update supports truly optimistic
+ critical sections without special effort, while direct update does not.
+ Truly optimistic critical sections enable the speculative execution of
+ transactions that arrive after a memory location has already been tentatively
+ written to by another transaction. This allows the first transaction, of
+ potentially many competing transactions, to complete its commit, whether
+ it be the later arriving transaction or the earlier arriving writer. This
+ scenario is not possible with direct update without special effort.
+ </p>
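+<p>
+          The two update policies can be sketched for a single shared location as
+          follows. The sketch is hypothetical and single-threaded (it omits the
+          locking, versioning and conflict detection a real TM needs); it only
+          isolates the bookkeeping difference between buffering a new value and
+          keeping a backup of the old one.
+          </p>
+<pre class="programlisting">// One shared location updated by a transaction under the two policies.
+int g = 0;
+
+// Deferred update: buffer the new value locally; publish it only on commit.
+struct deferred_tx {
+    int buffer = g;                     // transaction-local copy
+    void write(int v) { buffer = v; }   // global memory is untouched until commit
+    int  read() const { return buffer; }
+    void commit()     { g = buffer; }   // single publication step
+    void abort()      { /* nothing to undo */ }
+};
+
+// Direct update: write global memory in place; keep a backup for rollback.
+struct direct_tx {
+    int backup = g;                     // original value, kept in case of abort
+    void write(int v) { g = v; }        // global memory is modified immediately
+    int  read() const { return g; }
+    void commit()     { /* nothing left to do */ }
+    void abort()      { g = backup; }   // restore global memory on abort
+};
+
+int main() {
+    deferred_tx d; d.write(1); d.abort();   // g is still 0
+    direct_tx   t; t.write(1); t.abort();   // g briefly became 1, restored to 0
+}
+</pre>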
+<a name="toward_boost_stm.appendices.rationale.intro.scalability__mutual_exclusion_and_transactional_memory"></a><h5>
+<a name="id4862701"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.scalability__mutual_exclusion_and_transactional_memory">Scalability:
+ Mutual Exclusion and Transactional Memory</a>
+ </h5>
+<p>
+ The scalability of mutual exclusion is limited to pessimistic concurrency.
+ By definition, mutual exclusion's critical sections must be pessimistic,
+ otherwise they would not be isolated to a single thread (i.e., they would
+ not be mutually exclusive). TM, however, is generally implemented using
+ optimistic concurrency, but it can enforce pessimistic concurrency amongst
+ transactions if that behavior is required for certain conditions. In certain
+          cases, TMs become more strict and execute pessimistically to enable inevitable
+ or irrevocable transactions. Such transactions have significant importance
+ for handling operations that, once started, must complete (e.g., I/O operations).
+ </p>
+<p>
+ Since TM can execute optimistically and pessimistically, it is clear that
+ whatever benefits pessimistic concurrency has can be acquired by TM. However,
+ since mutual exclusion can only execute pessimistically, the advantages
+ found in optimistic concurrency can never be obtained by mutual exclusion.
+ </p>
+<p>
+ When one first analyzes pessimistic and optimistic concurrency, it may
+ seem that the only benefit optimistic concurrency has over pessimistic
+ concurrency is that multiple critical sections, which conflict on the memory
+ they access, can execute concurrently. The simultaneous execution of such
+ conflicting critical sections allows the execution speed of such critical
+ sections to guide the system in deciding which execution should be allowed
+ to commit and which should be aborted. In particular, the first process
+ to complete the critical section can be allowed to abort the other process
+ of the system. The same scenario cannot be achieved by pessimistic critical
+ sections and is demonstrated in Section 4.1.1.
+ </p>
+<p>
+ A counterargument to this scenario is that such optimistic concurrency
+ only allows one critical section to commit, while one must be aborted.
+ Because mutual exclusion only allows one conflicting critical section execution
+ at a time, and because mutual exclusion does not support failure atomicity
+          (i.e., rolling back the critical section), mutual exclusion's pessimistic
+ behavior is superior in terms of energy and efficiency. Mutual exclusion,
+ unlike TM, suffers no wasted work because conflicting critical sections
+ are limited to a single thread of execution, reducing the energy it uses.
+ Furthermore, because mutual exclusion does not require original data to
+          be copied, as needed for TM's direct or deferred update, it executes faster.
+ </p>
+<p>
+ While there is merit to this counterargument, an important scenario is
+ not captured by it: truly optimistic critical sections can support multiple
+          reader / single writer executions in which, if the readers commit
+          before the writer, all critical sections succeed. This scenario is
+ impossible to achieve using pessimistic critical sections. Although mutual
+ exclusion can use read/write locking, as soon as a writer thread begins
+ execution on a conflicting critical section, all readers must be stalled.
+ TM's truly optimistic concurrency does not suffer from this overly pessimistic
+          limitation of throughput and is therefore capable of producing considerably
+          greater concurrent throughput under such conditions.
+ </p>
+<p>
+ From a theoretical perspective, given L memory locations and P processes,
+ mutual exclusion can support the consistent concurrent execution of P*L
+          readers or L writers. TM can support the consistent concurrent
+          execution of P*L readers and L writers. Using the above variables,
+ the mathematical expression of the performance scalability of mutual exclusion
+ (S(ME)) is:
+ </p>
+<pre class="programlisting"><span class="identifier">S</span><span class="special">(</span><span class="identifier">ME</span><span class="special">)</span> <span class="special">=</span> <span class="identifier">max</span><span class="special">(</span><span class="identifier">P</span><span class="special">*</span><span class="identifier">L</span><span class="special">,</span> <span class="identifier">L</span><span class="special">)</span>
+</pre>
+<p>
+ Using the same variables, the mathematical expression of the performance
+ scalability of transactional memory is:
+ </p>
+<pre class="programlisting"><span class="identifier">S</span><span class="special">(</span><span class="identifier">TM</span><span class="special">)</span> <span class="special">=</span> <span class="special">(</span><span class="identifier">P</span> <span class="special">*</span> <span class="identifier">L</span><span class="special">)</span> <span class="special">+</span> <span class="identifier">L</span>
+</pre>
+<p>
+ As should be clear from the above equations, mutual exclusion cannot achieve
+          the same performance scalability as TM. This is because TM supports truly
+ optimistic concurrency and mutual exclusion is confined to pessimistic
+ concurrency. While other examples exist that demonstrate optimistic concurrency
+ can increase throughput via contention management, the above equations
+          capture the fundamental mathematical limitation of mutual exclusion's
+ performance scalability.
+ </p>
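+<p>
+          As a purely illustrative calculation (the values are arbitrary), take P =
+          4 processes and L = 2 memory locations. Following the reader/writer counts
+          stated above:
+          </p>
+<pre class="programlisting">S(ME) = max(4*2, 2) = 8      (8 concurrent readers, or 2 concurrent writers, but not both)
+S(TM) = (4*2) + 2   = 10     (8 concurrent readers and 2 concurrent writers together)
+</pre>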
+<a name="toward_boost_stm.appendices.rationale.intro.modularity"></a><h5>
+<a name="id4862962"></a>
+ Modularity
+ </h5>
+<p>
+ Software modularity is an important aspect of software that is necessary
+ for its reuse. Formally, software is modular if it can be composed in a
+ new system without altering its internal implementation. Informally, software
+ is modular if it can be used, in its entirety, through its interface.
+ </p>
+<p>
+ By making software modular, it can be freely used in an unlimited number
+ of software systems. Without software modularity, software can only be
+ used in the original system where it was written. Clearly, without software
+          modularity, software cannot be reused. Because most software development
+          is based on extensive library use, software reuse is an integral part
+          of software development. As such, limiting software reuse would result
+          in severely hampered development capabilities and longer overall development time.
+ For these reasons, software modularity is vital for any software paradigm
+ to be practical. Software paradigms that do not support software modularity
+ are, in short, impractical.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.mutual_exclusion_and_software_modularity"></a><h5>
+<a name="id4863025"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.mutual_exclusion_and_software_modularity">Mutual
+ Exclusion and Software Modularity</a>
+ </h5>
+<p>
+ In this section, we show that mutual exclusion, regardless of its implementation,
+ fails to deliver software modularity. We demonstrate this through a running
+          example, started in Figure 7, which implements inc(), mult() and get(); these
+          functions use lock G to implement increment, multiply and
+          get operations for the shared data, respectively.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="keyword">void</span> <span class="identifier">inc</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span>
+<span class="number">2</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span> <span class="identifier">g</span> <span class="special">+=</span> <span class="identifier">v</span><span class="special">;</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
+<span class="number">3</span> <span class="special">}</span>
+<span class="number">4</span>
+<span class="number">5</span> <span class="keyword">void</span> <span class="identifier">mult</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span>
+<span class="number">6</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span> <span class="identifier">g</span> <span class="special">*=</span> <span class="identifier">v</span><span class="special">;</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
+<span class="number">7</span> <span class="special">}</span>
+<span class="number">8</span>
+<span class="number">9</span> <span class="keyword">int</span> <span class="identifier">get</span><span class="special">()</span> <span class="special">{</span>
+<span class="number">10</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span> <span class="keyword">int</span> <span class="identifier">v</span> <span class="special">=</span> <span class="identifier">g</span><span class="special">;</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
+<span class="number">11</span> <span class="keyword">return</span> <span class="identifier">v</span><span class="special">;</span>
+<span class="number">12</span> <span class="special">}</span>
+
+<span class="identifier">Figure</span> <span class="number">7.</span> <span class="identifier">Mutual</span> <span class="identifier">Exclusion</span> <span class="keyword">for</span> <span class="identifier">Increment</span><span class="special">,</span> <span class="identifier">Multiply</span> <span class="keyword">and</span> <span class="identifier">Get</span> <span class="identifier">of</span> <span class="identifier">Shared</span> <span class="identifier">Variable</span><span class="special">.</span>
+</pre>
+<p>
+ Now suppose a programmer wants to increment and multiply by some values
+ within the same atomic operation. The initial implementation may look like
+ the following.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span>
+<span class="number">2</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span>
+</pre>
+<p>
+ An unwanted side-effect of such an implementation is the exposure of the
+ intermediate state of g between inc() and mult(). A second thread performing
+          a get() may read an inconsistent value of g: the value of g between inc()
+          and mult(). This is demonstrated in the timing diagram of Figure 8.
+ </p>
+<pre class="programlisting"><span class="identifier">Figure</span> <span class="number">8.</span> <span class="identifier">Example</span> <span class="identifier">of</span> <span class="identifier">Exposed</span> <span class="identifier">State</span> <span class="identifier">of</span> <span class="identifier">Mutual</span> <span class="identifier">Exclusion</span><span class="special">.</span>
+</pre>
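+<p>
+          The exposed state can be reproduced with the following self-contained
+          sketch (hypothetical; it simply combines Figure 7 with the two-call
+          sequence above). Depending on when the observer thread runs, get() returns
+          the value of g before inc(), after mult(), or in the inconsistent
+          intermediate state between the two.
+          </p>
+<pre class="programlisting">#include &lt;iostream&gt;
+#include &lt;mutex&gt;
+#include &lt;thread&gt;
+
+// Figure 7's functions, reproduced so the sketch is self-contained.
+int g = 0;
+std::mutex G;
+
+void inc(int v)  { G.lock(); g += v; G.unlock(); }
+void mult(int v) { G.lock(); g *= v; G.unlock(); }
+int  get()       { G.lock(); int v = g; G.unlock(); return v; }
+
+int main() {
+    std::thread updater([] { inc(2); mult(-3); });               // intended to be one atomic step
+    std::thread observer([] { std::cout &lt;&lt; get() &lt;&lt; '\n'; });    // may print 0, -6, or the exposed 2
+    updater.join();
+    observer.join();
+}
+</pre>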
+<p>
+ If the programmer needs the inc() and mult() operations to be executed
+ together, without an intermediate state being revealed, he or she could
+ make lock G reentrant. Reentrant locks are locks that can be obtained multiple
+ times by a single thread without deadlocking. If G is made reentrant, the
+ following code could be used to make inc(a) and mult(-b) atomic. A basic
+ implementation of a reentrant lock is to associate a counter with its lock
+ and increment the counter each time the lock() interface is called and
+ to decrement the counter each time the unlock() interface is called. The
+          reentrant lock is only truly locked when a call to lock() is made while
+ its associated counter is 0. Likewise, the reentrant lock is only truly
+          unlocked when a call to unlock() is made while its associated counter is
+ 1.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
+<span class="number">2</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span>
+<span class="number">3</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span>
+<span class="number">4</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
+</pre>
+<p>
+          If the above code uses a reentrant lock, it achieves the programmer's
+          intended atomicity for inc() and mult(), isolating the state between them
+          and preventing the unwanted side-effect shown in Figure 8. However, this
+          atomicity is only achieved by exposing the implementation details of inc()
+          and mult(). In particular, if the programmer had not known that lock G was
+          used within inc() and mult(), composing inc() and mult() into a single
+          atomic operation would have been impossible.
+ </p>
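+<p>
+          For concreteness, the counter-based reentrant lock just described might be
+          sketched as follows. This is an illustrative example only: it is not part of
+          TBoost.STM, it glosses over the memory-model subtleties of reading owner_ and
+          count_ from non-owning threads, and in practice a recursive mutex supplied by
+          a threading library would be used instead.
+        </p>
+<pre class="programlisting">#include &lt;mutex&gt;
+#include &lt;thread&gt;
+
+// Counter-based reentrant lock built on a plain (non-reentrant) mutex.
+class reentrant_lock {
+    std::mutex      mtx_;    // underlying non-reentrant lock
+    std::thread::id owner_;  // thread currently holding mtx_
+    unsigned        count_;  // recursion depth; 0 means "truly unlocked"
+public:
+    reentrant_lock() : count_(0) {}
+
+    void lock() {
+        if (owner_ == std::this_thread::get_id()) {
+            ++count_;                        // re-entry by the owning thread: just count
+            return;
+        }
+        mtx_.lock();                         // truly locked on the 0 -> 1 transition
+        owner_ = std::this_thread::get_id();
+        count_ = 1;
+    }
+
+    void unlock() {
+        if (--count_ == 0) {                 // truly unlocked on the 1 -> 0 transition
+            owner_ = std::thread::id();      // no owner any more
+            mtx_.unlock();
+        }
+    }
+};
+</pre>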
+<p>
+          Grouping operations atomically from outside is impossible with embedded
+          mutual exclusion unless the implementation details are exposed, because
+          mutual exclusion is built around named variables that the programmer
+          specifies to guard critical sections. Because these variables are named,
+          they cannot be abstracted away, and any programmer wishing to reuse the
+          mutually exclusive code must be able to access and extend its implementation
+          details.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="keyword">void</span> <span class="identifier">inc</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="identifier">g</span> <span class="special">+=</span> <span class="identifier">v</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+<span class="number">2</span>
+<span class="number">3</span> <span class="keyword">void</span> <span class="identifier">mult</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="identifier">g</span> <span class="special">*=</span> <span class="identifier">v</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+<span class="number">4</span>
+<span class="number">5</span> <span class="keyword">int</span> <span class="identifier">get</span><span class="special">()</span> <span class="special">{</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="keyword">return</span> <span class="identifier">g</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
+
+<span class="identifier">Figure</span> <span class="number">9.</span> <span class="identifier">TM</span> <span class="identifier">of</span> <span class="identifier">Increment</span><span class="special">,</span> <span class="identifier">Multiply</span> <span class="keyword">and</span> <span class="identifier">Get</span> <span class="identifier">of</span> <span class="identifier">Shared</span> <span class="identifier">Variable</span><span class="special">.</span>
+</pre>
+<a name="toward_boost_stm.appendices.rationale.intro.summary_of_mutual_exclusion_modularity"></a><h5>
+<a name="id4864121"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.summary_of_mutual_exclusion_modularity">Summary
+ of Mutual Exclusion Modularity</a>
+ </h5>
+<p>
+          As presented at the beginning of this section, software modularity can
+          be informally understood as a component's ability to be used entirely from
+          its interface. Therefore, components that cannot be used entirely from
+          their interface, that is, components that must expose their implementation
+          details in order to be extended, are not modular. As such, the paradigm of
+          mutual exclusion does not support software modularity.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.transactional_memory_and_software_modularity"></a><h5>
+<a name="id4864158"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.transactional_memory_and_software_modularity">Transactional
+ Memory and Software Modularity</a>
+ </h5>
+<p>
+          Transactional memory works in a fundamentally different manner from mutual
+          exclusion with regard to both its interface and its implementation. To begin, as
+ demonstrated in Section 3, TMs do not generally expose any of their implementation
+ details to client code. In fact, in many TMs, client code is more versatile
+ if it knows and assumes nothing about the active implementation of the
+ TM. By abstracting away details of TM implementation, a TM subsystem can
+ adapt its behavior to the most efficient configuration for the program's
+ current workload, much like the algorithms used for efficient operation
+ of processes controlled by operating systems. TM uses such abstractions
+ to optimize the performance of concurrent programs using various consistency
+ checking methods, conflict detection times, updating policies, and contention
+ management schemes.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.achieving_tm_software_modularity"></a><h5>
+<a name="id4864201"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.achieving_tm_software_modularity">Achieving
+ TM Software Modularity</a>
+ </h5>
+<p>
+          TM achieves software modularity by allowing transactions to nest. With
+          transactional nesting, a transaction is wrapped inside another transaction
+          simply because the outer transaction calls a method that contains its own
+          transaction; the result is a single transaction composed of both the parent
+          and the child. Furthermore, this is achieved without altering or even
+          understanding the child transaction's implementation. To best demonstrate
+          transactional nesting, we reuse the prior mutual exclusion example shown
+          in Figure 7 and implement it using transactions as shown in Figure 9.
+ </p>
+<p>
+ As before, the programmer's goal is to implement a combination of inc()
+          and mult() executed in an atomic fashion. The basic, and incorrect,
+          implementation is demonstrated below:
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span>
+<span class="number">2</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span>
+</pre>
+<p>
+ Even with transactions, this approach fails because the transactions within
+ inc() and mult() begin and end inside their respective functions. However,
+ to make the above operations atomic, the programmer need only make the
+ following modification shown in Figure 10.
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="comment">// transaction {
+</span><span class="number">2</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span> <span class="comment">// transaction { g += a; }
+</span><span class="number">3</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span> <span class="comment">// transaction { g *= -b; }
+</span><span class="number">4</span> <span class="special">}</span> <span class="comment">// }
+</span>
+<span class="identifier">Figure</span> <span class="number">10.</span> <span class="identifier">Modularity</span><span class="special">:</span> <span class="identifier">Transaction</span> <span class="identifier">of</span> <span class="identifier">Increment</span> <span class="keyword">and</span> <span class="identifier">Multiply</span><span class="special">.</span>
+</pre>
+<p>
+ In effect, the TM system subsumes the transactions that are nested inside
+ of the inc() and mult() operations. The left side of Figure 10 shows the
+ actual code of the transaction, while the right side shows the child transactions
+ that are subsumed by the parent transaction.
+ </p>
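+<p>
+          For reference, the grouping in Figure 10 might also be written with the
+          BOOST_STM_TRANSACTION / BOOST_STM_END_TRANSACTION macros used elsewhere in
+          this documentation. The sketch below is illustrative only: the wrapper
+          function name is hypothetical, and g, inc() and mult() are those of Figure 9.
+        </p>
+<pre class="programlisting">// Sketch: the parent transaction subsumes the inner transactions of Figure 9.
+void inc_and_mult(int a, int b)
+{
+    BOOST_STM_TRANSACTION
+    {
+        inc(a);      // inner transaction { g += a; } is subsumed by the parent
+        mult(-b);    // inner transaction { g *= -b; } is subsumed by the parent
+    } BOOST_STM_END_TRANSACTION
+}
+</pre>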
+<p>
+ Because transactions are isolated and atomic, the resulting state of g,
+ from operations inc() and mult(), is invisible to outside observers until
+ the transaction is committed. As such, outside threads cannot view any
+ intermediate state constructed by partial transaction execution. The result
+ of such isolated behavior is the guaranteed consistent concurrent execution
+ of interleaved accesses to shared memory from in-flight transactions. This
+ is demonstrated in Figure 11; let g=0 and assume deferred update is the
+ active updating policy, as explained in Section 4.2.
+ </p>
+<pre class="programlisting"><span class="identifier">Figure</span> <span class="number">11.</span> <span class="identifier">Example</span> <span class="identifier">of</span> <span class="identifier">Isolated</span> <span class="keyword">and</span> <span class="identifier">Consistent</span> <span class="identifier">State</span> <span class="identifier">of</span> <span class="identifier">TM</span><span class="special">.</span>
+</pre>
+<p>
+ As shown in Figure 11, multiple concurrently executing threads can read
+ and write to the same shared memory in a consistent and isolated fashion
+ when using TM. In the example, thread T2 performs x = get() after T1 has
+ already executed inc(a). However, since T1 has not yet committed its transaction,
+ T2's view of shared data g is consistent (g=0). When T2 begins the commit
+ phase of its transaction, the TM subsystem verifies that shared data g
+ has not been updated since it initially read it. Since no other transaction
+ has updated shared data g, T2's transaction is permitted to commit. Thread
+ T1 then continues with its mult() operation and then enters its commit
+ phase. The TM subsystem also verifies the consistency of T1's transaction
+          before it is allowed to commit. Again, since no other transaction has updated
+          shared data g between T1's reads and writes to it, T1's transaction is permitted
+          to commit.
+ </p>
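+<p>
+          To make the walk-through above more concrete, the following generic sketch
+          shows one way deferred update with commit-time validation can be organized:
+          each shared variable carries a version number, a transaction records the
+          version it saw at first read and buffers its writes, and commit publishes the
+          buffered writes only if every recorded version is still current. This is not
+          TBoost.STM's implementation; all names are illustrative, the read/write sets
+          have a fixed size, reads do not consult the write set, and the commit is
+          assumed to run under a single global lock.
+        </p>
+<pre class="programlisting">// Generic illustration of deferred update with commit-time validation.
+struct shared_var { int value; unsigned version; };
+
+struct read_entry  { shared_var* var; unsigned seen_version; };
+struct write_entry { shared_var* var; int new_value; };
+
+struct tx {
+    read_entry  reads[16];   int n_reads;
+    write_entry writes[16];  int n_writes;
+
+    tx() : n_reads(0), n_writes(0) {}
+
+    int read(shared_var* v) {
+        reads[n_reads].var = v;                 // remember the version seen at read time
+        reads[n_reads].seen_version = v->version;
+        ++n_reads;
+        return v->value;
+    }
+
+    void write(shared_var* v, int val) {
+        writes[n_writes].var = v;               // buffer the update; shared memory untouched
+        writes[n_writes].new_value = val;
+        ++n_writes;
+    }
+
+    // Assumed to run under a single global commit lock, as in the T1/T2 example:
+    // the transaction commits only if nothing it read has been updated since.
+    bool commit() {
+        for (int i = 0; i != n_reads; ++i)
+            if (reads[i].var->version != reads[i].seen_version)
+                return false;                   // conflict detected: abort
+        for (int i = 0; i != n_writes; ++i) {
+            writes[i].var->value = writes[i].new_value;   // publish deferred writes
+            ++writes[i].var->version;
+        }
+        return true;
+    }
+};
+</pre>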
+<p>
+ The above analysis demonstrates that software modularity can be achieved
+ in TM through transactional nesting (Figure 10). In this case, the specific
+          software modularity achieved is the extension of an existing critical section.
+ Critical section extension was also possible with mutual exclusion, as
+ demonstrated in Section 5.1, but only through exposing the details behind
+ the mutual exclusion implementation. Due to this, mutual exclusion fails
+ to deliver a practical level of software modularity.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.summary_of_transactional_memory_modularity"></a><h5>
+<a name="id4864590"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.summary_of_transactional_memory_modularity">Summary
+ of Transactional Memory Modularity</a>
+ </h5>
+<p>
+ TM supports software modularity by allowing transactions to nest, to any
+ depth, while logically grouping the shared data accesses within the transactions
+ into an atomic, consistent and isolated (ACI) operation. Transactional
+ nesting is natural to the programmer because nested transactions behave
+ in the same manner as unnested transactions. TM's ACI support ensures transactions
+          will behave correctly regardless of whether the transaction is used
+          by itself or subsumed into a larger transaction.
+ </p>
+<a name="toward_boost_stm.appendices.rationale.intro.c___library_language_like_solution"></a><h5>
+<a name="id4864628"></a>
+ <a href="rationale.html#toward_boost_stm.appendices.rationale.intro.c___library_language_like_solution">C++
+ library language-like solution</a>
+ </h5>
+<p>
+          Parallel programming has recently attracted a flurry of research attention.
+          Among the active areas is a push for high-level languages to offer native
+ support for parallel programming primitives. The next version of C++ will
+ incorporate library support for threads, while numerous researchers are
+ exploring ways to extend C++ to support transactional memory (TM).
+ </p>
+<p>
+ A strength of C++ is its support for automatic objects. Rather than requiring
+ that parallel primitives be added directly to the language, automatic objects
+ in C++ can be used to implement much of their necessary infrastructure.
+ The automatic object approach is natural from a language perspective, provides
+ full algorithmic control to the end programmer, and demonstrates C++'s
+ linguistic elegance. The disadvantage of this approach is its added programmatic
+          overhead. When only automatic objects are used, certain programming errors,
+          such as accidental scope removal and incorrectly programmed transactional
+          retry behavior, can arise.
+ </p>
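+<p>
+          As an aside, the accidental scope removal error mentioned above is easy to
+          reproduce with any RAII-style guard. The sketch below uses Boost.Thread's
+          scoped lock purely for illustration; it is not a TBoost.STM interface.
+        </p>
+<pre class="programlisting">#include &lt;boost/thread/mutex.hpp&gt;
+
+boost::mutex mtx;
+int shared_counter = 0;
+
+// Accidental scope removal: the guard's scope ends earlier than intended,
+// so part of the critical section runs unprotected.
+void buggy()
+{
+    {
+        boost::mutex::scoped_lock guard(mtx);  // lock released at the inner brace
+    }
+    ++shared_counter;                          // runs without the lock held
+}
+
+void intended()
+{
+    boost::mutex::scoped_lock guard(mtx);      // held until the end of the function
+    ++shared_counter;                          // protected as intended
+}
+</pre>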
+<p>
+          In light of this, there are unique trade-offs between language-based and
+          library-based parallel primitives. Language-based solutions minimize syntactic
+          clutter, which reduces programmer-related errors, but they are seemingly irreversible
+ and, if incorrect, can have crippling effects upon the language. Library-based
+ solutions increase programmer control and flexibility, but place substantial
+ pressure on the programmer to avoid minute programming errors. A good compromise
+ is a solution that behaves like a language extension, but is implemented
+ within a library. By implementing parallel primitives within a library
+ that uses language-like interfaces, programmer pressure is reduced, implementation
+ updates are seamless, and full programmer control is achieved through library
+ extensibility.
+ </p>
+<p>
+          TBoost.STM presents such a language-like solution for C++, using a generic
+          library coupled with a deliberate use of the preprocessor. The combination
+          of these components facilitates a simple, yet powerful, parallel programming
+          interface in C++.
+ </p>
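+<p>
+          One common way to obtain such language-like blocks, assumed here purely for
+          illustration, is a macro that expands to a for statement controlling an
+          automatic object whose constructor and destructor bracket the user's block.
+          The names below are hypothetical and the transaction bookkeeping and retry
+          logic are elided, so this is a sketch of the general technique rather than
+          TBoost.STM's actual macros.
+        </p>
+<pre class="programlisting">// Illustrative-only sketch of a language-like block built from a macro and an
+// automatic object; real macros would also handle retry and exceptions.
+struct tx_scope {
+    bool done;
+    tx_scope() : done(false) { /* begin the transaction here */ }
+    ~tx_scope()              { /* commit or roll back here */ }
+};
+
+// The for statement runs the user's block exactly once while keeping the
+// automatic object alive around it, so the call site reads like a keyword.
+#define SKETCH_ATOMIC for (tx_scope _tx; !_tx.done; _tx.done = true)
+
+int x = 0;
+int y = 0;
+
+void example()
+{
+    SKETCH_ATOMIC {
+        ++x;       // executed inside the scope of _tx
+        --y;
+    }
+}
+</pre>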
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h4 class="title">
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts"></a><a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts" title="TM-Specific
Concepts">TM-Specific
Concepts</a>
@@ -267,8 +1124,8 @@
read).
</p>
<div class="table">
-<a name="id4859643"></a><p class="title"><b>Table 1.1. Comparaison with other STM systems</b></p>
-<table class="table" summary="Comparaison with other STM systems">
+<a name="id4865169"></a><p class="title"><b>Table 1.1. Conflict Detection</b></p>
+<table class="table" summary="Conflict Detection">
<colgroup>
<col>
<col>
@@ -322,7 +1179,7 @@
</td>
<td>
<p>
- <span class="bold"><strong>NO</strong></span>
+ NO
</p>
</td>
</tr>
@@ -472,7 +1329,7 @@
handle each specific problem with the most appropriate configuration.
</p>
<div class="table">
-<a name="id4860026"></a><p class="title"><b>Table 1.2. Consistency versus Updating policies composition</b></p>
+<a name="id4865546"></a><p class="title"><b>Table 1.2. Consistency versus Updating policies composition</b></p>
<table class="table" summary="Consistency versus Updating policies composition">
<colgroup>
<col>
@@ -627,7 +1484,7 @@
management</a>
</h5></div></div></div>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.priority_based_tasks"></a><h6>
-<a name="id4860355"></a>
+<a name="id4865865"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.priority_based_tasks">Priority-Based
Tasks</a>
</h6>
@@ -642,7 +1499,7 @@
cases, user-defined priority-based transactions are necessary.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.approach"></a><h6>
-<a name="id4860394"></a>
+<a name="id4865904"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.approach">Approach</a>
</h6>
<p>
@@ -658,7 +1515,7 @@
checking models. Last, we present our experimental results.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.attacking__amp__victim_transactions"></a><h6>
-<a name="id4860432"></a>
+<a name="id4865942"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.attacking__amp__victim_transactions">Attacking
& Victim Transactions</a>
</h6>
@@ -678,7 +1535,7 @@
transaction since Ta may abort it.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.user_defined_priority_based_transactions"></a><h6>
-<a name="id4860481"></a>
+<a name="id4865991"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.user_defined_priority_based_transactions">User-Defined
Priority-Based Transactions</a>
</h6>
@@ -699,7 +1556,7 @@
priority-based transactional environments.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.extensible_polymorphic_contention_management_interface"></a><h6>
-<a name="id4860530"></a>
+<a name="id4866040"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.contention_management.extensible_polymorphic_contention_management_interface">Extensible
Polymorphic Contention Management Interface</a>
</h6>
@@ -794,7 +1651,7 @@
of experimental benchmarks on .
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.introduction"></a><h6>
-<a name="id4861239"></a>
+<a name="id4866749"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.introduction">Introduction</a>
</h6>
<p>
@@ -847,7 +1704,7 @@
</li>
</ol></div>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.background"></a><h6>
-<a name="id4861367"></a>
+<a name="id4866849"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.background">Background</a>
</h6>
<p>
@@ -958,7 +1815,7 @@
later sections.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.locks_outside_of_transactions__lot_"></a><h6>
-<a name="id4861949"></a>
+<a name="id4867423"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.locks_outside_of_transactions__lot_">Locks
Outside of Transactions (LoT)</a>
</h6>
@@ -1009,7 +1866,7 @@
<span class="number">27</span> <span class="keyword">int</span> <span class="identifier">lock3</span><span class="special">()</span> <span class="special">{</span> <span class="comment">/* no conflict */</span> <span class="special">}</span>
</pre>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lot_full_lock_protection"></a><h6>
-<a name="id4863112"></a>
+<a name="id4868563"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lot_full_lock_protection">LoT
Full Lock Protection</a>
</h6>
@@ -1039,7 +1896,7 @@
stalling.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lot_tm_lock_protection"></a><h6>
-<a name="id4863173"></a>
+<a name="id4868624"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lot_tm_lock_protection">LoT
TM-Lock Protection</a>
</h6>
@@ -1082,7 +1939,7 @@
the third lock protection policy.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lot_tx_lock_protection"></a><h6>
-<a name="id4863352"></a>
+<a name="id4868796"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lot_tx_lock_protection">LoT
TX-Lock Protection</a>
</h6>
@@ -1125,7 +1982,7 @@
cooperative performance while still adhering to the rule.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.locks_inside_of_transactions__lit_"></a><h6>
-<a name="id4863876"></a>
+<a name="id4869330"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.locks_inside_of_transactions__lit_">Locks
Inside of Transactions (LiT)</a>
</h6>
@@ -1239,7 +2096,7 @@
for the completeness of the example.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_full_lock_protection"></a><h6>
-<a name="id4864910"></a>
+<a name="id4870405"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_full_lock_protection">LiT
Full-Lock Protection</a>
</h6>
@@ -1269,7 +2126,7 @@
are all allowed to resume.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_tm_lock_protection"></a><h6>
-<a name="id4864971"></a>
+<a name="id4870467"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_tm_lock_protection">LiT
TM-Lock Protection</a>
</h6>
@@ -1299,7 +2156,7 @@
LiT protection policy, LiT TX-lock protection.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_tx_lock_protection"></a><h6>
-<a name="id4865034"></a>
+<a name="id4870529"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_tx_lock_protection">LiT
TX-Lock Protection</a>
</h6>
@@ -1339,7 +2196,7 @@
passed. When lockL3 completes tx3 begins and runs through to completion.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lock_composition"></a><h6>
-<a name="id4865111"></a>
+<a name="id4870625"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lock_composition">Lock
Composition</a>
</h6>
@@ -1423,7 +2280,7 @@
lock until the transaction commits.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.criticality_of_lit_lock_composition"></a><h6>
-<a name="id4865816"></a>
+<a name="id4871330"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.criticality_of_lit_lock_composition">Criticality
of LiT Lock Composition</a>
</h6>
@@ -1474,7 +2331,7 @@
cumulative affects of the move() operation.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.understanding_lit_lock_composition"></a><h6>
-<a name="id4866426"></a>
+<a name="id4871941"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.understanding_lit_lock_composition">Understanding
LiT Lock Composition</a>
</h6>
@@ -1563,7 +2420,7 @@
shared memory x2.
</p>
<a name="toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_lock_identification"></a><h6>
-<a name="id4867520"></a>
+<a name="id4873017"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.tm_specific_concepts.lock_aware_transaction.lit_lock_identification">LiT
Lock Identification</a>
</h6>
@@ -1825,7 +2682,7 @@
<div class="itemizedlist"><ul type="disc">
<li>
mixin: As the user defines its own class he can define the exact
- constructors avoiding the problem added the base parameter
+                      constructors avoiding the problem added by the base parameter.
</li>
<li>
intrinsic: the wrapper must define generic constructors having
@@ -1873,9 +2730,14 @@
The following table is a compilation of the preceding analysis:
</p>
<div class="table">
-<a name="id4868096"></a><p class="title"><b>Table 1.3. Comparaison with other STM systems</b></p>
-<table class="table" summary="Comparaison with other STM systems">
-<colgroup><col></colgroup>
+<a name="id4873576"></a><p class="title"><b>Table 1.3. Mixin versus wrapper</b></p>
+<table class="table" summary="Mixin versus wrapper">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
<thead><tr>
<th>
<p>
@@ -1907,59 +2769,156 @@
</td>
<td>
<p>
- [No
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ <span class="bold"><strong>existing variable</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ <span class="bold"><strong>existing class hierarchy</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
</p>
</td>
</tr>
<tr>
<td>
<p>
- *Yes
+ <span class="bold"><strong>avoid the forwarding problem for base classes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
</p>
</td>
-<td class="auto-generated"> </td>
</tr>
<tr>
<td>
<p>
- *Yes
+ <span class="bold"><strong>avoid the forwarding problem for derived
+ classes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ <span class="bold"><strong>Works for class hierarchies</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ No
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ <span class="bold"><strong>Performs optimally</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>Yes</strong></span>
+ </p>
+ </td>
+<td>
+ <p>
+ No
</p>
</td>
-<td class="auto-generated"> </td>
</tr>
</tbody>
</table>
</div>
-<pre class="programlisting"><span class="special">[</span>
- <span class="special">[[*</span><span class="identifier">existing</span> <span class="identifier">variable</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span>
-<span class="special">]</span>
-<span class="special">[</span>
- <span class="special">[[*</span><span class="identifier">existing</span> <span class="keyword">class</span> <span class="identifier">hierarchy</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span>
-<span class="special">]</span>
-<span class="special">[</span>
- <span class="special">[[*</span><span class="identifier">avoid</span> <span class="identifier">the</span> <span class="identifier">forwarding</span> <span class="identifier">problem</span> <span class="keyword">for</span> <span class="identifier">base</span> <span class="identifier">classes</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">Yes</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span>
-<span class="special">]</span>
-<span class="special">[</span>
- <span class="special">[[*</span><span class="identifier">avoid</span> <span class="identifier">the</span> <span class="identifier">forwarding</span> <span class="identifier">problem</span> <span class="keyword">for</span> <span class="identifier">derived</span> <span class="identifier">classes</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span>
-<span class="special">]</span>
-<span class="special">[</span>
- <span class="special">[[*</span><span class="identifier">Works</span> <span class="keyword">for</span> <span class="keyword">class</span> <span class="identifier">hierarchies</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span>
-<span class="special">]</span>
-<span class="special">[</span>
- <span class="special">[[*</span><span class="identifier">Performs</span> <span class="identifier">optimaly</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span> <span class="special">[[*</span><span class="identifier">Yes</span><span class="special">]]</span> <span class="special">[[</span><span class="identifier">No</span><span class="special">]]</span>
-<span class="special">]</span>
-</pre>
-<p>
- [ [<span class="bold"><strong>existing variable</strong></span>] [[No]] [[No]]
- [<span class="bold"><strong>Yes</strong></span>] ] [ [<span class="bold"><strong>existing
- class hierarchy</strong></span>] [[No]] [[No]] [<span class="bold"><strong>Yes</strong></span>]
- ] [ [<span class="bold"><strong>avoid the forwarding problem for base classes</strong></span>]
- [[Yes]] [[No]] [<span class="bold"><strong>Yes</strong></span>] ] [ [<span class="bold"><strong>avoid the forwarding problem for derived classes</strong></span>]
- [[No]] [[No]] [<span class="bold"><strong>Yes</strong></span>] ] [ [<span class="bold"><strong>Works
- for class hierarchies</strong></span>] [<span class="bold"><strong>Yes</strong></span>]
- [[No]] [<span class="bold"><strong>Yes</strong></span>] ] [ [<span class="bold"><strong>Performs
- optimaly</strong></span>] [<span class="bold"><strong>Yes</strong></span>] [<span class="bold"><strong>Yes</strong></span>] [[No]] ] ]
- </p>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h5 class="title">
@@ -2159,8 +3118,10 @@
support in the near future [4].
</p>
<div class="table">
-<a name="id4869236"></a><p class="title"><b>Table 1.4. Comparaison with other STM systems</b></p>
-<table class="table" summary="Comparaison with other STM systems">
+<a name="id4878604"></a><p class="title"><b>Table 1.4. TBoost.STM Mutual Exclusion Locking Parallel
+      Constructs</b></p>
+<table class="table" summary="TBoost.STM Mutual Exclusion Locking Parallel
+      Constructs">
<colgroup>
<col>
<col>
@@ -2253,11 +3214,8 @@
</tbody>
</table>
</div>
-<p>
- Table 1. TBoost.STM Mutual Exclusion Locking Parallel Constructs.
- </p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.library_based_lock_implementations"></a><h6>
-<a name="id4869444"></a>
+<a name="id4878808"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.library_based_lock_implementations">Library-based
Lock Implementations</a>
</h6>
@@ -2305,7 +3263,7 @@
scoping and programmer error.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.pitfalls_in_scoping_of_automatic_object_locks"></a><h6>
-<a name="id4869852"></a>
+<a name="id4879205"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.pitfalls_in_scoping_of_automatic_object_locks">Pitfalls
in Scoping of Automatic Object Locks</a>
</h6>
@@ -2378,7 +3336,7 @@
of locks results in unoptimized performance.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.library_based_transaction_implementations"></a><h6>
-<a name="id4870287"></a>
+<a name="id4879651"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.library_based_transaction_implementations">Library-based
Transaction Implementations</a>
</h6>
@@ -2432,7 +3390,7 @@
behaviors.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.pitfalls_in_transactional_execution_of_automatic_objects"></a><h6>
-<a name="id4870660"></a>
+<a name="id4880024"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.pitfalls_in_transactional_execution_of_automatic_objects">Pitfalls
in Transactional Execution of Automatic Objects</a>
</h6>
@@ -2527,7 +3485,7 @@
of direct language integration of TM instead of API-only approaches.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.disadvantages_of_language_based_transactional_integration"></a><h6>
-<a name="id4871659"></a>
+<a name="id4881023"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.disadvantages_of_language_based_transactional_integration">Disadvantages
of Language Based Transactional Integration</a>
</h6>
@@ -2564,7 +3522,7 @@
objects nor language-based parallel abstractions alone can provide.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.parallel_constructs_for_mutually_exclusive_locks"></a><h6>
-<a name="id4871736"></a>
+<a name="id4881100"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.parallel_constructs_for_mutually_exclusive_locks">Parallel
Constructs for Mutually Exclusive Locks</a>
</h6>
@@ -2600,7 +3558,7 @@
in client code.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.exception_based_timed_locks"></a><h6>
-<a name="id4872238"></a>
+<a name="id4881610"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.exception_based_timed_locks">Exception-based
Timed Locks</a>
</h6>
@@ -2687,7 +3645,7 @@
<span class="identifier">Figure</span> <span class="number">12.</span> <span class="identifier">Optimized</span> <span class="identifier">Timed</span> <span class="identifier">Locking</span> <span class="identifier">with</span> TBoost.STM<span class="special">.</span>
</pre>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.parallel_constructs_for_transactional_memory"></a><h6>
-<a name="id4873497"></a>
+<a name="id4882869"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.parallel_constructs_for_transactional_memory">Parallel
Constructs for Transactional Memory</a>
</h6>
@@ -2765,7 +3723,7 @@
correct behavior and then throwing an exception upward.
</p>
<a name="toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.transaction_nesting"></a><h6>
-<a name="id4874159"></a>
+<a name="id4883513"></a>
<a href="rationale.html#toward_boost_stm.appendices.rationale.c___and_library_specific_concepts.language_like_macro_blocks.transaction_nesting">Transaction
Nesting</a>
</h6>
@@ -2826,7 +3784,7 @@
with other STM systems</a>
</h4></div></div></div>
<div class="table">
-<a name="id4874269"></a><p class="title"><b>Table 1.5. Comparaison
+<a name="id4883624"></a><p class="title"><b>Table 1.5. Comparison
with other STM systems</b></p>
<table class="table" summary="Comparaison
with other STM systems">
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/todo.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/todo.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/appendices/todo.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -44,7 +44,7 @@
to do before review</a>
</h4></div></div></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.interface"></a><h5>
-<a name="id4881575"></a>
+<a name="id4886679"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.interface">Interface</a>
</h5>
<p>
@@ -192,7 +192,7 @@
</li>
</ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.adding_some_components_to_boost_to_preparing_boostification_of_stm"></a><h5>
-<a name="id4881902"></a>
+<a name="id4887006"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.adding_some_components_to_boost_to_preparing_boostification_of_stm">Adding
some components to Boost to preparing Boostification of STM</a>
</h5>
@@ -232,7 +232,7 @@
</li>
</ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.boostifying_stm"></a><h5>
-<a name="id4882006"></a>
+<a name="id4887110"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.boostifying_stm">Boostifying
STM</a>
</h5>
@@ -308,7 +308,7 @@
</li>
</ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.implementation"></a><h5>
-<a name="id4882182"></a>
+<a name="id4887290"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.implementation">Implementation</a>
</h5>
<p>
@@ -335,28 +335,28 @@
</li>
</ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.tests"></a><h5>
-<a name="id4882252"></a>
+<a name="id4887361"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.tests">Tests</a>
</h5>
<div class="itemizedlist"><ul type="disc"><li>
Add unit tests
</li></ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.examples"></a><h5>
-<a name="id4882283"></a>
+<a name="id4887392"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.examples">Examples</a>
</h5>
<div class="itemizedlist"><ul type="disc"><li>
Add unit tests
</li></ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.benchmarks"></a><h5>
-<a name="id4882313"></a>
+<a name="id4887422"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.benchmarks">Benchmarks</a>
</h5>
<div class="itemizedlist"><ul type="disc"><li>
Add some specific benchmarks.
</li></ul></div>
<a name="toward_boost_stm.appendices.todo.tasks_to_do_before_review.documentation"></a><h5>
-<a name="id4882344"></a>
+<a name="id4887452"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.tasks_to_do_before_review.documentation">Documentation</a>
</h5>
<div class="itemizedlist"><ul type="disc">
@@ -464,12 +464,12 @@
could be done after acceptation.
</p>
<a name="toward_boost_stm.appendices.todo.for_later_releases.integrate_with_stm_test_benchmarks_as_stamp_or_stmbench7"></a><h5>
-<a name="id4882586"></a>
+<a name="id4887694"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.for_later_releases.integrate_with_stm_test_benchmarks_as_stamp_or_stmbench7">Integrate
with STM test benchmarks as STAMP or STMBench7</a>
</h5>
<a name="toward_boost_stm.appendices.todo.for_later_releases.providing_closed_nested_transaction_that_are_not_flat"></a><h5>
-<a name="id4882612"></a>
+<a name="id4887720"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.for_later_releases.providing_closed_nested_transaction_that_are_not_flat">Providing
Closed Nested transaction that are not flat</a>
</h5>
@@ -479,12 +479,12 @@
the thread.
</p>
<a name="toward_boost_stm.appendices.todo.for_later_releases.allows_configuration_at_compile_time_and_run_time"></a><h5>
-<a name="id4882646"></a>
+<a name="id4887754"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.for_later_releases.allows_configuration_at_compile_time_and_run_time">Allows
configuration at compile-time and run-time</a>
</h5>
<a name="toward_boost_stm.appendices.todo.for_later_releases.add_explicit_outer_transaction"></a><h5>
-<a name="id4882671"></a>
+<a name="id4887779"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.for_later_releases.add_explicit_outer_transaction">Add
explicit outer transaction</a>
</h5>
@@ -504,22 +504,22 @@
that need a deep research.
</p>
<a name="toward_boost_stm.appendices.todo.more_recherch_needed.transactional_condition_variables"></a><h5>
-<a name="id4882726"></a>
+<a name="id4887834"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.more_recherch_needed.transactional_condition_variables">Transactional
condition variables</a>
</h5>
<a name="toward_boost_stm.appendices.todo.more_recherch_needed.mixing_stm_updating_policies"></a><h5>
-<a name="id4882750"></a>
+<a name="id4887858"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.more_recherch_needed.mixing_stm_updating_policies">Mixing
STM updating policies</a>
</h5>
<a name="toward_boost_stm.appendices.todo.more_recherch_needed.mixing_stm_consistency_checking"></a><h5>
-<a name="id4882775"></a>
+<a name="id4887883"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.more_recherch_needed.mixing_stm_consistency_checking">Mixing
STM consistency checking</a>
</h5>
<a name="toward_boost_stm.appendices.todo.more_recherch_needed.suspend_resume_transactions"></a><h5>
-<a name="id4882800"></a>
+<a name="id4887907"></a>
<a href="todo.html#toward_boost_stm.appendices.todo.more_recherch_needed.suspend_resume_transactions">Suspend/resume
transactions</a>
</h5>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/examples.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/examples.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/examples.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -6,8 +6,7 @@
<meta name="generator" content="DocBook XSL Stylesheets V1.69.1">
<link rel="start" href="../index.html" title="Chapter 1. Toward.Boost.STM">
<link rel="up" href="../index.html" title="Chapter 1. Toward.Boost.STM">
-<link rel="prev" href="reference/contention_managers.html" title="Contention
- Managers">
+<link rel="prev" href="reference/utilities.html" title="Utilities">
<link rel="next" href="appendices.html" title="Appendices">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
@@ -21,7 +20,7 @@
</tr></table>
<hr>
<div class="spirit-nav">
-<a accesskey="p" href="reference/contention_managers.html"><img src="../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../index.html"><img src="../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="appendices.html"><img src="../../../../../doc/html/images/next.png" alt="Next"></a>
+<a accesskey="p" href="reference/utilities.html"><img src="../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../index.html"><img src="../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="appendices.html"><img src="../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
@@ -41,7 +40,7 @@
</tr></table>
<hr>
<div class="spirit-nav">
-<a accesskey="p" href="reference/contention_managers.html"><img src="../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../index.html"><img src="../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="appendices.html"><img src="../../../../../doc/html/images/next.png" alt="Next"></a>
+<a accesskey="p" href="reference/utilities.html"><img src="../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../index.html"><img src="../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="appendices.html"><img src="../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
</body>
</html>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/overview/intro.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/overview/intro.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/overview/intro.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -22,840 +22,9 @@
<div class="spirit-nav">
<a accesskey="p" href="../overview.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../overview.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="../users_guide.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
-<div class="section" lang="en">
-<div class="titlepage"><div><div><h3 class="title">
+<div class="section" lang="en"><div class="titlepage"><div><div><h3 class="title">
<a name="toward_boost_stm.overview.intro"></a> Introduction
-</h3></div></div></div>
-<a name="toward_boost_stm.overview.intro.transactional_memory"></a><h4>
-<a name="id4765159"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.transactional_memory">Transactional
- Memory</a>
- </h4>
-<p>
- Transactional memory(TM) is a modern type of concurrency control that uses
- transactions as its synchronization mechanism. A transaction is a finite
- sequence of operations that are executed in an <span class="bold"><strong>atomic</strong></span>,
- <span class="bold"><strong>isolated</strong></span> and <span class="bold"><strong>consistent</strong></span>
- manner. The atomicity, isolation and consistency (<span class="bold"><strong>ACI</strong></span>)
- of transaction's are derived from the ACID principle in the database community.
- TM does not exhibit the D (durability) of ACID because unlike database transactions,
- TM transactions are not saved to permanent storage (e.g., hard drives).
- </p>
-<p>
- Transactions are executed speculatively (optimistically) and are checked
- for consistency at various points in the transaction's lifetime. Programmers
- specify the starting and ending points of a transaction. All of the operations
- between those points make up the transaction's execution body. Transactions
- are commonly represented using the atomic structure shown in the figure below.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="identifier">transaction</span>
-<span class="number">2</span> <span class="special">{</span>
-<span class="number">3</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
-<span class="number">4</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
-<span class="number">5</span> <span class="special">}</span>
-
-<span class="identifier">Simple</span> <span class="identifier">Transaction</span> <span class="identifier">Using</span> <span class="identifier">the</span> <span class="identifier">atomic</span> <span class="identifier">Keyword</span>
-</pre>
-<p>
- Once a transaction has started it either commits or aborts. A transaction's
- operations are only seen once the transaction has committed, providing the
- illusion that all of the operations occurred at a single instance in time.
- The instructions of a committed transaction are viewed as if they occurred
- as an indivisible event, not as a set of operations executed serially. The
- operations of an aborted transaction are never seen by other threads, even
- if such operations were executed within a transaction and then rolled back.
- </p>
-<p>
- In the case of the above example, if the transaction was committed both operations
- <code class="computeroutput"><span class="special">++</span><span class="identifier">x</span></code>
- and <code class="computeroutput"><span class="special">--</span><span class="identifier">y</span></code>
- would be made visible to other threads at the same instance in time. If the
- transaction the above example was aborted, neither operation (<code class="computeroutput"><span class="special">++</span><span class="identifier">x</span></code> and
- <code class="computeroutput"><span class="special">--</span><span class="identifier">y</span></code>)
- would appear to have occurred even if the local transaction executed one
- or both operations.
- </p>
-<p>
- TM offers three distinct advantages over other parallel programming abstractions.
- </p>
-<div class="orderedlist"><ol type="1">
-<li>
- TM is simple; transactions, the synchronization mechanism of TM, are easier
- to program than other synchronization mechanisms because they move shared
- memory management into the underlying TM subsystem, removing its complexity
- from the programmer's view. Moreover, TM exposes a simple programmer interface,
- reducing (or in some cases, removing) the potential for deadlock, livelock
- and priority inversion.
- </li>
-<li>
- TM is scalable; it achieves increased computational throughput when compared
- to other parallel programming abstractions by allowing multiple threads
- to speculatively execute the same critical section. When concurrently executing
- threads do not exhibit shared data conflicts, they are guaranteed to make
- forward progress.
- </li>
-<li>
- TM is modular; transactions can be nested to any depth and function as
- a single unit. This behavior allows application programmers to extend atomic
- library algorithms into atomic domain-specific algorithms without requiring
- the application programmers to understand the implementation details of
- the library algorithm.
- </li>
-</ol></div>
-<p>
- For these reasons, transactions are considered an important synchronization
- mechanism and TM is viewed as an important type of concurrency control. The
- remainder of this section presents TM from a viewpoint of (1) simplicity,
- (2) scalability and (3) modularity.
- </p>
-<a name="toward_boost_stm.overview.intro.simplicity"></a><h4>
-<a name="id4765469"></a>
- Simplicity
- </h4>
-<p>
- Synchronization problems, such as deadlocks, livelocks and priority inversion
- are common in software systems using mutual exclusion. TM avoids many of
- these problems by providing a synchronization mechanism that does not expose
- any of its implementation details to the programmer. The only low level interfaces
- the programmer needs to use for TM are:
- </p>
-<div class="itemizedlist"><ul type="disc">
-<li>
- begin_tx() - the signaled start of the transaction.
- </li>
-<li>
- read(loc) - reads the specified memory location, storing its location in
- the transaction's read set and returning its current value.
- </li>
-<li>
- write(loc, val) - writes the specified memory location to the supplied
- val, storing its location in the transaction's write set.
- </li>
-<li>
- end_tx() - the signaled end of the transaction. end_tx() returns true if
- the transaction commits, otherwise it returns false.
- </li>
-</ul></div>
-<p>
- The above interfaces allow the programmer to create a transaction (using
- begin_tx()), specify its memory operations (using read() and write()) and
- terminate (using end_tx()()). Moreover, none of the interfaces specify details
- of the TM subsystem's implementation. This leaves the TM system's implementation
- disjoint from the interfaces it supplies, a key characteristic for TM's simplicity.
- </p>
-<p>
- All TM implementations use some combination of the above interfaces. TMs
- implemented within compilers tend to implicitly annotate transactional read()
- and write() operations, whereas those implemented within software libraries
- tend to require the programmer explicitly state which operations are transactional
- reads and writes. An example of a transaction using the above interfaces
- alongside an actual STM library implementation is shown in Figure 3.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="comment">// TM interfaces // __Boost_STM__
-</span><span class="number">2</span> <span class="keyword">do</span> <span class="special">{</span> <span class="identifier">BOOST_STM_TRANSACTION</span>
-<span class="number">3</span> <span class="identifier">begin_tx</span><span class="special">();</span> <span class="special">{</span>
-<span class="number">4</span> <span class="identifier">write</span><span class="special">(</span><span class="identifier">x</span><span class="special">,</span> <span class="identifier">read</span><span class="special">(</span><span class="identifier">x</span><span class="special">)+</span><span class="number">1</span><span class="special">);</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
-<span class="number">5</span> <span class="identifier">write</span><span class="special">(</span><span class="identifier">y</span><span class="special">,</span> <span class="identifier">read</span><span class="special">(</span><span class="identifier">y</span><span class="special">)-</span><span class="number">1</span><span class="special">);</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
-<span class="number">6</span> <span class="special">}</span> <span class="keyword">while</span> <span class="special">(</span><span class="identifier">end_tx</span><span class="special">());</span> <span class="special">}</span> <span class="identifier">BOOST_STM_END_TRANSACTION</span>
-
-<span class="identifier">Figure</span> <span class="number">3.</span> <span class="identifier">Transaction</span> <span class="identifier">Using</span> <span class="special">(</span><span class="number">1</span><span class="special">)</span> <span class="identifier">Explicit</span> <span class="identifier">TM</span> <span class="identifier">Interfaces</span> <span class="keyword">and</span> <span class="special">(</span><span class="number">2</span><span class="special">)</span> TBoost.STM<span class="special">.</span>
-</pre>
-<p>
- Figure 3 implements the same transaction as shown in Figure 2, except all
- transactional memory accesses, including the transaction's retry behavior
- (e.g., its loop), are demonstrated from a simple TM interface perspective
- and an actual library implementation (TBoost.STM). While most TM systems
- handle some portion of these interface calls implicitly, as is shown in the
- TBoost.STM transaction, it is important to note that even when all operations
- are made visible to the end programmer, transactions are still devoid of
- many concurrency problems, such as data races and deadlocks (explained below),
- that plague other types of concurrency control.
- </p>
-<p>
- For example, as long as the programmer properly annotates the access to the
- shared variables x and y as shown in Figure 3, it is impossible for race
- conditions or deadlocks to occur. Furthermore, the programmer does not need
- any program-specific knowledge to use shared data; he or she simply uses
- the TM interfaces supplied by the system and the resulting behavior is guaranteed
- to be consistent. This is explained in greater detail in Section ???3.1.
- </p>
-<p>
- Other types of concurrency control, such as mutual exclusion, cannot achieve
- the same interface simplicity, because part of their implementation is associated
- with, or exposed through, their interface. To demonstrate this, consider
- the fine-grained locking example of Figure 1 as shown below.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="comment">// fine-grained locking
-</span><span class="number">2</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span>
-<span class="number">3</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
-<span class="number">4</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
-<span class="number">5</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
-<span class="number">6</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span>
-<span class="number">7</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
-</pre>
-<p>
- There is no universal interface that can be used to properly access the shared
- data protected by the mutual exclusion in the above fine-grained locking
- example. Instead, the programmer must be aware that mutexX and mutexY protect
- shared data x and y and, therefore, the locks must be obtained before accessing
- the shared data. In short, the programmer is responsible for knowing not
- only that mutual exclusion is used, but also how it is used (e.g., which
- locks protect which shared variables). In this case, mutexX must be obtained
- before mutexY. If another section of code implements the following, a deadlock
- scenario will eventually occur.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="comment">// fine-grained locking
-</span><span class="number">2</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
-<span class="number">3</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span> <span class="comment">// deadlock here
-</span><span class="number">4</span> <span class="special">--</span><span class="identifier">y</span><span class="special">;</span>
-<span class="number">5</span> <span class="special">++</span><span class="identifier">x</span><span class="special">;</span>
-<span class="number">6</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexY</span><span class="special">);</span>
-<span class="number">7</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">mutexX</span><span class="special">);</span>
-</pre>
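
The usual lock-based remedy for the ordering problem above is to acquire both mutexes atomically, for example with C++11's std::lock. Note that even then the programmer must still know that mutexX and mutexY guard x and y, which is precisely the exposed implementation detail being discussed. A minimal sketch, assuming C++11 and nothing from TBoost.STM:

    #include <mutex>

    std::mutex mutexX, mutexY;   // assumed to guard the shared variables x and y
    int x = 0, y = 0;

    void increment_x_decrement_y()
    {
        std::lock(mutexX, mutexY);                                // acquires both locks without deadlock
        std::lock_guard<std::mutex> lx(mutexX, std::adopt_lock);  // adopt ownership for RAII release
        std::lock_guard<std::mutex> ly(mutexY, std::adopt_lock);
        ++x;
        --y;
    }
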
-<a name="toward_boost_stm.overview.intro.understanding_concurrency_hazards"></a><h4>
-<a name="id4758701"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.understanding_concurrency_hazards">Understanding
- Concurrency Hazards</a>
- </h4>
-<p>
- Informally, a concurrency hazard is a condition existing in a set of operations
- that, if executed with a specific concurrent interleaving, results in one
- or many unwanted side-effects. Most errors in parallel systems, such as deadlocks
- and priority inversion, are the specific execution of concurrency hazards
- resulting in the unwanted side-effect(s) they contain. If the concurrency
- hazards are eliminated, the parallel system errors contained within the concurrency
- hazards are also eliminated. Unfortunately, detecting existing concurrency
- hazards is non-trivial and therefore eliminating them is also non-trivial.
- </p>
-<p>
- Mutual exclusion exhibits more concurrency hazards than TM because its implementation
- details (i.e., its locks) must be exposed and used by the end programmer.
- While the locks used to enforce mutual exclusion are not concurrency hazards
- in themselves, their use opens the door to a number of hazards.
- </p>
-<p>
- Because the mutual exclusion locking details are exposed to the programmer
- and because the programmer must maintain a universal and informal contract
- to use these locks, concurrency hazards can arise due to the number of possible
- misuses that can be introduced by the programmer. In particular, if the programmer
- accidentally deviates from the informal locking contract, he or she may inadvertently
- introduce a concurrency hazard that can cause the program to deadlock, invert
- priority or lead to inconsistent data.
- </p>
-<p>
- In contrast, TM has no universal or informal contract governing shared data
- that the end programmer needs to understand and follow, as is required with
- mutual exclusion. Because of this, TM can hide its implementation details,
- which reduces concurrency hazards. In particular, each transaction tracks
- the memory it uses in its read and write sets. When a transaction begins
- its commit phase, it verifies its state is consistent and commits its changes.
- If a transaction finds its state is inconsistent, it discards its changes
- and restarts. All of this can be achieved using the basic TM interfaces shown
- in Section 3 without exposing any implementation details. In order to use
- TM, the end programmer only needs to know how to correctly create a transaction.
- Once the transaction is executed, regardless of how it is executed, it results
- in a program state that is guaranteed to be consistent.
- </p>
-<p>
- Fundamentally, TM exhibits fewer concurrency hazards than mutual exclusion
- because its implementation details are divorced from its interface and can
- therefore be hidden within its subsystem. Any number of implementations can
- be used in a TM subsystem using only the basic TM interfaces shown in Section
- 3. The same is not true for mutual exclusion. Mutual exclusion, regardless
- of how it is implemented, exposes details of its implementation to the programmer.
- As demonstrated in Section 5, mutual exclusion does not provide software
- modularity specifically because extending an existing module requires an
- understanding and extension of that module's implementation. When such locking
- implementations are hidden inside of software libraries, extending these
- modules can range from difficult to impossible.
- </p>
-<a name="toward_boost_stm.overview.intro.testing__race_conditions_and_interleavings"></a><h4>
-<a name="id4758814"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.testing__race_conditions_and_interleavings">Testing:
- Race Conditions and Interleavings</a>
- </h4>
-<p>
- A race condition is a common concurrency hazard that exists in parallel or
- distributed software. As with all concurrency hazards, race conditions rely
- on a specific interleaving of concurrent execution to cause their unwanted
- side-effect. In this section we demonstrate that race conditions do not exist
- in TM and therefore, software testing is greatly simplified because all possible
- interleavings do not need to be tested to ensure correct system behavior.
- In order to demonstrate that race conditions are absent from TM, we must
- first show that they are present in other types of concurrency control.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="comment">// Thread T1              // Thread T2
-</span><span class="number">2</span>                           <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L2</span><span class="special">);</span>
-<span class="number">3</span>                           <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L1</span><span class="special">);</span>
-<span class="number">4</span>                           <span class="special">...</span>
-<span class="number">5</span>                           <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">L1</span><span class="special">);</span>
-<span class="number">6</span>                           <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">L2</span><span class="special">);</span>
-<span class="number">7</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L1</span><span class="special">);</span>
-<span class="number">8</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">L2</span><span class="special">);</span>
-<span class="number">9</span> <span class="special">...</span>
-
-<span class="identifier">Figure</span> <span class="number">4.</span> <span class="identifier">Mutual</span> <span class="identifier">Exclusion</span> <span class="identifier">Race</span> <span class="identifier">Condition</span><span class="special">.</span>
-</pre>
-<p>
- Consider the race condition present in the mutual exclusion example shown
- in Figure 4. The race condition present in the example results in a deadlock
- if thread T1 executes line 7 followed by thread T2 executing line 2. However,
- if the program executes the lines in order (e.g., line 1, then line 2, then
- line 3, etc.), the system will execute properly. The fundamental problem
- in Figure 4 is that it contains a concurrency hazard; in particular, it contains
- a race condition. To further complicate matters, the race condition can only
- be observed in two of many possible concurrent executions. Those two executions
- are: T1 executes line 7 followed by T2 executing line 2 or T2 executes line
- 2 followed by T1 executing line 7. All other possible concurrent interleavings
- of threads T1 and T2 avoid the deadlock race condition. More specifically,
- as long as T1 executes lines 7-8 atomically or T2 executes lines 2-3 atomically,
- all remaining concurrent interleavings are free of the deadlock race condition.
- </p>
-<p>
- Because it is unlikely that the deadlock race condition will occur, the programmer
- may never observe it, no matter how many times the program is tested. Only
- exhaustive testing, which tests all possible concurrent interleavings, is
- guaranteed to identify the presence of the deadlock. Regrettably, exhaustive
- testing is an unrealistic solution for most programs due to the time it would
- take to execute all possible concurrent interleavings of the program.
- </p>
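
To make the cost of exhaustive testing concrete (a rough illustration, not a figure from the text): two threads executing n and m instructions can interleave in C(n+m, n) ways, so even very small threads produce an intractable number of schedules.

    C(n + m, n) = (n + m)! / (n! * m!)
    C(40, 20)   = 137,846,528,820   (about 1.4 * 10^11 schedules for two threads of only 20 steps each)
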
-<p>
- An alternative to exhaustive testing is for programmers to use types of concurrency
- control that are devoid of certain concurrency hazards. For example, if mutual
- exclusion did not exhibit the race condition concurrency hazard, it would be
- impossible for a program using it to deadlock. Therefore, exhaustive testing
- would not be necessary. While this scenario is hypothetical, it illustrates
- our larger argument: in order to avoid common parallel problems in a practical
- fashion, programmers need only use types of concurrency control that
- are devoid of certain concurrency hazards. By doing this, the program using
- the specific type of concurrency control will be guaranteed to be free of
- certain common parallel problems.
- </p>
-<p>
- TMs are required to be devoid of race conditions within their implementations
- because they must enforce the ACI (atomic, consistent and isolated) principles.
- Transactions must execute as atomic and isolated and, therefore, TMs are
- not capable of supporting concurrent interleavings between multiple transactions
- as that would violate the atomic and isolated principles of ACI. Due to this,
- programs only using TM are guaranteed to be free of deadlocks (i.e., deadlock-freedom).
- Moreover, because TM implementations can guarantee freedom of race condition
- concurrency hazards, programmers only need to verify their transactional
- code is correct in a sequential (non-parallel) manner. Once the sequential
- execution of the transactional code has been verified, no further testing is
- needed, because the TM system is required to behave in a consistent manner for
- all serial orders.
- </p>
-<a name="toward_boost_stm.overview.intro.development__mutual_exclusion_and_tm"></a><h4>
-<a name="id4759176"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.development__mutual_exclusion_and_tm">Development:
- Mutual Exclusion and TM</a>
- </h4>
-<p>
- The development of fine-grained locking is notoriously difficult. Designing
- such software is equally hard. The difficulty in developing and designing
- fine-grained locking systems is rooted in conflicting heuristics. A primary
- goal of software design is to identify the simplest software solution
- that exists for a particular problem. A primary goal of fine-grained locking
- is the most efficient concurrent implementation of a software system. The
- goals of software design and fine-grained locking are conflicting because
- the most efficient fine-grained locking solution usually requires some of
- the most complex software design implementations to achieve such performance.
- </p>
-<p>
- TM achieves scalability by using optimistic concurrency that is implemented
- within its subsystem (see Section 4). Since the TM subsystem, which is not exposed
- to the programmer, is the efficiency throttle for TM, the software architecture
- and design never need to be complicated (nor can they be) in order to achieve
- increased parallelism when using transactions. As will be demonstrated in
- the following section, transactions run efficiently using the interfaces
- shown in this section and are never complicated in order to achieve improved
- performance, as is commonly found in fine-grained mutual exclusion implementations.
- </p>
-<a name="toward_boost_stm.overview.intro.scalability"></a><h4>
-<a name="id4759229"></a>
- Scalability
- </h4>
-<p>
- In this section we analyze the scalability of TM compared to mutual exclusion.
- We measure scalability by two metrics: consistency and performance. A concurrency
- control type has consistent scalability if it guarantees correct behavior
- for an arbitrarily large number of concurrently executing processes. Performance
- scalability is measured by the maximum number of consistent processes supported
- by a concurrency control type while executing concurrently.
- </p>
-<a name="toward_boost_stm.overview.intro.pessimistic_and_optimistic_critical_sections"></a><h4>
-<a name="id4759262"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.pessimistic_and_optimistic_critical_sections">Pessimistic
- and Optimistic Critical Sections</a>
- </h4>
-<p>
- Critical sections can be pessimistic or optimistic. Pessimistic critical
- sections limit their critical section execution to a single thread. Locks
- are an example of a synchronization mechanism that uses pessimistic critical
- sections. Optimistic critical sections allow unlimited concurrent threaded
- execution. Transactions are an example of a synchronization mechanism that
- uses optimistic critical sections.
- </p>
-<a name="toward_boost_stm.overview.intro.truly_optimistic_critical_sections"></a><h4>
-<a name="id4759286"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.truly_optimistic_critical_sections">Truly
- Optimistic Critical Sections</a>
- </h4>
-<p>
- Truly optimistic critical sections are those critical sections which allow
- multiple conflicting threads to simultaneously execute the same critical
- section. A deferred update (or lazy acquire) TM system supports truly optimistic
- critical section. A direct update (or eager acquire) TM system does not support
- critical sections. A direct update (or eager acquire) TM system does not support
- TM systems are presented in the subsequent sections.
- </p>
-<p>
- Truly optimistic critical sections are important because they allow simultaneous
- conflicting critical section execution, as opposed to disallowing such behavior.
- It is important to allow conflicting critical sections to execute because prematurely
- stalling concurrently executing threads pessimistically degrades performance.
- To demonstrate this, consider two transactions, called T1 and T2, executing
- the same critical section. Transaction T1 starts first and tentatively writes
- to memory location M. Transaction T2 then starts and tries to write to memory
- location M. In a truly optimistic TM system, T2 would be allowed to tentatively
- write to location M while T1 is also writing to M. This behavior then allows
- T2 to commit before T1 in the event T2 completes before T1.
- </p>
-<p>
- In comparison, if the TM system is not truly optimistic, once T1 writes to
- M, T2 must stall until T1 completes. This pessimistically degrades the performance
- of the system by prematurely deciding that T1's transactional execution should
- have higher priority than T2's.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="comment">// global variables
-</span><span class="number">2</span> <span class="keyword">int</span> <span class="identifier">g1</span> <span class="special">=</span> <span class="number">0</span><span class="special">;</span> <span class="keyword">int</span> <span class="identifier">g2</span> <span class="special">=</span> <span class="number">0</span><span class="special">;</span>
-<span class="number">3</span>
-<span class="number">4</span> <span class="keyword">void</span> <span class="identifier">set1</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">val</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="identifier">g1</span> <span class="special">=</span> <span class="identifier">val</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-<span class="number">5</span> <span class="keyword">void</span> <span class="identifier">set2</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">val</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="identifier">g2</span> <span class="special">=</span> <span class="identifier">val</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-<span class="number">6</span> <span class="keyword">int</span> <span class="identifier">get1</span><span class="special">()</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="keyword">return</span> <span class="identifier">g1</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-<span class="number">7</span> <span class="keyword">int</span> <span class="identifier">get2</span><span class="special">()</span> <span class="special">{</span> <span class="identifier">atomic</span> <span class="special">{</span> <span class="keyword">return</span> <span class="identifier">g2</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-
-<span class="identifier">Figure</span> <span class="number">5.</span> <span class="identifier">Truly</span> <span class="identifier">Optimistic</span> <span class="identifier">Concurrency</span> <span class="identifier">Diagram</span><span class="special">.</span>
-</pre>
-<p>
- Furthermore, and perhaps more importantly, truly optimistic critical sections
- allow readers and writers of the same memory location to execute concurrently.
- This behavior is important because in many cases, both the readers and writers
- of the same memory can commit with consistent views of memory.
- </p>
-<p>
- An example of this is shown in Figure 5. As demonstrated in Figure 5 thread
- 1 and thread 2, which we'll refer to as T1 and T2 respectively, operate on
- the same memory locations (g1 and g2). Because the TM system supports optimistic
- concurrency, T2 is allowed to execute concurrently alongside T1 even though
- their memory accesses conflict. However, in this scenario, because T2 completes
- its workload before T1, both transactions are allowed to commit. T2 captures
- the state of g1=0,g2=0 while T1 sets the state of g1=1,g2=1. As the example
- shows, both g1=0,g2=0 and g1=1,g2=1 are legal states.
- </p>
-<a name="toward_boost_stm.overview.intro.direct_and_deferred_update"></a><h4>
-<a name="id4759781"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.direct_and_deferred_update">Direct
- and Deferred Update</a>
- </h4>
-<p>
- Updating is the process of committing transactional writes to global memory
- and is performed in either a direct or deferred manner. Figure 6 presents
- a step-by-step analysis of direct and deferred updating.
- </p>
-<p>
- Deferred update creates a local copy of global memory, performs modifications
- to the local copy, and then writes those changes to global memory if the
- transaction commits. If the transaction aborts, no additional work is done.
- Direct update makes an original backup copy of global memory and then writes
- directly to global memory. If the transaction commits, the transaction does
- nothing. If the transaction aborts, the transaction restores global memory
- with its backup copy. Some TM systems favor direct update due to its natural
- optimization of commits (BSTM, McRTSTM and LogTM). However, other TM systems
- favor deferred update due to its support for truly optimistic critical sections
- (TBoost.STM and RingSTM).
- </p>
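
The two updating policies can be pictured with a minimal, single-location sketch. This is illustrative C++ only: it ignores read-set validation, conflict detection and all synchronization, and is not the TBoost.STM implementation.

    #include <map>

    // Deferred update: writes go to a private buffer and are installed at commit.
    struct deferred_tx {
        std::map<int*, int> write_buffer;              // address -> tentative value
        void write(int* addr, int value) { write_buffer[addr] = value; }
        int  read(int* addr) const {
            auto it = write_buffer.find(addr);         // read-your-own-writes
            return it != write_buffer.end() ? it->second : *addr;
        }
        void commit() { for (auto& w : write_buffer) *w.first = w.second; }
        void abort()  { write_buffer.clear(); }        // nothing to undo
    };

    // Direct update: write in place, keep a backup so an abort can restore it.
    struct direct_tx {
        std::map<int*, int> undo_log;                  // address -> original value
        void write(int* addr, int value) {
            undo_log.insert({addr, *addr});            // keep only the first backup
            *addr = value;
        }
        int  read(int* addr) const { return *addr; }
        void commit() { undo_log.clear(); }            // nothing left to do
        void abort()  { for (auto& u : undo_log) *u.first = u.second; }
    };
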
-<p>
- Direct update enables greater TM throughput when aborts are relatively low
- because it optimizes the common commit case. Deferred update enables greater
- TM throughput when
- </p>
-<div class="orderedlist"><ol type="1">
-<li>
- aborts are relatively high or
- </li>
-<li>
- short running transactions (e.g., those that complete quickly) are executed
- alongside long running transactions (e.g., those that do not complete quickly)
- because long running transactions do not stall shorter running ones as
- they would in direct update systems, and therefore the fastest transactions
- can commit first.
- </li>
-</ol></div>
-<p>
- It is important to note that deferred update supports truly optimistic critical
- sections without special effort, while direct update does not. Truly optimistic
- critical sections enable the speculative execution of transactions that arrive
- after a memory location has already been tentatively written to by another
- transaction. This allows the first transaction, of potentially many competing
- transactions, to complete its commit, whether it be the later arriving transaction
- or the earlier arriving writer. This scenario is not possible with direct
- update without special effort.
- </p>
-<a name="toward_boost_stm.overview.intro.scalability__mutual_exclusion_and_transactional_memory"></a><h4>
-<a name="id4759870"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.scalability__mutual_exclusion_and_transactional_memory">Scalability:
- Mutual Exclusion and Transactional Memory</a>
- </h4>
-<p>
- The scalability of mutual exclusion is limited to pessimistic concurrency.
- By definition, mutual exclusion's critical sections must be pessimistic,
- otherwise they would not be isolated to a single thread (i.e., they would
- not be mutually exclusive). TM, however, is generally implemented using optimistic
- concurrency, but it can enforce pessimistic concurrency amongst transactions
- if that behavior is required for certain conditions. In certain cases, TMs
- become more strict and execute pessimistically to enable inevitable or irrevocable
- transactions. Such transactions have significant importance for handling
- operations that, once started, must complete (e.g., I/O operations).
- </p>
-<p>
- Since TM can execute optimistically and pessimistically, it is clear that
- whatever benefits pessimistic concurrency has can be acquired by TM. However,
- since mutual exclusion can only execute pessimistically, the advantages found
- in optimistic concurrency can never be obtained by mutual exclusion.
- </p>
-<p>
- When one first analyzes pessimistic and optimistic concurrency, it may seem
- that the only benefit optimistic concurrency has over pessimistic concurrency
- is that multiple critical sections, which conflict on the memory they access,
- can execute concurrently. The simultaneous execution of such conflicting
- critical sections allows the execution speed of such critical sections to
- guide the system in deciding which execution should be allowed to commit
- and which should be aborted. In particular, the first process to complete
- the critical section can be allowed to abort the other process of the system.
- The same scenario cannot be achieved by pessimistic critical sections and
- is demonstrated in Section 4.1.1.
- </p>
-<p>
- A counterargument to this scenario is that such optimistic concurrency only
- allows one critical section to commit, while one must be aborted. Because
- mutual exclusion only allows one conflicting critical section execution at
- a time, and because mutual exclusion does not support failure atomicity (i.e.,
- rollbacking of the critical section), mutual exclusion's pessimistic behavior
- rolling back the critical section), mutual exclusion's pessimistic behavior
- suffers no wasted work because conflicting critical sections are limited
- to a single thread of execution, reducing the energy it uses. Furthermore,
- because mutual exclusion does not require original data to be copied, as needed
- for TM's direct or deferred update, it executes faster.
- </p>
-<p>
- While there is merit to this counterargument, an important scenario is not
- captured by it: truly optimistic critical sections can support multiple reader
- / single-writer executions in which, if the readers commit before
- the writer, all critical sections succeed. This scenario is impossible
- to achieve using pessimistic critical sections. Although mutual exclusion
- can use read/write locking, as soon as a writer thread begins execution on
- a conflicting critical section, all readers must be stalled. TM's truly optimistic
- concurrency does not suffer from this overly pessimistic limitation of throughput
- and is therefore capable of producing an immeasurable amount of concurrent
- and is therefore capable of delivering substantially greater concurrent
- throughput under such conditions.
-<p>
- From a theoretical perspective, given L memory locations and P processes,
- mutual exclusion can support the consistent concurrent execution of P*L number
- of readers or L writers. TM can support the consistent concurrent execution
- of P*L number of readers and L writers. Using the above variables, the mathematical
- expression of the performance scalability of mutual exclusion (S(ME)) is:
- </p>
-<pre class="programlisting"><span class="identifier">S</span><span class="special">(</span><span class="identifier">ME</span><span class="special">)</span> <span class="special">=</span> <span class="special">(</span><span class="identifier">P</span><span class="special">*</span><span class="identifier">L</span><span class="special">)</span> <span class="special">+</span> <span class="identifier">L</span>
-</pre>
-<p>
- Using the same variables, the mathematical expression of the performance
- scalability of transactional memory is:
- </p>
-<pre class="programlisting"><span class="identifier">S</span><span class="special">(</span><span class="identifier">TM</span><span class="special">)</span> <span class="special">=</span> <span class="special">(</span><span class="identifier">P</span> <span class="special">*</span> <span class="identifier">L</span><span class="special">)</span> <span class="special">+</span> <span class="identifier">L</span>
-</pre>
-<p>
- As should be clear from the above equations, mutual exclusion cannot achieve
- the same performance scalability as TM. This is because TM supports truly
- optimistic concurrency and mutual exclusion is confined to pessimistic concurrency.
- While other examples exist that demonstrate optimistic concurrency can increase
- throughput via contention management, the above equations capture the indisputable
- mathematical limitations in mutual exclusion's performance scalability.
- </p>
-<a name="toward_boost_stm.overview.intro.modularity"></a><h4>
-<a name="id4813054"></a>
- Modularity
- </h4>
-<p>
- Software modularity is an important aspect of software that is necessary
- for its reuse. Formally, software is modular if it can be composed in a new
- system without altering its internal implementation. Informally, software
- is modular if it can be used, in its entirety, through its interface.
- </p>
-<p>
- By making software modular, it can be freely used in an unlimited number
- of software systems. Without software modularity, software can only be used
- in the original system where it was written. Clearly, without software modularity,
- software cannot be reused. Because most software developments are based on
- extensive library use, software reuse is an integral part of software development.
- As such, limiting software reuse would severely hamper development
- capabilities and lengthen overall development time. For these reasons, software modularity
- is vital for any software paradigm to be practical. Software paradigms that
- do not support software modularity are, in short, impractical.
- </p>
-<a name="toward_boost_stm.overview.intro.mutual_exclusion_and_software_modularity"></a><h4>
-<a name="id4813104"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.mutual_exclusion_and_software_modularity">Mutual
- Exclusion and Software Modularity</a>
- </h4>
-<p>
- In this section, we show that mutual exclusion, regardless of its implementation,
- fails to deliver software modularity. We demonstrate this through a running
- example started in Figure 7 which implements inc(), mult() and get(); these
- functions use lock G to implement increment, multiply and
- get operations on the shared data, respectively.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="keyword">void</span> <span class="identifier">inc</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span>
-<span class="number">2</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span> <span class="identifier">g</span> <span class="special">+=</span> <span class="identifier">v</span><span class="special">;</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
-<span class="number">3</span> <span class="special">}</span>
-<span class="number">4</span>
-<span class="number">5</span> <span class="keyword">void</span> <span class="identifier">mult</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span>
-<span class="number">6</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span> <span class="identifier">g</span> <span class="special">*=</span> <span class="identifier">v</span><span class="special">;</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
-<span class="number">7</span> <span class="special">}</span>
-<span class="number">8</span>
-<span class="number">9</span> <span class="keyword">int</span> <span class="identifier">get</span><span class="special">()</span> <span class="special">{</span>
-<span class="number">10</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span> <span class="keyword">int</span> <span class="identifier">v</span> <span class="special">=</span> <span class="identifier">g</span><span class="special">;</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
-<span class="number">11</span> <span class="keyword">return</span> <span class="identifier">v</span><span class="special">;</span>
-<span class="number">12</span> <span class="special">}</span>
-
-<span class="identifier">Figure</span> <span class="number">7.</span> <span class="identifier">Mutual</span> <span class="identifier">Exclusion</span> <span class="keyword">for</span> <span class="identifier">Increment</span><span class="special">,</span> <span class="identifier">Multiply</span> <span class="keyword">and</span> <span class="identifier">Get</span> <span class="identifier">of</span> <span class="identifier">Shared</span> <span class="identifier">Variable</span><span class="special">.</span>
-</pre>
-<p>
- Now suppose a programmer wants to increment and multiply by some values within
- the same atomic operation. The initial implementation may look like the following.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span>
-<span class="number">2</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span>
-</pre>
-<p>
- An unwanted side-effect of such an implementation is the exposure of the
- intermediate state of g between inc() and mult(). A second thread performing
- a get() may read an inconsistent value of g; the value of g between inc()
- and mult(). This is demonstrated in the timing diagram of Figure 8.
- </p>
-<pre class="programlisting"><span class="identifier">Figure</span> <span class="number">8.</span> <span class="identifier">Example</span> <span class="identifier">of</span> <span class="identifier">Exposed</span> <span class="identifier">State</span> <span class="identifier">of</span> <span class="identifier">Mutual</span> <span class="identifier">Exclusion</span><span class="special">.</span>
-</pre>
-<p>
- If the programmer needs the inc() and mult() operations to be executed together,
- without an intermediate state being revealed, he or she could make lock G
- reentrant. Reentrant locks are locks that can be obtained multiple times
- by a single thread without deadlocking. If G is made reentrant, the following
- code could be used to make inc(a) and mult(-b) atomic. A basic implementation
- of a reentrant lock is to associate a counter with its lock and increment
- the counter each time the lock() interface is called and to decrement the
- counter each time the unlock() interface is called. The reentrant lock is
- only truly locked when a call to lock() is made when its associated counter
- is 0. Likewise, the reentrant lock is only truly unlocked when a call to
- unlock() is made when its associated counter is 1.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="identifier">lock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
-<span class="number">2</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span>
-<span class="number">3</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span>
-<span class="number">4</span> <span class="identifier">unlock</span><span class="special">(</span><span class="identifier">G</span><span class="special">);</span>
-</pre>
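
The counter-based reentrant lock described above might be sketched as follows. This is illustrative only; in practice an existing recursive mutex (for example std::recursive_mutex) would be used.

    #include <atomic>
    #include <mutex>
    #include <thread>

    class reentrant_lock {
        std::mutex mtx_;                                      // underlying, non-reentrant lock
        std::atomic<std::thread::id> owner_{std::thread::id()};
        unsigned count_ = 0;                                  // recursion depth of the owner

    public:
        void lock() {
            std::thread::id self = std::this_thread::get_id();
            if (owner_.load() == self) { ++count_; return; }  // re-entry: just bump the counter
            mtx_.lock();                                      // truly locked only when the counter was 0
            owner_.store(self);
            count_ = 1;
        }
        void unlock() {                                       // must be called by the owning thread
            if (--count_ == 0) {
                owner_.store(std::thread::id());
                mtx_.unlock();                                // truly unlocked when the counter returns to 0
            }
        }
    };
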
-<p>
- If the above code uses reentrant locks, it will achieve the programmer's
- intended atomicity for inc() and mult(), isolating the state between inc()
- and mult(), which disallows the unwanted side-effect shown in Figure 8. While
- the atomicity of the operations is achieved, it is only achieved by exposing
- the implementation details of inc() and mult(). In particular, if the programmer
- had not known that lock G was used within inc() and mult(), making an atomic
- operation of inc() and mult() would be impossible.
- </p>
-<p>
- An external atomic grouping of operations is impossible using embedded mutual
- exclusion without exposing the implementation details because the heart of
- mutual exclusion is based on named variables which the programmer specifies
- to guard their critical sections. Because these variables are named, they
- cannot be abstracted away and any programmer wishing to reuse the mutually
- exclusive code must be able to access and extend the implementation details.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="keyword">void</span> <span class="identifier">inc</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="identifier">g</span> <span class="special">+=</span> <span class="identifier">v</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-<span class="number">2</span>
-<span class="number">3</span> <span class="keyword">void</span> <span class="identifier">mult</span><span class="special">(</span><span class="keyword">int</span> <span class="identifier">v</span><span class="special">)</span> <span class="special">{</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="identifier">g</span> <span class="special">*=</span> <span class="identifier">v</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-<span class="number">4</span>
-<span class="number">5</span> <span class="keyword">int</span> <span class="identifier">get</span><span class="special">()</span> <span class="special">{</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="keyword">return</span> <span class="identifier">g</span><span class="special">;</span> <span class="special">}</span> <span class="special">}</span>
-
-<span class="identifier">Figure</span> <span class="number">9.</span> <span class="identifier">TM</span> <span class="identifier">of</span> <span class="identifier">Increment</span><span class="special">,</span> <span class="identifier">Multiply</span> <span class="keyword">and</span> <span class="identifier">Get</span> <span class="identifier">of</span> <span class="identifier">Shared</span> <span class="identifier">Variable</span><span class="special">.</span>
-</pre>
-<a name="toward_boost_stm.overview.intro.summary_of_mutual_exclusion_modularity"></a><h4>
-<a name="id4814118"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.summary_of_mutual_exclusion_modularity">Summary
- of Mutual Exclusion Modularity</a>
- </h4>
-<p>
- As we presented at the beginning of this section, software modularity can
- be informally understood as a component's ability to be used entirely from
- its interface. Therefore, components that cannot be used entirely from their
- interface, components that must expose their implementation details to be
- extended, are not modular. As such, the paradigm of mutual exclusion does
- not support software modularity.
- </p>
-<a name="toward_boost_stm.overview.intro.transactional_memory_and_software_modularity"></a><h4>
-<a name="id4814158"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.transactional_memory_and_software_modularity">Transactional
- Memory and Software Modularity</a>
- </h4>
-<p>
- Transactional memory works in a fundamentally different manner than mutual
- exclusion, with regard to its interface and implementation. To begin, as
- demonstrated in Section 3, TMs do not generally expose any of their implementation
- details to client code. In fact, in many TMs, client code is more versatile
- if it knows and assumes nothing about the active implementation of the TM.
- By abstracting away details of TM implementation, a TM subsystem can adapt
- its behavior to the most efficient configuration for the program's current
- workload, much like the algorithms used for efficient operation of processes
- controlled by operating systems. TM uses such abstractions to optimize the
- performance of concurrent programs using various consistency checking methods,
- conflict detection times, updating policies, and contention management schemes.
- </p>
-<a name="toward_boost_stm.overview.intro.achieving_tm_software_modularity"></a><h4>
-<a name="id4814199"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.achieving_tm_software_modularity">Achieving
- TM Software Modularity</a>
- </h4>
-<p>
- TM achieves software modularity by allowing transactions to nest. With transactional
- nesting, a transaction is wrapped inside another transaction whenever the outer
- transaction calls the method in which the inner one resides, resulting in a single
- transaction composed of both the parent and child transactions. Furthermore, this is achieved without
- altering or understanding the child transaction's implementation. To best
- demonstrate transactional nesting, we reuse the prior mutual exclusion example
- shown in Figure 7 and implement it using transactions as shown in Figure
- 9.
- </p>
-<p>
- As before, the programmer's goal is to implement a combination of inc() and
- mult() executed in an atomic fashion. The basic, and incorrect implementation
- is demonstrated below:
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span>
-<span class="number">2</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span>
-</pre>
-<p>
- Even with transactions, this approach fails because the transactions within
- inc() and mult() begin and end inside their respective functions. However,
- to make the above operations atomic, the programmer need only make the following
- modification shown in Figure 10.
- </p>
-<pre class="programlisting"><span class="number">1</span> <span class="identifier">transaction</span> <span class="special">{</span> <span class="comment">// transaction {
-</span><span class="number">2</span> <span class="identifier">inc</span><span class="special">(</span><span class="identifier">a</span><span class="special">);</span> <span class="comment">// transaction { g += a; }
-</span><span class="number">3</span> <span class="identifier">mult</span><span class="special">(-</span><span class="identifier">b</span><span class="special">);</span> <span class="comment">// transaction { g *= -b; }
-</span><span class="number">4</span> <span class="special">}</span> <span class="comment">// }
-</span>
-<span class="identifier">Figure</span> <span class="number">10.</span> <span class="identifier">Modularity</span><span class="special">:</span> <span class="identifier">Transaction</span> <span class="identifier">of</span> <span class="identifier">Increment</span> <span class="keyword">and</span> <span class="identifier">Multiply</span><span class="special">.</span>
-</pre>
-<p>
- In effect, the TM system subsumes the transactions that are nested inside
- of the inc() and mult() operations. The left side of Figure 10 shows the
- actual code of the transaction, while the right side shows the child transactions
- that are subsumed by the parent transaction.
- </p>
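
This subsumption is commonly implemented by flattening nested transactions: a per-thread nesting counter ensures that only the outermost scope really begins and commits. A minimal sketch with illustrative stubs (not the TBoost.STM classes; rollback and retry are omitted):

    void begin_transaction()  {}                // stand-ins for the real STM entry points
    void commit_transaction() {}

    int g = 0;
    thread_local int nesting_depth = 0;

    struct tx_scope {
        tx_scope()  { if (nesting_depth++ == 0) begin_transaction(); }
        ~tx_scope() { if (--nesting_depth == 0) commit_transaction(); }
    };

    void inc(int v)  { tx_scope tx; g += v; }   // child scope: subsumed by any parent
    void mult(int v) { tx_scope tx; g *= v; }

    void inc_then_mult(int a, int b) {
        tx_scope tx;                            // parent scope: the only one that begins and commits
        inc(a);
        mult(-b);
    }
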
-<p>
- Because transactions are isolated and atomic, the resulting state of g, from
- operations inc() and mult(), is invisible to outside observers until the
- transaction is committed. As such, outside threads cannot view any intermediate
- state constructed by partial transaction execution. The result of such isolated
- behavior is the guaranteed consistent concurrent execution of interleaved
- accesses to shared memory from in-flight transactions. This is demonstrated
- in Figure 11; let g=0 and assume deferred update is the active updating policy,
- as explained in Section 4.2.
- </p>
-<pre class="programlisting"><span class="identifier">Figure</span> <span class="number">11.</span> <span class="identifier">Example</span> <span class="identifier">of</span> <span class="identifier">Isolated</span> <span class="keyword">and</span> <span class="identifier">Consistent</span> <span class="identifier">State</span> <span class="identifier">of</span> <span class="identifier">TM</span><span class="special">.</span>
-</pre>
-<p>
- As shown in Figure 11, multiple concurrently executing threads can read and
- write to the same shared memory in a consistent and isolated fashion when
- using TM. In the example, thread T2 performs x = get() after T1 has already
- executed inc(a). However, since T1 has not yet committed its transaction,
- T2's view of shared data g is consistent (g=0). When T2 begins the commit
- phase of its transaction, the TM subsystem verifies that shared data g has
- not been updated since it initially read it. Since no other transaction has
- updated shared data g, T2's transaction is permitted to commit. Thread T1
- then continues with its mult() operation and then enters its commit phase.
- The TM subsystem also verifies the consistency of T1's transaction before
- it is allowed to commit. Again, since no other transaction has updated shared
- data g between its reads and writes to it, T1's transaction is permitted
- to commit.
- </p>
-<p>
- The above analysis demonstrates that software modularity can be achieved
- in TM through transactional nesting (Figure 10). In this case, the specific
- software modularity achieved is the extension of an existing critical section.
- Critical section extension was also possible with mutual exclusion, as demonstrated
- in Section 5.1, but only through exposing the details behind the mutual exclusion
- implementation. Due to this, mutual exclusion fails to deliver a practical
- level of software modularity.
- </p>
-<a name="toward_boost_stm.overview.intro.summary_of_transactional_memory_modularity"></a><h4>
-<a name="id4814560"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.summary_of_transactional_memory_modularity">Summary
- of Transactional Memory Modularity</a>
- </h4>
-<p>
- TM supports software modularity by allowing transactions to nest, to any
- depth, while logically grouping the shared data accesses within the transactions
- into an atomic, consistent and isolated (ACI) operation. Transactional nesting
- is natural to the programmer because nested transactions behave in the same
- manner as unnested transactions. TM's ACI support ensures transactions will
- behave in a correct manner regardless of whether the transaction is used by itself
- or subsumed into a larger transaction.
- </p>
-<a name="toward_boost_stm.overview.intro.c___library_language_like_solution"></a><h4>
-<a name="id4814594"></a>
- <a href="intro.html#toward_boost_stm.overview.intro.c___library_language_like_solution">C++
- library language-like solution</a>
- </h4>
-<p>
- Research in parallel programming has recently attracted a flurry of attention.
- Among the active research is a push for high-level languages to offer native
- support for parallel programming primitives. The next version of C++ will
- incorporate library support for threads, while numerous researchers are exploring
- ways to extend C++ to support transactional memory (TM).
- </p>
-<p>
- A strength of C++ is its support for automatic objects. Rather than requiring
- that parallel primitives be added directly to the language, automatic objects
- in C++ can be used to implement much of their necessary infrastructure. The
- automatic object approach is natural from a language perspective, provides
- full algorithmic control to the end programmer, and demonstrates C++'s linguistic
- elegance. The disadvantage of this approach is its added programmatic overhead.
- Using only automatic objects, certain programming errors, such as accidental
- scope removal and incorrectly programmed transactional retry behavior, can
- arise.
- </p>
-<p>
- In light of this, there are unique trade-offs between language-based and
- library-based parallel primitives. Language-based solutions minimize syntactic
- clutter, which reduces programmer-related errors, but are seemingly irreversible
- and, if incorrect, can have crippling effects upon the language. Library-based
- solutions increase programmer control and flexibility, but place substantial
- pressure on the programmer to avoid minute programming errors. A good compromise
- is a solution that behaves like a language extension, but is implemented
- within a library. By implementing parallel primitives within a library that
- uses language-like interfaces, programmer pressure is reduced, implementation
- updates are seamless, and full programmer control is achieved through library
- extensibility.
- </p>
-<p>
- TBoost.STM presents such a language-like solution for C++ using a generic library
- coupled with deliberate use of the preprocessor. The combination of these
- components facilitates a simple, yet powerful, parallel programming interface
- in C++.
- </p>
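
The flavour of such a language-like library interface can be conveyed with a toy macro. The names below are purely illustrative and are not the actual TBoost.STM macros (such as BOOST_STM_END_TRANSACTION); a real STM would also roll back and re-execute the body on a failed commit.

    #include <cstdio>

    struct toy_tx {
        bool done = false;
        bool try_commit() { return true; }      // a real STM would validate and publish here
    };

    // An atomic-block look-alike built only from the preprocessor and a library type:
    // the body re-executes until the commit attempt succeeds.
    #define TOY_ATOMIC(TX) \
        for (toy_tx TX; !TX.done; TX.done = TX.try_commit())

    int g = 0;

    int main() {
        TOY_ATOMIC(tx) {
            ++g;                                // would be a transactional write
        }
        std::printf("g = %d\n", g);             // prints "g = 1"
    }
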
-</div>
+</h3></div></div></div></div>
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer">Copyright © 2009 Justin E. Gottchlich<br>Copyright © 2009 Vicente J. Botet Escriba<p>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -69,6 +69,21 @@
<dt><span class="section"><a href="reference/built_in_transactional_objects.html#toward_boost_stm.reference.built_in_transactional_objects.tx_pointer_hpp">
Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">tx</span><span class="special">/</span><span class="identifier">pointer</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
</dl></dd>
+<dt><span class="section"><a href="reference/transaction_specific_memory_managers.html">Transaction
+ Specific Memory Managers</a></span></dt>
+<dt><span class="section"><a href="reference/user_memory_managers.html">User
+ Memory Managers</a></span></dt>
+<dt><span class="section">Synchronization</span></dt>
+<dd><dl>
+<dt><span class="section"><a href="reference/synchronization.html#toward_boost_stm.reference.synchronization.synch_hpp">
+ Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">synch</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
+<dt><span class="section"><a href="reference/synchronization.html#toward_boost_stm.reference.synchronization.sync_mutex_hpp">
+ Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">sync</span><span class="special">/</span><span class="identifier">mutex</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
+<dt><span class="section"><a href="reference/synchronization.html#toward_boost_stm.reference.synchronization.sync_unique_lock_hpp">
+ Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">sync</span><span class="special">/</span><span class="identifier">unique_lock</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
+<dt><span class="section"><a href="reference/synchronization.html#toward_boost_stm.reference.synchronization.sync_synchronized_hpp">
+ Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">sync</span><span class="special">/</span><span class="identifier">synchronized</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
+</dl></dd>
<dt><span class="section"><a href="reference/contention_managers.html">Contention
Managers</a></span></dt>
<dd><dl>
@@ -81,21 +96,8 @@
<dt><span class="section"><a href="reference/contention_managers.html#toward_boost_stm.reference.contention_managers.dynamic_contention_managers">Dynamic
Contention Managers</a></span></dt>
</dl></dd>
+<dt><span class="section">Utilities</span></dt>
</dl></div>
-<p>
- This section presents the interface of TBoost.STM.
- </p>
-<p>
- There are two kinds of transactional objects:
- </p>
-<div class="itemizedlist"><ul type="disc">
-<li>
- Mixins
- </li>
-<li>
- Wrappers
- </li>
-</ul></div>
</div>
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/built_in_transactional_objects.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/built_in_transactional_objects.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/built_in_transactional_objects.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -9,8 +9,8 @@
<link rel="up" href="../reference.html" title="Reference">
<link rel="prev" href="transactional_objects.html" title="Transactional
Objects">
-<link rel="next" href="contention_managers.html" title="Contention
- Managers">
+<link rel="next" href="transaction_specific_memory_managers.html" title="Transaction
+ Specific Memory Managers">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
@@ -23,7 +23,7 @@
</tr></table>
<hr>
<div class="spirit-nav">
-<a accesskey="p" href="transactional_objects.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="contention_managers.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
+<a accesskey="p" href="transactional_objects.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="transaction_specific_memory_managers.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
@@ -267,7 +267,7 @@
</tr></table>
<hr>
<div class="spirit-nav">
-<a accesskey="p" href="transactional_objects.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="contention_managers.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
+<a accesskey="p" href="transactional_objects.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="transaction_specific_memory_managers.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
</body>
</html>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/contention_managers.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/contention_managers.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/contention_managers.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -7,9 +7,8 @@
<meta name="generator" content="DocBook XSL Stylesheets V1.69.1">
<link rel="start" href="../../index.html" title="Chapter 1. Toward.Boost.STM">
<link rel="up" href="../reference.html" title="Reference">
-<link rel="prev" href="built_in_transactional_objects.html" title="Built-in
- Transactional Objects">
-<link rel="next" href="../examples.html" title="Examples">
+<link rel="prev" href="synchronization.html" title="Synchronization">
+<link rel="next" href="utilities.html" title="Utilities">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
@@ -22,7 +21,7 @@
</tr></table>
<hr>
<div class="spirit-nav">
-<a accesskey="p" href="built_in_transactional_objects.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="../examples.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
+<a accesskey="p" href="synchronization.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="utilities.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h3 class="title">
@@ -52,8 +51,8 @@
<code class="computeroutput"><span class="identifier">ContentionManager</span></code></a>
</h4></div></div></div></div>
<p>
- __TBoost<span class="underline">STM</span>_ allows the user to choose
- between static and dynamic polimorphic contention manager.
+ TBoost.STM allows the user to choose between static and dynamic polymorphic
+ contention managers.
</p>
<div class="section" lang="en">
<div class="titlepage"><div><div><h4 class="title">
@@ -346,7 +345,7 @@
</tr></table>
<hr>
<div class="spirit-nav">
-<a accesskey="p" href="built_in_transactional_objects.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="../examples.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
+<a accesskey="p" href="synchronization.html"><img src="../../../../../../doc/html/images/prev.png" alt="Prev"></a><a accesskey="u" href="../reference.html"><img src="../../../../../../doc/html/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../../../doc/html/images/home.png" alt="Home"></a><a accesskey="n" href="utilities.html"><img src="../../../../../../doc/html/images/next.png" alt="Next"></a>
</div>
</body>
</html>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/core.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/core.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/reference/core.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -77,18 +77,10 @@
<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp">
Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">language_like</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
<dd><dl>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__atomic_">Macro
- <code class="computeroutput"><span class="identifier">atomic</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__try_atomic_">Macro
- <code class="computeroutput"><span class="identifier">try_atomic</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__use_atomic_">Macro
- <code class="computeroutput"><span class="identifier">use_atomic</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__catch_before_retry_">Macro
- <code class="computeroutput"><span class="identifier">catch_before_retry</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__before_retry_">Macro
- <code class="computeroutput"><span class="identifier">before_retry</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__end_atom_">Macro
- <code class="computeroutput"><span class="identifier">end_atom</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related">Transaction
+ related</a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related">Memory
+ management related</a></span></dt>
</dl></dd>
<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.transaction_hpp"> Header
<code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">transaction</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a></span></dt>
@@ -697,185 +689,478 @@
Header <code class="computeroutput"><span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">stm</span><span class="special">/</span><span class="identifier">language_like</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span></code></a>
</h4></div></div></div>
<div class="toc"><dl>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__atomic_">Macro
- <code class="computeroutput"><span class="identifier">atomic</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__try_atomic_">Macro
- <code class="computeroutput"><span class="identifier">try_atomic</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__use_atomic_">Macro
- <code class="computeroutput"><span class="identifier">use_atomic</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__catch_before_retry_">Macro
- <code class="computeroutput"><span class="identifier">catch_before_retry</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__before_retry_">Macro
- <code class="computeroutput"><span class="identifier">before_retry</span></code></a></span></dt>
-<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__end_atom_">Macro
- <code class="computeroutput"><span class="identifier">end_atom</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related">Transaction
+ related</a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related">Memory
+ management related</a></span></dt>
+</dl></div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h5 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related" title="Transaction
+ related">Transaction
+ related</a>
+</h5></div></div></div>
+<div class="toc"><dl>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_transaction_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_retry_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_RETRY</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_retry_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_RETRY</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_transaction_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_transaction_in_loop_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION_IN_LOOP</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_retry_in_loop_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_RETRY_IN_LOOP</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_transaction_in_loop_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION_IN_LOOP</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_current_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_CURRENT</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_commit_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_COMMIT</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_abort_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_ABORT</span></code></a></span></dt>
</dl></div>
<p>
- The complexity behind the <code class="computeroutput"><span class="identifier">atomic</span></code>
- macro is needed to guarantee two fundamental goals.
- </p>
+ The complexity behind the <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>/<code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code> macros is
+ needed to guarantee the following fundamental goals:
+ </p>
<div class="itemizedlist"><ul type="disc">
<li>
- First, transactions must start and end correctly.
- </li>
+ First, transactions must start and end correctly.
+ </li>
+<li>
+ Second, transactions must change their retry behavior based on whether
+ they are a nested or root transaction, so that proper closed-nested
+ (flattened) transaction behavior is performed.
+ </li>
+<li>
+<code class="computeroutput"><span class="keyword">return</span></code><span class="emphasis"><em>`goto`
+ statements can be used and must commit the transaction before leaving
+ the transaction block * `break`</em></span><code class="computeroutput"><span class="keyword">continue</span></code>
+ statements can be used and are associated to a user loop and commit
+ the transaction before leaving the transaction block.
+ </li>
<li>
- Second, transactions must change their retry behavior based on whether
- they are a child or parent transaction to ensure proper closed nesting,
- flattened transaction behavior is performed.
- </li>
+ Third, transactions must commit or abort on user exceptions before re-throwing them.
+ </li>
</ul></div>
<p>
- The <code class="computeroutput"><span class="identifier">atomic</span></code> preprocessor
- behaves as follows. When the transactional for loop is entered, a transaction
- automatic object is constructed which initializes the transaction and puts
- it in-flight. The for loop conditional ensures the following conditions:
- </p>
+ The following pseudo code shows the general schema:
+ </p>
+<pre class="programlisting"><span class="number">1</span> <span class="special">>>></span> <span class="identifier">frame</span> <span class="identifier">transaction_block</span> <span class="special">=</span>
+<span class="number">2</span> <span class="identifier">keyword</span><span class="special"><</span><span class="identifier">transaction</span><span class="special">></span> <span class="identifier">statement_sequence</span><span class="special"><</span><span class="identifier">BODY</span><span class="special">></span>
+<span class="number">3</span> <span class="special">[</span> <span class="identifier">keyword</span><span class="special"><</span><span class="identifier">retry</span><span class="special">></span> <span class="identifier">statement_sequence</span><span class="special"><</span><span class="identifier">RETRY</span><span class="special">></span> <span class="special">]</span>
+<span class="number">4</span> <span class="special">>>></span> <span class="identifier">var</span> <span class="identifier">in_loop</span> <span class="special">=</span> <span class="special">...</span>
+<span class="number">5</span> <span class="special">>>></span> <span class="identifier">var</span> <span class="identifier">exception_commits</span> <span class="special">=</span> <span class="special">...</span>
+<span class="number">6</span> <span class="special">>>></span> <span class="identifier">transaformation</span>
+<span class="number">7</span> <span class="special">{</span>
+<span class="number">8</span> <span class="identifier">control_flow</span> <span class="identifier">__ctrl</span><span class="special">;</span>
+<span class="number">9</span> <span class="keyword">for</span><span class="special">(</span><span class="identifier">boost</span><span class="special">::</span><span class="identifier">stm</span><span class="special">::</span><span class="identifier">transaction</span> <span class="identifier">__txn_</span><span class="special">;;)</span> <span class="special">{</span> <span class="special">\</span>
+<span class="number">10</span> <span class="keyword">try</span><span class="special">{</span>
+<span class="number">11</span> <span class="identifier">boost</span><span class="special">::</span><span class="identifier">stm</span><span class="special">::</span><span class="identifier">detail</span><span class="special">::</span><span class="identifier">commit_on_destruction</span> <span class="identifier">__destr</span><span class="special">(</span><span class="identifier">__txn</span><span class="special">);</span> <span class="special">\</span>
+<span class="number">12</span> <span class="keyword">try</span><span class="special">{</span>
+<span class="number">13</span> <span class="keyword">do</span> <span class="special">{</span>
+<span class="number">14</span> <span class="identifier">__ctrl</span><span class="special">=</span><span class="identifier">break_</span><span class="special">;</span>
+<span class="number">15</span> <span class="special"><</span><span class="identifier">BODY</span><span class="special">></span>
+<span class="number">16</span> <span class="identifier">__ctrl</span><span class="special">=</span><span class="identifier">none</span><span class="special">;</span>
+<span class="number">17</span> <span class="keyword">break</span><span class="special">;</span>
+<span class="number">18</span> <span class="special">}</span> <span class="keyword">while</span> <span class="special">((</span><span class="identifier">__ctrl</span><span class="special">=</span><span class="identifier">continue_</span><span class="special">),</span><span class="keyword">false</span><span class="special">);</span>
+<span class="number">19</span> <span class="special">}</span> <span class="keyword">catch</span> <span class="special">(</span><span class="identifier">boost</span><span class="special">::</span><span class="identifier">stm</span><span class="special">::</span><span class="identifier">aborted_tx</span> <span class="special">&)</span> <span class="special">{</span>
+<span class="number">20</span> <span class="identifier">__destr</span><span class="special">.</span><span class="identifier">release</span><span class="special">();</span>
+<span class="number">21</span> <span class="keyword">throw</span><span class="special">;</span>
+<span class="number">22</span> <span class="special">}</span> <span class="keyword">catch</span><span class="special">(...)</span> <span class="special">{</span>
+<span class="number">23</span> <span class="special">>>></span> <span class="keyword">if</span> <span class="identifier">exception_commits</span>
+<span class="number">24</span> <span class="keyword">if</span> <span class="special">(</span><span class="identifier">__txn_</span><span class="special">.</span><span class="identifier">forced_to_abort</span><span class="special">())</span> <span class="special">{</span>
+<span class="number">25</span> <span class="identifier">__destr_</span><span class="special">.</span><span class="identifier">release</span><span class="special">();</span>
+<span class="number">26</span> <span class="special">}</span> <span class="keyword">else</span> <span class="special">{</span>
+<span class="number">27</span> <span class="identifier">__destr_</span><span class="special">.</span><span class="identifier">commit</span><span class="special">();</span>
+<span class="number">28</span> <span class="special">}</span>
+<span class="number">29</span> <span class="special">>>></span> <span class="keyword">else</span>
+<span class="number">30</span> <span class="identifier">__destr_</span><span class="special">.</span><span class="identifier">release</span><span class="special">();</span>
+<span class="number">31</span> <span class="special">>>></span> <span class="identifier">endif</span>
+<span class="number">32</span> <span class="keyword">throw</span><span class="special">;</span>
+<span class="number">33</span> <span class="special">}</span>
+<span class="number">34</span> <span class="keyword">break</span><span class="special">;</span>
+<span class="number">35</span> <span class="special">}</span> <span class="keyword">catch</span> <span class="special">(</span><span class="identifier">boost</span><span class="special">::</span><span class="identifier">stm</span><span class="special">::</span><span class="identifier">aborted_tx</span> <span class="special">&)</span> <span class="special">{</span>
+<span class="number">36</span> <span class="keyword">if</span> <span class="special">(</span><span class="identifier">__txn_</span><span class="special">.</span><span class="identifier">is_nested</span><span class="special">())</span> <span class="keyword">throw</span><span class="special">;</span>
+<span class="number">37</span> <span class="keyword">do</span> <span class="special">{</span>
+<span class="number">38</span> <span class="identifier">__ctrl</span><span class="special">=</span><span class="identifier">break_</span><span class="special">;</span>
+<span class="number">39</span> <span class="special"><</span><span class="identifier">RETRY</span><span class="special">></span>
+<span class="number">40</span> <span class="identifier">__ctrl</span><span class="special">=</span><span class="identifier">none</span><span class="special">;</span>
+<span class="number">41</span> <span class="keyword">break</span><span class="special">;</span>
+<span class="number">42</span> <span class="special">}</span> <span class="keyword">while</span> <span class="special">((</span><span class="identifier">__ctrl</span><span class="special">=</span><span class="identifier">continue_</span><span class="special">),</span><span class="keyword">false</span><span class="special">);</span>
+<span class="number">43</span> <span class="special">}</span>
+<span class="number">44</span> <span class="special">>>></span> <span class="keyword">if</span> <span class="identifier">in_loop</span>
+<span class="number">45</span> <span class="keyword">if</span> <span class="special">(</span><span class="identifier">__ctrl</span> <span class="special">!=</span> <span class="identifier">none</span><span class="special">)</span> <span class="keyword">break</span><span class="special">;</span> <span class="special">\</span>
+<span class="number">46</span> <span class="special">>>></span> <span class="keyword">else</span>
+<span class="number">47</span> <span class="identifier">BOOST_ASSERT</span><span class="special">(</span><span class="identifier">__ctrl</span> <span class="special">==</span> <span class="identifier">none</span><span class="special">);</span> <span class="special">\</span>
+<span class="number">48</span> <span class="special">>>></span> <span class="identifier">endif</span>
+<span class="number">49</span> <span class="identifier">__txn</span><span class="special">.</span><span class="identifier">restart</span><span class="special">();</span> <span class="special">\</span>
+<span class="number">50</span> <span class="special">}</span>
+<span class="number">51</span> <span class="special">>>></span> <span class="keyword">if</span> <span class="identifier">in_loop</span>
+<span class="number">52</span> <span class="keyword">if</span> <span class="special">(</span><span class="identifier">__ctrl</span><span class="special">==</span><span class="identifier">continue_</span><span class="special">)</span> <span class="keyword">continue</span><span class="special">;</span>
+<span class="number">53</span> <span class="keyword">else</span> <span class="keyword">if</span> <span class="special">(</span><span class="identifier">__ctrl</span><span class="special">==</span><span class="identifier">break_</span><span class="special">)</span> <span class="keyword">break</span><span class="special">;</span>
+<span class="number">54</span> <span class="special">>>></span> <span class="identifier">endif</span>
+<span class="number">55</span> <span class="special">}</span>
+<span class="number">56</span> <span class="special">>>></span> <span class="identifier">end_frame</span>
+</pre>
+<p>
+ The <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>/<code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code> macro pair
+ behaves as follows. When the transactional for loop is entered, a transaction
+ automatic object is constructed which initializes the transaction and
+ puts it in-flight. The for loop conditional ensures the following conditions:
+ </p>
<div class="orderedlist"><ol type="1">
<li>
- the transaction is uncommitted,
- </li>
+ the transaction is uncommitted,
+ </li>
<li>
- the transaction has the opportunity to throw an exception if necessary,
- and
- </li>
+ the transaction has the opportunity to throw an exception if necessary,
+ and
+ </li>
<li>
- the transaction is in-flight.
- </li>
+ the transaction is in-flight.
+ </li>
</ol></div>
<p>
- Once a transaction commits, the check on (1) <code class="computeroutput"><span class="special">!</span><span class="identifier">T</span><span class="special">.</span><span class="identifier">committed</span><span class="special">()</span></code> ensures the transaction is not executed
- again. If the transaction has been aborted but is a child transaction,
- the transaction must be restarted at the parent level. The call to (2)
- <code class="computeroutput"><span class="identifier">T</span><span class="special">.</span><span class="identifier">check_throw_before_restart</span><span class="special">()</span></code>
- allows an aborted child transaction to throw an exception upward (before
- it is restarted) so the entire transaction can be restarted from the parent.
- The <code class="computeroutput"><span class="identifier">check_throw_before_restart</span><span class="special">()</span></code> API checks the current run-time state
- of the thread to determine if another transaction is active above it. This
- behavior allows transactions to be used at any nesting level while dynamically
- ensuring the correct retry behavior. Finally, the call to <code class="computeroutput"><span class="identifier">restart_if_not_inflight</span><span class="special">()</span></code>
- ensures the transaction is correctly restarted after each subsequent abort.
- </p>
+ In the inner block a commit-on-destruction variable is created, which will
+ commit the transaction if it is not released. Once a transaction commits, the
+ transaction is not executed again. If the transaction has been aborted
+ but is a nested transaction, the transaction must be restarted at the
+ parent level. This is done by throwing an exception upward so the entire
+ transaction can be restarted from the parent. This behavior allows transactions
+ to be used at any nesting level while dynamically ensuring the correct
+ retry behavior. Finally, the call to <code class="computeroutput"><span class="identifier">restart</span></code>
+ ensures the transaction is correctly restarted after each subsequent
+ abort.
+ </p>
<p>
- Once all of the transactional operations within the for loop are executed,
- a call to no_throw_end() is made which ends the transaction. The <code class="computeroutput"><span class="identifier">no_throw_end</span><span class="special">()</span></code>
- terminates the transaction by either committing or aborting it. Note, however,
- that <code class="computeroutput"><span class="identifier">no_throw_end</span><span class="special">()</span></code>
- does not throw an exception if the transaction is aborted, whereas the
- prior TBoost.STM's API end() does. This non-throwing behavior deviates
- from the prior TBoost.STM implementation of automatic objects when end()
- was invoked within the try / catch body. Furthermore, due to <code class="computeroutput"><span class="identifier">no_throw_end</span><span class="special">()</span></code>
- not throwing an exception if the transaction is aborted, some cases may
- arise where <code class="computeroutput"><span class="identifier">catch_before_retry</span></code>
- or <code class="computeroutput"><span class="identifier">before_retry</span></code> operations
- are not invoked when a transaction is aborted. This is a current limitation
- of the system and is overcome by inserting a manual end() operation as
- the last operation in the atomic block. The explicit end() ensures any
- operations within the before_retry block are executed if the transaction
- is aborted.
- </p>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">atomic</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span>
-<span class="preprocessor">#define</span> <span class="identifier">try_atomic</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span>
-<span class="preprocessor">#define</span> <span class="identifier">use_atomic</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span>
-<span class="preprocessor">#define</span> <span class="identifier">catch_before_retry</span><span class="special">(</span><span class="identifier">E</span><span class="special">)</span>
-<span class="preprocessor">#define</span> <span class="identifier">before_retry</span>
-<span class="preprocessor">#define</span> <span class="identifier">end_atom</span>
+ Once all of the transactional operations within the for loop are executed,
+ the commit-on-destruction destructor is called, which commits the transaction.
+ This terminates the transaction by either committing or aborting it.
+ </p>
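+<p>
+ As a minimal sketch of how this macro pair appears in user code (the
+ function, its argument and the body comments below are hypothetical;
+ transactional reads and writes are covered in the Transactional Objects
+ section):
+ </p>
+<pre class="programlisting">void update(int val)
+{
+    BOOST_STM_TRANSACTION
+        // transactional reads/writes of shared data go here
+        if (val < 0) return;   // a return commits the transaction before leaving
+    BOOST_STM_END_TRANSACTION
+}
+</pre>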
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_transaction_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_transaction_" title="Macro
+ BOOST_STM_TRANSACTION">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code></a>
+</h6></div></div></div>
+<p>
+ Use <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>
+ for an optimistic critical section. It supports single-depth and multiple-depth
+ (nested) transactions, performs an automatic retry when it is the root transaction,
+ and throws an exception when it is a nested transaction. This automatic switch
+ enables correctly behaved nested and non-nested transactions.
+ </p>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_TRANSACTION</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span> <span class="special">...</span>
+</pre>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"> <span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span> <span class="identifier">excluding</span> <span class="keyword">break</span><span class="special">/</span><span class="keyword">continue</span><span class="special">></span>
+ <span class="identifier">BOOST_STM_END_TRANSACTION</span>
+<span class="special">|</span>
+ <span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span> <span class="identifier">excluding</span> <span class="keyword">break</span><span class="special">/</span><span class="keyword">continue</span><span class="special">></span>
+ <span class="identifier">BOOST_STM_RETRY</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+ <span class="identifier">BOOST_STM_END_RETRY</span>
</pre>
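+<p>
+ A hedged sketch of the nesting behaviour described above: a function with
+ its own transaction block may be called from inside another transaction
+ (both functions below are hypothetical and their bodies are elided):
+ </p>
+<pre class="programlisting">void inner()
+{
+    BOOST_STM_TRANSACTION
+        // ... nested when called from outer(): an abort is propagated upward
+    BOOST_STM_END_TRANSACTION
+}
+
+void outer()
+{
+    BOOST_STM_TRANSACTION
+        // ... root transaction: retried automatically on abort
+        inner();
+    BOOST_STM_END_TRANSACTION
+}
+</pre>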
+</div>
<div class="section" lang="en">
-<div class="titlepage"><div><div><h5 class="title">
-<a name="toward_boost_stm.reference.core.language_like_hpp.macro__atomic_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__atomic_" title="Macro
- atomic">Macro
- <code class="computeroutput"><span class="identifier">atomic</span></code></a>
-</h5></div></div></div>
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_retry_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_retry_" title="Macro
+ BOOST_STM_RETRY">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_RETRY</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_RETRY</span> <span class="special">...</span>
+</pre>
<p>
- Use atomic transaction T for optimistic critical section. Supports single-depth
- and multiple-depth (nested) transactions. Performs automatic retry when
- T is a parent transaction, throws exception when T is a child transaction.
- This automatic switch enables correctly behaved nested and non-nested
- transactions
- </p>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">atomic</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span> <span class="special">\</span>
- <span class="keyword">for</span> <span class="special">(</span><span class="identifier">transaction</span> <span class="identifier">T</span><span class="special">;</span> <span class="special">\</span>
- <span class="special">!</span><span class="identifier">T</span><span class="special">.</span><span class="identifier">committed</span><span class="special">()</span> <span class="special">&&</span> <span class="identifier">T</span><span class="special">.</span><span class="identifier">check_throw_before_restart</span><span class="special">()</span> <span class="special">&&</span> <span class="identifier">T</span><span class="special">.</span><span class="identifier">restart_if_not_inflight</span><span class="special">();</span> <span class="special">\</span>
- <span class="identifier">T</span><span class="special">.</span><span class="identifier">no_throw_end</span><span class="special">())</span>
- <span class="keyword">try</span>
+ Opens a retry block.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_RETRY</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_END_RETRY</span>
</pre>
</div>
<div class="section" lang="en">
-<div class="titlepage"><div><div><h5 class="title">
-<a name="toward_boost_stm.reference.core.language_like_hpp.macro__try_atomic_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__try_atomic_" title="Macro
- try_atomic">Macro
- <code class="computeroutput"><span class="identifier">try_atomic</span></code></a>
-</h5></div></div></div>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">try_atomic</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span> <span class="special">\</span>
- <span class="keyword">for</span> <span class="special">(</span><span class="identifier">transaction</span> <span class="identifier">T</span><span class="special">;</span> <span class="special">\</span>
- <span class="special">!</span><span class="identifier">T</span><span class="special">.</span><span class="identifier">committed</span><span class="special">()</span> <span class="special">&&</span> <span class="identifier">T</span><span class="special">.</span><span class="identifier">restart</span><span class="special">();</span> <span class="special">\</span>
- <span class="identifier">T</span><span class="special">.</span><span class="identifier">no_throw_end</span><span class="special">())</span>
- <span class="keyword">try</span>
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_retry_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_retry_" title="Macro
+ BOOST_STM_END_RETRY">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_RETRY</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_END_RETRY</span> <span class="keyword">catch</span> <span class="special">(</span><span class="identifier">aborted_tx</span> <span class="special">&)</span>
+</pre>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_RETRY</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_END_RETRY</span>
</pre>
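+<p>
+ As an illustrative sketch only, a retry handler can be used to observe
+ aborted attempts of the root transaction (the plain counter below is
+ hypothetical and lives outside the transaction):
+ </p>
+<pre class="programlisting">int attempts = 0;
+BOOST_STM_TRANSACTION
+    // transactional work ...
+BOOST_STM_RETRY
+    ++attempts;   // runs each time the root transaction is about to be retried
+BOOST_STM_END_RETRY
+</pre>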
</div>
<div class="section" lang="en">
-<div class="titlepage"><div><div><h5 class="title">
-<a name="toward_boost_stm.reference.core.language_like_hpp.macro__use_atomic_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__use_atomic_" title="Macro
- use_atomic">Macro
- <code class="computeroutput"><span class="identifier">use_atomic</span></code></a>
-</h5></div></div></div>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">use_atomic</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span> <span class="special">\</span>
- <span class="keyword">for</span> <span class="special">(</span><span class="identifier">transaction</span> <span class="identifier">T</span><span class="special">;</span> <span class="special">\</span>
- <span class="special">!</span><span class="identifier">T</span><span class="special">.</span><span class="identifier">committed</span><span class="special">()</span> <span class="special">&&</span> <span class="identifier">T</span><span class="special">.</span><span class="identifier">restart</span><span class="special">();</span> <span class="special">\</span>
- <span class="identifier">T</span><span class="special">.</span><span class="identifier">end</span><span class="special">())</span>
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_transaction_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_transaction_" title="Macro
+ BOOST_STM_END_TRANSACTION">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_END_TRANSACTION</span> <span class="special">...</span>
+</pre>
+<p>
+ Equivalent to
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_RETRY</span> <span class="special">{}</span>
+<span class="identifier">BOOST_STM_END_RETRY</span>
+</pre>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
</pre>
</div>
<div class="section" lang="en">
-<div class="titlepage"><div><div><h5 class="title">
-<a name="toward_boost_stm.reference.core.language_like_hpp.macro__catch_before_retry_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__catch_before_retry_" title="Macro
- catch_before_retry">Macro
- <code class="computeroutput"><span class="identifier">catch_before_retry</span></code></a>
-</h5></div></div></div>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">catch_before_retry</span><span class="special">(</span><span class="identifier">E</span><span class="special">)</span> <span class="keyword">catch</span> <span class="special">(</span><span class="identifier">aborted_tx</span> <span class="special">&</span><span class="identifier">E</span><span class="special">)</span>
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_transaction_in_loop_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_transaction_in_loop_" title="Macro
+ BOOST_STM_TRANSACTION_IN_LOOP">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION_IN_LOOP</span></code></a>
+</h6></div></div></div>
+<p>
+ Use <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION_IN_LOOP</span></code>
+ for an optimistic critical section inside a user loop. It works as <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>
+ and in addition supports <code class="computeroutput"><span class="keyword">break</span></code>/<code class="computeroutput"><span class="keyword">continue</span></code> statements.
+ </p>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_TRANSACTION_IN_LOOP</span><span class="special">(</span><span class="identifier">T</span><span class="special">)</span> <span class="special">...</span>
</pre>
<p>
- Catches exception when transaction is aborted. This, before_retry, or
- end_atom must follow an atomic block. Once all operations within the
- catch_before_retry block are executed, control is returned to the local
- atomic where the transaction is retried (if it is the parent) or the
- atomic block is exited by exception (if it is a child).
- </p>
+ Currently equivalent to
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+</pre>
<p>
- To be used pairwise with <code class="computeroutput"><span class="identifier">try_atomic</span></code>
- or <code class="computeroutput"><span class="identifier">atomic</span></code>.
- </p>
+ Usage
+ </p>
+<pre class="programlisting">user loop {
+  BOOST_STM_TRANSACTION_IN_LOOP
+    <sentence sequence including break/continue>
+  BOOST_STM_END_TRANSACTION_IN_LOOP
+}
+|
+  BOOST_STM_TRANSACTION_IN_LOOP
+    <sentence sequence>
+  BOOST_STM_RETRY
+    <sentence sequence>
+  BOOST_STM_END_RETRY_IN_LOOP
+</pre>
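+<p>
+ A sketch of the in-loop form under the behaviour described above; the
+ loop bound and the <code class="computeroutput"><span class="identifier">done</span></code> predicate are hypothetical:
+ </p>
+<pre class="programlisting">for (int i = 0; i < 100; ++i) {
+    BOOST_STM_TRANSACTION_IN_LOOP
+        if (done(i)) break;   // commits the transaction, then leaves the for loop
+        // transactional work for iteration i ...
+    BOOST_STM_END_TRANSACTION_IN_LOOP
+}
+</pre>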
</div>
<div class="section" lang="en">
-<div class="titlepage"><div><div><h5 class="title">
-<a name="toward_boost_stm.reference.core.language_like_hpp.macro__before_retry_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__before_retry_" title="Macro
- before_retry">Macro
- <code class="computeroutput"><span class="identifier">before_retry</span></code></a>
-</h5></div></div></div>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">before_retry</span> <span class="keyword">catch</span> <span class="special">(</span><span class="identifier">aborted_tx</span> <span class="special">&)</span>
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_retry_in_loop_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_retry_in_loop_" title="Macro
+ BOOST_STM_END_RETRY_IN_LOOP">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_RETRY_IN_LOOP</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_END_RETRY_IN_LOOP</span> <span class="special">...</span>
</pre>
<p>
- Same as catch_before_retry(E) except the exception is discarded.
- </p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION_IN_LOOP</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_RETRY</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_END_RETRY_IN_LOOP</span>
+</pre>
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_transaction_in_loop_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_end_transaction_in_loop_" title="Macro
+ BOOST_STM_END_TRANSACTION_IN_LOOP">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION_IN_LOOP</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_END_TRANSACTION_IN_LOOP</span> <span class="special">...</span>
+</pre>
<p>
- To be used pairwise with <code class="computeroutput"><span class="identifier">try_atomic</span></code>
- or <code class="computeroutput"><span class="identifier">atomic</span></code>.
- </p>
+ Equivalent to
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_RETRY</span> <span class="special">{}</span>
+<span class="identifier">BOOST_STM_END_RETRY_IN_LOOP</span>
+</pre>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION_IN_LOOP</span>
+ <span class="special"><</span><span class="identifier">sentence</span> <span class="identifier">sequence</span><span class="special">></span>
+<span class="identifier">BOOST_STM_END_TRANSACTION_IN_LOOP</span>
+</pre>
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_current_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_current_" title="Macro
+ BOOST_STM_CURRENT">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_CURRENT</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_CURRENT</span> <span class="special">...</span>
+</pre>
+<p>
+ Reference to the current transaction within a <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>/<code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code>
+ or <code class="computeroutput"><span class="identifier">BOOST_STM_RETRY</span></code>/<code class="computeroutput"><span class="identifier">BOOST_STM_END_RETRY</span></code>
+ block.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="identifier">BOOST_STM_CURRENT</span><span class="special">.</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
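+<p>
+ A sketch using the current-transaction reference; only a member that
+ appears in the schema above (<code class="computeroutput"><span class="identifier">is_nested</span></code>) is shown, as an assumed query:
+ </p>
+<pre class="programlisting">BOOST_STM_TRANSACTION
+    if (BOOST_STM_CURRENT.is_nested()) {
+        // running inside an enclosing transaction
+    }
+BOOST_STM_END_TRANSACTION
+</pre>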
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_commit_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_commit_" title="Macro
+ BOOST_STM_COMMIT">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_COMMIT</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_COMMIT</span> <span class="special">...</span>
+</pre>
+<p>
+ Commits the current transaction within a <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>/<code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code> block.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="identifier">BOOST_STM_COMMIT</span><span class="special">();</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_abort_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.transaction_related.macro__boost_stm_abort_" title="Macro
+ BOOST_STM_ABORT">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_ABORT</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_ABORT</span> <span class="special">...</span>
+</pre>
+<p>
+ Aborts the current transaction within a <code class="computeroutput"><span class="identifier">BOOST_STM_TRANSACTION</span></code>/<code class="computeroutput"><span class="identifier">BOOST_STM_END_TRANSACTION</span></code> block.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="identifier">BOOST_STM_ABORT</span><span class="special">();</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
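+<p>
+ A sketch of a conditional abort; <code class="computeroutput"><span class="identifier">invalid</span></code> and <code class="computeroutput"><span class="identifier">input</span></code> are hypothetical:
+ </p>
+<pre class="programlisting">BOOST_STM_TRANSACTION
+    if (invalid(input)) BOOST_STM_ABORT();   // give up the changes made in this attempt
+    // transactional work ...
+BOOST_STM_END_TRANSACTION
+</pre>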
+</div>
</div>
<div class="section" lang="en">
<div class="titlepage"><div><div><h5 class="title">
-<a name="toward_boost_stm.reference.core.language_like_hpp.macro__end_atom_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.macro__end_atom_" title="Macro
- end_atom">Macro
- <code class="computeroutput"><span class="identifier">end_atom</span></code></a>
+<a name="toward_boost_stm.reference.core.language_like_hpp.memory_management_related"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related" title="Memory
+ management related">Memory
+ management related</a>
</h5></div></div></div>
-<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">end_atom</span> <span class="keyword">catch</span> <span class="special">(</span><span class="identifier">aborted_tx</span> <span class="special">&)</span> <span class="special">{}</span>
+<div class="toc"><dl>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_new_ptr_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_NEW_PTR</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_delete_ptr_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_DELETE_PTR</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_new_array_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_NEW_ARRAY</span></code></a></span></dt>
+<dt><span class="section"><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_delete_array_">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_DELETE_ARRAY</span></code></a></span></dt>
+</dl></div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_new_ptr_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_new_ptr_" title="Macro
+ BOOST_STM_NEW_PTR">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_NEW_PTR</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_NEW_PTR</span><span class="special">(</span><span class="identifier">T_ARGS</span><span class="special">)</span> <span class="special">...</span>
</pre>
<p>
- Same as before_retry except the exception is discarded and {} are automated.
- </p>
+ Creates a new instance using the <code class="computeroutput"><span class="keyword">new</span></code>
+ operator and notifies the STM system of this new creation.
+ </p>
<p>
- To be used pairwise with <code class="computeroutput"><span class="identifier">try_atomic</span></code>
- or <code class="computeroutput"><span class="identifier">atomic</span></code>.
- </p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special">...</span>
+ <span class="identifier">prev</span><span class="special">-></span><span class="identifier">next_</span><span class="special">=</span><span class="identifier">BOOST_STM_NEW</span><span class="special">(</span><span class="identifier">list_node</span><span class="special"><</span><span class="identifier">T</span><span class="special">>(</span><span class="identifier">val</span><span class="special">,</span> <span class="identifier">curr</span><span class="special">));</span>
+ <span class="special">...</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_delete_ptr_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_delete_ptr_" title="Macro
+ BOOST_STM_DELETE_PTR">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_DELETE_PTR</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_DELETE_PTR</span><span class="special">(</span><span class="identifier">PTR</span><span class="special">)</span> <span class="special">...</span>
+</pre>
+<p>
+ Deletes the instance using the <code class="computeroutput"><span class="keyword">delete</span></code>
+ operator and notifies the STM system of this deletion.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special">...</span>
+ <span class="identifier">prev</span><span class="special">-></span><span class="identifier">next_</span><span class="special">=</span><span class="identifier">BOOST_STM_NEW</span><span class="special">(</span><span class="identifier">list_node</span><span class="special"><</span><span class="identifier">T</span><span class="special">>(</span><span class="identifier">val</span><span class="special">,</span> <span class="identifier">curr</span><span class="special">));</span>
+ <span class="special">...</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
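+<p>
+ A sketch pairing the allocation and deallocation macros inside one
+ transaction, reusing the <code class="computeroutput"><span class="identifier">list_node</span></code> example above (the constructor
+ arguments and the temporary name are assumptions):
+ </p>
+<pre class="programlisting">BOOST_STM_TRANSACTION
+    list_node<T>* tmp = BOOST_STM_NEW_PTR(list_node<T>(val, curr));
+    // ... use tmp transactionally ...
+    BOOST_STM_DELETE_PTR(tmp);   // the deletion is made known to the STM system
+BOOST_STM_END_TRANSACTION
+</pre>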
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_new_array_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_new_array_" title="Macro
+ BOOST_STM_NEW_ARRAY">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_NEW_ARRAY</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_NEW_ARRAY</span><span class="special">(</span><span class="identifier">SIZE</span><span class="special">,</span> <span class="identifier">T</span><span class="special">)</span> <span class="special">...</span>
+</pre>
+<p>
+ Creates a new array using the <code class="computeroutput"><span class="keyword">new</span>
+ <span class="special">[]</span></code> operator and notifies the
+ STM system of this new allocation.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special">...</span>
+ <span class="identifier">prev</span><span class="special">-></span><span class="identifier">next_</span><span class="special">=</span><span class="identifier">BOOST_STM_NEW</span><span class="special">(</span><span class="identifier">size</span><span class="special">,</span> <span class="identifier">list_node</span><span class="special"><</span><span class="identifier">T</span><span class="special">>(</span><span class="identifier">val</span><span class="special">,</span> <span class="identifier">curr</span><span class="special">));</span>
+ <span class="special">...</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
+</div>
+<div class="section" lang="en">
+<div class="titlepage"><div><div><h6 class="title">
+<a name="toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_delete_array_"></a><a href="core.html#toward_boost_stm.reference.core.language_like_hpp.memory_management_related.macro__boost_stm_delete_array_" title="Macro
+ BOOST_STM_DELETE_ARRAY">Macro
+ <code class="computeroutput"><span class="identifier">BOOST_STM_DELETE_ARRAY</span></code></a>
+</h6></div></div></div>
+<pre class="programlisting"><span class="preprocessor">#define</span> <span class="identifier">BOOST_STM_DELETE_ARRAY</span><span class="special">(</span><span class="identifier">PTR</span><span class="special">,</span> <span class="identifier">SIZE</span><span class="special">)</span> <span class="special">...</span>
+</pre>
+<p>
+ Deletes the array instance using the <code class="computeroutput"><span class="keyword">delete</span><span class="special">[]</span></code> operator and notifies the STM system
+ of this deletion.
+ </p>
+<p>
+ Usage
+ </p>
+<pre class="programlisting"><span class="identifier">BOOST_STM_TRANSACTION</span>
+ <span class="special">...</span>
+ <span class="identifier">prev</span><span class="special">-></span><span class="identifier">next_</span><span class="special">=</span><span class="identifier">BOOST_STM_NEW</span><span class="special">(</span><span class="identifier">list_node</span><span class="special"><</span><span class="identifier">T</span><span class="special">>(</span><span class="identifier">val</span><span class="special">,</span> <span class="identifier">curr</span><span class="special">));</span>
+ <span class="special">...</span>
+<span class="identifier">BOOST_STM_END_TRANSACTION</span>
+</pre>
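+<p>
+          For illustration only (the names <code class="computeroutput"><span class="identifier">buffer</span></code>
+          and <code class="computeroutput"><span class="identifier">size</span></code> are placeholders, not taken
+          from the library examples), the following sketch pairs an allocation made with
+          <code class="computeroutput"><span class="identifier">BOOST_STM_NEW_ARRAY</span></code> with a matching
+          <code class="computeroutput"><span class="identifier">BOOST_STM_DELETE_ARRAY</span></code> call that passes
+          the same size:
+        </p>
+<pre class="programlisting">BOOST_STM_TRANSACTION
+    list_node<T>* buffer = BOOST_STM_NEW_ARRAY(size, list_node<T>);
+    ...
+    BOOST_STM_DELETE_ARRAY(buffer, size);
+BOOST_STM_END_TRANSACTION
+</pre>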
+</div>
</div>
</div>
<div class="section" lang="en">
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/ext_references.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/ext_references.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/ext_references.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -27,7 +27,7 @@
<a name="toward_boost_stm.users_guide.ext_references"></a> References
</h3></div></div></div>
<a name="toward_boost_stm.users_guide.ext_references.tboost_stm_relared"></a><h4>
-<a name="id4829196"></a>
+<a name="id4824954"></a>
<a href="ext_references.html#toward_boost_stm.users_guide.ext_references.tboost_stm_relared">TBoost.STM
relared</a>
</h4>
@@ -155,7 +155,7 @@
</dl>
</div>
<a name="toward_boost_stm.users_guide.ext_references.stm_related"></a><h4>
-<a name="id4829571"></a>
+<a name="id4825325"></a>
<a href="ext_references.html#toward_boost_stm.users_guide.ext_references.stm_related">STM
related</a>
</h4>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/getting_started.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/getting_started.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/getting_started.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -64,7 +64,7 @@
memory!
</p>
<a name="toward_boost_stm.users_guide.getting_started.install.getting_boost_stm"></a><h5>
-<a name="id4814747"></a>
+<a name="id4765244"></a>
<a href="getting_started.html#toward_boost_stm.users_guide.getting_started.install.getting_boost_stm">Getting
Boost.STM</a>
</h5>
@@ -81,7 +81,7 @@
Sandbox</a>.
</p>
<a name="toward_boost_stm.users_guide.getting_started.install.building_boost_stm"></a><h5>
-<a name="id4814800"></a>
+<a name="id4765298"></a>
<a href="getting_started.html#toward_boost_stm.users_guide.getting_started.install.building_boost_stm">Building
Boost.STM</a>
</h5>
@@ -93,7 +93,7 @@
<span class="identifier">bjam</span>
</pre>
<a name="toward_boost_stm.users_guide.getting_started.install.requirements"></a><h5>
-<a name="id4814862"></a>
+<a name="id4765363"></a>
<a href="getting_started.html#toward_boost_stm.users_guide.getting_started.install.requirements">Requirements</a>
</h5>
<p>
@@ -163,7 +163,7 @@
</dl>
</div>
<a name="toward_boost_stm.users_guide.getting_started.install.exceptions_safety"></a><h5>
-<a name="id4815127"></a>
+<a name="id4765637"></a>
<a href="getting_started.html#toward_boost_stm.users_guide.getting_started.install.exceptions_safety">Exceptions
safety</a>
</h5>
@@ -172,7 +172,7 @@
of exception safety as long as the underlying parameters provide it.
</p>
<a name="toward_boost_stm.users_guide.getting_started.install.thread_safety"></a><h5>
-<a name="id4815153"></a>
+<a name="id4765663"></a>
<a href="getting_started.html#toward_boost_stm.users_guide.getting_started.install.thread_safety">Thread
safety</a>
</h5>
@@ -180,7 +180,7 @@
All functions in the library are thread-unsafe except when noted explicitly.
</p>
<a name="toward_boost_stm.users_guide.getting_started.install.tested_compilers"></a><h5>
-<a name="id4815178"></a>
+<a name="id4765690"></a>
<a href="getting_started.html#toward_boost_stm.users_guide.getting_started.install.tested_compilers">Tested
compilers</a>
</h5>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/tutorial.html
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/tutorial.html (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/html/toward_boost_stm/users_guide/tutorial.html 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -81,7 +81,7 @@
calls.
</p>
<a name="toward_boost_stm.users_guide.tutorial.a_simple_transaction.client_invoked_inserts"></a><h5>
-<a name="id4815527"></a>
+<a name="id4758392"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.a_simple_transaction.client_invoked_inserts">Client
Invoked Inserts</a>
</h5>
@@ -104,7 +104,7 @@
of TM solutions into algorithms of new and legacy systems.
</p>
<a name="toward_boost_stm.users_guide.tutorial.a_simple_transaction.linked_list_declaration"></a><h5>
-<a name="id4815743"></a>
+<a name="id4758620"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.a_simple_transaction.linked_list_declaration">Linked
list declaration</a>
</h5>
@@ -128,7 +128,7 @@
<span class="special">};</span>
</pre>
<a name="toward_boost_stm.users_guide.tutorial.a_simple_transaction.insert_retry_transaction"></a><h5>
-<a name="id4816039"></a>
+<a name="id4758946"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.a_simple_transaction.insert_retry_transaction">Insert
retry transaction</a>
</h5>
@@ -158,7 +158,7 @@
with its absorption of aborted transactions and only aborted transactions.
</p>
<a name="toward_boost_stm.users_guide.tutorial.a_simple_transaction.insert_specific"></a><h5>
-<a name="id4816171"></a>
+<a name="id4759084"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.a_simple_transaction.insert_specific">Insert
specific</a>
</h5>
@@ -580,7 +580,7 @@
<span class="special">}</span>
</pre>
<a name="toward_boost_stm.users_guide.tutorial.a_dynamically_prioritized__composed_transaction.priority_inversion_allowed"></a><h5>
-<a name="id4821217"></a>
+<a name="id4817278"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.a_dynamically_prioritized__composed_transaction.priority_inversion_allowed">Priority
Inversion Allowed</a>
</h5>
@@ -607,7 +607,7 @@
for their specific needs.
</p>
<a name="toward_boost_stm.users_guide.tutorial.a_dynamically_prioritized__composed_transaction.the_future_of_parallel_programming"></a><h5>
-<a name="id4821319"></a>
+<a name="id4817380"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.a_dynamically_prioritized__composed_transaction.the_future_of_parallel_programming">The
Future of Parallel Programming</a>
</h5>
@@ -796,7 +796,7 @@
values from a function</a>
</h4></div></div></div>
<a name="toward_boost_stm.users_guide.tutorial.returning_values_from_a_function.returning_from_outside_the_transaction_context"></a><h5>
-<a name="id4822924"></a>
+<a name="id4818985"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.returning_values_from_a_function.returning_from_outside_the_transaction_context">Returning
from outside the transaction context</a>
</h5>
@@ -813,7 +813,7 @@
<span class="special">}</span>
</pre>
<a name="toward_boost_stm.users_guide.tutorial.returning_values_from_a_function.returning_from_inside"></a><h5>
-<a name="id4823100"></a>
+<a name="id4819161"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.returning_values_from_a_function.returning_from_inside">Returning
from inside</a>
</h5>
@@ -852,7 +852,7 @@
care of modifications of the pointer itself.
</p>
<a name="toward_boost_stm.users_guide.tutorial.pointer_to_transactional_objects.using_the_mixin_transaction_object_lt__gt_"></a><h5>
-<a name="id4823503"></a>
+<a name="id4819564"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.pointer_to_transactional_objects.using_the_mixin_transaction_object_lt__gt_">Using
the mixin transaction_object<></a>
</h5>
@@ -920,7 +920,7 @@
inheriting from a transactional class <code class="computeroutput"><span class="identifier">B</span></code>
</p>
<a name="toward_boost_stm.users_guide.tutorial.polymorphic.using_the_mixin_transaction_object_lt__gt_"></a><h5>
-<a name="id4824642"></a>
+<a name="id4820703"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.polymorphic.using_the_mixin_transaction_object_lt__gt_">Using
the mixin transaction_object<></a>
</h5>
@@ -954,7 +954,7 @@
<span class="special">*</span><span class="identifier">ptr_b</span> <span class="special">=</span> <span class="identifier">BOOST_STM_NEW</span><span class="special">(</span><span class="identifier">_</span><span class="special">,</span> <span class="identifier">D</span><span class="special">());</span>
</pre>
<a name="toward_boost_stm.users_guide.tutorial.polymorphic.using_the_wrapper_transactional_object_lt__gt_"></a><h5>
-<a name="id4825006"></a>
+<a name="id4821067"></a>
<a href="tutorial.html#toward_boost_stm.users_guide.tutorial.polymorphic.using_the_wrapper_transactional_object_lt__gt_">Using
the wrapper transactional_object<></a>
</h5>
Modified: sandbox/stm/branches/vbe/libs/stm/doc/introduction.qbk
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/introduction.qbk (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/introduction.qbk 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -8,293 +8,6 @@
[section:intro Introduction]
-[heading Transactional Memory]
-
-Transactional memory(TM) is a modern type of concurrency control that uses transactions as its synchronization mechanism. A transaction is a finite sequence of operations that are executed in an *atomic*, *isolated* and *consistent* manner. The atomicity, isolation and consistency (*ACI*) of transaction's are derived from the ACID principle in the database community. TM does not exhibit the D (durability) of ACID because unlike database transactions, TM transactions are not saved to permanent storage (e.g., hard drives).
-
-Transactions are executed speculatively (optimistically) and are checked for consistency at various points in the transaction's lifetime. Programmers specify the starting and ending points of a transaction. All of the operations between those points make up the transaction's execution body. Transactions are commonly represented using the atomic structure shown in the figure below.
-
- 1 transaction
- 2 {
- 3 ++x;
- 4 --y;
- 5 }
-
- Simple Transaction Using the atomic Keyword
-
-Once a transaction has started it either commits or aborts. A transaction's operations are only seen once the transaction has committed, providing the illusion that all of the operations occurred at a single instance in time. The instructions of a committed transaction are viewed as if they occurred as an indivisible event, not as a set of operations executed serially. The operations of an aborted transaction are never seen by other threads, even if such operations were executed within a transaction and then rolled back.
-
-In the case of the above example, if the transaction was committed both operations `++x` and `--y` would be made visible to other threads at the same instance in time. If the transaction the above example was aborted, neither operation (`++x` and `--y`) would appear to have occurred even if the local transaction executed one or both operations.
-
-TM offers three distinct advantages over other parallel programming abstractions.
-
-# TM is simple; transactions, the synchronization mechanism of TM, are easier to program than other synchronization mechanisms because they move shared memory management into the underlying TM subsystem, removing its complexity from the programmer's view. Moreover, TM exposes a simple programmer interface, reducing (or in some cases, removing) the potential for deadlock, livelock and priority inversion.
-
-# TM is scalable; it achieves increased computational throughput when compared to other parallel programming abstractions by allowing multiple threads to speculatively execute the same critical section. When concurrently executing threads do not exhibit shared data conflicts, they are guaranteed to make forward progress.
-
-# TM is modular; transactions can be nested to any depth and function as a single unit. This behavior allows application programmers to extend atomic library algorithms into atomic domain-specific algorithms without requiring the application programmers to understand the implementation details of the library algorithm.
-
-For these reasons, transactions are considered an important synchronization mechanism and TM is viewed as an important type of concurrency control. The remainder of this section presents TM from a viewpoint of (1) simplicity, (2) scalability and (3) modularity.
-
-[heading Simplicity]
-
-Synchronization problems, such as deadlocks, livelocks and priority inversion are common in software systems using mutual exclusion. TM avoids many of these problems by providing a synchronization mechanism that does not expose any of its implementation details to the programmer. The only low level interfaces the programmer needs to use for TM are:
-
-* begin_tx() - the signaled start of the transaction.
-* read(loc) - reads the specified memory location, storing its location in the transaction's read set and returning its current value.
-* write(loc, val) - writes the specified memory location to the supplied val, storing its location in the transaction's write set.
-* end_tx() - the signaled end of the transaction. end_tx() returns true if the transaction commits, otherwise it returns false.
-
-The above interfaces allow the programmer to create a transaction (using begin_tx()), specify its memory operations (using read() and write()) and terminate (using end_tx()()). Moreover, none of the interfaces specify details of the TM subsystem's implementation. This leaves the TM system's implementation disjoint from the interfaces it supplies, a key characteristic for TM's simplicity.
-
-All TM implementations use some combination of the above interfaces. TMs implemented within compilers tend to implicitly annotate transactional read() and write() operations, whereas those implemented within software libraries tend to require the programmer explicitly state which operations are transactional reads and writes. An example of a transaction using the above interfaces alongside an actual STM library implementation is shown in Figure 3.
-
- 1 // TM interfaces // __Boost_STM__
- 2 do { BOOST_STM_TRANSACTION
- 3 begin_tx(); {
- 4 write(x, read(x)+1); ++x;
- 5 write(y, read(y)-1); --y;
- 6 } while (end_tx()); } BOOST_STM_END_TRANSACTION
-
- Figure 3. Transaction Using (1) Explicit TM Interfaces and (2) __Boost_STM__.
-
-Figure 3 implements the same transaction as shown in Figure 2, except all transactional memory accesses, including the transaction's retry behavior (e.g., its loop), are demonstrated from a simple TM interface perspective and an actual library implementation (__Boost_STM__). While most TM systems handle some portion of these interface calls implicitly, as is shown in the __Boost_STM__ transaction, it is important to note that even when all operations are made visible to the end programmer, transactions are still devoid of many concurrency problems, such as data races and deadlocks (explained below), that plague other types of concurrency control.
-
-For example, as long as the programmer properly annotates the access to the shared variables x and y as shown in Figure 3, it is impossible for race conditions or deadlocks to occur. Furthermore, the programmer does not need any program-specific knowledge to use shared data; he or she simply uses the TM interfaces supplied by the system and the resulting behavior is guaranteed to be consistent. This is explained in greater detail in Section ???3.1.
-
-Other types of concurrency control, such as mutual exclusion, cannot achieve the same interface simplicity, because part of their implementation is associated with, or exposed through, their interface. To demonstrate this, consider the fine-grained locking example of Figure 1 as shown below.
-
- 1 // fine-grained locking
- 2 lock(mutexX);
- 3 lock(mutexY);
- 4 ++x;
- 5 --y;
- 6 unlock(mutexX);
- 7 unlock(mutexY);
-
-There is no universal interface that can be used to properly access the shared data protected by the mutual exclusion in the above fine-grained locking example. Instead, the programmer must be aware that mutexX and mutexY protect shared data x and y and, therefore, the locks must be obtained before accessing the shared data. In short, the programmer is responsible for knowing not only that mutual exclusion is used, but also how it is used (e.g., which locks protect which shared variables). In this case, mutexX must be obtained before mutexY. If another section of code implements the following, a deadlock scenario will eventually occur.
-
- 1 // fine-grained locking
- 2 lock(mutexY);
- 3 lock(mutexX); // deadlock here
- 4 --y;
- 5 ++x;
- 6 unlock(mutexY);
- 7 unlock(mutexX);
-
-[heading Understanding Concurrency Hazards]
-
-Informally, a concurrency hazard is a condition existing in a set of operations that, if executed with a specific concurrent interleaving, results in one or many unwanted sideeffects. Most errors in parallel systems, such as deadlocks and priority inversion, are the specific execution of concurrency hazards resulting in the unwanted side-effect(s) they contain. If the concurrency hazards are eliminated, the parallel system errors contained within the concurrency hazards are also eliminated. Unfortunately, detecting existing concurrency hazards is non-trivial and therefore eliminating them is also non-trivial.
-
-Mutual exclusion exhibits more concurrency hazards than TM because its implementation details (i.e., its locks) must be exposed and used by the end programmer. While the locks used to enforce mutual exclusion by themselves are not concurrency hazards, their use can lead to a number of hazards. As such, using locks leads to concurrency hazards.
-
-Because the mutual exclusion locking details are exposed to the programmer and because the programmer must maintain a universal and informal contract to use these locks, concurrency hazards can arise due to the number of possible misuses that can be introduced by the programmer. In particular, if the programmer accidentally deviates from the informal locking contract, he or she may inadvertently introduce a concurrency hazard that can cause the program to deadlock, invert priority or lead to inconsistent data.
-
-In contrast, TM has no universal or informal contract between shared data that the end programmer needs to understand and follow as is required in mutual exclusion. Due to this, TM can hide its implementation details which results in reduced concurrency hazards. In particular, each transaction tracks the memory it uses in its read and write sets. When a transaction begins its commit phase, it verifies its state is consistent and commits its changes. If a transaction finds its state is inconsistent, it discards its changes and restarts. All of this can be achieved using the basic TM interfaces shown in Section 3 without exposing any implementation details. In order to use TM, the end programmer only needs to know how to correctly create a transaction. Once the transaction is executed, regardless of how it is executed, it results in a program state that is guaranteed to be consistent.
-
-Fundamentally, TM exhibits less concurrency hazards than mutual exclusion because its implementation details are divorced from its interface and can therefore be hidden within its subsystem. Any number of implementations can be used in a TM subsystem using only the basic TM interfaces shown in Section 3. The same is not true for mutual exclusion. Mutual exclusion, regardless of how it is implemented, exposes details of its implementation to the programmer. As demonstrated in Section 5, mutual exclusion does not provide software modularity specifically because extending an existing module requires an understanding and extension of that module's implementation. When such locking implementations are hidden inside of software libraries, extending these modules can range from difficult to impossible.
-
-[heading Testing: Race Conditions and Interleavings]
-
-A race condition is a common concurrency hazard that exists in parallel or distributed software. As with all concurrency hazards, race conditions rely on a specific interleaving of concurrent execution to cause their unwanted side-effect. In this section we demonstrate that race conditions do not exist in TM and therefore, software testing is greatly simplified because all possible interleavings do not need to be tested to ensure correct system behavior. In order to demonstrate that race conditions are absent from TM, we must first show that they are present in other types of concurrency control.
-
- 1 // Thread T1 // Thread T
- 2 lock(L2);
- 3 lock(L1);
- 4 ...
- 5 unlock(L1);
- 6 unlock(L2);
- 7 lock(L1);
- 8 lock(L2);
- 9 ...
-
- Figure 4. Mutual Exclusion Race Condition.
-
-Consider the race condition present in the mutual exclusion example shown in Figure 4. The race condition present in the example results in a deadlock if thread T1 executes line 7 followed by thread T2 executing line 2. However, if the program executes the lines in order (e.g., line 1, then line 2, then line 3, etc.), the system will execute properly. The fundamental problem in Figure 4 is that it contains a concurrency hazard; in particular, it contains a race condition. To further complicate matters, the race condition can only be observed in two of many possible concurrent executions. Those two executions are: T1 executes line 7 followed by T2 executing line 2 or T2 executes line 2 followed by T1 executing line 7. All other possible concurrent interleavings of threads T1 and T2 avoid the deadlock race condition. More specifically, as long as T1 executes lines 7-8 atomically or T2 executes line 2-3 atomically, all remaining concurrent interleavings are free of the deadlock race condition.
-
-Because it is unlikely that the deadlock race condition will occur, the programmer may never observe it, no matter how many times the program is tested. Only exhaustive testing, which tests all possible concurrent interleavings, is guaranteed to identify the presence of the deadlock. Regrettably, exhaustive testing is an unrealistic solution for most programs due to the time it would take to execute all possible concurrent interleavings of the program.
-
-An alternative to exhaustive testing is for programmers to use types of concurrency control that are devoid of certain concurrency hazards. For example, if mutual exclusion did not emit the race condition concurrency hazard, it would be impossible for a program using it to deadlock. Therefore, exhaustive testing would not be necessary. While this scenario is hypothetical, it illustrates our larger argument: in order to avoid common parallel problems in a practical fashion, programmers may need to only use types of concurrency control that are devoid of certain concurrency hazards. By doing this, the program using the specific type of concurrency control will be guaranteed to be free of certain common parallel problems.
-
-TMs are required to be devoid of race conditions within their implementations because they must enforce the ACI (atomic, consistent and isolated) principles. Transactions must execute as atomic and isolated and, therefore, TMs are not capable of supporting concurrent interleavings between multiple transactions as that would violate the atomic and isolated principles of ACI. Due to this, programs only using TM are guaranteed to be free of deadlocks (i.e., deadlockfreedom). Moreover, because TM implementations can guarantee freedom of race condition concurrency hazards, programmers only need to verify their transactional code is correct in a sequential (non-parallel) manner. Once the sequential execution of the transactional code has been verified, no more testing is required as the TM system is required to behave in a consistent manner for all serial orders.
-
-[heading Development: Mutual Exclusion and TM]
-
-The development of fine-grained locking is notoriously difficult. Designing such software is equally as hard. The difficulty in developing and designing fine-grained locking systems is rooted in conflicting heuristics. A primary goal of software design is to identify the most simplistic software solution that exists for a particular problem. A primary goal of fine-grained locking is the most efficient concurrent implementation of a software system. The goals of software design and fine-grained locking are conflicting because the most efficient fine-grained locking solution usually requires some of the most complex software design implementations to achieve such performance.
-
-TM achieves scalability by using optimistic concurrency that is implemented within its subsystem (see Section 4). Since the TM subsystem is the efficiency throttle for TM, which is unexposed to the programmer, the software architecture and design never needs to be complicated (nor can it be) in order to achieve increased parallelism when using transactions. As will be demonstrated in the following section, transactions run efficiently using the interfaces shown in this section and are never complicated in order to achieve improved performance, as is commonly found in fine-grained mutual exclusion implementations.
-
-[heading Scalability]
-
-In this section we analyze the scalability of TM compared to mutual exclusion. We measure scalability by two metrics: consistency and performance. A concurrency control type has consistent scalability if it guarantees correct behavior for an arbitrarily large number of concurrently executing processes.4 Performance scalability is measured by the maximum number of consistent processes supported by a concurrency control type while executing concurrently.
-
-[heading Pessimistic and Optimistic Critical Sections]
-
-Critical sections can be pessimistic or optimistic. Pessimistic critical sections limit their critical section execution to a single thread. Locks are an example of a synchronization mechanism that use pessimistic critical sections. Optimistic critical sections allow unlimited concurrent threaded execution. Transactions are an example of a synchronization mechanism that use optimistic critical sections.
-
-[heading Truly Optimistic Critical Sections]
-
-Truly optimistic critical sections are those critical sections which allow multiple conflicting threads to simultaneously execute the same critical section. A deferred update (or lazy acquire) TM system supports truly optimistic critical section. A direct update (or eager acquire) TM system does not support truly optimistic critical sections. More details on deferred and direct update TM systems are presented in the subsequent sections.
-
-Truly optimistic critical sections are important because they allow simultaneous conflicting critical section execution, as opposed to disallowing such behavior. It is important to allow conflicting critical section execution because prematurely preventing concurrently executing threads pessimistically degrades performance. To demonstrate this, consider two transactions, called T1 and T2, executing the same critical section. Transaction T1 starts first and tentatively writes to memory locationM. Transaction T2 then starts and tries to write to memory locationM. In a truly optimistic TM system, T2 would be allowed to tentatively write to location M while T1 is also writing to M. This behavior then allows T2 to commit before T1 in the event T2 completes before T1.
-
-In comparison, if the TM system is not truly optimistic, once T1 writes to M, T2 must stall until T1 completes. This pessimistically degrades the performance of the system by prematurely deciding that T1's transactional execution should have higher priority than T2's.
-
- 1 // global variables
- 2 int g1 = 0; int g2 = 0;
- 3
- 4 void set1(int val) { atomic { g1 = val; } }
- 5 void set2(int val) { atomic { g2 = val; } }
- 6 int get1() { atomic { return g1; } }
- 7 int get2() { atomic { return g2; } }
-
- Figure 5. Truly Optimistic Concurrency Diagram.
-
-Furthermore, and perhaps more importantly, truly optimistic critical sections allow readers and writers of the same memory location to execute concurrently. This behavior is important because in many cases, both the readers and writers of the same memory can commit with consistent views of memory.
-
-An example of this is shown in Figure 5. As demonstrated in Figure 5 thread 1 and thread 2, which we'll refer to as T1 and T2 respectively, operate on the same memory locations (g1 and g2). Because the TM system supports optimistic concurrency, T2 is allowed to execute concurrently alongside T1 even though their memory accesses conflict. However, in this scenario, because T2 completes its workload before T1, both transactions are allowed to commit. T2 captures the state of g1=0,g2=0 while T1 sets the state of g1=1,g2=1. As the example addresses, both g1=0,g2=0 and g1=1,g2=1 are legal states.
-
-[heading Direct and Deferred Update]
-
-Updating is the process of committing transactional writes to global memory and is performed in either a direct or deferred manner. Figure 6 presents a step-by-step analysis of direct and deferred updating.
-
-Deferred update creates a local copy of global memory, performs modifications to the local copy, and then writes those changes to global memory if the transaction commits. If the transaction aborts, no additional work is done. Direct update makes an original backup copy of global memory and then writes directly to global memory. If the transaction commits, the transaction does nothing. If the transaction aborts, the transaction restores global memory with its backup copy. Some TM systems favor direct update due to its natural optimization of commits (BSTM, McRTSTM and LogTM). However, other TM systems favor deferred update due to its support for truly optimistic critical sections (__Boost_STM__ and RingSTM).
-
-Direct update enables greater TM throughput when aborts are relatively low because it optimizes the common commit case. Deferred update enables greater TM throughput when
-
-# aborts are relatively high or
-# short running transactions (e.g., those that complete quickly) are executed alongside long running transactions (e.g., those that do not complete quickly) because long running transactions do not stall shorter running ones as they would in direct update systems, and therefore the fastest transactions can commit first.
-
-It is important to note that deferred update supports truly optimistic critical sections without special effort, while direct update does not. Truly optimistic critical sections enable the speculative execution of transactions that arrive after a memory location has already been tentatively written to by another transaction. This allows the first transaction, of potentially many competing transactions, to complete its commit, whether it be the later arriving transaction or the earlier arriving writer. This scenario is not possible with direct update without special effort.
-
-[heading Scalability: Mutual Exclusion and Transactional Memory]
-
-The scalability of mutual exclusion is limited to pessimistic concurrency. By definition, mutual exclusion's critical sections must be pessimistic, otherwise they would not be isolated to a single thread (i.e., they would not be mutually exclusive). TM, however, is generally implemented using optimistic concurrency, but it can enforce pessimistic concurrency amongst transactions if that behavior is required for certain conditions. In certain cases, TMs becomemore strict and execute pessimistically to enable inevitable or irrevocable transactions. Such transactions have significant importance for handling operations that, once started, must complete (e.g., I/O operations).
-
-Since TM can execute optimistically and pessimistically, it is clear that whatever benefits pessimistic concurrency has can be acquired by TM. However, since mutual exclusion can only execute pessimistically, the advantages found in optimistic concurrency can never be obtained by mutual exclusion.
-
-When one first analyzes pessimistic and optimistic concurrency, it may seem that the only benefit optimistic concurrency has over pessimistic concurrency is that multiple critical sections, which conflict on the memory they access, can execute concurrently. The simultaneous execution of such conflicting critical sections allows the execution speed of such critical sections to guide the system in deciding which execution should be allowed to commit and which should be aborted. In particular, the first process to complete the critical section can be allowed to abort the other process of the system. The same scenario cannot be achieved by pessimistic critical sections and is demonstrated in Section 4.1.1.
-
-A counterargument to this scenario is that such optimistic concurrency only allows one critical section to commit, while one must be aborted. Because mutual exclusion only allows one conflicting critical section execution at a time, and because mutual exclusion does not support failure atomicity (i.e., rollbacking of the critical section), mutual exclusion's pessimistic behavior is superior in terms of energy and efficiency. Mutual exclusion, unlike TM, suffers no wasted work because conflicting critical sections are limited to a single thread of execution, reducing the energy it uses. Furthermore, because mutual exclusion does not require original data to copied, as needed for TM's direct or deferred update, it executes faster.
-
-While there is merit to this counterargument, an important scenario is not captured by it: truly optimistic critical sections can support multiple reader / single write executions which, if executed so the readers commit before the writer, all critical sections will succeed. This scenario is impossible to achieve using pessimistic critical sections. Although mutual exclusion can use read/write locking, as soon as a writer thread begins execution on a conflicting critical section, all readers must be stalled. TM's truly optimistic concurrency does not suffer from this overly pessimistic limitation of throughput and is therefore capable of producing an immeasurable amount of concurrent throughput under such conditions.
-
-From a theoretical perspective, given L memory locations and P processes, mutual exclusion can support the consistent concurrent execution of P*L number of readers or L writers. TM can support the consistent concurrent execution of P*L number of readers and L writers. Using the above variables, the mathematical expression of the performance scalability of mutual exclusion (S(ME)) is:
-
- S(ME) = (P*L) + L
-
-Using the same variables, the mathematical expression of the performance scalability of transactional memory is:
-
- S(TM) = (P * L) + L
-
-As should be clear from the above equations, mutual exclusion cannot achieve the same performance scalability of TM. This is because TM supports truly optimistic concurrency and mutual exclusion is confined to pessimistic concurrency. While other examples exist that demonstrate optimistic concurrency can increase throughput via contention management, the above equations capture the indisputable mathematical limitations in mutual exclusion's performance scalability.
-
-[heading Modularity]
-
-Software modularity is an important aspect of software that is necessary for its reuse. Formally, software is modular if it can be composed in a new system without altering its internal implementation. Informally, software is modular if it can be used, in its entirety, through its interface.
-
-By making software modular, it can be freely used in an unlimited number of software systems. Without software modularity, software can only be used in the original system where it was written. Clearly, without software modularity, software cannot be reused. Because most software developments are based on extensive library use, software reuse is an integral part of software development. As such, limiting software reuse, would result in severely hampered development capabilities and overall development time. For these reasons, software modularity is vital for any software paradigm to be practical. Software paradigms that do not support software modularity are, in short, impractical.
-
-[heading Mutual Exclusion and Software Modularity]
-
-In this section, we show that mutual exclusion, regardless of its implementation, fails to deliver software modularity. We demonstrate this through a running example started in Figure 7 which implements inc(), mult() and get(); these functions use lock G to respectively implement an increment, multiply and get operations for the shared data.
-
- 1 void inc(int v) {
- 2 lock(G); g += v; unlock(G);
- 3 }
- 4
- 5 void mult(int v) {
- 6 lock(G); g *= v; unlock(G);
- 7 }
- 8
- 9 int get() {
- 10 lock(G); int v = g; unlock(G);
- 11 return v;
- 12 }
-
- Figure 7. Mutual Exclusion for Increment, Multiply and Get of Shared Variable.
-
-Now suppose a programmer wants to increment and multiply by some values within the same atomic operation. The initial implementation may look like the following.
-
- 1 inc(a);
- 2 mult(-b);
-
-An unwanted side-effect of such an implementation is the exposure of the intermediate state of g between inc() and mult(). A second thread performing a get() may read an inconsistent value of g; the value of g between inc() and mult(). This is demonstrated in the timing diagram of Figure 8 .
-
- Figure 8. Example of Exposed State of Mutual Exclusion.
-
-If the programmer needs the inc() and mult() operations to be executed together, without an intermediate state being revealed, he or she could make lock G reentrant. Reentrant locks are locks that can be obtained multiple times by a single thread without deadlocking. If G is made reentrant, the following code could be used to make inc(a) and mult(-b) atomic. A basic implementation of a reentrant lock is to associate a counter with its lock and increment the counter each time the lock() interface is called and to decrement the counter each time the unlock() interface is called. The reentrant lock is only truly locked when a call to lock() is made when its associated counter is 0. Likewise, the reentrant lock is only truly unlocked when a call to unlock() is made when its associated counter is 1.
-
- 1 lock(G);
- 2 inc(a);
- 3 mult(-b);
- 4 unlock(G);
-
-If the above code uses reentrant locks, it will achieve the programmer's intended atomicity for inc() and mult(), isolating the state between inc() and mult(), which disallows the unwanted side-effect shown in Figure 8. While the atomicity of the operations is achieved, it is only achieved by exposing the implementation details of inc() and mult(). In particular, if the programmer had not known that lock G was used within inc() and mult(), making an atomic operation of inc() and mult() would be impossible.
-
-An external atomic grouping of operations is impossible using embedded mutual exclusion without exposing the implementation details because the heart of mutual exclusion is based on named variables which the programmer specifies to guard their critical sections. Because these variables are named, they cannot be abstracted away and any programmer wishing to reuse the mutually exclusive code must be able to access and extend the implementation details.
-
- 1 void inc(int v) { transaction { g += v; } }
- 2
- 3 void mult(int v) { transaction { g *= v; } }
- 4
- 5 int get() { transaction { return g; } }
-
- Figure 9. TM of Increment, Multiply and Get of Shared Variable.
-
-
-[heading Summary of Mutual Exclusion Modularity]
-
-As we presented at the beginning of this section, software modularity can be informally understood as a component's ability to be used entirely from its interface. Therefore, components that cannot be used entirely from their interface, components that must expose their implementation details to be extended, are not modular. As such, the paradigm of mutual exclusion does not support software modularity.
-
-[heading Transactional Memory and Software Modularity]
-
-Transactional memory works in a fundamentally different manner than mutual exclusion, with regard to its interface and implementation. To begin, as demonstrated in Section 3, TMs do not generally expose any of their implementation details to client code. In fact, in many TMs, client code is more versatile if it knows and assumes nothing about the active implementation of the TM. By abstracting away details of TM implementation, a TM subsystem can adapt its behavior to the most efficient configuration for the program's current workload, much like the algorithms used for efficient operation of processes controlled by operating systems. TM uses such abstractions to optimize the performance of concurrent programs using various consistency checking methods, conflict detection times, updating policies, and contention management schemes.
-
-[heading Achieving TM Software Modularity]
-
-TM achieves software modularity by allowing transactions to nest. With transactional nesting, individual transactions can be wrapped inside of other transactions which call the methods where they reside, resulting in a transaction composed of both the parent and child transaction. Furthermore, this is achieved without altering or understanding the child transaction's implementation. To best demonstrate transactional nesting, we reuse the prior mutual exclusion example shown in Figure 7 and implement it using transactions as shown in Figure 9.
-
-As before, the programmer's goal is to implement a combination of inc() and mult() executed in an atomic fashion. The basic, and incorrect implementation is demonstrated below:
-
- 1 inc(a);
- 2 mult(-b);
-
-Even with transactions, this approach fails because the transactions within inc() and mult() begin and end inside their respective functions. However, to make the above operations atomic, the programmer need only make the following modification shown in Figure 10.
-
- 1 transaction { // transaction {
- 2 inc(a); // transaction { g += a; }
- 3 mult(-b); // transaction { g *= -b; }
- 4 } // }
-
- Figure 10. Modularity: Transaction of Increment and Multiply.
-
-In effect, the TM system subsumes the transactions that are nested inside of the inc() and mult() operations. The left side of Figure 10 shows the actual code of the transaction, while the right side shows the child transactions that are subsumed by the parent transaction.
-
-Because transactions are isolated and atomic, the resulting state of g, from operations inc() and mult(), is invisible to outside observers until the transaction is committed. As such, outside threads cannot view any intermediate state constructed by partial transaction execution. The result of such isolated behavior is the guaranteed consistent concurrent execution of interleaved accesses to shared memory from in-flight transactions. This is demonstrated in Figure 11; let g=0 and assume deferred update is the active updating policy, as explained in Section 4.2.
-
- Figure 11. Example of Isolated and Consistent State of TM.
-
-As shown in Figure 11, multiple concurrently executing threads can read and write to the same shared memory in a consistent and isolated fashion when using TM. In the example, thread T2 performs x = get() after T1 has already executed inc(a). However, since T1 has not yet committed its transaction, T2's view of shared data g is consistent (g=0). When T2 begins the commit phase of its transaction, the TM subsystem verifies that shared data g has not been updated since it initially read it. Since no other transaction has updated shared data g, T2's transaction is permitted to commit. Thread T1 then continues with its mult() operation and then enters its commit phase. The TM subsystem also verifies the consistency of T1's transaction before it is allowed to commit. Again, since no one transaction has updated shared data g between its reads and writes to it, T1's transaction is permitted to commit.
-
-The above analysis demonstrates that software modularity can be achieved in TM through transactional nesting (Figure 10). In this case, the specific software modularity achieved is extension to an existing critical section. Critical section extension was also possible with mutual exclusion, as demonstrated in Section 5.1, but only through exposing the details behind the mutual exclusion implementation. Due to this, mutual exclusion fails to deliver a practical level of software modularity.
-
-[heading Summary of Transactional Memory Modularity]
-
-TM supports software modularity by allowing transactions to nest, to any depth, while logically grouping the shared data accesses within the transactions into an atomic, consistent and isolated (ACI) operation. Transactional nesting is natural to the programmer because nested transactions behave in the same manner as unnested transactions. TM's ACI support ensures transactions will behave in a correct manner regardless of if the transaction is used by itself or subsumed into a larger transaction.
-
-
-[heading C++ library language-like solution]
-
-Research in parallel programming has recently seen a flurry of attention. Among the active research is a push for high-level languages to offer native support for parallel programming primitives. The next version of C++ will incorporate library support for threads, while numerous researchers are exploring ways to extend C++ to support transactional memory (TM).
-
-A strength of C++ is its support for automatic objects. Rather than requiring that parallel primitives be added directly to the language, automatic objects in C++ can be used to implement much of their necessary infrastructure. The automatic object approach is natural from a language perspective, provides full algorithmic control to the end programmer, and demonstrates C++'s linguistic elegance. The disadvantage of this approach is its added programmatic overhead. Using only automatic objects, certain programming errors, such as accidental scope removal and incorrectly programmed transactional retry behavior, can arise.
-
-In light of this, there are unique trade-offs between language based and library-based parallel primitives. Language-based solutions minimize syntactic clutter which reduce programmer related errors, but are seemingly irreversible and, if incorrect, can have crippling effects upon the language. Library-based solutions increase programmer control and flexibility, but place substantial pressure on the programmer to avoid minute programming errors. A good compromise is a solution that behaves like a language extension, but is implemented within a library. By implementing parallel primitives within a library that uses language-like interfaces, programmer pressure is reduced, implementation updates are seamless, and full programmer control is achieved through library extensibility.
-
-__Boost_STM__ present such a language-like solution for C++ using generic library coupled with a deliberate use of the preprocessor. The culmination of these components facilitate a simple, yet powerful, parallel programming interface in C++.
-
-
[endsect]
\ No newline at end of file
Modified: sandbox/stm/branches/vbe/libs/stm/doc/rationale.qbk
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/rationale.qbk (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/rationale.qbk 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -8,6 +8,297 @@
[section:rationale Appendix B: Rationale]
+[section:intro Transactional Memory versus Locking]
+
+[heading Transactional Memory]
+
+Transactional memory (TM) is a modern type of concurrency control that uses transactions as its synchronization mechanism. A transaction is a finite sequence of operations that are executed in an *atomic*, *isolated* and *consistent* manner. The atomicity, isolation and consistency (*ACI*) of transactions are derived from the ACID principle in the database community. TM does not exhibit the D (durability) of ACID because, unlike database transactions, TM transactions are not saved to permanent storage (e.g., hard drives).
+
+Transactions are executed speculatively (optimistically) and are checked for consistency at various points in the transaction's lifetime. Programmers specify the starting and ending points of a transaction. All of the operations between those points make up the transaction's execution body. Transactions are commonly represented using the atomic structure shown in the figure below.
+
+ 1 transaction
+ 2 {
+ 3 ++x;
+ 4 --y;
+ 5 }
+
+ Simple Transaction Using the atomic Keyword
+
+Once a transaction has started it either commits or aborts. A transaction's operations are only seen once the transaction has committed, providing the illusion that all of the operations occurred at a single instance in time. The instructions of a committed transaction are viewed as if they occurred as an indivisible event, not as a set of operations executed serially. The operations of an aborted transaction are never seen by other threads, even if such operations were executed within a transaction and then rolled back.
+
+In the case of the above example, if the transaction was committed, both operations `++x` and `--y` would be made visible to other threads at the same instance in time. If the transaction in the above example was aborted, neither operation (`++x` and `--y`) would appear to have occurred, even if the local transaction executed one or both operations.
+
+TM offers three distinct advantages over other parallel programming abstractions.
+
+# TM is simple; transactions, the synchronization mechanism of TM, are easier to program than other synchronization mechanisms because they move shared memory management into the underlying TM subsystem, removing its complexity from the programmer's view. Moreover, TM exposes a simple programmer interface, reducing (or in some cases, removing) the potential for deadlock, livelock and priority inversion.
+
+# TM is scalable; it achieves increased computational throughput when compared to other parallel programming abstractions by allowing multiple threads to speculatively execute the same critical section. When concurrently executing threads do not exhibit shared data conflicts, they are guaranteed to make forward progress.
+
+# TM is modular; transactions can be nested to any depth and function as a single unit. This behavior allows application programmers to extend atomic library algorithms into atomic domain-specific algorithms without requiring the application programmers to understand the implementation details of the library algorithm.
+
+For these reasons, transactions are considered an important synchronization mechanism and TM is viewed as an important type of concurrency control. The remainder of this section presents TM from a viewpoint of (1) simplicity, (2) scalability and (3) modularity.
+
+[heading Simplicity]
+
+Synchronization problems, such as deadlocks, livelocks and priority inversion, are common in software systems using mutual exclusion. TM avoids many of these problems by providing a synchronization mechanism that does not expose any of its implementation details to the programmer. The only low-level interfaces the programmer needs to use for TM are:
+
+* begin_tx() - the signaled start of the transaction.
+* read(loc) - reads the specified memory location, storing its location in the transaction's read set and returning its current value.
+* write(loc, val) - writes the specified memory location to the supplied val, storing its location in the transaction's write set.
+* end_tx() - the signaled end of the transaction. end_tx() returns true if the transaction commits, otherwise it returns false.
+
+The above interfaces allow the programmer to create a transaction (using begin_tx()), specify its memory operations (using read() and write()) and terminate it (using end_tx()). Moreover, none of the interfaces specify details of the TM subsystem's implementation. This leaves the TM system's implementation disjoint from the interfaces it supplies, a key characteristic for TM's simplicity.
+
+All TM implementations use some combination of the above interfaces. TMs implemented within compilers tend to implicitly annotate transactional read() and write() operations, whereas those implemented within software libraries tend to require the programmer explicitly state which operations are transactional reads and writes. An example of a transaction using the above interfaces alongside an actual STM library implementation is shown in Figure 3.
+
+ 1 // TM interfaces // __Boost_STM__
+ 2 do { BOOST_STM_TRANSACTION
+ 3 begin_tx(); {
+ 4 write(x, read(x)+1); ++x;
+ 5 write(y, read(y)-1); --y;
+ 6 } while (!end_tx()); } BOOST_STM_END_TRANSACTION
+
+ Figure 3. Transaction Using (1) Explicit TM Interfaces and (2) __Boost_STM__.
+
+Figure 3 implements the same transaction as shown in Figure 2, except all transactional memory accesses, including the transaction's retry behavior (e.g., its loop), are demonstrated from a simple TM interface perspective and an actual library implementation (__Boost_STM__). While most TM systems handle some portion of these interface calls implicitly, as is shown in the __Boost_STM__ transaction, it is important to note that even when all operations are made visible to the end programmer, transactions are still devoid of many concurrency problems, such as data races and deadlocks (explained below), that plague other types of concurrency control.
+
+For example, as long as the programmer properly annotates the access to the shared variables x and y as shown in Figure 3, it is impossible for race conditions or deadlocks to occur. Furthermore, the programmer does not need any program-specific knowledge to use shared data; he or she simply uses the TM interfaces supplied by the system and the resulting behavior is guaranteed to be consistent. This is explained in greater detail in Section ???3.1.
+
+Other types of concurrency control, such as mutual exclusion, cannot achieve the same interface simplicity, because part of their implementation is associated with, or exposed through, their interface. To demonstrate this, consider the fine-grained locking example of Figure 1 as shown below.
+
+ 1 // fine-grained locking
+ 2 lock(mutexX);
+ 3 lock(mutexY);
+ 4 ++x;
+ 5 --y;
+ 6 unlock(mutexX);
+ 7 unlock(mutexY);
+
+There is no universal interface that can be used to properly access the shared data protected by the mutual exclusion in the above fine-grained locking example. Instead, the programmer must be aware that mutexX and mutexY protect shared data x and y and, therefore, the locks must be obtained before accessing the shared data. In short, the programmer is responsible for knowing not only that mutual exclusion is used, but also how it is used (e.g., which locks protect which shared variables). In this case, mutexX must be obtained before mutexY. If another section of code implements the following, a deadlock scenario will eventually occur.
+
+ 1 // fine-grained locking
+ 2 lock(mutexY);
+ 3 lock(mutexX); // deadlock here
+ 4 --y;
+ 5 ++x;
+ 6 unlock(mutexY);
+ 7 unlock(mutexX);
+
+[heading Understanding Concurrency Hazards]
+
+Informally, a concurrency hazard is a condition existing in a set of operations that, if executed with a specific concurrent interleaving, results in one or more unwanted side-effects. Most errors in parallel systems, such as deadlocks and priority inversion, are the specific execution of concurrency hazards resulting in the unwanted side-effect(s) they contain. If the concurrency hazards are eliminated, the parallel system errors contained within the concurrency hazards are also eliminated. Unfortunately, detecting existing concurrency hazards is non-trivial and therefore eliminating them is also non-trivial.
+
+Mutual exclusion exhibits more concurrency hazards than TM because its implementation details (i.e., its locks) must be exposed and used by the end programmer. While the locks used to enforce mutual exclusion by themselves are not concurrency hazards, their use can lead to a number of hazards. As such, using locks leads to concurrency hazards.
+
+Because the mutual exclusion locking details are exposed to the programmer and because the programmer must maintain a universal and informal contract to use these locks, concurrency hazards can arise due to the number of possible misuses that can be introduced by the programmer. In particular, if the programmer accidentally deviates from the informal locking contract, he or she may inadvertently introduce a concurrency hazard that can cause the program to deadlock, invert priority or lead to inconsistent data.
+
+In contrast, TM imposes no informal contract over shared data that the end programmer needs to understand and follow, as mutual exclusion does. Because of this, TM can hide its implementation details, which reduces the concurrency hazards. In particular, each transaction tracks the memory it uses in its read and write sets. When a transaction begins its commit phase, it verifies that its state is consistent and commits its changes. If a transaction finds its state is inconsistent, it discards its changes and restarts. All of this can be achieved using the basic TM interfaces shown in Section 3 without exposing any implementation details. In order to use TM, the end programmer only needs to know how to correctly create a transaction. Once the transaction is executed, regardless of how it is executed, it results in a program state that is guaranteed to be consistent.
+
+Fundamentally, TM exhibits fewer concurrency hazards than mutual exclusion because its implementation details are divorced from its interface and can therefore be hidden within its subsystem. Any number of implementations can be used in a TM subsystem using only the basic TM interfaces shown in Section 3. The same is not true for mutual exclusion. Mutual exclusion, regardless of how it is implemented, exposes details of its implementation to the programmer. As demonstrated in Section 5, mutual exclusion does not provide software modularity specifically because extending an existing module requires an understanding and extension of that module's implementation. When such locking implementations are hidden inside software libraries, extending these modules can range from difficult to impossible.
+
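+To make this concrete, consider a deliberately degenerate sketch (not part of __Boost_STM__) in which the generic interface of Figure 3 is backed by nothing more than a single global mutex. Client code written against `begin_tx()`/`read()`/`write()`/`end_tx()` keeps working unchanged if this toy implementation is later swapped for a real optimistic STM:
+
+    #include <mutex>
+
+    namespace toy_tm {
+        std::mutex global;                             // serializes every "transaction"
+
+        void begin_tx()                      { global.lock(); }
+        int  read(int& location)             { return location; }
+        void write(int& location, int value) { location = value; }
+        bool end_tx()                        { global.unlock(); return false; } // never retries
+    }
+
+    void update_x_and_y_tm(int& x, int& y)
+    {
+        do { toy_tm::begin_tx();
+             toy_tm::write(x, toy_tm::read(x) + 1);
+             toy_tm::write(y, toy_tm::read(y) - 1);
+        } while (toy_tm::end_tx());                    // retry loop as in Figure 3
+    }
+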
+[heading Testing: Race Conditions and Interleavings]
+
+A race condition is a common concurrency hazard that exists in parallel or distributed software. As with all concurrency hazards, race conditions rely on a specific interleaving of concurrent execution to cause their unwanted side-effect. In this section we demonstrate that race conditions do not exist in TM and therefore, software testing is greatly simplified because all possible interleavings do not need to be tested to ensure correct system behavior. In order to demonstrate that race conditions are absent from TM, we must first show that they are present in other types of concurrency control.
+
+    1 // Thread T1              // Thread T2
+    2                           lock(L2);
+    3                           lock(L1);
+    4                           ...
+    5                           unlock(L1);
+    6                           unlock(L2);
+    7 lock(L1);
+    8 lock(L2);
+    9 ...
+
+ Figure 4. Mutual Exclusion Race Condition.
+
+Consider the race condition present in the mutual exclusion example shown in Figure 4. It results in a deadlock if thread T1 executes line 7 followed by thread T2 executing line 2. However, if the program executes the lines in order (e.g., line 1, then line 2, then line 3, etc.), the system will execute properly. The fundamental problem in Figure 4 is that it contains a concurrency hazard; in particular, it contains a race condition. To further complicate matters, the race condition can only be observed in two of many possible concurrent executions. Those two executions are: T1 executes line 7 followed by T2 executing line 2, or T2 executes line 2 followed by T1 executing line 7. All other possible concurrent interleavings of threads T1 and T2 avoid the deadlock race condition. More specifically, as long as T1 executes lines 7-8 atomically or T2 executes lines 2-3 atomically, all remaining concurrent interleavings are free of the deadlock race condition.
+
+Because it is unlikely that the deadlock race condition will occur, the programmer may never observe it, no matter how many times the program is tested. Only exhaustive testing, which tests all possible concurrent interleavings, is guaranteed to identify the presence of the deadlock. Regrettably, exhaustive testing is an unrealistic solution for most programs due to the time it would take to execute all possible concurrent interleavings of the program.
+
+An alternative to exhaustive testing is for programmers to use types of concurrency control that are devoid of certain concurrency hazards. For example, if mutual exclusion did not exhibit the race-condition concurrency hazard, it would be impossible for a program using it to deadlock. Therefore, exhaustive testing would not be necessary. While this scenario is hypothetical, it illustrates our larger argument: in order to avoid common parallel problems in a practical fashion, programmers may need to use only types of concurrency control that are devoid of certain concurrency hazards. By doing this, the program using the specific type of concurrency control will be guaranteed to be free of certain common parallel problems.
+
+TMs are required to be devoid of race conditions within their implementations because they must enforce the ACI (atomic, consistent and isolated) principles. Transactions must execute atomically and in isolation and, therefore, TMs cannot allow conflicting concurrent interleavings between multiple transactions, as that would violate the atomic and isolated principles of ACI. Due to this, programs using only TM are guaranteed to be free of deadlocks (i.e., deadlock-freedom). Moreover, because TM implementations can guarantee freedom from race-condition concurrency hazards, programmers only need to verify their transactional code is correct in a sequential (non-parallel) manner. Once the sequential execution of the transactional code has been verified, no more testing is required, as the TM system is required to behave in a consistent manner for all serial orders.
+
+[heading Development: Mutual Exclusion and TM]
+
+The development of fine-grained locking is notoriously difficult, and designing such software is equally hard. The difficulty in developing and designing fine-grained locking systems is rooted in conflicting goals. A primary goal of software design is to identify the simplest software solution that exists for a particular problem. A primary goal of fine-grained locking is the most efficient concurrent implementation of a software system. These goals conflict because the most efficient fine-grained locking solution usually requires some of the most complex software designs to achieve such performance.
+
+TM achieves scalability by using optimistic concurrency that is implemented within its subsystem (see Section 4). Since the TM subsystem is the efficiency throttle for TM, which is unexposed to the programmer, the software architecture and design never needs to be complicated (nor can it be) in order to achieve increased parallelism when using transactions. As will be demonstrated in the following section, transactions run efficiently using the interfaces shown in this section and are never complicated in order to achieve improved performance, as is commonly found in fine-grained mutual exclusion implementations.
+
+[heading Scalability]
+
+In this section we analyze the scalability of TM compared to mutual exclusion. We measure scalability by two metrics: consistency and performance. A concurrency control type has consistent scalability if it guarantees correct behavior for an arbitrarily large number of concurrently executing processes. Performance scalability is measured by the maximum number of consistent processes supported by a concurrency control type while executing concurrently.
+
+[heading Pessimistic and Optimistic Critical Sections]
+
+Critical sections can be pessimistic or optimistic. Pessimistic critical sections limit their critical section execution to a single thread. Locks are an example of a synchronization mechanism that uses pessimistic critical sections. Optimistic critical sections allow unlimited concurrent threaded execution. Transactions are an example of a synchronization mechanism that uses optimistic critical sections.
+
+[heading Truly Optimistic Critical Sections]
+
+Truly optimistic critical sections are those critical sections which allow multiple conflicting threads to simultaneously execute the same critical section. A deferred update (or lazy acquire) TM system supports truly optimistic critical sections. A direct update (or eager acquire) TM system does not support truly optimistic critical sections. More details on deferred and direct update TM systems are presented in the subsequent sections.
+
+Truly optimistic critical sections are important because they allow simultaneous conflicting critical section execution, as opposed to disallowing such behavior. It is important to allow conflicting critical section execution because prematurely preventing concurrently executing threads pessimistically degrades performance. To demonstrate this, consider two transactions, called T1 and T2, executing the same critical section. Transaction T1 starts first and tentatively writes to memory location M. Transaction T2 then starts and tries to write to memory location M. In a truly optimistic TM system, T2 would be allowed to tentatively write to location M while T1 is also writing to M. This behavior then allows T2 to commit before T1 in the event T2 completes before T1.
+
+In comparison, if the TM system is not truly optimistic, once T1 writes to M, T2 must stall until T1 completes. This pessimistically degrades the performance of the system by prematurely deciding that T1's transactional execution should have higher priority than T2's.
+
+ 1 // global variables
+ 2 int g1 = 0; int g2 = 0;
+ 3
+ 4 void set1(int val) { atomic { g1 = val; } }
+ 5 void set2(int val) { atomic { g2 = val; } }
+ 6 int get1() { atomic { return g1; } }
+ 7 int get2() { atomic { return g2; } }
+
+ Figure 5. Truly Optimistic Concurrency Diagram.
+
+Furthermore, and perhaps more importantly, truly optimistic critical sections allow readers and writers of the same memory location to execute concurrently. This behavior is important because in many cases, both the readers and writers of the same memory can commit with consistent views of memory.
+
+An example of this is shown in Figure 5. As demonstrated in Figure 5, thread 1 and thread 2, which we'll refer to as T1 and T2 respectively, operate on the same memory locations (g1 and g2). Because the TM system supports optimistic concurrency, T2 is allowed to execute concurrently alongside T1 even though their memory accesses conflict. However, in this scenario, because T2 completes its workload before T1, both transactions are allowed to commit. T2 captures the state g1=0, g2=0 while T1 sets the state g1=1, g2=1. As the example shows, both g1=0, g2=0 and g1=1, g2=1 are legal states.
+
+[heading Direct and Deferred Update]
+
+Updating is the process of committing transactional writes to global memory and is performed in either a direct or deferred manner. Figure 6 presents a step-by-step analysis of direct and deferred updating.
+
+Deferred update creates a local copy of global memory, performs modifications to the local copy, and then writes those changes to global memory if the transaction commits. If the transaction aborts, no additional work is done. Direct update makes an original backup copy of global memory and then writes directly to global memory. If the transaction commits, the transaction does nothing. If the transaction aborts, the transaction restores global memory with its backup copy. Some TM systems favor direct update due to its natural optimization of commits (BSTM, McRTSTM and LogTM). However, other TM systems favor deferred update due to its support for truly optimistic critical sections (__Boost_STM__ and RingSTM).
+
+Direct update enables greater TM throughput when aborts are relatively low because it optimizes the common commit case. Deferred update enables greater TM throughput when
+
+# aborts are relatively high or
+# short running transactions (e.g., those that complete quickly) are executed alongside long running transactions (e.g., those that do not complete quickly) because long running transactions do not stall shorter running ones as they would in direct update systems, and therefore the fastest transactions can commit first.
+
+It is important to note that deferred update supports truly optimistic critical sections without special effort, while direct update does not. Truly optimistic critical sections enable the speculative execution of transactions that arrive after a memory location has already been tentatively written to by another transaction. This allows the first transaction, of potentially many competing transactions, to complete its commit, whether it be the later arriving transaction or the earlier arriving writer. This scenario is not possible with direct update without special effort.
+
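+The two updating policies can be illustrated with a deliberately simplified, single-threaded C++ sketch (no conflict detection or locking; the names `deferred_tx` and `direct_tx` are invented for illustration):
+
+    #include <map>
+    #include <utility>
+    #include <vector>
+
+    // Deferred update: writes go to a private buffer and are published to
+    // global memory only if the transaction commits.
+    struct deferred_tx {
+        std::map<int*, int> write_set;                     // address -> tentative value
+
+        int  read(int* addr) {                             // read-your-own-writes
+            auto it = write_set.find(addr);
+            return it != write_set.end() ? it->second : *addr;
+        }
+        void write(int* addr, int value) { write_set[addr] = value; }
+        void commit() { for (auto& w : write_set) *w.first = w.second; }
+        void abort()  { write_set.clear(); }               // nothing was published
+    };
+
+    // Direct update: writes go straight to global memory while an undo log
+    // keeps the original values in case the transaction aborts.
+    struct direct_tx {
+        std::vector<std::pair<int*, int>> undo_log;        // address -> original value
+
+        int  read(int* addr) { return *addr; }
+        void write(int* addr, int value) {
+            undo_log.emplace_back(addr, *addr);            // back up before overwriting
+            *addr = value;
+        }
+        void commit() { undo_log.clear(); }                // nothing more to do
+        void abort()  {                                    // restore in reverse order
+            for (auto it = undo_log.rbegin(); it != undo_log.rend(); ++it)
+                *it->first = it->second;
+            undo_log.clear();
+        }
+    };
+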
+[heading Scalability: Mutual Exclusion and Transactional Memory]
+
+The scalability of mutual exclusion is limited to pessimistic concurrency. By definition, mutual exclusion's critical sections must be pessimistic, otherwise they would not be isolated to a single thread (i.e., they would not be mutually exclusive). TM, however, is generally implemented using optimistic concurrency, but it can enforce pessimistic concurrency amongst transactions if that behavior is required for certain conditions. In certain cases, TMs become more strict and execute pessimistically to enable inevitable or irrevocable transactions. Such transactions have significant importance for handling operations that, once started, must complete (e.g., I/O operations).
+
+Since TM can execute optimistically and pessimistically, it is clear that whatever benefits pessimistic concurrency has can be acquired by TM. However, since mutual exclusion can only execute pessimistically, the advantages found in optimistic concurrency can never be obtained by mutual exclusion.
+
+When one first analyzes pessimistic and optimistic concurrency, it may seem that the only benefit optimistic concurrency has over pessimistic concurrency is that multiple critical sections, which conflict on the memory they access, can execute concurrently. The simultaneous execution of such conflicting critical sections allows the execution speed of such critical sections to guide the system in deciding which execution should be allowed to commit and which should be aborted. In particular, the first process to complete the critical section can be allowed to abort the other process of the system. The same scenario cannot be achieved by pessimistic critical sections and is demonstrated in Section 4.1.1.
+
+A counterargument to this scenario is that such optimistic concurrency only allows one critical section to commit, while one must be aborted. Because mutual exclusion only allows one conflicting critical section execution at a time, and because mutual exclusion does not support failure atomicity (i.e., rolling back the critical section), mutual exclusion's pessimistic behavior is superior in terms of energy and efficiency. Mutual exclusion, unlike TM, suffers no wasted work because conflicting critical sections are limited to a single thread of execution, reducing the energy it uses. Furthermore, because mutual exclusion does not require original data to be copied, as needed for TM's direct or deferred update, it executes faster.
+
+While there is merit to this counterargument, it fails to capture an important scenario: truly optimistic critical sections can support multiple-reader / single-writer executions in which, if the readers commit before the writer, all critical sections succeed. This scenario is impossible to achieve using pessimistic critical sections. Although mutual exclusion can use read/write locking, as soon as a writer thread begins execution on a conflicting critical section, all readers must be stalled. TM's truly optimistic concurrency does not suffer from this overly pessimistic limitation and can therefore sustain far greater concurrent throughput under such conditions.
+
+From a theoretical perspective, given L memory locations and P processes, mutual exclusion can support the consistent concurrent execution of P*L number of readers or L writers. TM can support the consistent concurrent execution of P*L number of readers and L writers. Using the above variables, the mathematical expression of the performance scalability of mutual exclusion (S(ME)) is:
+
+ S(ME) = max(P*L, L) = P*L
+
+Using the same variables, the mathematical expression of the performance scalability of transactional memory is:
+
+ S(TM) = (P * L) + L
+
+As should be clear from the above equations, mutual exclusion cannot achieve the same performance scalability as TM. This is because TM supports truly optimistic concurrency while mutual exclusion is confined to pessimistic concurrency. While other examples exist that demonstrate optimistic concurrency can increase throughput via contention management, the above equations capture the fundamental mathematical limitation of mutual exclusion's performance scalability.
+
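+As a concrete illustration of the two expressions (the values of P and L are chosen arbitrarily), with P = 4 processes and L = 8 memory locations:
+
+    S(ME) = max(4*8, 8)  = 32    (the readers or the writers, never both at once)
+    S(TM) = (4*8) + 8    = 40    (the readers and the writers together)
+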
+[heading Modularity]
+
+Software modularity is an important aspect of software that is necessary for its reuse. Formally, software is modular if it can be composed in a new system without altering its internal implementation. Informally, software is modular if it can be used, in its entirety, through its interface.
+
+By making software modular, it can be freely used in an unlimited number of software systems. Without software modularity, software can only be used in the original system where it was written. Clearly, without software modularity, software cannot be reused. Because most software development is based on extensive library use, software reuse is an integral part of software development. As such, limiting software reuse would severely hamper development capabilities and lengthen overall development time. For these reasons, software modularity is vital for any software paradigm to be practical. Software paradigms that do not support software modularity are, in short, impractical.
+
+[heading Mutual Exclusion and Software Modularity]
+
+In this section, we show that mutual exclusion, regardless of its implementation, fails to deliver software modularity. We demonstrate this through a running example, started in Figure 7, which implements inc(), mult() and get(); these functions use lock G to implement increment, multiply and get operations, respectively, on the shared data g.
+
+ 1 void inc(int v) {
+ 2 lock(G); g += v; unlock(G);
+ 3 }
+ 4
+ 5 void mult(int v) {
+ 6 lock(G); g *= v; unlock(G);
+ 7 }
+ 8
+ 9 int get() {
+ 10 lock(G); int v = g; unlock(G);
+ 11 return v;
+ 12 }
+
+ Figure 7. Mutual Exclusion for Increment, Multiply and Get of Shared Variable.
+
+Now suppose a programmer wants to increment and multiply by some values within the same atomic operation. The initial implementation may look like the following.
+
+ 1 inc(a);
+ 2 mult(-b);
+
+An unwanted side-effect of such an implementation is the exposure of the intermediate state of g between inc() and mult(). A second thread performing a get() may read an inconsistent value of g, namely the value of g between inc() and mult(). This is demonstrated in the timing diagram of Figure 8.
+
+ Figure 8. Example of Exposed State of Mutual Exclusion.
+
+If the programmer needs the inc() and mult() operations to be executed together, without an intermediate state being revealed, he or she could make lock G reentrant. Reentrant locks are locks that can be obtained multiple times by a single thread without deadlocking. A basic implementation associates a counter with the lock: the counter is incremented on each call to the lock() interface and decremented on each call to the unlock() interface; the lock is only truly locked when lock() is called while the counter is 0, and only truly unlocked when unlock() is called while the counter is 1. If G is made reentrant, the following code could be used to make inc(a) and mult(-b) atomic.
+
+ 1 lock(G);
+ 2 inc(a);
+ 3 mult(-b);
+ 4 unlock(G);
+
+If the above code uses reentrant locks, it will achieve the programmer's intended atomicity for inc() and mult(), isolating the state between inc() and mult(), which disallows the unwanted side-effect shown in Figure 8. While the atomicity of the operations is achieved, it is only achieved by exposing the implementation details of inc() and mult(). In particular, if the programmer had not known that lock G was used within inc() and mult(), making an atomic operation of inc() and mult() would be impossible.
+
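+A minimal C++ sketch of the counter-based reentrant lock described above follows (`std::recursive_mutex` already provides this behaviour; the class name `reentrant_lock` is invented for illustration):
+
+    #include <atomic>
+    #include <mutex>
+    #include <thread>
+
+    class reentrant_lock {
+        std::mutex                   mtx_;        // the underlying lock
+        std::atomic<std::thread::id> owner_{};    // thread currently holding it
+        int                          count_ = 0;  // touched only by the owner
+    public:
+        void lock() {
+            if (owner_.load() == std::this_thread::get_id()) {
+                ++count_;                          // re-entry: just count it
+                return;
+            }
+            mtx_.lock();                           // truly locked here
+            owner_.store(std::this_thread::get_id());
+            count_ = 1;
+        }
+        void unlock() {
+            if (--count_ == 0) {                   // last matching unlock
+                owner_.store(std::thread::id());   // no owner any more
+                mtx_.unlock();                     // truly unlocked here
+            }
+        }
+    };
+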
+An external atomic grouping of operations is impossible using embedded mutual exclusion without exposing the implementation details because the heart of mutual exclusion is based on named variables which the programmer specifies to guard their critical sections. Because these variables are named, they cannot be abstracted away and any programmer wishing to reuse the mutually exclusive code must be able to access and extend the implementation details.
+
+ 1 void inc(int v) { transaction { g += v; } }
+ 2
+ 3 void mult(int v) { transaction { g *= v; } }
+ 4
+ 5 int get() { transaction { return g; } }
+
+ Figure 9. TM of Increment, Multiply and Get of Shared Variable.
+
+
+[heading Summary of Mutual Exclusion Modularity]
+
+As we presented at the beginning of this section, software modularity can be informally understood as a component's ability to be used entirely from its interface. Therefore, components that cannot be used entirely from their interface, components that must expose their implementation details to be extended, are not modular. As such, the paradigm of mutual exclusion does not support software modularity.
+
+[heading Transactional Memory and Software Modularity]
+
+Transactional memory works in a fundamentally different manner than mutual exclusion, with regard to its interface and implementation. To begin, as demonstrated in Section 3, TMs do not generally expose any of their implementation details to client code. In fact, in many TMs, client code is more versatile if it knows and assumes nothing about the active implementation of the TM. By abstracting away details of TM implementation, a TM subsystem can adapt its behavior to the most efficient configuration for the program's current workload, much like the algorithms used for efficient operation of processes controlled by operating systems. TM uses such abstractions to optimize the performance of concurrent programs using various consistency checking methods, conflict detection times, updating policies, and contention management schemes.
+
+[heading Achieving TM Software Modularity]
+
+TM achieves software modularity by allowing transactions to nest. With transactional nesting, individual transactions can be wrapped inside other transactions that call the methods in which they reside, resulting in a single transaction composed of both the parent and the child transaction. Furthermore, this is achieved without altering or understanding the child transaction's implementation. To best demonstrate transactional nesting, we reuse the prior mutual exclusion example shown in Figure 7 and implement it using transactions as shown in Figure 9.
+
+As before, the programmer's goal is to implement a combination of inc() and mult() executed in an atomic fashion. The basic, and incorrect implementation is demonstrated below:
+
+ 1 inc(a);
+ 2 mult(-b);
+
+Even with transactions, this approach fails because the transactions within inc() and mult() begin and end inside their respective functions. However, to make the above operations atomic, the programmer need only make the modification shown in Figure 10.
+
+ 1 transaction { // transaction {
+ 2 inc(a); // transaction { g += a; }
+ 3 mult(-b); // transaction { g *= -b; }
+ 4 } // }
+
+ Figure 10. Modularity: Transaction of Increment and Multiply.
+
+In effect, the TM system subsumes the transactions that are nested inside of the inc() and mult() operations. The left side of Figure 10 shows the actual code of the transaction, while the right side shows the child transactions that are subsumed by the parent transaction.
+
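+For illustration, the same composition can be sketched with the `BOOST_STM_TRANSACTION`/`BOOST_STM_END_TRANSACTION` macros documented later in this document, assuming `g` has been made transactional (for example, through one of the library's mixin or wrapper objects):
+
+    void inc(int v)  { BOOST_STM_TRANSACTION { g += v; }  BOOST_STM_END_TRANSACTION }
+    void mult(int v) { BOOST_STM_TRANSACTION { g *= v; }  BOOST_STM_END_TRANSACTION }
+
+    void inc_then_mult(int a, int b)
+    {
+        BOOST_STM_TRANSACTION {       // root transaction
+            inc(a);                   // nested transaction, flattened into the root
+            mult(-b);                 // nested transaction, flattened into the root
+        } BOOST_STM_END_TRANSACTION
+    }
+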
+Because transactions are isolated and atomic, the resulting state of g, from operations inc() and mult(), is invisible to outside observers until the transaction is committed. As such, outside threads cannot view any intermediate state constructed by partial transaction execution. The result of such isolated behavior is the guaranteed consistent concurrent execution of interleaved accesses to shared memory from in-flight transactions. This is demonstrated in Figure 11; let g=0 and assume deferred update is the active updating policy, as explained in Section 4.2.
+
+ Figure 11. Example of Isolated and Consistent State of TM.
+
+As shown in Figure 11, multiple concurrently executing threads can read and write the same shared memory in a consistent and isolated fashion when using TM. In the example, thread T2 performs x = get() after T1 has already executed inc(a). However, since T1 has not yet committed its transaction, T2's view of shared data g is consistent (g=0). When T2 begins the commit phase of its transaction, the TM subsystem verifies that shared data g has not been updated since it initially read it. Since no other transaction has updated shared data g, T2's transaction is permitted to commit. Thread T1 then continues with its mult() operation and then enters its commit phase. The TM subsystem also verifies the consistency of T1's transaction before it is allowed to commit. Again, since no other transaction has updated shared data g between T1's reads and writes to it, T1's transaction is permitted to commit.
+
+The above analysis demonstrates that software modularity can be achieved in TM through transactional nesting (Figure 10). In this case, the specific software modularity achieved is extension to an existing critical section. Critical section extension was also possible with mutual exclusion, as demonstrated in Section 5.1, but only through exposing the details behind the mutual exclusion implementation. Due to this, mutual exclusion fails to deliver a practical level of software modularity.
+
+[heading Summary of Transactional Memory Modularity]
+
+TM supports software modularity by allowing transactions to nest, to any depth, while logically grouping the shared data accesses within the transactions into an atomic, consistent and isolated (ACI) operation. Transactional nesting is natural to the programmer because nested transactions behave in the same manner as unnested transactions. TM's ACI support ensures transactions will behave in a correct manner regardless of whether the transaction is used by itself or subsumed into a larger transaction.
+
+
+[heading C++ library language-like solution]
+
+Research in parallel programming has recently seen a flurry of attention. Among the active research is a push for high-level languages to offer native support for parallel programming primitives. The next version of C++ will incorporate library support for threads, while numerous researchers are exploring ways to extend C++ to support transactional memory (TM).
+
+A strength of C++ is its support for automatic objects. Rather than requiring that parallel primitives be added directly to the language, automatic objects in C++ can be used to implement much of their necessary infrastructure. The automatic object approach is natural from a language perspective, provides full algorithmic control to the end programmer, and demonstrates C++'s linguistic elegance. The disadvantage of this approach is its added programmatic overhead. Using only automatic objects, certain programming errors, such as accidental scope removal and incorrectly programmed transactional retry behavior, can arise.
+
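+In its simplest form, the automatic-object idea is the familiar RAII guard; a transaction object follows the same constructor/destructor pattern (begin on construction, commit or abort on destruction). A minimal lock-based sketch:
+
+    #include <mutex>
+
+    std::mutex m;
+    int shared_value = 0;
+
+    void update()
+    {
+        std::lock_guard<std::mutex> guard(m);   // primitive acquired on construction
+        ++shared_value;
+    }                                           // released automatically on scope exit
+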
+In light of this, there are unique trade-offs between language-based and library-based parallel primitives. Language-based solutions minimize syntactic clutter, which reduces programmer-related errors, but are seemingly irreversible and, if incorrect, can have crippling effects upon the language. Library-based solutions increase programmer control and flexibility, but place substantial pressure on the programmer to avoid minute programming errors. A good compromise is a solution that behaves like a language extension, but is implemented within a library. By implementing parallel primitives within a library that uses language-like interfaces, programmer pressure is reduced, implementation updates are seamless, and full programmer control is achieved through library extensibility.
+
+__Boost_STM__ presents such a language-like solution for C++, using a generic library coupled with a deliberate use of the preprocessor. Together, these components provide a simple, yet powerful, parallel programming interface in C++.
+
+
+[endsect]
+
[section TM-Specific Concepts]
[section Optimistic concurrency]
@@ -70,7 +361,7 @@
Conflict detection is the process of identifying when two or more transactions conflict. Conflicts can exist when a transaction writes to memory that another transaction then reads or writes (write after write, write after read), or when a transaction reads memory that is then used in another transaction's write (read after write). Unlimited readers, on the other hand, can read the same piece of memory without any conflict (read after read).
-[table Comparaison with other STM systems
+[table Conflict Detection
[
[[*Features]] [[*after write]] [[*after read]]
]
@@ -78,7 +369,7 @@
[[*write]] [[*YES]] [[*YES]]
]
[
- [[*read]] [[*YES]] [[*NO]]
+ [[*read]] [[*YES]] [NO]
]
]
@@ -604,7 +895,7 @@
* intrinsic: Not directly, we need to use some form of cast. class D:B{}; intrinsic<B>* ptr=new intrinsic<D>(); ????
* extrinsic: N/A because the helper is hidden
# Allows to avoid the forwarding problem for base classes
- * mixin: As the user defines its own class he can define the exact constructors avoiding the problem added the base parameter
+ * mixin: As the user defines his own class, he can define the exact constructors, avoiding the problem added by the base parameter.
* intrinsic: the wrapper must define generic constructors having the forwarding problem.
* extrinsic: N/A because the helper is hidden
# Allows to avoid the forwarding problem for derived classes
@@ -618,30 +909,30 @@
The following table is a compilation of the preceding analysis:
-[table Comparaison with other STM systems
+[table Mixin versus wrapper
[
[[*feature]] [[*Mixin]] [[*Intrinsic Wrapper]] [[*Extrinsic Wrapper]]
]
[
- [[*existing class]] [[No]] [[*Yes]] [[*Yes]]
+ [[*existing class]] [No] [*Yes*] [*Yes*]
]
[
- [[*existing variable]] [[No]] [[No]] [[*Yes]]
+ [[*existing variable]] [No] [No] [*Yes*]
]
[
- [[*existing class hierarchy]] [[No]] [[No]] [[*Yes]]
+ [[*existing class hierarchy]] [No] [No] [*Yes*]
]
[
- [[*avoid the forwarding problem for base classes]] [[Yes]] [[No]] [[*Yes]]
+ [[*avoid the forwarding problem for base classes]] [*Yes*] [No] [*Yes*]
]
[
- [[*avoid the forwarding problem for derived classes]] [[No]] [[No]] [[*Yes]]
+ [[*avoid the forwarding problem for derived classes]] [No] [No] [*Yes*]
]
[
- [[*Works for class hierarchies]] [[*Yes]] [[No]] [[*Yes]]
+ [[*Works for class hierarchies]] [*Yes*] [No] [*Yes*]
]
[
- [[*Performs optimaly]] [[*Yes]] [[*Yes]] [[No]]
+ [[*Performs optimally]] [*Yes*] [*Yes*] [No]
]
]
@@ -728,7 +1019,7 @@
The synchronization primitives supported in __Boost_STM_s__ programming model are mutual exclusion locks, timed locks and transactions. Mutual exclusion locks have architectural support in all modern instruction set architectures and are widely considered the most common synchronization primitive [11]. Transactions, a parallel programming abstraction by Herlihy and Moss [10], are currently implemented in software with potential hardware support in the near future [4].
-[table Comparaison with other STM systems
+[table __Boost_STM__ Mutual Exclusion Locking Parallel Constructs
[
[[*Keyword]] [[*Behavior]]
]
@@ -751,8 +1042,6 @@
]
-Table 1. __Boost_STM__ Mutual Exclusion Locking Parallel Constructs.
-
[heading Library-based Lock Implementations]
Mutual exclusion locks, or just locks, ensure a programmer-specified set of operations are limited to one thread of execution [7, 11]. Lock-guarded operations, also known as pessimistic critical sections, require that all threads obtain a single-ownership flag before executing the guarded operations. If N threads simultaneously attempt to execute the same pessimistic critical section, one thread is allowed forward progress while the remaining N - 1 threads are stalled. An example of three distinct types of locking is shown in Figure 1.
Modified: sandbox/stm/branches/vbe/libs/stm/doc/reference.qbk
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/reference.qbk (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/reference.qbk 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -8,8 +8,6 @@
[section Reference]
-This section presents the interface of __Boost_STM__.
-
[include reference/stm.qbk]
[section Core]
@@ -24,10 +22,6 @@
[include reference/transaction_bookkeeping.qbk]
[endsect]
-There are two kind of transactional objets:
-
-* Mixins
-* Wrappers
[section Transactional Objects]
@@ -52,7 +46,9 @@
[include reference/tx/smart_ptr.qbk]
[endsect]
[endsect]
+]
+[/
[section Non Transactional Objects]
[include reference/non_tx/numeric.qbk]
[include reference/non_tx/pointer.qbk]
@@ -63,18 +59,29 @@
]
[section Transaction Specific Memory Managers]
+[/include reference/tx_memory_managers.qbk]
+[/include reference/tx_memory_managers/shared_memory_manager.qbk]
+[/include reference/tx_memory_managers/malloc_memory_manager.qbk]
+[/include reference/tx_memory_managers/tss_memory_manager.qbk]
+[/include reference/tx_memory_managers/tx_memory_manager.qbk]
[endsect]
[section User Memory Managers]
+[/include reference/memory_managers.qbk]
+[/include reference/memory_managers/basic_memory_manager.qbk]
[endsect]
[section Synchronization]
+[include reference/sync.qbk]
+[include reference/sync/mutex.qbk]
+[include reference/sync/unique_lock.qbk]
+[include reference/sync/synchronized.qbk]
[endsect]
[section Contention Managers]
[include reference/contention_managers/concept.qbk]
-__TBoost_STM__ allows the user to choose between static and dynamic polimorphic contention manager.
+__Boost_STM__ allows the user to choose between static and dynamic polymorphic contention managers.
[include reference/contention_manager.qbk]
@@ -88,6 +95,10 @@
[endsect]
[endsect]
+[section Utilities]
+[/include reference/utilities/incomplete_smart_cast.qbk]
+[endsect]
+
[endsect]
Modified: sandbox/stm/branches/vbe/libs/stm/doc/reference/language_like.qbk
==============================================================================
--- sandbox/stm/branches/vbe/libs/stm/doc/reference/language_like.qbk (original)
+++ sandbox/stm/branches/vbe/libs/stm/doc/reference/language_like.qbk 2010-03-13 20:04:00 EST (Sat, 13 Mar 2010)
@@ -10,92 +10,321 @@
[section:language_like_hpp Header `<boost/stm/language_like.hpp>`]
[/==========================================================================================]
-The complexity behind the `atomic` macro is needed to guarantee two fundamental goals.
+[section Transaction related]
-* First, transactions must start and end correctly.
-* Second, transactions must change their retry behavior based on whether they are a child or parent transaction to ensure proper closed nesting, flattened transaction behavior is performed.
+The complexity behind the `BOOST_STM_TRANSACTION`/`BOOST_STM_END_TRANSACTION` macros is needed to guarantee the following fundamental goals:
-The `atomic` preprocessor behaves as follows. When the transactional for loop is entered, a transaction automatic object is constructed which initializes the transaction and puts it in-flight. The for loop conditional ensures the following conditions:
+* First, transactions must start and end correctly.
+* Second, transactions must change their retry behavior based on whether they are a nested or a root transaction, to ensure proper closed-nesting (flattened) transaction behavior.
+* `return`/`goto` statements can be used and must commit the transaction before leaving the transaction block.
+* `break`/`continue` statements can be used; they are associated with the enclosing user loop and commit the transaction before leaving the transaction block.
+* The transaction must commit or abort on user exceptions before re-throwing them.
+
+The following pseudo code shows the general schema:
+
+ 1 >>> frame transaction_block =
+ 2 keyword<transaction> statement_sequence<BODY>
+ 3 [ keyword<retry> statement_sequence<RETRY> ]
+ 4 >>> var in_loop = ...
+ 5 >>> var exception_commits = ...
+ 6 >>> transformation
+ 7 {
+ 8 control_flow __ctrl;
+ 9 for(boost::stm::transaction __txn_;;) {
+ 10 try{
+ 11 boost::stm::detail::commit_on_destruction __destr_(__txn_);
+ 12 try{
+ 13 do {
+ 14 __ctrl=break_;
+ 15 <BODY>
+ 16 __ctrl=none;
+ 17 break;
+ 18 } while ((__ctrl=continue_),false);
+ 19 } catch (boost::stm::aborted_tx &) {
+ 20 __destr_.release();
+ 21 throw;
+ 22 } catch(...) {
+ 23 >>> if exception_commits
+ 24 if (__txn_.forced_to_abort()) {
+ 25 __destr_.release();
+ 26 } else {
+ 27 __destr_.commit();
+ 28 }
+ 29 >>> else
+ 30 __destr_.release();
+ 31 >>> endif
+ 32 throw;
+ 33 }
+ 34 break;
+ 35 } catch (boost::stm::aborted_tx &) {
+ 36 if (__txn_.is_nested()) throw;
+ 37 do {
+ 38 __ctrl=break_;
+ 39 <RETRY>
+ 40 __ctrl=none;
+ 41 break;
+ 42 } while ((__ctrl=continue_),false);
+ 43 }
+ 44 >>> if in_loop
+ 45 if (__ctrl != none) break;
+ 46 >>> else
+ 47 BOOST_ASSERT(__ctrl == none);
+ 48 >>> endif
+ 49 __txn_.restart();
+ 50 }
+ 51 >>> if in_loop
+ 52 if (__ctrl==continue_) continue;
+ 53 else if (__ctrl==break_) break;
+ 54 >>> endif
+ 55 }
+ 56 >>> end_frame
+
+
+The `BOOST_STM_TRANSACTION`/`BOOST_STM_END_TRANSACTION` preprocessor behaves as follows. When the transactional for loop is entered, a transaction automatic object is constructed which initializes the transaction and puts it in-flight. The for loop conditional ensures the following conditions:
# the transaction is uncommitted,
# the transaction has the opportunity to throw an exception if necessary, and
# the transaction is in-flight.
-Once a transaction commits, the check on (1) `!T.committed()` ensures the transaction is not executed again. If the transaction has been aborted but is a child transaction, the transaction must be restarted at the parent level. The call to (2) `T.check_throw_before_restart()` allows an aborted child transaction to throw an exception upward (before it is restarted) so the entire transaction can be restarted from the parent. The `check_throw_before_restart()` API checks the current run-time state of the thread to determine if another transaction is active above it. This behavior allows transactions to be used at any nesting level while dynamically ensuring the correct retry behavior. Finally, the call to `restart_if_not_inflight()` ensures the transaction is correctly restarted after each subsequent abort.
+On an inner block, a commit-on-destruction variable is created, which will commit the transaction on destruction unless it has been released.
+Once a transaction commits, the transaction is not executed again. If the transaction has been aborted but is a nested transaction, the transaction must be restarted at the parent level. This is done by throwing an exception upward so the entire transaction can be restarted from the parent. This behavior allows transactions to be used at any nesting level while dynamically ensuring the correct retry behavior. Finally, the call to `restart` ensures the transaction is correctly restarted after each subsequent abort.
+
+Once all of the transactional operations within the for loop are executed, the commit-on-destruction destructor is called, which commits the transaction. This terminates the transaction by either committing or aborting it.
-Once all of the transactional operations within the for loop are executed, a call to no_throw_end() is made which ends the transaction. The `no_throw_end()` terminates the transaction by either committing or aborting it. Note, however, that `no_throw_end()` does not throw an exception if the transaction is aborted, whereas the prior __Boost_STM_s__ API end() does. This non-throwing behavior deviates from the prior __Boost_STM__ implementation of automatic objects when end() was invoked within the try / catch body. Furthermore, due to `no_throw_end()` not throwing an exception if the transaction is aborted, some cases may arise where `catch_before_retry` or `before_retry` operations are not invoked when a transaction is aborted. This is a current limitation of the system and is overcome by inserting a manual end() operation as the last operation in the atomic block. The explicit end() ensures any operations within the before_retry block are executed if the transaction is aborted.
+[section Macro `BOOST_STM_TRANSACTION`]
+Use `BOOST_STM_TRANSACTION` for an optimistic critical section. Supports single-depth and multiple-depth (nested) transactions. Performs an automatic retry when the transaction is a root transaction and throws an exception when it is a nested transaction. This automatic switch enables correctly behaved nested and non-nested transactions.
+ #define BOOST_STM_TRANSACTION(T) ...
- #define atomic(T)
- #define try_atomic(T)
- #define use_atomic(T)
- #define catch_before_retry(E)
- #define before_retry
- #define end_atom
+Usage
+ BOOST_STM_TRANSACTION
+ <sentence sequence excluding break/continue>
+ BOOST_STM_END_TRANSACTION
+ |
+ BOOST_STM_TRANSACTION
+ <sentence sequence excluding break/continue>
+ BOOST_STM_RETRY
+ <sentence sequence>
+ BOOST_STM_END_RETRY
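+
+For instance (mirroring the introductory example of this document, and assuming `x` and `y` are transactional objects):
+
+    BOOST_STM_TRANSACTION
+    {
+        ++x;
+        --y;
+    } BOOST_STM_END_TRANSACTION
+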
-[section Macro `atomic`]
-Use atomic transaction T for optimistic critical section. Supports single-depth and multiple-depth (nested) transactions. Performs automatic retry when T is a parent transaction, throws exception when T is a child transaction. This automatic switch enables correctly behaved nested and non-nested transactions
+[endsect]
+
+[section Macro `BOOST_STM_RETRY`]
- #define atomic(T) \
- for (transaction T; \
- !T.committed() && T.check_throw_before_restart() && T.restart_if_not_inflight(); \
- T.no_throw_end())
- try
+ #define BOOST_STM_RETRY ...
+Opens a retry block.
+Usage
+
+ BOOST_STM_TRANSACTION
+ <sentence sequence>
+ BOOST_STM_RETRY
+ <sentence sequence>
+ BOOST_STM_END_RETRY
[endsect]
+[section Macro `BOOST_STM_END_RETRY`]
+
+ #define BOOST_STM_END_RETRY catch (aborted_tx &)
-[section Macro `try_atomic`]
+Usage
- #define try_atomic(T) \
- for (transaction T; \
- !T.committed() && T.restart(); \
- T.no_throw_end())
- try
+ BOOST_STM_TRANSACTION
+ <sentence sequence>
+ BOOST_STM_RETRY
+ <sentence sequence>
+ BOOST_STM_END_RETRY
[endsect]
+[section Macro `BOOST_STM_END_TRANSACTION`]
-[section Macro `use_atomic`]
+ #define BOOST_STM_END_TRANSACTION ...
- #define use_atomic(T) \
- for (transaction T; \
- !T.committed() && T.restart(); \
- T.end())
+Equivalent to
+ BOOST_STM_RETRY {}
+ BOOST_STM_END_RETRY
+Usage
+
+ BOOST_STM_TRANSACTION
+ <sentence sequence>
+ BOOST_STM_END_TRANSACTION
[endsect]
-[section Macro `catch_before_retry`]
- #define catch_before_retry(E) catch (aborted_tx &E)
+[section Macro `BOOST_STM_TRANSACTION_IN_LOOP`]
+
+Use `BOOST_STM_TRANSACTION_IN_LOOP` for an optimistic critical section inside a loop. Works as `BOOST_STM_TRANSACTION` and in addition supports `break`/`continue` statements.
-Catches exception when transaction is aborted. This, before_retry, or end_atom must follow an atomic block. Once all operations within the catch_before_retry block are executed, control is returned to the local atomic where the transaction is retried (if it is the parent) or the atomic block is exited by exception (if it is a child).
+ #define BOOST_STM_TRANSACTION_IN_LOOP(T) ...
+
+Currently equivalent to
+
+ BOOST_STM_TRANSACTION
+
+Usage
+ user loop {
+ BOOST_STM_TRANSACTION_IN_LOOP
+ <sentence sequence including break/continue>
+ BOOST_STM_END_TRANSACTION_IN_LOOP
+ }
+ |
+ BOOST_STM_TRANSACTION_IN_LOOP
+ <sentence sequence>
+ BOOST_STM_RETRY
+ <sentence sequence>
+ BOOST_STM_END_RETRY_IN_LOOP
-To be used pairwise with `try_atomic` or `atomic`.
[endsect]
-[section Macro `before_retry`]
- #define before_retry catch (aborted_tx &)
+[section Macro `BOOST_STM_END_RETRY_IN_LOOP`]
+
+ #define BOOST_STM_END_RETRY_IN_LOOP ...
-Same as catch_before_retry(E) except the exception is discarded.
+Usage
-To be used pairwise with `try_atomic` or `atomic`.
+ BOOST_STM_TRANSACTION_IN_LOOP
+ <sentence sequence>
+ BOOST_STM_RETRY
+ <sentence sequence>
+ BOOST_STM_END_RETRY_IN_LOOP
[endsect]
-[section Macro `end_atom`]
+[section Macro `BOOST_STM_END_TRANSACTION_IN_LOOP`]
+
+ #define BOOST_STM_END_TRANSACTION_IN_LOOP ...
+
+Equivalent to
- #define end_atom catch (aborted_tx &) {}
+ BOOST_STM_RETRY {}
+ BOOST_STM_END_RETRY_IN_LOOP
-Same as before_retry except the exception is discarded and {} are automated.
-To be used pairwise with `try_atomic` or `atomic`.
+Usage
+
+ BOOST_STM_TRANSACTION_IN_LOOP
+ <sentence sequence>
+ BOOST_STM_END_TRANSACTION_IN_LOOP
[endsect]
+
+[section Macro `BOOST_STM_CURRENT`]
+
+ #define BOOST_STM_CURRENT ...
+
+Reference to the current transaction within a `BOOST_STM_TRANSACTION`/`BOOST_STM_END_TRANSACTION` or `BOOST_STM_RETRY`/`BOOST_STM_END_RETRY` block.
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ BOOST_STM_CURRENT.
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+
+[section Macro `BOOST_STM_COMMIT`]
+
+ #define BOOST_STM_COMMIT ...
+
+Commit the current transaction within a `BOOST_STM_TRANSACTION`/`BOOST_STM_END_TRANSACTION` block
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ BOOST_STM_COMMIT();
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+
+[section Macro `BOOST_STM_ABORT`]
+
+ #define BOOST_STM_ABORT ...
+
+Abort the current transaction on a `BOOST_STM_TRANSACTION`/`BOOST_STM_END_TRANSACTION` block
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ BOOST_STM_ABORT();
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+[endsect]
+
+[section Memory management related]
+
+[section Macro `BOOST_STM_NEW_PTR`]
+
+ #define BOOST_STM_NEW_PTR(T_ARGS) ...
+
+Creates a new instance using the `new` operator and notifies the STM system of this new creation.
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ ...
+ prev->next_=BOOST_STM_NEW_PTR(list_node<T>(val, curr));
+ ...
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+
+[section Macro `BOOST_STM_DELETE_PTR`]
+
+ #define BOOST_STM_DELETE_PTR(PTR) ...
+
+Deletes the instance using the `delete` operator and notifies the STM system of this deletion.
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ ...
+ BOOST_STM_DELETE_PTR(curr);
+ ...
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+
+[section Macro `BOOST_STM_NEW_ARRAY`]
+
+ #define BOOST_STM_NEW_ARRAY(SIZE, T) ...
+
+Creates a new array using the `new []` operator and notifies the STM system of this new allocation.
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ ...
+ ptr=BOOST_STM_NEW_ARRAY(size, list_node<T>);
+ ...
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+
+[section Macro `BOOST_STM_DELETE_ARRAY`]
+
+ #define BOOST_STM_DELETE_ARRAY(PTR, SIZE) ...
+
+Deletes the array instance using the `delete[]` operator and notifies the STM system of this deletion.
+
+Usage
+
+ BOOST_STM_TRANSACTION
+ ...
+ BOOST_STM_DELETE_ARRAY(ptr, size);
+ ...
+ BOOST_STM_END_TRANSACTION
+
+[endsect]
+
+[endsect]
+
[endsect]
Boost-Commit list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk