|
Boost-Commit : |
Subject: [Boost-commit] svn:boost r49836 - sandbox/interthreads/libs/interthreads/doc
From: vicente.botet_at_[hidden]
Date: 2008-11-19 05:56:31
Author: viboes
Date: 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
New Revision: 49836
URL: http://svn.boost.org/trac/boost/changeset/49836
Log:
interthreads version 0.1
Added:
sandbox/interthreads/libs/interthreads/doc/Jamfile.v2 (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/acknowledgements.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/appendices.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/case_studies.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/changes.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/getting_started.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/implementation.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/installation.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/interthreads.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/introduction.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/overview.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/rationale.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/reference.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/tutorial.qbk (contents, props changed)
sandbox/interthreads/libs/interthreads/doc/users_guide.qbk (contents, props changed)
Added: sandbox/interthreads/libs/interthreads/doc/Jamfile.v2
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/Jamfile.v2 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,56 @@
+
+# (C) Copyright 2008 Vicente J Botet Escriba.
+#
+# Distributed under the Boost Software License, Version 1.0. (See accompanying
+# file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
+
+path-constant boost-images : ../../../../doc/src/images ;
+
+xml interthreads : interthreads.qbk ;
+
+boostbook standalone
+ :
+ interthreads
+ :
+ # HTML options first:
+ # Use graphics not text for navigation:
+ <xsl:param>navig.graphics=1
+ # How far down we chunk nested sections, basically all of them:
+ <xsl:param>chunk.section.depth=2
+ # Don't put the first section on the same page as the TOC:
+ <xsl:param>chunk.first.sections=1
+ # How far down sections get TOC's
+ <xsl:param>toc.section.depth=4
+ # Max depth in each TOC:
+ <xsl:param>toc.max.depth=2
+ # How far down we go with TOC's
+ <xsl:param>generate.section.toc.level=10
+ # Path for links to Boost:
+ <xsl:param>boost.root=../../../..
+ # Path for libraries index:
+ <xsl:param>boost.libraries=../../../../libs/libraries.htm
+ # Use the main Boost stylesheet:
+ <xsl:param>html.stylesheet=../../../../doc/html/boostbook.css
+
+ # PDF Options:
+ # TOC Generation: this is needed for FOP-0.9 and later:
+ #<xsl:param>fop1.extensions=1
+ # Or enable this if you're using XEP:
+ <xsl:param>xep.extensions=1
+ # TOC generation: this is needed for FOP 0.2, but must not be set to zero for FOP-0.9!
+ <xsl:param>fop.extensions=0
+ # No indent on body text:
+ <xsl:param>body.start.indent=0pt
+ # Margin size:
+ <xsl:param>page.margin.inner=0.5in
+ # Margin size:
+ <xsl:param>page.margin.outer=0.5in
+ # Yes, we want graphics for admonishments:
+ <xsl:param>admon.graphics=1
+ # Set this one for PDF generation *only*:
+ # default png graphics are awful in PDF form,
+ # better use SVG's instead:
+ <format>pdf:<xsl:param>admon.graphics.extension=".svg"
+ <format>pdf:<xsl:param>admon.graphics.path=$(boost-images)/
+ ;
+
Added: sandbox/interthreads/libs/interthreads/doc/acknowledgements.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/acknowledgements.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,18 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[section:acknowledgements Appendix D: Acknowledgments]
+
+Parts of this library (thread_decorator and thread_specific_shared_ptr) are based on the original implementation of
+[@http://www.boost-consulting.com/vault/index.php?directory=Concurrent%20Programming [*threadalert]] written by Roland Schwarz (thread::init and thread_member_ptr). Many thanks to Roland for allowing me to
+adapt his implementation.
+
+Thanks also go to Jochen Heckl for his ideas regarding the thread_tuple::wait_first implementation.
+
+You can help me to make this library better! Any feedback is very welcome.
+
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/appendices.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/appendices.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,121 @@
+[/
+ (C) Copyright 2008 Vicente J. Botet Escriba
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[/=================]
+[section Appendices]
+[/=================]
+
+[include changes.qbk]
+
+[include rationale.qbk]
+
+[include implementation.qbk]
+
+[include acknowledgements.qbk]
+
+[/=====================================]
+[section:todo Appendix E: Future plans]
+[/=====================================]
+
+[heading Tasks to do before review]
+[variablelist
+
+[[Add tests] [Currently there are only some examples. There is still a long way to go before the library can be reviewed.]]
+
+[[Add C++0x move semantics on compilers supporting them and use the Boost.Move emulation otherwise] [.]]
+[[Use C++0x variadic templates on compilers supporting them and use the preprocessor otherwise] [.]]
+
+[[Add a practical example using thread_tuple, thread_tuple_once] [.]]
+
+[[Complete the STM example] [.]]
+
+[[Optimize the TSSS maps using intrusive containers] [.]]
+
+]
+
+[heading For later releases]
+[variablelist
+
+[[Generalize both specific pointers into a template class basic_thread_specific_ptr] [
+
+[*Domain Features]
+
+[variablelist
+
+[[ownership: exclusive/shared] [Specifies if the pointer is exclusive to the thread or shared with other threads.]]
+
+[[key range: fixed/variable/mixed ] [Specifies how the key range is defined. fixed has an integer range 0..n, variable takes as key the address of the specific_ptr, and mixed uses either a fixed or a variable key.]]
+
+[[key creation: eager/lazy] [Specifies how the key is created, eager at construction time, lazy when needed.]]
+
+[[context setting: eager/lazy] [Specifies how the pointer is set, eager at thread initialization time, lazy when needed.]]
+
+]
+
+The current `thread_specific_ptr` is exclusive, has a variable key range (so the key is initialized at construction time) and
+requires explicit context setting by the user on the current thread.
+
+ typedef basic_thread_specific_ptr<exclusive, variable_key_range<>, eager_context_setting> thread_specific_ptr;
+
+The current `thread_specific_shared_ptr` is shared, has a variable key range (so the key is initialized at construction time) and
+requires explicit context setting by the user on the current thread.
+
+    typedef basic_thread_specific_ptr<shared, variable_key_range<>, eager_context_setting> thread_specific_shared_ptr;
+
+[*Design rationale]
+
+`boost::thread_specific_ptr` uses the address of the variable as key. As a consequence, the mapping from keys to the
+thread specific context pointer is much less efficient than the direct access provided by some implementations.
+This library will try to improve the performance of this mapping by providing direct-access keys. As the number of
+direct-access keys must be known at compile time, this puts a limit on the number of thread specific pointers.
+A mixed approach will also be provided, in which the key can be either fixed or variable; it is up to the user to
+give a hint of the kind of key.
+When using fixed or mixed keys, there is a decision to take about when the key is created, i.e. eagerly before the
+thread starts or lazily when it is needed.
+
+Non-portable: The interfaces of POSIX Pthreads, Solaris threads, and Win32 threads are very similar. However,
+the semantics of Win32 threads are subtly different since they do not provide a reliable means of cleaning up objects
+allocated in thread-specific storage when a thread exits. Moreover, there is no API to delete a key in Solaris
+threads. This makes it hard to write portable code across UNIX and Win32 platforms.
+
+So we need to implement it on top of the underlying OS. The idea is to take a native thread specific pointer which
+will manage the fixed, variable and mixed keys. This is exactly the approach of `boost::thread_specific_ptr`, but
+only for variable keys.
+
+Finally, we need to decide when the context is set. There are two possibilities: the user sets the context explicitly
+when it seems pertinent, or the context is created lazily the first time we try to get it. The first approach has
+the liability that the user needs to ensure that the context is set before getting it, but when the context must be set
+before the thread function starts this is a non-issue. The second is safe but has the consequence that every access
+includes a check. In addition, the type must be default constructible.
+
+ bith::thread_specific_shared_ptr<myclass, lazy_setting> ptr;
+
+The pointer will be initialized when needed, as if we had done
+
+ if (ptr.get()==0) {
+ ptr.reset( new myclass() );
+ }
+
+Among the planned features:
+
+Thread specific key mapping optimization: replace the Boost.Thread thread_specific_ptr key mapping implementation
+by a mix of fixed/variable mapping, which will provide efficient access to the fixed keys and scalability with the
+other keys.
+
+Configuration of the fixed/variable/mixed key range, ordered/unordered map, intrusive/extrusive map, and shared/exclusive locking.
+]]
+
+[[Add message queues] [Message queues are the next step concerning communication between threads in the InterThreads library.]]
+[[Add a daemon controlling all the keep alive controller threads] [This daemon will send regular keep_alive messages and kill the process when a controlled thread is detected dead.]]
+
+]
+
+[endsect]
+
+[endsect]
+
Added: sandbox/interthreads/libs/interthreads/doc/case_studies.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/case_studies.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,589 @@
+[/
+ (C) Copyright 2008 Vicente J. Botet Escriba
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[/============================]
+[section Examples]
+[/============================]
+
+This section includes complete examples using the library, as well as some case studies that use the library in some way.
+I'm now working on the STM project. The deferred log case study, though tricky, is not huge and would be suitable for someone
+who has limited time to spend on it.
+
+[/==================================]
+[section thread_tuple_once]
+[/==================================]
+
+thread_tuple_once.hpp is a good example of the use of set_once.
+
+[endsect]
+
+[/==================================]
+[section thread_keep_alive]
+[/==================================]
+
+thread_keep_alive.hpp and thread_keep_alive.cpp are a good example of the use of thread_decoration and thread_specific_shared_ptr.
+
+[endsect]
+
+[/==================================]
+[section Any ideas for a practical thread_tuple or thread_tuple_once example are welcome]
+[/==================================]
+
+
+[endsect]
+
+
+[/==================================]
+[section Thread safe deferred traces]
+[/==================================]
+
+When executing in a multithreaded environment, the output lines on
+std::cout can interleave. We can synchronize these outputs with a
+global mutex:
+
+ {
+ boost::lock_guard<boost::mutex> lock(global_cout_mutex);
+ std::cout << ... << std::endl ;
+ }
+
+This mutex can become the bottleneck of the system: only one mutex resource for all the user threads.
+
+[pre
+ U U
+ \ /
+ \ /
+ U ----- R ----- U
+ |
+ |
+ U
+]
+
+
+[$images/star.png]
+
+Another approach could be to use a queue of output stream buffers for each thread.
+Each buffer is timestamped with its creation date, and a concentrator takes the elements one by one, ordered by timestamp.
+Only the current thread can push on this queue because it is specific to the thread.
+There is a single thread, the concentrator, that pops from these queues.
+In this context we can ensure thread safety without locking as long as
+the queue holds at least two messages.
+
+[$images/fourche.png]
+
+[pre
+ U ----- R ------+
+ \
+ U ----- R ------\ \
+ \ \
+ U ----- R -------- U
+ / /
+ U ----- R ------/ /
+ /
+ U ----- R ------+
+]
+
+This can be encapsulated in an async_ostream class
+
+ class async_ostream : public iostreams::stream<detail::async_ostream_sink> {
+ public:
+ typedef char char_type;
+ typedef iostreams::sink_tag category;
+
+ async_ostream(std::ostream& os);
+ void flush();
+ };
+
+ extern async_ostream cout_;
+
+
+With this interface the user can use cout_ just as they would use std::cout.
+
+ cout_ << "Hello World!" << std::endl ;
+
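+For instance, a small multi-threaded usage sketch could look like the following (illustrative only: it assumes the
+async_ostream facility of this case study is available through a hypothetical header async_ostream.hpp):
+
+    #include "async_ostream.hpp"   // hypothetical header providing async_ostream and cout_
+    #include <boost/thread/thread.hpp>
+    #include <boost/bind.hpp>
+    #include <ostream>
+
+    void worker(int n) {
+        for (int i = 0; i < 10; ++i) {
+            // each thread writes to its own thread specific buffer, no user-visible locking
+            cout_ << "worker " << n << " line " << i << std::endl;
+        }
+    }
+
+    int main() {
+        boost::thread t1(boost::bind(worker, 1));
+        boost::thread t2(boost::bind(worker, 2));
+        t1.join();
+        t2.join();
+        cout_.flush();   // make sure everything pending reaches std::cout
+        return 0;
+    }
+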
+All the magic is in the template class boost::iostreams::stream<>. Its parameter must be a model of a sink (see the Boost.Iostreams library).
+Here it is:
+
+ namespace detail {
+ struct async_ostream_sink {
+ typedef char char_type;
+ typedef boost::iostreams::sink_tag category;
+ async_ostream_sink(std::ostream& os);
+ std::streamsize write(const char* s, std::streamsize n);
+ void flush();
+ private:
+ friend class async_ostream_concentrator;
+ friend class async_ostream;
+ struct impl;
+ boost::shared_ptr<impl> impl_;
+ };
+ }
+
+This class declares just the minimum needed to model a sink. In addition, the Pimpl idiom is used to hide the implementation.
+The implementation of these functions is straightforward:
+
+ async_ostream::async_ostream(std::ostream& os)
+ : base_type(os) {}
+
+ void async_ostream::flush() {
+ this->base_type::flush();
+ async_ostream& d = *this;
+ d->flush();
+ }
+
+ async_ostream_sink::async_ostream_sink(std::ostream& os)
+ : impl_(new async_ostream_sink::impl(os)) {}
+
+ std::streamsize detail::async_ostream_sink::write(const char* s, std::streamsize n) {
+ return impl_->write(s,n);
+ }
+
+ void async_ostream_sink::flush() {
+ return impl_->flush();
+ }
+
+Let me continue with the implementation side of the Pimpl pattern:
+
+ struct detail::async_ostream_sink::impl {
+ impl(std::ostream& os);
+ std::ostream& os_;
+ tsss_type tsss_;
+ priority_queue_type queue_;
+ boost::thread thread_;
+
+ std::streamsize write(const char* s, std::streamsize n);
+
+ static void terminate(shared_ptr<async_ostream_thread_ctx> that);
+ static void loop(impl* that);
+ };
+
+Of course we need to store a reference to the final ostream.
+The thread_specific_shared_ptr tsss_ is used to encapsulate the logic specific to each thread.
+
+ typedef thread_specific_shared_ptr<async_ostream_thread_ctx> tsss_type;
+
+A priority queue queue_ will be used by the concentrator thread to order the stringstreams by date.
+
+ template <typename T>
+ struct detail::timestamped {
+ system_time date_;
+ unsigned seq_;
+ T value_;
+ void reset_date(unsigned seq) {
+ date_ = system_time();
+ seq_ = seq;
+ }
+ struct ptr_comparator_gt {
+ typedef timestamped* value_type;
+ bool operator()(const value_type&lhs, const value_type&rhs) {
+ return (lhs->date_ > rhs->date_) ? true :
+ (lhs->date_ == rhs->date_) && (lhs->seq_ > rhs->seq_)? true:false;
+ }
+ };
+ };
+
+ typedef timestamped<std::stringstream> element_type;
+    typedef std::priority_queue<element_type*, std::deque<element_type*>, element_type::ptr_comparator_gt> priority_queue_type;
+
+In addition to the timestamp date_ we need a sequence number to order the stringstreams pushed within the same clock granularity, e.g. on the same microsecond.
+
+To finish the field declarations, there is the concentrator thread, implemented by the loop function.
+
+ , thread_(boost::bind(loop, this))
+
+Coming back to the sink implementation,
+
+ async_ostream_sink::impl::impl(std::ostream& os)
+ : os_(os)
+ , tsss_(terminate)
+ , thread_(boost::bind(loop, this))
+ {}
+
+The terminate cleanup function is used to ensure that the queue is empty before the thread finishes.
+To avoid optimizations, a non-const call to inc is done while waiting for the queue to empty.
+
+ void async_ostream_sink::impl::terminate(shared_ptr<async_ostream_thread_ctx> that) {
+ while (!that->empty()) {
+ that->inc();
+ }
+ }
+
+The central sink function is write. Here, instead of locking a mutex, the function forwards to
+the thread specific shared pointer. We will see below how async_ostream_thread_ctx handles this call.
+
+ std::streamsize write(const char* s, std::streamsize n) {
+ return tsss_->write(s, n);
+ }
+
+It is time to analyze the thread specific context before seeing how the concentrator is implemented.
+
+ struct async_ostream_thread_ctx {
+ async_ostream_thread_ctx();
+ std::streamsize write(const char* s, std::streamsize n);
+ void flush();
+ element_type* get();
+ bool empty() const {return queue_.empty();}
+ void inc() {++inc_;}
+ private:
+ unsigned seq_;
+ element_type *current_;
+ queue_type queue_;
+ boost::mutex mutex_;
+ std::stringstream& buffer() {return current_->value_;}
+ };
+
+Each thread has a pointer to the current timestamped stringstream which is used for the current output flow, i.e. by the write function.
+
+ std::streamsize write(const char* s, std::streamsize n) {
+ buffer().write(s, n);
+ return n;
+ }
+
+Once the user does a flush, the current element is enqueued on the queue. The seq_ integer is used as a monotonic sequence in conjunction with the timestamp.
+
+ void flush() {
+ current_->reset_date(seq_);
+ ++seq_;
+ if (queue_.size()>2) {
+ queue_.push(current_);
+ } else {
+ boost::lock_guard<boost::mutex> lock(mutex_);
+ queue_.push(current_);
+ }
+ current_ = new element_type();
+ }
+
+As stated in the introduction, we don't need to lock the mutex when the queue holds enough elements: the current
+thread only pushes at the back and the concentrator only pops from the front, so with more than one element in the
+queue they never operate on the same element.
+
+These queue elements will be read by the concentrator using the get function.
+
+ element_type* get() {
+ if (queue_.size()>1) {
+ return get_i();
+ } else {
+ boost::lock_guard<boost::mutex> lock(mutex_);
+ return get_i();
+ }
+ }
+
+ element_type* get_i() {
+ if (queue_.empty()) return 0;
+ element_type* e= queue_.front();
+ queue_.pop();
+ return e;
+ }
+
+The concentrator loop looks like:
+
+    void async_ostream_sink::impl::loop(async_ostream_sink::impl* that) {
+ std::ostream& os_ = that->os_;
+ for(;;) {
+ // sleeps a little bit
+ this_thread::sleep(boost::posix_time::milliseconds(1));
+ { // scope needed don't remove
+ // lock the map access
+ tsss_type::lock_type lock(that->tsss_.get_mutex());
+ const tsss_type::map_type& tmap(that->tsss_.get_map(lock));
+ for (tsss_type::map_type::const_iterator it = tmap.begin(); it != tmap.end(); ++it) {
+ // takes the first element of each thread queue (if it exists) and push it on the ordered queue.
+ element_type* e= it->second->get();
+ if (e != 0) that->queue_.push(e);
+ }
+ }
+ if (that->queue_.empty()) { //when the queue is empty sleeps a little more
+ this_thread::sleep(boost::posix_time::milliseconds(10));
+ } else {
+                // takes the first element of the ordered queue, writes it to the output stream and deletes it.
+ element_type* e = that->queue_.top();
+ that->queue_.pop();
+ os_ << "["<< e->date_ <<"-" << e->seq_ << "] " << e->value_.str();
+ delete e;
+ }
+ }
+ }
+
+
+[endsect]
+[endsect]
+
+[/============================]
+[section Proposed Case Studies]
+[/============================]
+
+This section does not include complete examples using the library, but some case studies that could use the library in some way.
+Some case studies, though tricky, are not huge and would be suitable for someone who has limited time to spend on them.
+I'm currently working on the STM case study.
+
+[/========================]
+[section:stm STM]
+[/========================]
+
+Transactional memory (TM) is a recent parallel programming concept which reduces challenges found in parallel programming.
+TM offers numerous advantages over other synchronization mechanisms.
+
+This case study contains some thoughts on how I see a boostified version of DracoSTM, a software transactional memory (STM) system.
+DracoSTM is a high performance lock-based C++ STM research library.
+DracoSTM uses only native object-oriented language semantics, increasing its intuitiveness for developers while maintaining
+high programmability via automatic handling of composition, locks and transaction termination.
+
+The example will show only the part concerning how the different contexts are stored.
+
+Let me start with a typical use of this library: the Hello World! of transactional concurrent programming, bank accounts and transfers.
+Let BankAccount be a simple account.
+
+ class BankAccount {
+ public:
+ void Deposit(unsigned amount);
+ void Withdraw(unsigned amount);
+ int GetBalance() const;
+ };
+ void Transfer(BankAccount* inA, BankAccount* outA, int amount);
+ class AccountManager {
+ public:
+ BankAccount* checkingAcct_;
+ BankAccount* savingsAcct_;
+ AccountManager(BankAccount& checking, BankAccount& savings)
+ : checkingAcct_(&checking)
+ , savingsAcct_(&savings)
+ {}
+ void Checking2Savings(int amount) {
+ Transfer(checkingAcct_, savingsAcct_, amount);
+ }
+ };
+
+And here is a little program that emulates the behavior of an employer and two employees.
+Each employee has asked the employer to transfer his salary to his checking account every month.
+The employer does the transfers on the 28th of each month.
+Employees make some withdrawals and query their accounts from an ATM.
+Some people have asked the bank for automatic periodic transfers from their checking account to their savings account.
+This transfer is done on the 3rd of each month.
+
+
+ BankAccount *emp;
+ BankAccount *c1;
+ BankAccount *c2;
+ BankAccount *s1;
+ AccountManager *am1;
+
+    void employer_th() {
+        sleep_for(day(28));
+        for (int i=0;i<2;++i) {
+            Transfer(emp, c1, 3000);
+            Transfer(emp, c2, 3200);
+            sleep_for(month(1));
+        }
+    }
+
+ void people_1_th() {
+ sleep_for(day(1));
+ c1->Withdraw(100);
+ sleep_for(day(5));
+ c1->Withdraw(500);
+ sleep_for(day(4));
+ c1->Withdraw(200);
+ }
+
+ void automatic_transfer_th(AccountManager *am, unsigned debit) {
+ sleep_for(day(3));
+ for (int i=0;i<2;++i) {
+            am->Checking2Savings(debit);
+ sleep_for(month(1));
+ };
+ }
+
+Evidently every operation must be atomic.
+
+ class BankAccount {
+ int balance_;
+ public:
+ void Deposit(unsigned amount) {
+            stm::this_thread::atomic _;
+            stm::this_thread::make_transactional_ptr(this)->balance_ += amount;
+ _.commit();
+ }
+ // ...
+
+ };
+
+How does all this work? `stm::this_thread::atomic _;` declares an atomic transaction whose scope is that of the variable `_`.
+To access `this` in the current transaction we use `stm::this_thread::make_transactional_ptr(this)`, which returns a smart pointer.
+If nothing else is done, the transaction will be aborted when `_` is destroyed.
+When everything is OK we need to call `_.commit()`.
+
+When there are many uses of `this` we can write instead
+
+ {
+        stm::this_thread::atomic _;
+        stm::this_thread::transactional_ptr<BankAccount> this_ptr(this);
+ this_ptr->balance_ += amount;
+ _.commit();
+ }
+
+or even shorter with the sugar syntax
+
+ {
+        stm::this_thread::atomic_transactional_ptr<BankAccount> this_ptr(this);
+ this_ptr->balance_ += amount;
+ // other uses of this
+ // ...
+ this_ptr.commit();
+ }
+
+The other `BankAccount` functions are coded as expected. Here is the code after introducing a `using stm::this_thread;`,
+which makes it much more readable.
+
+ class BankAccount {
+ int balance_;
+        using stm::this_thread;
+ public:
+ void Deposit(unsigned amount) {
+ atomic_ptr<BankAccount> this_ptr(this);
+ this_ptr->balance_ += amount;
+ this_ptr.commit();
+ }
+ void Withdraw(unsigned amount) {
+ atomic_ptr<BankAccount> this_ptr(this);
+ this_ptr->balance_ -= amount;
+ this_ptr.commit();
+ }
+ int GetBalance() const {
+ atomic_ptr<BankAccount> this_ptr(this);
+ int res = this_ptr->balance_;
+ this_ptr.commit();
+ return res;
+ }
+ };
+
+The transfer between accounts is done as follows:
+
+ void Transfer(BankAccount* inA, BankAccount* outA, int amount) {
+        using stm::this_thread;
+ atomic _;
+ make_transactional_ptr(inA)->Withdraw(amount);
+        make_transactional_ptr(outA)->Deposit(amount);
+ _.commit();
+ }
+
+The core of all this stuff is `stm::this_thread::atomic` and `stm::transactional_ptr<>`.
+`stm::make_transactional_ptr()` and `stm::this_thread::atomic_ptr<>` are defined in terms of them.
+
+Next follows the interface of the atomic class.
+
+ namespace stm {
+ namespace this_thread {
+ class atomic {
+ public:
+ atomic();
+ ~atomic();
+            void commit();
+            void rollback();
+ };
+    } // this_thread
+ } // stm
+
+The atomic constructor constructs a
+transaction on the current thread and pushes it onto the stack of nested transactions.
+The atomic destructor rolls back the transaction if it has not been committed and pops the stack of nested transactions.
+We will see the transaction class later.
+
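+As an illustration of these semantics, here is a usage sketch (based only on the proposed interface above; it is not
+an implementation):
+
+    void transfer_with_log(BankAccount* from, BankAccount* to, int amount) {
+        stm::this_thread::atomic outer;                 // pushed onto the stack of nested transactions
+        {
+            stm::this_thread::atomic inner;             // nested transaction
+            stm::this_thread::make_transactional_ptr(from)->Withdraw(amount);
+            stm::this_thread::make_transactional_ptr(to)->Deposit(amount);
+            inner.commit();                             // without this, ~atomic would roll back the inner transaction
+        }                                               // inner is popped here
+        // ... more work belonging to the outer transaction ...
+        outer.commit();                                 // if an exception escaped before this point, ~atomic rolls back
+    }
+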
+The transactional_ptr<> smart pointer interface follows:
+
+ template <typename T>
+ class transactional_ptr {
+ public:
+
+ typedef T element_type;
+ typedef T value_type;
+ typedef T * pointer;
+ typedef T& reference;
+
+ transactional_ptr(T* p, transaction* tr=0);
+ transactional_ptr(T* p, this_thread::atomic& scope);
+
+        transactional_ptr(T* p, writable& tag, transaction* tr=0);
+        transactional_ptr(T* p, writable& tag, this_thread::atomic& scope);
+
+        transactional_ptr(T* p, is_new& tag, transaction* tr=0);
+        transactional_ptr(T* p, is_new& tag, this_thread::atomic& scope);
+
+ const T* operator->() const;
+ const T& operator*() const;
+ const T * get() const;
+
+ T* operator->();
+ T& operator*();
+ T * get();
+
+ void delete_ptr();
+ };
+
+Let me start with the simplest constructor:
+
+ transactional_ptr(T* p);
+
+This creates a smart pointer pointing to the transaction-specific memory of the current transaction.
+
+It contains the classic functions of a smart pointer, overloaded as `const` and non-`const`.
+
+ const T* operator->() const;
+ const T& operator*() const;
+ const T * get() const;
+
+ T* operator->();
+ T& operator*();
+ T * get();
+
+By default the `transactional_ptr` points to a read-only cache. When we use one of the non-const operators,
+the pointer points to an upgraded write cache specific to the transaction. In the example
+
+ this_ptr->balance_ += amount;
+
+The use of `this_ptr->balance_` on the left hand side of the assignment operator requires non-const access,
+so the upgrade to writable is done.
+
+When we know a priori that the pointer contents will be modified we can create it as follows:
+
+ void Deposit(unsigned amount) {
+ atomic _;
+ transactional_ptr<BankAccount> this_ptr(this, writable);
+ this_ptr->balance_ += amount;
+ _.commit();
+ }
+
+Every `new`/`delete` operation in a transaction must in some way be signaled to the transaction service.
+Newly created objects are wrapped by a `transactional_ptr<>` initialized like this:
+
+    transactional_ptr<BankAccount> this_ptr(new BankAccount(), is_new);
+
+When we want to delete a pointer in a transaction we use `transactional_ptr::delete_ptr`:
+
+ transactional_ptr<BankAccount> p_ptr(p, writable);
+ // ...
+ p_ptr.delete_ptr();
+
+Before finishing, let me show you the `transaction` class, which manages the
+`transactional_object_cache<T>` objects and their base class `transactional_object_cache_base`.
+
+ class transaction {
+ public:
+ bool commit();
+ void rollback();
+ void rollback_only();
+
+ template <typename T>
+ shared_ptr<transactional_object_cache<T> > read(T* ptr);
+
+ template <typename T>
+ T* write(T* in);
+
+ template <typename T> void delete_memory(T* in);
+ template <typename T>
+ shared_ptr<transactional_object_cache<T> > insert_in_new_cache(T* ptr);
+
+ transaction_state const & state() const;
+ };
+
+[endsect]
+
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/changes.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/changes.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,19 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[section:changes Appendix A: History]
+
+[heading [*Version 0.1, November 30, 2008] ['Announcement of Interthreads]]
+
+[*Features:]
+* thread setup/cleanup decorator,
+* thread specific shared pointer,
+* thread keep alive mechanism,
+* thread tuples, set_once synchronizer, thread_tuple_once and thread_group_once.
+
+[endsect]
+
Added: sandbox/interthreads/libs/interthreads/doc/getting_started.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/getting_started.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,223 @@
+[/
+ (C) Copyright 2008 Vicente J. Botet Escriba
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+
+[/======================================]
+[section:getting_started Getting Started]
+[/======================================]
+
+[include installation.qbk]
+
+[/=============================]
+[section Hello World! decorator]
+[/=============================]
+
+This is a little bit more than a Hello World! example. It will also say Bye, Bye!
+
+ #include <boost/interthreads/thread_decorator.hpp>
+ #include <boost/thread.hpp>
+ #include <iostream>
+
+ namespace bith = boost::interthreads;
+
+ void my_setup() {
+ std::cout << "Hello World!" << std::endl;
+ }
+
+ void my_cleanup() {
+ std::cout << "Bye, Bye!" << std::endl;
+ }
+
+ bith::thread_decoration my_decoration(my_setup, my_cleanup);
+
+ void my_thread() {
+ std::cout << "..." << std::endl;
+ }
+
+ int main() {
+ boost::thread th(bith::make_decorator(my_thread));
+ th.join();
+ return 0;
+ }
+
+When `th` is created with the `bith::thread_decorator` wrapper, all the registered decorations are run before `my_thread` is called.
+`my_cleanup` will be registered with `boost::this_thread::at_thread_exit` if the `my_setup` function succeeds, i.e. does not throw.
+Then the thread function `my_thread` is called. At thread exit, the `my_cleanup` function is called. This results in the following output:
+
+[pre
+Hello World!
+...
+Bye, Bye!
+]
+
+[endsect]
+
+[/==========================]
+[section Monotonic Thread Id]
+[/==========================]
+
+This example uses the thread decorator and thread specific shared pointers to implement a monotonic thread identifier.
+
+ #include <boost/interthreads/thread_decorator.hpp>
+ #include <boost/interthreads/thread_specific_shared_ptr.hpp>
+ #include <boost/thread.hpp>
+ #include <iostream>
+
+ namespace bith = boost::interthreads;
+
+ class mono_thread_id {
+ static bith::thread_decoration decoration_;
+ typedef bith::thread_specific_shared_ptr<unsigned> tssp_type;
+ static tssp_type current_;
+        static unsigned counter_;
+ static boost::mutex sync_;
+
+ static unsigned create() {
+ boost::lock_guard<boost::mutex> lock(sync_);
+ unsigned res = counter_;
+            ++counter_;
+ return res;
+ }
+ static void setup() {
+            current_.reset(new unsigned(create()));
+ }
+ public:
+ static unsigned id() {
+ return *current_;
+ }
+ static unsigned id(boost::thread::id id) {
+            return *current_[id];
+ }
+
+ };
+
+    bith::thread_decoration mono_thread_id::decoration_(mono_thread_id::setup);
+ mono_thread_id::tssp_type mono_thread_id::current_;
+ unsigned mono_thread_id::counter_=0;
+ boost::mutex mono_thread_id::sync_;
+
+
+The monotonic thread identifier is managed by the mono_thread_id class.
+A mutex protects the access to the monotonic counter.
+A decoration for the setup function sets the thread specific shared pointer with the value of the monotonic counter, which
+is then incremented.
+
+In this way, applications using the thread_decorator have access to a monotonic thread id through mono_thread_id::id(),
+and this id is accessible from other threads by providing the boost::thread::id.
+
+ void my_thread() {
+ std::cout << "mono_thread_id=" << mono_thread_id::id() << std::endl;
+ sleep(5);
+ }
+
+ int main() {
+ boost::thread th1(bith::make_decorator(my_thread));
+ boost::thread th2(bith::make_decorator(my_thread));
+ sleep(2);
+
+ std::cout << "thread::id=" << th1.get_id()
+ << " mono_thread_id=" << mono_thread_id::id(th1.get_id())
+ << std::endl;
+ th1.join();
+ th2.join();
+ return 0;
+ }
+
+This results in the following output:
+
+[pre
+mono_thread_id=1
+mono_thread_id=2
+thread::id=xxxx mono_thread_id=1
+thread::id=xxxx mono_thread_id=2
+
+]
+
+[endsect]
+
+[/=======================]
+[section Basic keep alive]
+[/=======================]
+
+This example shows the keep_alive basics.
+
+ #include <boost/interthreads/thread_decorator.hpp>
+ #include <boost/interthreads/keep_alive.hpp>
+ #include <boost/thread/thread.hpp>
+ #include <iostream>
+
+ namespace bith = boost::interthreads;
+
+ void my_thread() {
+ bith::this_thread::enable_keep_alive enabler;
+ for (int i=0; i<5; i++) {
+ bith::this_thread::keep_alive_point();
+ std::cout << "thread_id=" << boost::this_thread::get_id() << std::endl;
+ sleep(1);
+ }
+ }
+
+ int main() {
+ boost::thread th1(bith::make_decorator(my_thread));
+ boost::thread th2(bith::make_decorator(my_thread));
+ sleep(2);
+
+ th1.join();
+ th2.join();
+ return 0;
+ }
+
+The user creates two threads using the thread_decorator wrapper to be able to use the keep_alive mechanism.
+It uses the default enabler (one keep_alive_point every 2 seconds).
+
+This results in the following output:
+
+[pre
+]
+
+[endsect]
+
+[/==========================]
+[section Multiple algorithms]
+[/==========================]
+
+This example shows how to launch several algorithms and wait only for the fastest one.
+
+ #include <boost/interthreads/thread_tuple.hpp>
+ #include <boost/thread.hpp>
+ #include <iostream>
+
+ namespace bith = boost::interthreads;
+
+ void my_thread1() {
+ std::cout << "1 thread_id=" << boost::this_thread::get_id() << std::endl;
+ sleep(1);
+ }
+
+ void my_thread2() {
+ std::cout << "2 thread_id=" << boost::this_thread::get_id() << std::endl;
+ sleep(5);
+ }
+
+ int main() {
+        unsigned res = bith::thread_and_join_first_then_interrupt(my_thread1, my_thread2);
+        std::cout << "Algorithm " << res+1 << " finished first" << std::endl;
+ return 0;
+ }
+
+This results in the following output:
+
+[pre
+]
+
+[endsect]
+
+[endsect]
+
+
Added: sandbox/interthreads/libs/interthreads/doc/implementation.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/implementation.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,84 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[section:implementation Appendix C: Implementation Notes]
+
+[section Thread Decorator]
+
+[variablelist
+
+[[Thread safe] [The library is thread safe as long as the thread decorations are declared statically, because these variables will then be initialized sequentially.]]
+
+[[Chained thread decorations] [All the thread decorations are chained together.]]
+
+]
+
+[endsect]
+
+[section Thread Specific Storage]
+
+The Thread-Specific Storage pattern can be implemented in various ways:
+
+[variablelist
+
+[[External versus internal thread storage] [The thread specific data collections can be stored either externally
+to all threads or internally to each thread. The thread_specific_shared_ptr uses both, ensuring efficiency when the context is
+requested by the current thread and allowing threads to access the thread specific pointers of other threads.]]
+
+[[Fixed- vs. variable-sized thread specific key mapping]
+[This library is based on the Boost.Thread thread_specific_ptr implementation, which uses a variable-size map indexed by the
+address of the object. Future releases will provide fixed and mixed keys.]]
+
+[[Fixed- vs. variable-sized mapping of thread IDs to thread specific pointers]
+[It may be impractical to have a fixed-size array with an entry for every possible thread ID value.
+Instead, it is more space efficient to have threads use a dynamic data structure to map thread IDs to thread specific pointers.]]
+
+[[One mapping of thread IDs to thread specific pointers or to thread specific key mapping]
+[This library maps thread IDs to thread specific pointers to avoid contention on a single map.]]
+
+[[Default versus explicit specific context setting]
+[This library provides explicit setting. Future releases will provide explicit/eager and implicit/lazy specific context setting.]]
+
+[[Ordered or unordered map] [The current implementation uses an ordered map. Future versions will allow the user to configure this.]]
+
+[[Intrusive or non-intrusive maps] [As the thread specific pointer is stored in only one map, the current implementation uses an intrusive container.]]
+
+[[Shared versus exclusive locking] [Locating the right TS pointer requires the use of a mutex to prevent race conditions. The library uses a shared_mutex
+because most of the accesses to the map are reads from other threads. The problem is that native condition variables cannot be used directly with a
+shared mutex. Some benchmarks will be needed before deciding which implementation is the best. A sketch of this locking scheme is shown after this list.]]
+
+]
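+
+The following sketch illustrates the shared/exclusive locking scheme mentioned above (the names and data structures
+are illustrative, not those of the library):
+
+    #include <boost/thread/shared_mutex.hpp>
+    #include <boost/thread/locks.hpp>
+    #include <boost/thread/thread.hpp>
+    #include <boost/shared_ptr.hpp>
+    #include <map>
+
+    template <typename T>
+    class tsss_map_sketch {
+        typedef std::map<boost::thread::id, boost::shared_ptr<T> > map_type;
+        boost::shared_mutex mtx_;
+        map_type map_;
+    public:
+        // only the owning thread adds or replaces its own entry: exclusive lock
+        void set_for_current_thread(boost::shared_ptr<T> p) {
+            boost::unique_lock<boost::shared_mutex> lock(mtx_);
+            map_[boost::this_thread::get_id()] = p;
+        }
+        // any thread may read any entry: shared lock, many concurrent readers
+        boost::shared_ptr<T> get(boost::thread::id id) {
+            boost::shared_lock<boost::shared_mutex> lock(mtx_);
+            typename map_type::const_iterator it = map_.find(id);
+            return it != map_.end() ? it->second : boost::shared_ptr<T>();
+        }
+    };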
+
+[endsect]
+
+[section Keep Alive]
+
+[variablelist
+
+[[Ensuring the keep alive manager singleton is initialized and the thread specific shared storage is set before use]
+[The use of the thread preamble ensures that the preambles are called before the user thread function is called.
+The keep_alive preamble uses call_once to ensure that the keep_alive manager is correctly initialized.]]
+
+[[backup/restore context] [The real thread specific data is stored directly on the stack of the enablers/disablers, avoiding heap memory.
+On construction, enablers/disablers store a backup pointer to the nesting context, which allows a quick restore.
+The keep alive pointer contains just a pointer to this data. A sketch of this technique is shown after this list.]]
+
+]
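+
+A rough sketch of this backup/restore technique follows (illustrative only; the types and names are not those of the
+library):
+
+    #include <boost/thread/tss.hpp>
+
+    struct ka_context {                 // the real data lives on the enabler's stack
+        unsigned checkins;
+        bool enabled;
+        explicit ka_context(bool e) : checkins(0), enabled(e) {}
+    };
+
+    void no_cleanup(ka_context*) {}     // the slot only refers to stack data, never delete it
+    boost::thread_specific_ptr<ka_context> current_ctx(no_cleanup);
+
+    class enabler_sketch {
+        ka_context ctx_;                // stack storage, no heap allocation
+        ka_context* backup_;            // the nesting context, restored on destruction
+    public:
+        enabler_sketch() : ctx_(true), backup_(current_ctx.get()) {
+            current_ctx.reset(&ctx_);   // install this scope's context
+        }
+        ~enabler_sketch() {
+            current_ctx.reset(backup_); // quick restore of the nesting context
+        }
+    };
+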
+[endsect]
+
+
+[section Thread Tuple]
+
+[variablelist
+[[Joining the first finishing thread]
+[In order to synchronize the completion of all the threads we use an internal class which stores the index of the first
+thread that notifies it has finished. As the user function has no idea of this index, we wrap the user thread functions
+(see the sketch after this list).
+]]
+]
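+
+A minimal sketch of that technique (illustrative only, not the library implementation):
+
+    #include <boost/thread.hpp>
+
+    class first_finished {
+        boost::mutex mtx_;
+        boost::condition_variable cond_;
+        bool set_;
+        unsigned index_;
+    public:
+        first_finished() : set_(false), index_(0) {}
+        void set(unsigned i) {                  // called by each wrapped thread function on return
+            boost::lock_guard<boost::mutex> lock(mtx_);
+            if (!set_) { set_ = true; index_ = i; cond_.notify_all(); }
+        }
+        unsigned wait() {                       // returns the index of the first thread that finished
+            boost::unique_lock<boost::mutex> lock(mtx_);
+            while (!set_) cond_.wait(lock);
+            return index_;
+        }
+    };
+
+    template <typename F>
+    void wrapped(F f, first_finished* sync, unsigned i) {
+        f();                                    // run the user function
+        sync->set(i);                           // record this thread as finished (first one wins)
+    }
+
+A thread_tuple-like class would launch each wrapped user function in its own boost::thread and call wait() on the
+shared synchronizer to know which thread to join first.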
+
+[endsect]
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/installation.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/installation.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,76 @@
+[/
+ (C) Copyright 2008 Vicente J. Botet Escriba
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+
+[/======================================]
+[section:install Installing InterThreads]
+[/======================================]
+
+[/=================================]
+[heading Getting Boost.InterThreads]
+[/=================================]
+
+You can get __Boost_InterThreads__ by downloading [^interthreads.zip] from
+[@http://www.boost-consulting.com/vault/index.php?directory=Concurrent%20Programming Vault]
+
+[/==================================]
+[heading Building Boost.InterThreads]
+[/==================================]
+
+__Boost_InterThreads__ is not a header-only library. It must be compiled before use.
+
+[/=========================]
+[heading Build Requirements]
+[/=========================]
+
+
+[*Boost.InterThreads] depends on Boost. You must use either Boost version 1.37.x
+or the version in SVN trunk. In particular, __Boost_InterThreads__ depends on:
+
+[variablelist
+[
+ [[@http://www.boost.org/libs/thread [*Boost.Threads]]] [for thread, thread_specific_ptr, call_once, mutex, ...]
+]
+[
+ [[@http://www.boost.org/libs/smart_ptr [*Boost.SmartPtr]]] [for shared_ptr, ...]
+]
+[
+ [[@http://www.boost.org/libs/function [*Boost.Function]]] [for function, ...]
+]
+[
+ [[@http://www.boost.org/libs/bind [*Boost.Bind]]] [for bind, ...]
+]
+[
+ [[@http://www.boost.org/libs [*Boost.Preprocessor]]] [to implement variadic thread_tuples, ...]
+]
+]
+
+[/========================]
+[heading Exceptions safety]
+[/========================]
+
+All functions in the library are exception-neutral and provide the strong exception-safety guarantee, as long as
+the underlying parameters provide it.
+
+[/====================]
+[heading Thread safety]
+[/====================]
+
+All functions in the library are thread-safe except:
+
+
+[/=======================]
+[heading Tested compilers]
+[/=======================]
+Currently, __Boost_InterThreads__ has been tested in the following compilers/platforms:
+
+* GCC 3.4.4 Cygwin
+* GCC 4.3.2 Cygwin
+
+[note Please send any questions, comments and bug reports to boost <at> lists <dot> boost <dot> org.]
+
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/interthreads.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/interthreads.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,186 @@
+[/
+ (C) Copyright 2008 Vicente J. Botet Escriba
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[article InterThreads
+ [quickbook 1.4]
+ [authors [Botet Escriba, Vicente J.]]
+ [copyright 2008 Vicente J. Botet Escriba]
+ [purpose C++ library extending the Boost.Thread library, adding some inter-thread mechanisms such as a
+ thread decorator, thread specific shared pointers and a keep alive mechanism]
+ [category text]
+ [license
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ [@http://www.boost.org/LICENSE_1_0.txt])
+ ]
+]
+
+[template lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.lockable [link_text]]]
+[def __lockable_concept__ [lockable_concept_link `Lockable` concept]]
+[def __lockable_concept_type__ [lockable_concept_link `Lockable`]]
+
+[template timed_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable [link_text]]]
+[def __timed_lockable_concept__ [timed_lockable_concept_link `TimedLockable` concept]]
+[def __timed_lockable_concept_type__ [timed_lockable_concept_link `TimedLockable`]]
+
+[template shared_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable [link_text]]]
+[def __shared_lockable_concept__ [shared_lockable_concept_link `SharedLockable` concept]]
+[def __shared_lockable_concept_type__ [shared_lockable_concept_link `SharedLockable`]]
+
+[template upgrade_lockable_concept_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable [link_text]]]
+[def __upgrade_lockable_concept__ [upgrade_lockable_concept_link `UpgradeLockable` concept]]
+[def __upgrade_lockable_concept_type__ [upgrade_lockable_concept_link `UpgradeLockable`]]
+
+
+[template lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.lock [link_text]]]
+[def __lock_ref__ [lock_ref_link `lock()`]]
+
+[template lock_multiple_ref_link[link_text] [link thread.synchronization.lock_functions.lock_multiple [link_text]]]
+[def __lock_multiple_ref__ [lock_multiple_ref_link `lock()`]]
+
+[template try_lock_multiple_ref_link[link_text] [link thread.synchronization.lock_functions.try_lock_multiple [link_text]]]
+[def __try_lock_multiple_ref__ [try_lock_multiple_ref_link `try_lock()`]]
+
+[template unlock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.unlock [link_text]]]
+[def __unlock_ref__ [unlock_ref_link `unlock()`]]
+
+[template try_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.lockable.try_lock [link_text]]]
+[def __try_lock_ref__ [try_lock_ref_link `try_lock()`]]
+
+[template timed_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable.timed_lock [link_text]]]
+[def __timed_lock_ref__ [timed_lock_ref_link `timed_lock()`]]
+
+[template timed_lock_duration_ref_link[link_text] [link thread.synchronization.mutex_concepts.timed_lockable.timed_lock_duration [link_text]]]
+[def __timed_lock_duration_ref__ [timed_lock_duration_ref_link `timed_lock()`]]
+
+[template lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.lock_shared [link_text]]]
+[def __lock_shared_ref__ [lock_shared_ref_link `lock_shared()`]]
+
+[template unlock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.unlock_shared [link_text]]]
+[def __unlock_shared_ref__ [unlock_shared_ref_link `unlock_shared()`]]
+
+[template try_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.try_lock_shared [link_text]]]
+[def __try_lock_shared_ref__ [try_lock_shared_ref_link `try_lock_shared()`]]
+
+[template timed_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.timed_lock_shared [link_text]]]
+[def __timed_lock_shared_ref__ [timed_lock_shared_ref_link `timed_lock_shared()`]]
+
+[template timed_lock_shared_duration_ref_link[link_text] [link thread.synchronization.mutex_concepts.shared_lockable.timed_lock_shared_duration [link_text]]]
+[def __timed_lock_shared_duration_ref__ [timed_lock_shared_duration_ref_link `timed_lock_shared()`]]
+
+[template lock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.lock_upgrade [link_text]]]
+[def __lock_upgrade_ref__ [lock_upgrade_ref_link `lock_upgrade()`]]
+
+[template unlock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade [link_text]]]
+[def __unlock_upgrade_ref__ [unlock_upgrade_ref_link `unlock_upgrade()`]]
+
+[template unlock_upgrade_and_lock_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade_and_lock [link_text]]]
+[def __unlock_upgrade_and_lock_ref__ [unlock_upgrade_and_lock_ref_link `unlock_upgrade_and_lock()`]]
+
+[template unlock_and_lock_upgrade_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_and_lock_upgrade [link_text]]]
+[def __unlock_and_lock_upgrade_ref__ [unlock_and_lock_upgrade_ref_link `unlock_and_lock_upgrade()`]]
+
+[template unlock_upgrade_and_lock_shared_ref_link[link_text] [link thread.synchronization.mutex_concepts.upgrade_lockable.unlock_upgrade_and_lock_shared [link_text]]]
+[def __unlock_upgrade_and_lock_shared_ref__ [unlock_upgrade_and_lock_shared_ref_link `unlock_upgrade_and_lock_shared()`]]
+
+[template owns_lock_ref_link[link_text] [link thread.synchronization.locks.unique_lock.owns_lock [link_text]]]
+[def __owns_lock_ref__ [owns_lock_ref_link `owns_lock()`]]
+
+[template owns_lock_shared_ref_link[link_text] [link thread.synchronization.locks.shared_lock.owns_lock [link_text]]]
+[def __owns_lock_shared_ref__ [owns_lock_shared_ref_link `owns_lock()`]]
+
+[template mutex_func_ref_link[link_text] [link thread.synchronization.locks.unique_lock.mutex [link_text]]]
+[def __mutex_func_ref__ [mutex_func_ref_link `mutex()`]]
+
+[def __boost_thread__ [@http://www.boost.org/libs/thread [*Boost.Threads]]]
+[def __boost_interthreads__ [*Boost.InterThreads]]
+[def __Boost_InterThreads__ [*Boost.InterThreads]]
+
+[def __not_a_thread__ ['Not-a-Thread]]
+[def __interruption_points__ [link interruption_points ['interruption points]]]
+
+[def __mutex__ [link thread.synchronization.mutex_types.mutex `boost::mutex`]]
+[def __try_mutex__ [link thread.synchronization.mutex_types.try_mutex `boost::try_mutex`]]
+[def __timed_mutex__ [link thread.synchronization.mutex_types.timed_mutex `boost::timed_mutex`]]
+[def __recursive_mutex__ [link thread.synchronization.mutex_types.recursive_mutex `boost::recursive_mutex`]]
+[def __recursive_try_mutex__ [link thread.synchronization.mutex_types.recursive_try_mutex `boost::recursive_try_mutex`]]
+[def __recursive_timed_mutex__ [link thread.synchronization.mutex_types.recursive_timed_mutex `boost::recursive_timed_mutex`]]
+[def __shared_mutex__ [link thread.synchronization.mutex_types.shared_mutex `boost::shared_mutex`]]
+
+[template unique_lock_link[link_text] [link thread.synchronization.locks.unique_lock [link_text]]]
+
+[def __lock_guard__ [link thread.synchronization.locks.lock_guard `boost::lock_guard`]]
+[def __unique_lock__ [unique_lock_link `boost::unique_lock`]]
+[def __shared_lock__ [link thread.synchronization.locks.shared_lock `boost::shared_lock`]]
+[def __upgrade_lock__ [link thread.synchronization.locks.upgrade_lock `boost::upgrade_lock`]]
+[def __upgrade_to_unique_lock__ [link thread.synchronization.locks.upgrade_to_unique_lock `boost::upgrade_to_unique_lock`]]
+
+
+[def __thread__ `boost::thread`]
+[def __thread_id__ `boost::thread::id`]
+
+
+[template join_link[link_text] [link interthreads.reference.thread_tuple_thread_tuple_hpp.thread_tuple_join_all [link_text]]]
+[def __join__ [join_link `join()`]]
+[template timed_join_link[link_text] [link interthreads.reference.thread_tuple_thread_tuple_hpp.thread_tuple_timed_join_all [link_text]]]
+[def __timed_join__ [timed_join_link `timed_join()`]]
+[def __detach__ [link thread.thread_management.thread.detach `detach()`]]
+[def __interrupt__ [link interthreads.reference.thread_tuple_thread_tuple_hpp.thread_tuple_class.thread_tuple_interrupt_all `interrupt_all()`]]
+
+
+[def __sleep__ [link thread.thread_management.this_thread.sleep `boost::this_thread::sleep()`]]
+
+[def __interruption_enabled__ [link thread.thread_management.this_thread.interruption_enabled `boost::this_thread::interruption_enabled()`]]
+[def __interruption_requested__ [link interthreads.thread_tuple.reference.thread_tuple_class.thread_tuple_join.interruption_requested `boost::this_thread::interruption_requested()`]]
+[def __interruption_point__ [link thread.thread_management.this_thread.interruption_point `boost::this_thread::interruption_point()`]]
+[def __disable_interruption__ [link thread.thread_management.this_thread.disable_interruption `boost::this_thread::disable_interruption`]]
+[def __restore_interruption__ [link thread.thread_management.this_thread.restore_interruption `boost::this_thread::restore_interruption`]]
+
+[def __thread_resource_error__ `boost::thread_resource_error`]
+[def __thread_interrupted__ `boost::thread_interrupted`]
+[def __barrier__ [link thread.synchronization.barriers.barrier `boost::barrier`]]
+
+[template cond_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable.wait [link_text]]]
+[def __cond_wait__ [cond_wait_link `wait()`]]
+[template cond_timed_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable.timed_wait [link_text]]]
+[def __cond_timed_wait__ [cond_timed_wait_link `timed_wait()`]]
+[template cond_any_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable_any.wait [link_text]]]
+[def __cond_any_wait__ [cond_any_wait_link `wait()`]]
+[template cond_any_timed_wait_link[link_text] [link thread.synchronization.condvar_ref.condition_variable_any.timed_wait [link_text]]]
+[def __cond_any_timed_wait__ [cond_any_timed_wait_link `timed_wait()`]]
+
+
+[template thread_decorator_link[link_text] [link interthreads.reference.decorator_thread_decoration_file.thread_decorator [link_text]]]
+[def __thread_decorator__ [thread_decorator_link `thread_decorator`]]
+
+[template thread_decoration_link[link_text] [link interthreads.reference.decorator_thread_decoration_file.decorator_thread_decoration_class [link_text]]]
+[def __thread_decoration__ [thread_decoration_link `thread_decoration`]]
+
+[template thread_decorate_link[link_text] [link interthreads.reference.decorator_thread_decoration_file.decorate [link_text]]]
+[def __thread_decorator_decorate__ [thread_decorate_link `decorate()`]]
+
+[def __blocked__ ['blocked]]
+
+
+[def __thread_tuple__ `thread_tuple<>`]
+[def __thread_group__ `boost::thread_group`]
+
+[warning InterThreads is not a part of the Boost libraries.]
+
+[include overview.qbk]
+
+[include users_guide.qbk]
+
+[include reference.qbk]
+
+[include case_studies.qbk]
+
+[include appendices.qbk]
+
+
+
Added: sandbox/interthreads/libs/interthreads/doc/introduction.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/introduction.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,172 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[section:motivation Motivation]
+
+[section:decorator_motivation Decorators]
+
+`boost::call_once` provides a mechanism for ensuring that an initialization routine is run exactly once in a
+program without data races or deadlocks.
+`boost::this_thread::at_thread_exit` allows a cleanup function to be executed at thread exit.
+
+If we want a setup function to be executed once at the beginning of a thread and a cleanup at thread exit, we need to do
+
+ void thread_main() {
+ setup();
+ boost::this_thread::at_thread_exit(cleanup);
+ // do whatever
+ // ...
+ }
+ // ...
+ {
+ boost::thread th(thread_main);
+ //...
+ }
+
+Of course we can define an init function that calls setup and does the registration.
+
+ void init() {
+ setup();
+ boost::this_thread::at_thread_exit(cleanup);
+ }
+
+Different services could require these setup/cleanup functions to be called, and so
+each thread function should do
+
+ void thread_main() {
+ serv1::init();
+ // ...
+ servN::init();
+ // do whatever using serv1, ..., servN.
+ // ...
+ }
+
+This approach is valid for services that the user can configure for specific threads,
+but not for services that must be installed on every thread.
+
+__thread_decoration__ ensures that a setup function is called only once per thread before
+the thread function, provided the thread is created with a decorator wrapper.
+This setup function is usually used to set thread specific pointers and call functions once.
+
+The counterpart of the setup is the cleanup. The __thread_decoration__ takes an optional
+cleanup function which will be executed at thread exit.
+
+    // defined only in the implementation file of each service
+
+    boost::interthreads::thread_decoration serv1::decoration(serv1::setup, serv1::cleanup);
+    // ...
+    boost::interthreads::thread_decoration servN::decoration(servN::setup, servN::cleanup);
+
+
+ void thread_main() {
+ // do whatever using serv1, ..., servN.
+ // ...
+ }
+
+ // ...
+ {
+ boost::thread th(boost::interthreads::make_decorator(thread_main));
+ //...
+ }
+
+
+[endsect]
+
+[section:thread_specific_shared_ptr_Motivation Sharing Thread Local Storage]
+
+Thread local storage allows multi-threaded applications to have a separate instance of a given data item for
+each thread, but it does not provide any mechanism to access this data from other threads. Although this seems to
+defeat the whole point of thread-specific storage, it is useful when these contexts need some kind of
+communication between them, or when some central global object needs to control them.
+
+The intent of the `boost::thread_specific_shared_ptr` class is to allow two threads to establish a shared memory
+space, without requiring the user code to pass any information.
+`boost::thread_specific_shared_ptr` provides a portable mechanism for shared thread-local storage that works on
+all compilers supported by `boost::thread` and `boost::shared_ptr`. Each instance of
+`boost::thread_specific_shared_ptr` represents a pointer to a shared object where each thread must have a distinct
+value.
+
+Only the current thread can modify the thread specific shared pointer, using the non-const reset/release
+functions. Each time these functions are used, a synchronization must be performed to update the mapping.
+The other threads have only read access to the shared_ptr<T>. It is worth noting that the shared object T must be
+thread safe.
+
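+The following sketch illustrates the intended usage (the `context` type and the monitoring logic are
+hypothetical; only the `thread_specific_shared_ptr<>` members documented in the reference are used):
+
+    struct context {
+        // application specific data; must be thread safe
+    };
+
+    boost::interthreads::thread_specific_shared_ptr<context> tssp;
+
+    void worker() {
+        tssp.reset(new context());  // only the current thread sets its own pointer
+        // ... do the work ...
+    }
+
+    void monitor(boost::thread::id id) {
+        // another thread waits until the context of thread `id` has been set and then reads it
+        boost::shared_ptr<context> ctx = tssp.wait_and_get(id);
+        // ... observe *ctx ...
+    }
+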
+[endsect]
+
+[section:keep_alive_motivation Keep Alive]
+
+On fault tolerant systems we need to be able to detect threads that are stuck in a loop, or simply blocked.
+
+One way to detect these situations is to require the thread to signal that it is alive by calling a check point function.
+Of course it should be up to the user to state when this mechanism is enabled or disabled.
+At the beginning of a thread the keep alive mechanism is disabled.
+
+A thread will be considered dead if, during a given period, the number of check-ins is lower than a given threshold.
+These two parameters can be given when the keep alive mechanism is enabled.
+
+The controller checks at predefined intervals whether the thread is dead, and in that case it calls a user specific
+function which by default aborts the program.
+
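+The following sketch shows how a thread could use this mechanism (the loop body is hypothetical; the
+functions come from the `thread_keep_alive.hpp` reference below):
+
+    void worker() {
+        // enable the mechanism for this scope (threshold, tap)
+        boost::interthreads::this_thread::enable_keep_alive keep_alive(2, 1);
+        for (;;) {
+            boost::interthreads::this_thread::keep_alive_check_point();
+            // ... do one unit of work ...
+        }
+    }
+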
+[endsect]
+
+[section:thread_tuple_motovation Thread Tuple]
+
+The __thread_group__ class allows threads to be grouped dynamically. This means that the container must be dynamic.
+
+ {
+ boost::thread_group tg;
+ tg.create_thread(thread1);
+ tg.create_thread(thread2);
+        tg.join_all();
+ }
+
+
+The __thread_tuple__ class is responsible for launching and managing a static collection of threads
+that are related in some fashion. No new threads can be added to the tuple once constructed. So we can write
+
+    boost::interthreads::thread_join_all(thread1, thread2);
+
+In addition, the user can join the first thread that finishes.
+
+    unsigned i = boost::interthreads::thread_join_first_then_interrupt(thread1, thread2);
+
+
+Evidently, thread_tuple cannot be used when we need dynamic creation or deletion of threads.
+
+The __thread_group__ class allows threads to be grouped dynamically.
+
+ {
+ boost::thread_group tg;
+ tg.create_thread(thread1);
+
+ // later on
+ tg.create_thread(thread2);
+        boost::thread th3(thread3);
+ tg.add_thread(th3);
+
+ // later on
+ tg.remove_thread(th3);
+
+        tg.join_all();
+ }
+
+Objects of type __thread_tuple__ are movable, so they can be stored in move-aware containers, and returned from
+functions. This allows the details of thread tuple creation to be wrapped in a function.
+
+ boost::interthreads::thread_tuple<2> make_thread_tuple(...);
+
+ void f()
+ {
+ boost::interthreads::thread_tuple<2> some_thread_tuple=make_thread_tuple(f1, g2);
+ some_thread_tuple.join();
+ }
+
+[endsect]
+
+
+
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/overview.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/overview.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,78 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[/========================]
+[section:overview Overview]
+[/========================]
+
+[/==================]
+[heading Description]
+[/==================]
+
+__boost_interthreads__ extends __boost_thread__, adding the following features:
+
+* thread decorator: thread_decorator allows setup/cleanup functions to be defined, which will be called only once per thread:
+setup before the thread function and cleanup at thread exit.
+
+* thread specific shared pointer: this is an extension of the thread_specific_ptr providing access
+to the thread specific context from other threads.
+As it is shared, the stored pointer is a shared_ptr instead of a raw one.
+
+* thread keep alive mechanism: this mechanism allows detection of threads that do not prove that they are alive by
+calling keep_alive_check_point regularly.
+When a thread is declared dead, a user provided function is called, which by default will abort the program.
+
+* thread tuple: defines a thread group where the number of threads is known statically and the threads are
+created at construction time.
+
+* set_once: a synchronizer that allows a variable to be set only once, notifying
+the value to whoever is waiting for it.
+
+* thread_tuple_once: an extension of the thread_tuple which allows joining the first thread that
+finishes, using the set_once synchronizer for that.
+
+* thread_group_once: an extension of the boost::thread_group which allows joining the first thread that
+finishes, using the set_once synchronizer for that.
+
+thread_decorator and thread_specific_shared_ptr are based on the original implementation of
+[@http://www.boost-consulting.com/vault/index.php?directory=Concurrent%20Programming [*threadalert]] written by Roland Schwarz.
+
+
+[/====================================]
+[heading How to Use This Documentation]
+[/====================================]
+
+This documentation makes use of the following naming and formatting conventions.
+
+* Code is in `fixed width font` and is syntax-highlighted.
+* Replaceable text that you will need to supply is in [~italics].
+* If a name refers to a free function, it is specified like this:
+ `free_function()`; that is, it is in code font and its name is followed by `()`
+ to indicate that it is a free function.
+* If a name refers to a class template, it is specified like this:
+ `class_template<>`; that is, it is in code font and its name is followed by `<>`
+ to indicate that it is a class template.
+* If a name refers to a function-like macro, it is specified like this: `MACRO()`;
+ that is, it is uppercase in code font and its name is followed by `()` to
+ indicate that it is a function-like macro. Object-like macros appear without the
+ trailing `()`.
+* Names that refer to /concepts/ in the generic programming sense are
+ specified in CamelCase.
+
+[note In addition, notes such as this one specify non-essential information that
+provides additional background or rationale.]
+
+Finally, you can mentally add the following to any code fragments in this document:
+
+    // Include all of InterThreads
+ #include <boost/interthreads/interthreads.hpp>
+
+    // Create a namespace alias
+ namespace bith = boost::interthreads;
+
+[include introduction.qbk]
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/rationale.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/rationale.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,199 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[/=======================================]
+[section:rationale Appendix B: Rationale]
+[/=======================================]
+
+[/=======================]
+[section Thread Decorator]
+[/=======================]
+
+[variablelist
+
+[[Function wrapper versus thread refinement] [The threadalert library on which this library was initially based redefined the
+boost::thread class so that it called the wrapper implicitly. As the single varying feature between both thread classes was this
+wrapping of the thread function, it has been isolated in the interthreads library.]]
+
+[[Static decoration variables] [Thread decoration construction is not thread safe and must be done before other threads have been
+created and before the __thread_decorator_decorate__ function is called.]]
+]
+
+[endsect]
+
+[/==============================]
+[section Thread Specific Storage]
+[/==============================]
+
+[variablelist
+
+[[Non copyable/Non movable] [Specific pointers are singletons.]]
+
+[[External locking] [In order to ensure thread safety while providing as much functionality as possible, the class allows
+the map of thread specific contexts to be obtained as soon as the application provides a `unique_lock`.]]
+
+[[Mimic thread_specific_ptr] [From the point of view of the current thread, thread_specific_shared_ptr behaves as a thread_specific_ptr.
+From it we take:
+
+```thread_specific_shared_ptr();
+explicit thread_specific_shared_ptr(void (*cleanup_)(shared_ptr_type));
+T* get() const;
+T* operator->() const;
+T& operator*() const;
+void reset();
+template<class Y> void reset(Y * p);
+```
+]]
+
+[[Mimic shared_ptr] [From the point of view of the other threads, thread_specific_shared_ptr behaves as a shared_ptr lookup.
+From the point of view of the current thread, the stored pointer is located in a shared_ptr, so we can use the shared_ptr deleter feature.
+From it we take:
+
+```T* get() const;
+T* operator->() const;
+T& operator*() const;
+void reset();
+template<class Y> void reset(Y * p);
+template<class Y, class D> void reset(Y * p, D d);
+template<class Y, class D, class A> void reset(Y * p, D d, A a);
+```
+]]
+
+[[Why doesn't thread_specific_shared_ptr provide a release() function?]
+[As it stores a shared_ptr, it cannot give away ownership unless it is unique(), because the other copies would still destroy the object.]]
+
+]
+
+[/==============================]
+[heading Comparing TSSS and TSS]
+[/==============================]
+
+
+[table Comparing TSSS and TSS
+[[Feature][thread_specific_shared_ptr][thread_specific_ptr][Compatible]]
+[[ownership][shared][exclusive][no]]
+[[default constructor][[*yes]][[*yes]][[*yes]]]
+[[cleanup constructor][[*yes]: must not delete the pointer][yes: must delete the pointer][no]]
+[[`get()`][[*yes]][[*yes]][[*yes]]]
+[[`operator->()`][[*yes]][[*yes]][[*yes]]]
+[[`operator*()`][[*yes]][[*yes]][[*yes]]]
+[[`reset()`][[*yes]][[*yes]][[*yes]]]
+[[`reset(T*)`][[*yes]][[*yes]][[*yes]]]
+[[`reset(Y*)`][[*yes]][no][no]]
+[[`reset(Y*,D)`][[*yes]][no][no]]
+[[`reset(Y*,D,A)`][[*yes]][no][no]]
+[[`release()`][no][[*yes]][no]]
+[[`get_mutex()`][[*yes]][no][no]]
+[[`get_map()`][[*yes]][no][no]]
+[[`operator[]()`][[*yes]][no][no]]
+[[`wait_and_get()`][[*yes]][no][no]]
+]
+
+
+[endsect]
+
+[/=================]
+[section Keep Alive]
+[/=================]
+
+[variablelist
+
+[[Can a thread that has just done a check point be considered dead less than one second afterwards?]
+[Well, this depends on the enabling parameters. If the check-in threshold is greater than one, it is possible that
+the thread does a check point just before the keep alive manager does the check, which then sees that there are not enough check points and declares the thread dead.
+If you want to avoid this situation, set the check-in threshold to 1.
+]]
+
+[[Nesting enablers and disablers] [Enablers/disablers use RAII, so they can be nested and the context be restored on the destructor.]]
+
+[[Configurable on-dead action] [The default action is to abort the process because I don't see any generic and cleaner way to manage this event.
+The library provides this hook for users wanting to try something specific.]]
+
+[[Who controls the controller?] [There is no way to control this thread other than adding an external process.]]
+
+]
+
+[endsect]
+
+
+[/===================]
+[section Thread Tuple]
+[/===================]
+
+[variablelist
+[[Why is it not copyable?] [Thread tuples cannot be copyable since boost::thread is not copyable.]]
+[[Why should it be movable?] [If we want functions to return thread tuples it is necessary to make them movable.]]
+[[Mimic boost::thread_group] [thread_tuple has in common some functions found in thread_group. From it we take
+
+```void join_all();
+void interrupt_all();
+std::size_t size();
+```
+
+]]
+[[Mimic boost::thread] [We can consider a thread tuple as a compound thread, and so we can mimic
+the thread interface. From it we take
+
+```void join();
+void interrupt();
+void detach/detach_all();
+bool interruption_requested() const;
+void timed_join/timed_join_all();
+bool joinable/all_joinable() const;
+```
+
+]]
+
+[[Why can the user not modify the embedded threads directly?] [
+
+The library provides a safe function to get a constant thread reference
+
+```const thread& operator[](std::size_t i) const;```
+
+The problem with providing the non-const variant is that the user could detach the threads.
+
+
+]]
+
+
+[[Joining the first finishing thread of a thread tuple]
+[This functionality has a price. We need to synchronize all the threads transparently, so we need to wrap the user thread functions.
+]]
+]
+
+[/=======================================================]
+[heading Comparing thread, thread_group and thread_tuple]
+[/=======================================================]
+
+
+[table Comparing thread, thread_group and thread_tuple
+[[Feature][thread][thread_group][thread_tuple][Compatible]]
+[[default constructor][[*yes]][[*yes]][[*yes]][[*yes]]]
+[[copiable][no][no][no][[*yes]]]
+[[movable][[*yes]][no][[*yes]][no]]
+[[`joinable()`][[*yes]][no][[*yes]][no*]]
+[[`join()`][[*yes]][no][[*yes]][no*]]
+[[`timed_join()`][[*yes]][no][[*yes]][no*]]
+[[`interruption_requested()`][[*yes]][no][[*yes]][no*]]
+[[`join_all()`][no][[*yes]][[*yes]][no]]
+[[`timed_join_all()`][no][[*yes]][[*yes]][no]]
+[[`interrupt_all()`][no][[*yes]][[*yes]][no]]
+[[`size()`][no][[*yes]][[*yes]][no* *]]
+[[`join_first_then_interrupt()`][no][no][[*yes]][no* * *]]
+[[`timed_join_first_then_interrupt()`][no][no][[*yes]][no* * *]]
+[[`operator[]()`][no][no][[*yes]][no]]
+[[`swap()`][[*yes]][no][[*yes]][no*]]
+]
+
+[variablelist
+
+[[*][thread_group could add these synonym functions]]
+[[* *][thread could add the size function returning 1]]
+[[* * *][thread & thread_group could add these functions]]
+]
+[endsect]
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/reference.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/reference.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,1777 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[/==========================]
+[section:reference Reference]
+[/==========================]
+
+[/==========================================================================================]
+[section:decorator_thread_decoration_file Header `<boost/interthreads/thread_decorator.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+ class thread_decoration;
+ class thread_decorator;
+ void decorate();
+ }
+ }
+
+[/==================================================]
+[section:thread_decoration Class `thread_decoration`]
+[/==========================================================================================]
+
+`thread_decoration` defines a pair of setup/cleanup functions chained to the last constructed decoration, i.e. decorations are chained to each other.
+
+ class thread_decoration {
+ public:
+ template<typename Callable1>
+ thread_decoration(Callable1 setup);
+
+ template<typename Callable1,typename Callable2>
+ thread_decoration(Callable1 setup, Callable2 cleanup);
+
+        ~thread_decoration();
+    };
+
+[section:thread_decoration_class_constructor_setup Constructor with setup]
+[/==========================================================================================]
+
+ template<typename Callable>
+ thread_decoration(Callable func);
+
+[variablelist
+
+[[Requires:] [`Callable` is `CopyConstructible`. Copying `setup` shall have no side effects, and the effect of calling the copy shall
+be equivalent to calling the original. ]]
+
+[[Effects:] [`setup` is copied into storage managed internally by the library, and that copy is invoked by the
+__thread_decorator_decorate__ function.]]
+
+[[Postconditions:] [`*this` refers to a decoration.]]
+
+[[Throws:] [Nothing]]
+
+[[Thread safety:][unsafe]]
+
+]
+
+[note The library does not ensure any ordering of decorations.]
+
+[endsect]
+
+[section Constructor with setup & cleanup]
+[/==========================================================================================]
+
+ template<typename Callable1,typename Callable2>
+ thread_decoration(Callable1 setup, Callable2 cleanup);
+
+
+[variablelist
+
+[[Requires:] [`Callable1` & `Callable2` are `CopyConstructible`. Copying `setup` or `cleanup` shall have no side effects, and the effect of calling the copy shall
+be equivalent to calling the original. ]]
+
+[[Effects:] [`setup` and `cleanup` are copied into storage managed internally by the library, and the `setup` copy is invoked by the
+__thread_decorator_decorate__ function. If successful, the cleanup function is registered with the thread exit handler.]]
+
+[[Postconditions:] [`*this` refers to a decoration.]]
+
+[[Throws:] [Nothing]]
+
+[[Thread safety:][unsafe]]
+
+]
+
+[note The library does not ensure any ordering of the setup decorations nor of the cleanup decorations.]
+
+[endsect]
+[endsect]
+
+[section:thread_decorator Class `thread_decorator`]
+[/==========================================================================================]
+
+`thread_decorator` is a functor wrapping a function with the setup and the cleanup of the chained decorations, which will be called only once per thread:
+the decorations' setups are called before the thread function and the decorations' cleanups at thread exit.
+
+ class thread_decorator {
+ public:
+
+ template <class Callable>
+ explicit thread_decorator(Callable&& f);
+ template <class Callable>
+        explicit thread_decorator(detail::thread_move_t<Callable> f);
+ template<typename Callable>
+ thread_decorator(Callable f,
+            typename disable_if<boost::is_convertible<Callable&,detail::thread_move_t<Callable> >
+ , detail::dummy* >::type=0);
+
+ template <typename Callable, typename A1, typename A2, ...>
+        thread_decorator(Callable f, A1 a1, A2 a2, ...);
+
+ thread_decorator(thread_decorator&& other);
+
+ thread_decorator& operator=(thread_decorator&& other);
+
+ thread_decorator&& move();
+
+ void swap(thread_decorator& x);
+
+ void operator ()();
+
+ };
+
+A functor wrapping the user thread function to ensure that all the decorations are called.
+
+Objects of type `thread_decorator` are movable, so they can be stored in move-aware containers, and returned from functions.
+This allows the details of thread decoration to be wrapped in a function.
+
+    thread_decorator make_decorator();
+
+    void f()
+    {
+        boost::thread some_thread(make_decorator());
+        some_thread.join();
+    }
+
+[note On compilers that support rvalue references, `thread_decorator` provides a proper move constructor and move-assignment operator,
+and therefore meets the C++0x `MoveConstructible` and `MoveAssignable` concepts. With such compilers, `thread_decorator` can therefore
+be used with containers that support those concepts.
+
+For other compilers, move support is provided with a move emulation layer, so containers must explicitly detect that move emulation
+layer. See `<boost/thread/detail/move.hpp>` for details.]
+
+[section:decorator_thread_decoration_decorate_constructor Constructor]
+[/==========================================================================================]
+
+ template <class Callable>
+ thread_decorator(Callable&& func);
+ template<typename Callable>
+ thread_decorator(Callable func);
+
+[variablelist
+
+[[Template parameters:] [`Callable` must be `CopyConstructible`.]]
+
+[[Effects:] [`func` is copied into storage managed internally by the library, and that copy will be invoked by the `operator()` function, after the decorations' setups, when the decorator is used as the Callable of a newly-created
+thread of execution.]]
+
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+
+[[Thread safety:][safe]]
+
+]
+
+[endsect]
+
+[section:decorator_thread_decoration_decorate_constructorn Constructor]
+[/==========================================================================================]
+
+ template <typename Callable, typename A1, typename A2, ...>
+ thread_decorator(Callable func, A1 a1, A2 a2, ...)
+
+[variablelist
+
+[[Template parameters:] [`Callable` must be `CopyConstructible`.]]
+
+[[Effects:] [`func` is copied into storage managed internally by the library, and that copy will be invoked by the `operator()` function, after the decorations' setups, when the decorator is used as the Callable of a newly-created
+thread of execution.]]
+
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+
+[[Thread safety:][safe]]
+
+]
+
+[endsect]
+
+[section:decorator_thread_decoration_decorate_copy_move_constructor Move Constructor]
+[/==========================================================================================]
+
+ thread_decorator(thread_decorator&& other);
+    thread_decorator(detail::thread_move_t<thread_decorator> other);
+
+[variablelist
+
+[[Effects:] [Transfers the function wrapped by `other` into the newly constructed `thread_decorator`.]]
+
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+
+[[Thread safety:][safe]]
+
+]
+
+[endsect]
+
+
+[section:decorator_thread_decoration_decorate_copy_constructor Move Assignment Operator]
+[/==========================================================================================]
+
+ thread_decorator& operator=(thread_decorator&& other);
+    thread_decorator& operator=(detail::thread_move_t<thread_decorator> x);
+
+
+[variablelist
+
+[[Effects:] [Transfers the function wrapped by `other` into `*this`.]]
+
+[[Returns:] [a reference to `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][neutral]]
+
+]
+
+[endsect]
+
+[section:decorator_thread_decoration_conversion Member Function `operator detail::thread_move_t<thread_decorator>()`]
+[/==========================================================================================]
+
+ operator detail::thread_move_t<thread_decorator>();
+
+
+[variablelist
+
+[[Effects:] [helper for move semantics emulation.]]
+
+[[Returns:] [a movable reference to `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][neutral]]
+
+]
+
+[endsect]
+
+[section:decorator_thread_decoration_move Member Function `move()`]
+[/==========================================================================================]
+
+    detail::thread_move_t<thread_decorator> move();
+ thread_decorator&& move();
+
+
+[variablelist
+
+[[Effects:] [Moves `*this` to the caller.]]
+
+[[Returns:] [a movable reference to `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][neutral]]
+
+]
+
+[endsect]
+
+[section:decorator_thread_decoration_swap Member Function `swap()`]
+[/==========================================================================================]
+
+ void swap(thread_decorator& x);
+
+
+[variablelist
+
+[[Effects:] [Exchanges the wrapped functions of `*this` and `x`.]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][neutral]]
+
+]
+
+[endsect]
+
+[section:decorator_thread_decoration_operator_f Member Function `operator()()`]
+[/==========================================================================================]
+
+ void operator()();
+
+
+[variablelist
+
+[[Effects:] [Calls all the declared decorations (as __thread_decorator_decorate__ does) and then invokes the wrapped user function.]]
+
+[[Throws:] [Any exception thrown by the decorations or the user function.]]
+
+[[Thread safety:][unsafe - depends on the decorations' construction/destruction.]]
+
+]
+
+[endsect]
+[endsect]
+
+[section:decorate Non Member Function `decorate()`]
+[/==========================================================================================]
+
+ void decorate();
+
+
+[variablelist
+
+[[Requires:] [All the thread decorations have been constructed before this function is called.]]
+
+[[Effects:] [Calls every declared decoration using the thread_decoration class.
+]]
+
+[[Postconditions:] [All the decorations have been called.]]
+
+[[Throws:] [Any exception thrown by the decorations.]]
+
+[[Thread safety:][unsafe - depends on the decorations' construction/destruction.]]
+
+]
+
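+For illustration, a minimal sketch, assuming `decorate()` may be called directly at the beginning of a
+thread function (as the `thread_decorator` wrapper does); the rest of the body is hypothetical:
+
+    void thread_main() {
+        boost::interthreads::decorate();  // run all the declared decorations
+        // ... thread specific work ...
+    }
+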
+[endsect]
+
+
+[endsect]
+
+
+[section:thread_specific_shared_ptr_reference_Header Header `<boost/interthreads/thread_specific_shared_ptr.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+
+ template <typename T>
+ class thread_specific_shared_ptr;
+
+ }
+ }
+
+
+[section:thread_specific_shared_ptr_reference_thread_specific_shared_ptr Template Class `thread_specific_shared_ptr<>`]
+[/==========================================================================================]
+
+`bith::thread_specific_shared_ptr<>` is an extension of the thread_specific_ptr providing access
+to this thread specific context from other threads.
+
+ template <typename T>
+ class thread_specific_shared_ptr : private noncopyable
+ {
+ public:
+ typedef shared_ptr<T> shared_ptr_type;
+ typedef 'implementation defined' map_type;
+ typedef 'implementation defined' mutex_type;
+ typedef 'implementation defined' lock_type;
+
+ thread_specific_shared_ptr();
+ explicit thread_specific_shared_ptr(void (*cleanup_)(shared_ptr_type));
+ ~thread_specific_shared_ptr();
+
+ T* get() const;
+ T* operator->() const;
+ T& operator*() const;
+ void reset();
+ template<class Y>
+ void reset(Y * p);
+ template<class Y, class D>
+ void reset(Y * p, D d);
+ template<class Y, class D, class A>
+ void reset(Y * p, D d, A a);
+
+ mutex_type& get_mutex();
+ const map_type& get_map(lock_type&) const;
+ shared_ptr_type operator[](thread::id id) const;
+ shared_ptr_type wait_and_get(thread::id id) const;
+ private:
+ shared_ptr_type get_shared_ptr() const;
+ };
+
+[section:thread_specific_shared_ptr_reference_parameters Template parameters]
+[/==========================================================================================]
+
+`thread_specific_shared_ptr<>` is instantiated with the following types:
+
+* T The type of the pointed-to object
+
+[endsect]
+
+[section:thread_specific_shared_ptr_reference_types Public types]
+[/==========================================================================================]
+
+`thread_specific_shared_ptr<>` defines the following types:
+
+* [*`shared_ptr_type`] The shared pointer type.
+* [*`map_type`] The mapping type from `thread::id` to `shared_ptr_type`.
+* [*`mutex_type`] The protecting mutex type, following the Lockable concept.
+* [*`lock_type`] The lock type used to get the map, following the unique_lock concept.
+
+[endsect]
+
+[section:thread_specific_shared_ptr_default_constructor Constructor]
+[/==========================================================================================]
+
+ thread_specific_shared_ptr();
+
+[variablelist
+
+[[Effects:] [Construct a `thread_specific_shared_ptr<>` object for storing a pointer to an object of type `T` specific to each thread.]]
+
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+
+[[Thread safety:][safe.]]
+
+]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_constructor_with_custom_cleanup Cleanup Constructor]
+[/==========================================================================================]
+
+ explicit thread_specific_shared_ptr(void (*cleanup_)(shared_ptr_type));
+
+[variablelist
+
+[[Requires:] [`cleanup_function(this->get())` does not throw any exceptions.]]
+
+[[Effects:] [Construct a `thread_specific_shared_ptr<>` object for storing a pointer to an object of type `T` specific to each thread. The
+supplied `cleanup_function` will be called at thread exit.]]
+
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+
+[[Thread safety:][safe.]]
+
+]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_destructor Destructor]
+[/==========================================================================================]
+
+ ~thread_specific_shared_ptr();
+
+[variablelist
+
+[[Effects:] [Removes the current `thread::id` from the map and destroys `*this`.]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][safe.]]
+
+]
+
+[note Care needs to be taken to ensure that any threads still running after an instance of `boost::thread_specific_shared_ptr<>` has been
+destroyed do not call any member functions on that instance. It is for this reason that instances of this class are usually static.]
+
+[endsect]
+
+
+[section:thread_specific_shared_ptr_get Member Function `get()`]
+[/==========================================================================================]
+
+ shared_ptr_type get() const;
+
+[variablelist
+
+[[Returns:] [The pointer associated with the current thread.]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][safe.]]
+
+]
+
+[note The initial value associated with an instance of `boost::thread_specific_shared_ptr<>` is `NULL` for each thread.]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_operator_arrow Member Function `operator->()`]
+[/==========================================================================================]
+
+ T* operator->() const;
+
+[variablelist
+
+[[Requires:] [`this->get()` is not `NULL`.]]
+
+[[Returns:] [`this->get()`]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][safe.]]
+
+]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_operator_star Member Function `operator*()`]
+[/==========================================================================================]
+
+ T& operator*() const;
+
+[variablelist
+
+[[Requires:] [`this->get()` is not `NULL`.]]
+
+[[Returns:] [`*(this->get())`]]
+
+[[Throws:] [Nothing.]]
+
+[[Thread safety:][safe.]]
+
+]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_reset Member Function `reset()`]
+[/==========================================================================================]
+
+ void reset();
+
+[variablelist
+[[Effects:] [Equivalent to `shared_ptr().swap(this->get_shared_ptr())`. Update the mapping.]]
+[[Postcondition:] [`this->get()==0`]]
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+[[Thread safety:][safe.]]
+
+]
+
+ template<class Y> void reset(Y * new_value);
+
+[variablelist
+[[Effects:] [Equivalent to `shared_ptr(new_value).swap(this->get_shared_ptr())`. Update the mapping.]]
+[[Postcondition:] [`this->get()==new_value`]]
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+[[Thread safety:][safe.]]
+]
+
+ template<class Y, class D> void reset(Y * new_value, D deleter);
+
+[variablelist
+[[Effects:] [Equivalent to `shared_ptr(new_value, deleter).swap(this->get_shared_ptr())`. Update the mapping.]]
+[[Postcondition:] [`this->get()==new_value`]]
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+[[Thread safety:][safe.]]
+]
+
+ template<class Y, class D, class A> void reset(Y * new_value, D deleter, A a);
+
+[variablelist
+[[Effects:] [Equivalent to `shared_ptr(new_value, deleter, a).swap(this->get_shared_ptr())`. Update the mapping.]]
+[[Postcondition:] [`this->get()==new_value`]]
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+[[Thread safety:][safe.]]
+]
+
+[variablelist
+
+[[Effects:] [If `this->get()!=new_value` and `this->get()` is non-`NULL`, the previously stored shared pointer is released, so the previous object is destroyed
+(via `delete` or its associated deleter) when its use count drops to zero. `new_value` is stored as the pointer associated with the current thread.]]
+
+[[Throws:] [`std::bad_alloc` when resources unavailable.]]
+
+]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_wait_and_get Member Function `wait_and_get()`]
+[/==========================================================================================]
+
+ shared_ptr_type wait_and_get(thread::id id) const;
+
+[variablelist
+
+[[Effects:] [Waits until the specific shared pointer has been set and returns a shared pointer to this context.]]
+
+[[Throws:] [`boost::thread_interrupted` if the current thread of execution is interrupted.]]
+
+]
+
+[endsect]
+
+[section:thread_specific_shared_ptr_operatora Member Function `operator[]()`]
+[/==========================================================================================]
+
+ shared_ptr_type operator[](thread::id id) const;
+
+[variablelist
+
+[[Effects:] [Returns a copy of the specific shared_ptr of the thread of execution identified by the `thread::id`.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+
+[section:get_mutex Member Function `get_mutex()`]
+[/==========================================================================================]
+
+ mutex_type& get_mutex();
+
+[variablelist
+
+[[Effects:] [Returns a reference to the protection mutex.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[section:get_map Member Function `get_map()`]
+[/==========================================================================================]
+
+ const map_type& get_map(lock_type&) const;
+
+[variablelist
+
+[[Effects:] [Returns a reference to the mapping from `thread::id` to the specific pointers, provided the user holds a lock on the mutex obtained with `get_mutex()`.]]
+
+[[Throws:] [Nothing.]]
+
+]
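+
+For illustration, a minimal sketch of the external-locking protocol described above, assuming `lock_type`
+can be constructed from the mutex like a `unique_lock` (the `context` type and `tssp` instance are hypothetical):
+
+    typedef boost::interthreads::thread_specific_shared_ptr<context> tssp_type;
+
+    tssp_type::lock_type lock(tssp.get_mutex());
+    const tssp_type::map_type& m = tssp.get_map(lock);
+    // iterate over the thread::id -> shared_ptr mapping while the lock is held
+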
+[endsect]
+
+
+
+
+[endsect]
+
+[endsect]
+
+[section:keep_alive_file Header `<boost/interthreads/thread_keep_alive.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+ namespace this_thread {
+ class enable_keep_alive;
+ class disable_keep_alive;
+
+ void keep_alive_check_point();
+ bool keep_alive_enabled();
+ typedef void (*on_dead_thread_type)(thread::id);
+ void set_on_dead_thread(on_dead_thread_type fct);
+ }
+ }
+ }
+
+[section:keep_alive_enable_keep_alive class `enable_keep_alive`]
+[/==========================================================================================]
+
+ class enable_keep_alive : private noncopyable{
+ public:
+ enable_keep_alive(std::size_t threshold=2, std::size_t tap=1);
+ ~enable_keep_alive();
+ };
+
+[section:keep_alive_enable_keep_alive_Constructor Constructor]
+[/==========================================================================================]
+
+ enable_keep_alive(std::size_t threshold=2, std::size_t tap=1);
+
+[variablelist
+
+[[Effects:] [Enable the keep alive mechanism on this thread of execution.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:keep_alive_enable_keep_alive_Destructor Destructor]
+[/==========================================================================================]
+
+ ~enable_keep_alive();
+
+[variablelist
+
+[[Effects:] [Restore the keep alive mechanism as it was before the constructor.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+[endsect]
+
+[section:keep_alive_disable_keep_alive class `disable_keep_alive`]
+[/==========================================================================================]
+
+    class disable_keep_alive : private noncopyable {
+ public:
+ disable_keep_alive();
+ ~disable_keep_alive();
+ };
+
+[section:keep_alive_disable_keep_alive_Constructor Constructor]
+[/==========================================================================================]
+
+ disable_keep_alive();
+
+[variablelist
+
+[[Effects:] [Disable the keep alive mechanism on this thread of execution.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:keep_alive_disable_keep_alive_Destructor Destructor]
+[/==========================================================================================]
+
+ ~disable_keep_alive();
+
+[variablelist
+
+[[Effects:] [Restore the keep alive mechanism as it was before the constructor.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+[endsect]
+
+[section:keep_alive_keep_alive_check_point Non Member Function `keep_alive_check_point()`]
+[/==========================================================================================]
+
+ void keep_alive_check_point();
+
+[variablelist
+
+[[Effects:] [States that the current thread is alive.]]
+[[Postconditions:] [The thread is alive.]]
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:keep_alive_keep_alive_enabled Non Member Function `keep_alive_enabled()`]
+[/==========================================================================================]
+
+ bool keep_alive_enabled();
+
+[variablelist
+
+[[Effects:] [States if the keep alive mechanism is enabled on this thread.]]
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:keep_alive_keep_alive_set_on_dead_thread Non Member Function `set_on_dead_thread()`]
+[/==========================================================================================]
+
+ void set_on_dead_thread(on_dead_thread_type fct);
+
+[variablelist
+
+[[Effects:] [Modifies the action to be done when this thread is declared dead.]]
+
+[[Throws:] [Nothing]]
+
+]
+
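+For illustration, a minimal sketch (the handler itself is hypothetical; its signature follows `on_dead_thread_type`):
+
+    void on_dead(boost::thread::id id) {
+        // log the dead thread instead of aborting the process
+    }
+    // ...
+    boost::interthreads::this_thread::set_on_dead_thread(on_dead);
+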
+[endsect]
+
+[endsect]
+
+[section:thread_tuple_thread_tuple_hpp Header `<boost/interthreads/thread_tuple.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+
+ template <std::size_t N>
+ class thread_tuple;
+
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple<n> make_thread_tuple(F0 f0, ..., Fn fn-1);
+
+ }
+ }
+
+
+[section:thread_tuple_class Template Class `thread_tuple<>`]
+[/==========================================================================================]
+
+`thread_tuple<>` defines a thread group where the number of threads is known statically and the threads are
+created at construction time.
+
+ template <std::size_t n>
+ class thread_tuple {
+ public:
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple(F0 f0, ..., Fn-1 fn-1);
+
+ template <class F>
+ thread_tuple(boost::move_t<F> f);
+ ~thread_tuple();
+
+ // move support
+ thread_tuple(boost::move_t<thread_tuple<n>> x);
+ thread_tuple& operator=(boost::move_t<thread_tuple<n>> x);
+ operator boost::move_t<thread_tuple<n>>();
+ boost::move_t<thread_tuple<n>> move();
+
+ void swap(thread_tuple<n>& x);
+
+ bool joinable() const;
+ void join();
+ void join_all();
+ bool timed_join(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join(TimeDuration const& rel_time);
+ bool timed_join_all(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join_all(TimeDuration const& rel_time);
+
+ void detach();
+ void detach_all();
+
+ void interrupt();
+ void interrupt_all();
+ bool interruption_requested() const;
+
+ size_t size();
+
+ const thread& operator[](std::size_t i);
+    };
+
+
+The __thread_tuple__ class is responsible for launching and managing a static collection of threads that are related in some fashion.
+No new threads can be added to the tuple once constructed.
+
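+For illustration, a minimal usage sketch (the thread functions `f1` and `f2` are hypothetical):
+
+    void f1();
+    void f2();
+
+    boost::interthreads::thread_tuple<2> tt(f1, f2);
+    tt.join_all();  // wait for both threads to finish
+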
+[section Template parameters]
+[/==========================================================================================]
+
+`thread_tuple<>` is instantiated with the following value:
+
+* n is the size of the tuple.
+
+[endsect]
+
+[section:thread_tuple_callable_constructor Constructor]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple(F0 func_0, ..., Fn-1 func_n-1);
+
+[variablelist
+
+[[Preconditions:] [`Fk` must be copyable.]]
+
+[[Effects:] [`func_k` is copied into storage managed internally by the library, and that copy is invoked on a newly-created
+thread of execution. If this invocation results in an exception being propagated into the internals of the library that is
+not of type __thread_interrupted__, then `std::terminate()` will be called.]]
+
+[[Postconditions:] [`*this` refers to the newly created tuple of threads of execution.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+[[Note:] [Currently up to ten arguments `func_0` to `func_9` can be specified.]]
+
+]
+
+[endsect]
+
+
+[section:thread_tuple_destructor Destructor]
+[/==========================================================================================]
+
+ ~thread_tuple();
+
+[variablelist
+
+[[Effects:] [If `*this` has associated threads of execution, calls `detach()` on them. Destroys `*this`.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_joinable Member function `joinable()`]
+[/==========================================================================================]
+
+ bool joinable() const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this` refers to threads of execution, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_join Member function `join()|join_all()`]
+[/==========================================================================================]
+
+ void join();
+ void join_all();
+
+[variablelist
+
+[[Effects:] [Call `join()` on each __thread__ object in the tuple.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::join` is one of the predefined interruption points, `thread_tuple<>::join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_timed_join Member function `timed_join()|timed_join_all()`]
+[/==========================================================================================]
+
+ bool timed_join(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join(TimeDuration const& rel_time);
+
+ bool timed_join_all(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join_all(TimeDuration const& rel_time);
+
+[variablelist
+
+[[Effects:] [Call `timed_join()` on each __thread__ object in the tuple.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Returns:] [true if joined.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::timed_join` is one of the predefined interruption points, `thread_tuple<>::timed_join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:join_first_then_interrupt Member function `join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ std::size_t join_first_then_interrupt();
+
+[variablelist
+
+[[Effects:] [Calls `join_first()` and then `interrupt_all()`.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::join` is one of the predefined interruption points, `thread_tuple<>::join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:timed_join_first_then_interrupt Member function `timed_join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ const system_time& wait_until);
+ template<typename TimeDuration>
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ TimeDuration const& rel_time);
+
+[variablelist
+
+[[Effects:] [Calls `timed_join_first()` and then `interrupt_all()`.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Returns:] [true if joined.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::timed_join` is one of the predefined interruption points, `thread_tuple<>::timed_join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:detach_all Member function `detach|detach_all()`]
+[/==========================================================================================]
+
+ void detach();
+ void detach_all();
+
+[variablelist
+
+[[Effects:] [Call `detach()` on each __thread__ object in the tuple.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_interrupt_all Member function `interrupt()|interrupt_all()`]
+[/==========================================================================================]
+
+ void interrupt();
+ void interrupt_all();
+
+[variablelist
+
+[[Effects:] [Call `thread::interrupt()` on each __thread__ object in the tuple.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_size Member function `size()`]
+[/==========================================================================================]
+
+    size_t size();
+
+[variablelist
+
+[[Returns:] [The number of threads in the tuple.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:thread_tuple_make_thread_tuple Non Member Function `make_thread_tuple()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple make_thread_tuple(F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [makes a new thread_tuple<>.]]
+[[Returns:] [the created thread tuple.]]
+
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:set_once_hpp Header `<boost/interthreads/set_once.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+ template <typename T>
+ class set_once;
+ }
+ }
+
+
+[section:set_onceclass Template Class `set_once<>`]
+[/==========================================================================================]
+
+`set_once<>` is a synchronizer that allows a variable to be set only once, notifying
+the value to whoever is waiting for it.
+
+
+ template <typename T>
+ class set_once {
+ public:
+ typedef T value_type;
+
+ set_once();
+ void wait();
+ value_type get();
+
+ std::pair<bool,value_type> timed_get(const system_time& wait_until);
+
+ bool set(value_type id);
+
+ template<typename F>
+ static void decorator(this_type& once, T value, F fct);
+ template<typename F>
+ static boost::detail::thread_move_t<thread> make_thread(this_type& once, T value, F fct);
+ };
+
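+For illustration, a minimal usage sketch based on the synopsis above (the contender logic is hypothetical;
+`wait()` is called before `get()` since the blocking behaviour of `get()` is not detailed here):
+
+    boost::interthreads::set_once<int> winner;
+
+    void contender(int id) {
+        // ... do some work ...
+        winner.set(id);  // only the first call stores a value
+    }
+
+    int first_to_finish() {
+        winner.wait();        // block until some contender has called set()
+        return winner.get();  // read the stored value
+    }
+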
+[endsect]
+
+[endsect]
+
+[section:thread_tuple_once_hpp Header `<boost/interthreads/thread_tuple_once.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+
+ template <std::size_t N>
+ class thread_tuple_once;
+
+ template<typename F0, ..., typename Fn-1>
+        thread_tuple_once<n> make_thread_tuple_once(F0 f0, ..., Fn fn-1);
+
+ }
+ }
+
+
+[section:thread_tuple_once_class Template Class `thread_tuple_once<>`]
+[/==========================================================================================]
+
+`bith::thread_tuple_once` is an extension of `bith::thread_tuple` which allows joining the first thread that
+finishes, using the `bith::set_once` synchronizer for that.
+
+ template <std::size_t n>
+ class thread_tuple_once {
+ public:
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple_once(F0 f0, ..., Fn-1 fn-1);
+
+ template <class F>
+ thread_tuple_once(boost::move_t<F> f);
+ ~thread_tuple_once();
+
+ // move support
+ thread_tuple_once(boost::move_t<thread_tuple_once<n>> x);
+ thread_tuple_once& operator=(boost::move_t<thread_tuple_once<n>> x);
+ operator boost::move_t<thread_tuple_once<n>>();
+ boost::move_t<thread_tuple_once<n>> move();
+
+ void swap(thread_tuple_once<n>& x);
+
+ bool joinable() const;
+ void join();
+ void join_all();
+ bool timed_join(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join(TimeDuration const& rel_time);
+ bool timed_join_all(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join_all(TimeDuration const& rel_time);
+
+ std::size_t join_first_then_interrupt();
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ const system_time& wait_until);
+ template<typename TimeDuration>
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ TimeDuration const& rel_time);
+
+ void detach();
+ void detach_all();
+
+ void interrupt();
+ void interrupt_all();
+ bool interruption_requested() const;
+
+ size_t size();
+
+ const thread& operator[](std::size_t i);
+    };
+
+
+The __thread_tuple_once__ class is responsible for launching and managing a static collection of threads that are related in some fashion.
+No new threads can be added to the tuple once constructed.
+
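+For illustration, a minimal usage sketch (the thread functions `f1` and `f2` are hypothetical):
+
+    boost::interthreads::thread_tuple_once<2> tt(f1, f2);
+    std::size_t first = tt.join_first_then_interrupt();  // index of the first thread that finished
+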
+[section Template parameters]
+[/==========================================================================================]
+
+`thread_tuple_once<>` is instantiated with the following value:
+
+* n is the size of the tuple.
+
+[endsect]
+
+[section:thread_tuple_once_callable_constructor Constructor]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple_once(F0 func_0, ..., Fn-1 func_n-1);
+
+[variablelist
+
+[[Preconditions:] [`Fk` must be copyable.]]
+
+[[Effects:] [`func_k` is copied into storage managed internally by the library, and that copy is invoked on a newly-created
+thread of execution. If this invocation results in an exception being propagated into the internals of the library that is
+not of type __thread_interrupted__, then `std::terminate()` will be called.]]
+
+[[Postconditions:] [`*this` refers to the newly created tuple of threads of execution.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+[[Note:] [Currently up to ten arguments `func_0` to `func_9` can be specified.]]
+
+]
+
+[endsect]
+
+
+[section:thread_tuple_once_destructor Destructor]
+[/==========================================================================================]
+
+ ~thread_tuple_once();
+
+[variablelist
+
+[[Effects:] [If `*this` has associated threads of execution, calls `detach()` on them. Destroys `*this`.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_once_joinable Member function `joinable()`]
+[/==========================================================================================]
+
+ bool joinable() const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this` refers to threads of execution, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_once_join Member function `join()|join_all()`]
+[/==========================================================================================]
+
+ void join();
+ void join_all();
+
+[variablelist
+
+[[Effects:] [Call `join()` on each __thread__ object in the tuple.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::join` is one of the predefined interruption points, `thread_tuple_once<>::join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_once_timed_join Member function `timed_join()|timed_join_all()`]
+[/==========================================================================================]
+
+ bool timed_join(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join(TimeDuration const& rel_time);
+
+ bool timed_join_all(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join_all(TimeDuration const& rel_time);
+
+[variablelist
+
+[[Effects:] [Call `timed_join()` on each __thread__ object in the tuple.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Returns:] [true if joined.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::timed_join` is one of the predefined interruption points, `thread_tuple_once<>::timed_join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:join_first_then_interrupt Member function `join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ std::size_t join_first_then_interrupt();
+
+[variablelist
+
+[[Effects:] [Calls `join_first()` and then `interrupt_all()`.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::join` is one of the predefined interruption points, `thread_tuple_once<>::join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:timed_join_first_then_interrupt Member function `timed_join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ const system_time& wait_until);
+ template<typename TimeDuration>
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ TimeDuration const& rel_time);
+
+[variablelist
+
+[[Effects:] [Calls `timed_join_first()` and then `interrupt_all()`.]]
+
+[[Postcondition:] [Every thread in the tuple has terminated.]]
+
+[[Returns:] [true if joined.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::timed_join` is one of the predefined interruption points, `thread_tuple_once<>::timed_join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:detach_all Member function `detach|detach_all()`]
+[/==========================================================================================]
+
+ void detach();
+ void detach_all();
+
+[variablelist
+
+[[Effects:] [Call `detach()` on each __thread__ object in the tuple.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_once_interrupt_all Member function `interrupt()|interrupt_all()`]
+[/==========================================================================================]
+
+ void interrupt();
+ void interrupt_all();
+
+[variablelist
+
+[[Effects:] [Call `thread::interrupt()` on each __thread__ object in the tuple.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_once_size Member function `size()`]
+[/==========================================================================================]
+
+    size_t size();
+
+[variablelist
+
+[[Returns:] [The number of threads in the tuple.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:thread_tuple_once_make_thread_tuple_once Non Member Function `make_thread_tuple_once()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ thread_tuple_once make_thread_tuple_once(F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [makes a new thread_tuple_once<>.]]
+[[Returns:] [the created thread tuple.]]
+
+
+]
+
+[endsect]
+
+[endsect]
+
+
+[section:thread_and_join_hpp Header `<boost/interthreads/thread_and_join.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+
+ template<typename F0, ..., typename Fn-1>
+ void thread_and_join_all(F0 f0, ..., Fn fn-1);
+ template<typename F0, ..., typename Fn-1>
+ bool thread_and_timed_join_all(const system_time& wait_until, F0 f0, ..., Fn fn-1);
+ template<typename TimeDuration, typename F0, ..., typename Fn-1>
+ bool thread_and_timed_join_all(TimeDuration wait_for, F0 f0, ..., Fn fn-1);
+
+ template<typename F0, ..., typename Fn-1>
+ std::size_t thread_and_join_first_then_interrupt(F0 f0, ..., Fn fn-1);
+ template<typename F0, ..., typename Fn-1>
+ std::pair<bool,std::size_t> thread_and_timed_join_first_then_interrupt(
+ const system_time& wait_until, F0 f0, ..., Fn fn-1);
+ template<typename TimeDuration, typename F0, ..., typename Fn-1>
+ std::pair<bool,std::size_t> thread_and_timed_join_first_then_interrupt(
+ TimeDuration wait_for, F0 f0, ..., Fn fn-1);
+ }
+ }
+
+
+
+[section:thread_tuple_join_all Non Member Function `thread_join_all()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ void thread_join_all(F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [Launches each function on a thread of execution and joins them all.]]
+
+]
+
+[endsect]
+
+[section:thread_tuple_timed_join_all Non Member Function `thread_timed_join_all()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ bool thread_timed_join_all(
+ const system_time& wait_until, F0 f0, ..., Fn fn-1);
+ template<typename TimeDuration, typename F0, ..., typename Fn-1>
+ bool thread_timed_join_all(
+ TimeDuration wait_for, F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [Launches each function on a thread of execution and joins them all, waiting until the given time or for the given duration; if the wait expires, all the threads are interrupted.]]
+[[Returns:] [true if joined.]]
+
+]
+
+[endsect]
+
+
+[section:thread_tuple_join_first_then_interrupt Non Member Function `thread_join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ std::size_t thread_join_first_then_interrupt(F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [Launches each function on a thread of execution, joins the first one that finishes and then interrupts the others.]]
+[[Returns:] [the index in the tuple of the first thread joined.]]
+
+]
+
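+For illustration, a minimal sketch based on the prototype above (the thread functions are hypothetical):
+
+    std::size_t first = boost::interthreads::thread_join_first_then_interrupt(f1, f2);
+    // `first` is the index of the function whose thread finished first
+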
+[endsect]
+
+[section:thread_tuple_timed_join_first_then_interruptl Non Member Function `thread_timed_join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ std::pair<bool,std::size_t> thread_timed_join_first_then_interrupt(
+ const system_time& wait_until, F0 f0, ..., Fn fn-1);
+ template<typename TimeDuration, typename F0, ..., typename Fn-1>
+ std::pair<bool,std::size_t> thread_timed_join_first_then_interrupt(
+ TimeDuration wait_for, F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [Launches each function on a separate thread of execution, joins the first thread that completes before the given time and then interrupts the others; if the timeout expires, interrupts all the threads.]]
+[[Returns:] [A pair consisting of a boolean stating whether a thread has been joined before the given time, and the index in the tuple of the first thread joined.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:thread_group_once_hpp Header `<boost/interthreads/thread_group_once.hpp>`]
+[/==========================================================================================]
+
+ namespace boost {
+ namespace interthreads {
+
+        template <std::size_t N>
+        class thread_group_once;
+
+ }
+ }
+
+
+[section:thread_group_once_class Template Class `thread_group_once<>`]
+[/==========================================================================================]
+
+`thread_group_once<>` is an extension of `boost::thread_group` that makes it possible to join the first thread that
+finishes, using the `set_once` synchronizer for that purpose.
+
+ template <std::size_t n>
+ class thread_group_once {
+ public:
+ thread_group_once();
+ ~thread_group_once();
+
+ template<typename F>
+ thread* create_thread(F threadfunc);
+ void remove_thread(thread* thrd);
+
+ // move support
+ thread_group_once(boost::move_t<thread_group_once<n>> x);
+ thread_group_once& operator=(boost::move_t<thread_group_once<n>> x);
+ operator boost::move_t<thread_group_once<n>>();
+ boost::move_t<thread_group_once<n>> move();
+
+ void swap(thread_group_once<n>& x);
+
+ // bool joinable() const;
+ void join();
+ void join_all();
+ // bool timed_join(const system_time& wait_until);
+ // template<typename TimeDuration>
+ // bool timed_join(TimeDuration const& rel_time);
+ // bool timed_join_all(const system_time& wait_until);
+ // template<typename TimeDuration>
+ // bool timed_join_all(TimeDuration const& rel_time);
+
+ std::size_t join_first_then_interrupt();
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ const system_time& wait_until);
+ template<typename TimeDuration>
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ TimeDuration const& rel_time);
+
+ // void detach();
+ // void detach_all();
+
+ void interrupt();
+ void interrupt_all();
+ // bool interruption_requested() const;
+
+ size_t size();
+
+ const thread& operator[](std::size_t i);
+ };
+
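+A brief usage sketch based on the synopsis above (the fixed size `2` and the `task1`/`task2` callables are illustrative assumptions):
+
+    boost::interthreads::thread_group_once<2> grp;
+    grp.create_thread(task1);
+    grp.create_thread(task2);
+
+    // wait for the first thread that finishes and interrupt the other one
+    std::size_t first = grp.join_first_then_interrupt();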
+
+[section:thread_group_once_callable_constructor Constructor]
+[/==========================================================================================]
+
+ thread_group_once();
+
+[variablelist
+
+
+[[Effects:] [creates a thread group.]]
+
+[[Postconditions:] [`*this` refers to the newly created group of threads of execution.]]
+
+[[Throws:] [__thread_resource_error__ if an error occurs.]]
+
+]
+
+[endsect]
+
+
+[section:thread_group_once_destructor Destructor]
+[/==========================================================================================]
+
+ ~thread_group_once();
+
+[variablelist
+
+[[Effects:] [If `*this` has associated threads of execution, calls `detach()` on them. Destroys `*this`.]]
+
+]
+
+[endsect]
+
+[section:thread_group_once_joinable Member function `joinable()`]
+[/==========================================================================================]
+
+ bool joinable() const;
+
+[variablelist
+
+[[Returns:] [`true` if `*this` refers to threads of execution, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+]
+
+[endsect]
+
+[section:thread_group_once_join Member function `join()|join_all()`]
+[/==========================================================================================]
+
+ void join();
+ void join_all();
+
+[variablelist
+
+[[Effects:] [Call `join()` on each __thread__ object in the group.]]
+
+[[Postcondition:] [Every thread in the group has terminated.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::join` is one of the predefined interruption points, `thread_group_once<>::join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:thread_group_once_timed_join Member function `timed_join()|timed_join_all()`]
+[/==========================================================================================]
+
+ bool timed_join(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join(TimeDuration const& rel_time);
+
+ bool timed_join_all(const system_time& wait_until);
+ template<typename TimeDuration>
+ bool timed_join_all(TimeDuration const& rel_time);
+
+[variablelist
+
+[[Effects:] [Call `timed_join()` on each __thread__ object in the group.]]
+
+[[Postcondition:] [If successful, every thread in the group has terminated.]]
+
+[[Returns:] [`true` if all the threads have been joined before the specified time elapsed, `false` otherwise.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::timed_join` is one of the predefined interruption points, `thread_group_once<>::timed_join_all()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:join_first_then_interrupt Member function `join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ std::size_t join_first_then_interrupt();
+
+[variablelist
+
+[[Effects:] [Call `join_first()` and then `interrupt_all()`.]]
+
+[[Postcondition:] [Every thread in the group has terminated.]]
+
+[[Returns:] [The index in the group of the first thread joined.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::join` is one of the predefined interruption points, `thread_group_once<>::join_first_then_interrupt()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:timed_join_first_then_interrupt Member function `timed_join_first_then_interrupt()`]
+[/==========================================================================================]
+
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ const system_time& wait_until);
+ template<typename TimeDuration>
+ std::pair<bool,std::size_t> timed_join_first_then_interrupt(
+ TimeDuration const& rel_time);
+
+[variablelist
+
+[[Effects:] [Call `timed_join_first()` and then `interrupt_all()`.]]
+
+[[Postcondition:] [Every thread in the group has terminated.]]
+
+[[Returns:] [A pair consisting of a boolean stating whether a thread has been joined before the given time, and the index in the group of the first thread joined.]]
+
+[[Throws:] [Nothing]]
+
+[[Note:] [Since `boost::thread::timed_join` is one of the predefined interruption points, `thread_group_once<>::timed_join_first_then_interrupt()` is also an interruption point.]]
+
+]
+
+[endsect]
+
+[section:detach_all Member function `detach|detach_all()`]
+[/==========================================================================================]
+
+ void detach();
+ void detach_all();
+
+[variablelist
+
+[[Effects:] [Call `detach()` on each __thread__ object in the group.]]
+
+]
+
+[endsect]
+
+[section:thread_group_once_interrupt_all Member function `interrupt()|interrupt_all()`]
+[/==========================================================================================]
+
+ void interrupt();
+ void interrupt_all();
+
+[variablelist
+
+[[Effects:] [Call `thread::interrupt()` on each __thread__ object in the group.]]
+
+]
+
+[endsect]
+
+[section:thread_group_once_size Member function `size()`]
+[/==========================================================================================]
+
+    std::size_t size();
+
+[variablelist
+
+[[Returns:] [The number of threads in the group.]]
+
+[[Throws:] [Nothing.]]
+
+]
+
+[endsect]
+
+[endsect]
+
+[section:thread_group_once_make_thread_group_once Non Member Function `make_thread_group_once()`]
+[/==========================================================================================]
+
+ template<typename F0, ..., typename Fn-1>
+ thread_group_once make_thread_group_once(F0 f0, ..., Fn fn-1);
+
+[variablelist
+
+[[Effects:] [Makes a new `thread_group_once<>` from the given callables.]]
+[[Returns:] [The created thread group.]]
+
+
+]
+
+[endsect]
+
+[endsect]
+
+
+
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/tutorial.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/tutorial.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,632 @@
+[/
+ (C) Copyright 2008 Vicente J Botet Escriba.
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[section:tutorial Tutorial]
+[/========================]
+
+[section Thread Decorator]
+[/==========================================================================================]
+
+[section Declaring a decoration]
+Objects of type __thread_decoration__ are usually static and initialized with a `Callable` object:
+
+    void setup();
+
+    static boost::interthreads::thread_decoration d(setup);
+
+These decorations will be called either when we request it explicitly at the initialization of the
+thread (this is needed for the main thread) or when we create a thread using the decorator wrapper. This is explained in more detail
+in the next sections.
+
+[endsect]
+
+[section Creating threads with decorators]
+When we want the decorations to be applied to a thread of execution, we can create the thread using the decorator wrapper.
+
+ boost::thread th(boost::interthreads::thread_decorator(fct));
+
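+Putting the two previous snippets together, a minimal sketch could look as follows (the `setup` and `fct`
+names are purely illustrative):
+
+    void setup();                                            // runs once at the start of each decorated thread
+    static boost::interthreads::thread_decoration d(setup);
+
+    void fct();                                              // the real work of the thread
+
+    int main() {
+        boost::interthreads::decorate();                     // run the decorations on the main thread too
+        boost::thread th(boost::interthreads::thread_decorator(fct));
+        th.join();
+        return 0;
+    }
+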
+[endsect]
+
+[section:decorator_explicit_call Calling the decoration explicitly on a thread]
+For threads that are not created using the `boost::thread` class, as is the case for the main thread, we need to call the function
+__thread_decoration_decorate__ explicitly at the beginning of the thread.
+
+ int main() {
+ boost::interthreads::decorate();
+ // ...
+ }
+
+[endsect]
+
+[endsect]
+
+[section Thread Specific Shared Pointer]
+[/==========================================================================================]
+
+
+[section Key initialization]
+[/==========================================================================================]
+
+As the current implementation uses the address of the `thread_specific_shared_ptr<>` object as the key, there is no need to do anything to obtain it.
+
+ bith::thread_specific_shared_ptr<myclass> ptr;
+
+[endsect]
+
+[section Context initialization]
+[/==========================================================================================]
+
+Initially the pointer has a value of `NULL` in each thread, but the value for the
+current thread can be set using the `reset()` member functions.
+
+If the value of the pointer for the current thread is changed using `reset()`, then the previous value is destroyed by calling the
+deleter routine. Alternatively, the stored value can be reset to `NULL` and the prior value returned by calling the `release()`
+member function, allowing the application to take back responsibility for destroying the object.
+
+Initialization can be done
+
+* explicitly on the current thread. Basically it works like thread local storage from inside the thread.
+
+ bith::thread_specific_shared_ptr<myclass> ptr;
+
+ { // current thread
+ // ...
+ ptr.reset(p);
+ // ...
+ }
+
+* when we associate a thread decoration with the `thread_specific_shared_ptr<>`, we can run all the decorations either by calling
+`decorate` or by creating the thread with the function wrapped by the `thread_decorator` functor.
+
+    void myclass_init() {
+        ptr.reset(new myclass(/* any specific parameters */));
+    }
+    bith::thread_decoration myclass_decoration(myclass_init);
+
+[endsect]
+
+[section Obtain the pointer to the thread-specific object on the current thread]
+[/==========================================================================================]
+
+All the functions known from `boost::thread_specific_ptr` are available with the same semantics when used from inside the thread.
+The value for the current thread can be obtained using the `get()` member function, or by using
+the `*` and `->` pointer dereference operators.
+
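+A short sketch of current-thread access (`myclass` and its `foo()` member are the illustrative names used elsewhere in
+this tutorial; `get()` is assumed to return a shared pointer, as `wait_and_get()` does):
+
+    bith::thread_specific_shared_ptr<myclass> ptr;
+
+    void inside_the_thread() {
+        ptr.reset(new myclass);                      // set the context for this thread
+        ptr->foo();                                  // dereference, as with thread_specific_ptr
+        boost::shared_ptr<myclass> shp = ptr.get();  // assumed to return a shared_ptr
+    }
+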
+[endsect]
+
+[section Waiting for the setting of the thread-specific object from another thread]
+[/==========================================================================================]
+
+The setting of a thread-specific context must be done from within the owning thread. When accessing the thread-specific shared context of another thread, it can still be NULL.
+The library provides a `wait_and_get()` function that allows a thread to synchronize with the setting done by another thread.
+
+ shared_ptr<myclass> shp = ptr.wait_and_get(th->get_id());
+
+where `th` is a `boost::thread*`.
+
+[endsect]
+
+[section Obtain the pointer to the thread-specific object of another thread]
+[/==========================================================================================]
+
+Besides this, another thread can get access to the data, provided it knows the `thread::id`, by:
+
+ shared_ptr<myclass> shp = ptr[th->get_id()]->foo();
+
+where `th` is a `boost::thread*` and `foo()` is a function of `myclass`.
+
+The lifetime of the `myclass` instance is managed by a `shared_ptr`. One reference is held by the thread (by means of a TSS), a second is
+held by the `thread::id` to `shared_ptr<T>` map, and additional references might be held by other threads that obtained it through `ptr[th->get_id()]`.
+
+[endsect]
+
+[section Iterating through all the thread specific context]
+[/==========================================================================================]
+
+Another use case appears when some global controller needs to access the thread-specific data of all the threads. Several approaches are
+possible here; the library has chosen to provide a map getter together with an external locking mechanism that ensures the map is
+locked during the query.
+
+    {
+        bith::thread_specific_shared_ptr<myclass>::lock_type lock(ptr.get_mutex());
+        const bith::thread_specific_shared_ptr<myclass>::map_type& amap = ptr.get_map(lock);
+        // use the map while the lock is held
+    }
+
+
+[endsect]
+
+
+[section Deleting the context]
+[/==========================================================================================]
+
+When a thread exits, the object associated with each `boost::thread_specific_shared_ptr<>` instance is not immediately destroyed, due to
+its shared nature. It is detached from the current thread and removed from the map, and is destroyed only when there are no more
+references to the shared pointer. By default, the object pointed to by a pointer `p` is destroyed by invoking `delete p`, but this can
+be overridden for a specific instance of `boost::thread_specific_shared_ptr<>` by providing a deleter routine to the constructor. In
+this case, the object is destroyed by invoking `deleter(p)`, where `deleter` is the deleter routine supplied to the constructor. The
+deleter function is called only when there are no more references to the shared pointer.
+
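+A sketch of supplying a deleter (assuming the constructor takes a cleanup function of the same shape as the one of
+`boost::thread_specific_ptr`, i.e. `void (*)(myclass*)`):
+
+    void myclass_deleter(myclass* p) {
+        // release whatever resources myclass holds, then free it
+        delete p;
+    }
+
+    bith::thread_specific_shared_ptr<myclass> ptr(myclass_deleter);
+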
+[endsect]
+
+[section Cleanup at thread exit]
+[/==========================================================================================]
+
+When a thread exits, the object associated with each `thread_specific_shared_ptr<>` instance is not immediately destroyed, due to its
+shared nature; it is destroyed only when there are no more references to the shared pointer. By default, the object pointed to by a
+pointer `p` is destroyed by invoking `delete p`, but this can be overridden for a specific instance of
+`boost::thread_specific_shared_ptr<>` by providing a cleanup routine `func` to the constructor. In that case, the object is destroyed by
+invoking `func(p)`.
+
+The cleanup functions are called in an unspecified order. If a cleanup routine sets the value associated with an instance of
+`boost::thread_specific_shared_ptr<>` that has already been cleaned up, that value is added to the cleanup list. Cleanup finishes when
+there are no outstanding instances of `boost::thread_specific_shared_ptr<>` with values.
+
+[endsect]
+[endsect]
+
+
+[section Keep alive]
+[/==========================================================================================]
+
+We will use the implementation of the keep alive mechanism as a tutorial for the thread decorators,
+the thread specific shared pointers and the keep alive mechanism itself.
+
+We want to detect situations in which a thread is looping or blocked on some component.
+The user needs to state when this mechanism is enabled or disabled.
+
+Since the only purpose is to find threads that don't work, the thread needs to report to a controller that it is alive.
+The controller checks at predefined intervals whether a thread is dead, and in that case it calls a user-specific function,
+which by default aborts the program.
+
+A thread is considered dead if, during a given period, its number of check-ins is below a given threshold.
+These two parameters are given when the keep alive mechanism is enabled.
+At the beginning of a thread the keep alive mechanism is disabled.
+
+[section Interface]
+[/==========================================================================================]
+
+Next follows the keep alive interface.
+
+ namespace boost {
+ namespace interthreads {
+ namespace this_thread {
+ class enable_keep_alive
+ {
+ enable_keep_alive(const enable_keep_alive&);
+ enable_keep_alive& operator=(const enable_keep_alive&);
+ public:
+ enable_keep_alive(std::size_t periods=2, std::size_t checkins=1);
+ ~enable_keep_alive();
+ };
+
+ class disable_keep_alive
+ {
+ disable_keep_alive(const disable_keep_alive&);
+ disable_keep_alive& operator=(const disable_keep_alive&);
+ public:
+ disable_keep_alive();
+ ~disable_keep_alive();
+ };
+
+ void keep_alive_point();
+ bool keep_alive_enabled();
+
+ typedef void (*on_dead_thread_type)(thread*);
+ void set_on_dead_thread(on_dead_thread_type fct);
+
+ }
+ }
+ }
+
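+A short usage sketch of this interface (`log_dead` and `worker` are illustrative names; `bith` stands for
+`boost::interthreads`, as in the rest of the tutorial):
+
+    namespace bith = boost::interthreads;
+
+    void log_dead(boost::thread* th) {
+        // user-specific reaction to a dead thread, e.g. trace it instead of aborting
+    }
+
+    void worker() {
+        bith::this_thread::enable_keep_alive ka;          // defaults: at least 1 check-in every 2 periods
+        bith::this_thread::set_on_dead_thread(log_dead);
+        for (;;) {
+            bith::this_thread::keep_alive_point();        // signal that this thread is alive
+            // ... do some work ...
+        }
+    }
+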
+[endsect]
+
+[section Keep alive mechanism initialization]
+[/==========================================================================================]
+
+There is a single controller, `keep_alive_mgr`. The controller needs to access some thread-specific shared context,
+`thread_keep_alive_ctx`, to be able to control a thread.
+
+ namespace detail {
+ struct thread_keep_alive_ctx {
+ // ...
+ static void init();
+
+ typedef thread_specific_shared_ptr<thread_keep_alive_ctx> tssp;
+ static tssp instance_;
+ static thread_decoration initializer_;
+ thread_keep_alive_internal* data_;
+ };
+ struct keep_alive_mgr {
+ // ...
+ static void initialize() {
+                boost::call_once(flag_, init);
+ }
+ static void init() {
+ instance_=new keep_alive_mgr();
+ }
+ boost::thread thread_;
+ static boost::once_flag flag_;
+ static keep_alive_mgr* instance_;
+ };
+ }
+
+The initialization of the controller itself and the setting of the thread-specific context are done
+using an internal thread decoration, `thread_keep_alive_ctx::initializer_`,
+with `thread_keep_alive_ctx::init` as the setup function.
+
+ thread_specific_shared_ptr<detail::thread_keep_alive_ctx> thread_keep_alive_ctx::instance_;
+ thread_decoration thread_keep_alive_ctx::initializer_(thread_keep_alive_ctx::init);
+    boost::once_flag keep_alive_mgr::flag_;
+
+This setup function will initialize the `keep_alive_mgr` and then set the `thread_specific_shared_ptr<>` with a new `thread_keep_alive_ctx`.
+
+ void thread_keep_alive_ctx::init() {
+ keep_alive_mgr::initialize();
+ instance_.reset(new thread_keep_alive_ctx());
+ }
+
+The `keep_alive_mgr::initialize` function just ensures that the `init` function is called once, using `boost::call_once`.
+This `init` function creates the instance of the `keep_alive_mgr` singleton.
+
+ void keep_alive_mgr::initialize() {
+ boost::call_once(flag_, init);
+ }
+ void keep_alive_mgr::init() {
+ instance_=new keep_alive_mgr();
+ }
+[endsect]
+
+[section:keep_alive_threads Which threads can be controlled?]
+[/==========================================================================================]
+
+As the keep alive mechanism uses a thread decoration, the user needs to call the
+`bith::decorate` function explicitly at the beginning of the thread function, or to wrap the thread function.
+Instead of having its own function to call or thread function wrapper, the keep alive mechanism uses the ones
+provided by the thread decorator (`bith::decorate` and `bith::thread_decorator`).
+So we must either call `bith::decorate` explicitly in the thread function
+
+ void fct() {
+ bith::decorate();
+ // ...
+ }
+
+or create the thread with the `bith::make_decorator` wrapper
+
+ boost::thread th(bith::make_decorator(fct));
+
+
+[endsect]
+
+[section:keep_alive_enabling Enabling the keep alive mechanism]
+[/==========================================================================================]
+
+To be controlled by the keep alive manager, we need to enable the mechanism using an `enable_keep_alive` instance.
+By default this enabler requires the thread to do at least one check point every 2 seconds using the
+`bith::keep_alive_point()` function.
+
+    void fct() {
+        using namespace bith::this_thread;
+
+        // ...
+        enable_keep_alive ena;
+        // states that the thread will be declared dead if there is
+        // less than 1 check point in 2 seconds.
+        for(;;) {
+            // do a check point
+            keep_alive_point();
+            // ...
+        }
+    }
+
+
+[endsect]
+
+[section:keep_alive_disabling Disabling the keep alive mechanism]
+[/==========================================================================================]
+Sometimes we need to do an external task that could take an unbounded time. We can then disable the
+keep alive mechanism by using a disabler, `bith::disable_keep_alive`.
+
+    void fct() {
+        using namespace bith::this_thread;
+
+        // ...
+        // states that the thread will be declared dead if there is
+        // less than 1 check point in 2 seconds.
+        enable_keep_alive enabler;
+        for(;;) {
+            // do a check point
+            keep_alive_point();
+
+            if (cnd) {
+                // while a blocking task takes an undefined amount of time
+                // you can disable the keep alive mechanism
+                disable_keep_alive disabler;
+                unknown_time_task();
+            }
+        }
+    }
+
+If, on the contrary, we don't want to disable the keep alive mechanism, it can be interesting to do an
+`interruption_check_point()` just after the blocking task. In this way, if the task takes too much time and
+the thread is declared dead, it is still possible to handle the keep alive error by interrupting
+the dead thread once the task is finished.
+
+    void fct() {
+        using namespace boost::this_thread;
+        using namespace bith::this_thread;
+
+        // ...
+        // states that the thread will be declared dead if there is
+        // less than 1 check point in 2 seconds.
+        enable_keep_alive enabler;
+        for(;;) {
+            // do a check point
+            keep_alive_point();
+
+            if (cnd) {
+                // a blocking task that takes an undefined amount of time,
+                // followed by an interruption check point
+                unknown_time_task();
+                interruption_check_point();
+            }
+        }
+    }
+
+[endsect]
+
+[section:keep_alive_persistent Configuring the dead persistency]
+[/==========================================================================================]
+
+The default enabling parameters could be too restrictive in some cases, but `enable_keep_alive` can be configured with its two parameters.
+We can declare a thread dead when it has not done a given number of check-ins in a given number of periods.
+This can be useful when you know how long a given task should take.
+
+    void fct() {
+        using namespace bith::this_thread;
+
+        // ...
+        // states that the thread will be declared dead if there is
+        // less than 1 check point in 15 seconds.
+        enable_keep_alive enabler(15, 1);
+        for(;;) {
+
+            if (cnd) {
+                // it is known that this task will take no more than 15 seconds
+                enable_keep_alive control(15, 1);
+                known_time_task();
+                keep_alive_point();
+                interruption_check_point();
+            }
+
+        }
+        // ...
+    }
+
+
+[endsect]
+
+[section Access from the current thread]
+[/==========================================================================================]
+
+But how does all this work? We start with the enablers/disablers.
+Enablers/disablers use RAII, so they can be nested and the context is restored by the destructor.
+On construction they store the current keep alive state of this thread using the backup
+function, and then they enable/disable the KA mechanism. On destruction they restore the backed-up context.
+
+ enable_keep_alive::enable_keep_alive(
+ std::size_t periods, std::size_t checkins)
+ {
+ backup_=detail::thread_keep_alive_ctx::instance()->backup(data_);
+ detail::thread_keep_alive_ctx::instance()->enable_keep_alive(periods, checkins);
+ }
+
+ enable_keep_alive::~enable_keep_alive() {
+ detail::thread_keep_alive_ctx::instance()->restore(backup_);
+ }
+
+ disable_keep_alive::disable_keep_alive() {
+ backup_=detail::thread_keep_alive_ctx::instance()->backup(data_);
+ detail::thread_keep_alive_ctx::instance()->disable_keep_alive();
+ }
+
+ disable_keep_alive::~disable_keep_alive() {
+ detail::thread_keep_alive_ctx::instance()->restore(backup_);
+ }
+
+These functions are quite simple:
+
+    thread_keep_alive_internal* thread_keep_alive_ctx::backup(thread_keep_alive_internal* new_data) {
+        thread_keep_alive_internal* the_backup=data_;
+        data_=new_data;
+        return the_backup;
+    }
+
+ void thread_keep_alive_ctx::restore(thread_keep_alive_internal* backup) {
+ data_=backup;
+ }
+
+ void thread_keep_alive_ctx::enable_keep_alive() {
+ data_->enabled_=true;
+ }
+ void thread_keep_alive_ctx::disable_keep_alive() {
+ data_->enabled_ = false;
+ }
+
+Note that there is no need to check whether `detail::thread_keep_alive_ctx::instance_`
+contains a pointer, because we have ensured that at initialization time.
+
+Next there is the central function `keep_alive_point()`. This function
+does nothing more than relay the request to the specific context of this thread,
+which just increases the number of `checkins_`.
+
+ void keep_alive_point() {
+ detail::thread_keep_alive_ctx::instance()->check_point();
+ }
+
+ void thread_keep_alive_ctx::check_point() {
+ ++data_->checkins_;
+ }
+
+The `set_on_dead_thread()` function follows the same pattern: it just stores the on-dead action.
+
+    void set_on_dead_thread(on_dead_thread_type fct) {
+        detail::thread_keep_alive_ctx::instance()->set_on_dead_thread(fct);
+    }
+
+    void thread_keep_alive_ctx::set_on_dead_thread(on_dead_thread_type fct) {
+        data_->on_dead_=fct;
+    }
+
+[endsect]
+
+[section Access from the controller thread]
+[/==========================================================================================]
+
+Up to now we have seen the use of `bith::thread_keep_alive_ctx` as a `boost::thread_specific_ptr`, i.e. it is used
+from the current thread.
+
+We will now see how the controller behaves. The single instance of `keep_alive_mgr` has been created in the
+`init` function.
+
+The constructor just constructs a thread running the `loop` function.
+
+ keep_alive_mgr::keep_alive_mgr() : thread_(loop) {}
+
+
+Every second, the `loop` function iterates over all the `thread_keep_alive_ctx` thread-specific contexts, asking each of them to
+control itself. Note that, as the map can be modified when threads are created or finish, we need to protect the iteration externally
+with a lock on the protecting mutex.
+
+ void keep_alive_mgr::loop() {
+ boost::xtime t;
+ boost::xtime_get(&t,1);
+ for(;;) {
+ t.sec += 1;
+ boost::thread::sleep(t);
+ lock_type lock(thread_keep_alive_ctx::instance().get_mutex());
+ const detail::thread_keep_alive_ctx::tssp::map_type& tmap(
+ thread_keep_alive_ctx::instance().get_map());
+ thread_keep_alive_ctx::tssp::map_type::const_iterator it = tmap.begin();
+ for (;it != tmap.end(); ++it) {
+                it->second->control(it->first);
+ }
+ }
+ }
+
+
+The `thread_keep_alive_ctx::control` function behaves as follows: if the mechanism is enabled, it decreases
+the number of remaining periods, and if the thread is declared dead it executes the on-dead
+action and resets the check-ins and periods.
+
+ void control(thread::id id) {
+ if (data_->enabled_) {
+ --data_->periods_;
+ if (dead()) {
+ on_dead(id);
+ data_->checkins_=0;
+ data_->periods_=data_->total_periods;
+ }
+ }
+ }
+
+[endsect]
+
+[endsect]
+
+[section Thread Tuple]
+[/==========================================================================================]
+
+[section:thread_tuple_launching Launching thread tuple]
+[/==========================================================================================]
+
+A new thread tuple is launched by passing to the constructor a collection of objects of some callable type that can be invoked with no parameters.
+These objects are then copied into internal storage, and invoked on the newly-created threads of execution.
+If the objects must not (or cannot) be copied, then `boost::ref` can be used to pass in a reference to the function object.
+In this case, the user of __thread_tuple__ must ensure that the referred-to object outlives the newly-created thread of execution.
+
+ struct callable
+ {
+ void operator()();
+ };
+
+ bith::thread_tuple<2> copies_are_safe()
+ {
+ callable x;
+ callable y;
+ return bith::thread_tuple<2>(x, y);
+ } // x and y are destroyed, but the newly-created threads have a copy, so this is OK
+
+ bith::thread_tuple<2> oops()
+ {
+ callable x;
+ callable y;
+ return bith::thread_tuple<2>(boost::ref(x), boost::ref(y));
+ } // x and y are destroyed, but the newly-created threads still have a reference
+ // this leads to undefined behaviour
+
+If you wish to construct an instance of __thread_tuple__ with a function or callable object that requires arguments to be supplied,
+this can NOT be done by passing additional arguments to the __thread_tuple__ constructor (as is the case for plain threads); you will need to use `boost::bind` explicitly.
+
+ void find_the_question(int the_answer);
+
+ bith::thread_tuple<2> deep_thought_2(boost::bind(find_the_question,42), boost::bind(find_the_question,16));
+
+The arguments are ['copied] into the internals of the Boost.Bind structure: if a reference is required, use `boost::ref`, just as for references
+to callable functions.
+
+The limit on the number of additional arguments that can be passed is specified by Boost.Bind.
+
+[endsect]
+
+[section:thread_tuple_exceptions Exceptions in thread functions]
+[/==========================================================================================]
+
+If any of the functions or callable objects passed to the __thread_tuple__ constructor propagates an exception that is not of type
+__thread_interrupted__ when invoked, `std::terminate()` is called.
+
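+The usual way to avoid this, as with plain `boost::thread`, is to catch everything inside the callable itself; a minimal sketch:
+
+    struct safe_callable
+    {
+        void operator()()
+        {
+            try {
+                // do the real work here
+            }
+            catch (boost::thread_interrupted&) {
+                throw;  // let interruption propagate as usual
+            }
+            catch (...) {
+                // log/handle the error instead of letting std::terminate() be called
+            }
+        }
+    };
+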
+[endsect]
+
+[section:thread_tuple_joining Joining and detaching]
+[/==========================================================================================]
+
+When the __thread_tuple__ object that represents a collection of threads of execution is destroyed, the threads become ['detached].
+Once the threads are detached, they will continue executing until the invocations of the functions or callable objects supplied on construction have completed,
+or the program is terminated. The threads of a __thread_tuple__ can also be detached by explicitly invoking the `detach` member function on the __thread_tuple__
+object. In this case, the __thread_tuple__ object ceases to represent the now-detached threads, and instead represents __not_a_thread__.
+
+In order to wait for a tuple of threads of execution to finish, the __join__ or __timed_join__ member functions of the __thread_tuple__ object must be
+used.
+__join__ will block the calling thread until all the threads represented by the __thread_tuple__ object have completed.
+If the threads of execution represented by the __thread_tuple__ object have already completed, or
+the __thread_tuple__ object represents __not_a_thread__, then __join__ returns immediately.
+__timed_join__ is similar, except that a call to __timed_join__ will also return if the threads being waited for
+do not complete before the specified time has elapsed.
+
+There is also the possibility to wait until the first thread completes and then interrupt the rest of the threads.
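+
+A sketch of these waiting strategies, assuming `thread_tuple<>` offers the `join_all()` and
+`join_first_then_interrupt()` members documented in the reference (`task1` and `task2` are illustrative callables):
+
+    bith::thread_tuple<2> tt(task1, task2);
+
+    tt.join_all();                                    // wait for both threads to complete
+    // or, alternatively, wait only for the first one and interrupt the other:
+    // std::size_t first = tt.join_first_then_interrupt();
+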
+[endsect]
+
+[section:thread_tuple_interruption Interruption]
+[/==========================================================================================]
+
+A tuple of running threads can be ['interrupted] by invoking the __interrupt__ member function of the corresponding __thread_tuple__ object.
+When an interrupted thread next executes one of the specified __interruption_points__ (or if it is currently __blocked__ whilst executing one)
+with interruption enabled, a __thread_interrupted__ exception will be thrown in that thread. If not caught,
+this will cause the execution of the interrupted thread to terminate. As with any other exception, the stack will be unwound, and
+destructors for objects of automatic storage duration will be executed.
+
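+For instance (a sketch assuming the interrupting member is spelled `interrupt_all()`, mirroring the group interface documented above;
+`task1` and `task2` are illustrative callables):
+
+    bith::thread_tuple<2> tt(task1, task2);
+    tt.interrupt_all();    // ask every thread in the tuple to stop at its next interruption point
+    tt.join_all();         // wait for them to finish
+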
+See the __boost_thread__ library on how to avoid a thread being interrupted.
+
+At any point, the interruption state for the current thread can be queried by calling `interruption_enabled`.
+
+
+[#interruption_points]
+
+See the __boost_thread__ library for the Predefined Interruption Points.
+
+[endsect]
+[endsect]
+
+
+[endsect]
Added: sandbox/interthreads/libs/interthreads/doc/users_guide.qbk
==============================================================================
--- (empty file)
+++ sandbox/interthreads/libs/interthreads/doc/users_guide.qbk 2008-11-19 05:56:30 EST (Wed, 19 Nov 2008)
@@ -0,0 +1,51 @@
+[/
+ (C) Copyright 2008 Vicente J. Botet Escriba
+ Distributed under the Boost Software License, Version 1.0.
+ (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt).
+]
+
+[/==============================]
+[section:users_guide Users' Guide]
+[/==============================]
+
+[include getting_started.qbk]
+[include tutorial.qbk]
+
+
+[/==============================]
+[section:bibliography References]
+[/==============================]
+
+[variablelist
+[[boost::call_once] [Boost.Thread implementation for call_once.]]
+[[boost::this_thread::at_thread_exit] [Boost.Thread implementation for at thread exit cleanup registration.]]
+[[boost::thread_specific_ptr] [Boost.Thread implementation for TSS.]]
+[[boost::thread_group] [Boost.Thread thread_group.]]
+]
+[endsect]
+
+[/=======================]
+[section:glosary Glossary]
+[/=======================]
+
+
+[variablelist
+
+[[alive (thread)] [a thread is considered alive when not dead.]]
+[[cleanup decoration] [function called at thread exit.]]
+[[dead (thread)] [a thread is considered dead when it has not done enough keep alive check points during a given duration.]]
+[[decoration] [A pair of setup/cleanup thread decorating functions.]]
+[[decorator] [Functor wrapper decorating a thread with all the setups and cleanups decorations.]]
+[[deleter TSSS] [specific function used to delete the TSSS.]]
+[[KA] [Keep Alive.]]
+[[setup decoration] [function called before the thread starts.]]
+[[TSS] [Thread Specific Storage.]]
+[[TSSS] [Thread Specific Shared Storage.]]
+[[tuple (thread)] [group of threads statically determined and launched at construction time.]]
+]
+
+[endsect]
+
+[endsect]
\ No newline at end of file