
Boost-Commit :

Subject: [Boost-commit] svn:boost r51522 - sandbox/interthreads/libs/interthreads/doc
From: vicente.botet_at_[hidden]
Date: 2009-03-01 18:14:50


Author: viboes
Date: 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
New Revision: 51522
URL: http://svn.boost.org/trac/boost/changeset/51522

Log:
0.4.1 : Adaptation to the Boost.ThreadPool Version 0.21 + Scoped forking + Parallel sort

Text files modified:
   sandbox/interthreads/libs/interthreads/doc/appendices.qbk | 48 +-----------
   sandbox/interthreads/libs/interthreads/doc/case_studies.qbk | 154 +++++++++++++++++++++++++--------------
   sandbox/interthreads/libs/interthreads/doc/changes.qbk | 63 +++++++++++++++-
   sandbox/interthreads/libs/interthreads/doc/getting_started.qbk | 8 +-
   sandbox/interthreads/libs/interthreads/doc/introduction.qbk | 75 +++++++++---------
   sandbox/interthreads/libs/interthreads/doc/tutorial.qbk | 87 +++++++++++-----------
   6 files changed, 245 insertions(+), 190 deletions(-)

Modified: sandbox/interthreads/libs/interthreads/doc/appendices.qbk
==============================================================================
--- sandbox/interthreads/libs/interthreads/doc/appendices.qbk (original)
+++ sandbox/interthreads/libs/interthreads/doc/appendices.qbk 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
@@ -11,9 +11,9 @@
 
 [include changes.qbk]
 
-[/include rationale.qbk]
+[include rationale.qbk]
 
-[/include implementation.qbk]
+[include implementation.qbk]
 
 [include acknowledgements.qbk]
 
@@ -27,7 +27,7 @@
 [section Tasks to do before review]
 
 
-[heading Add an overloading for wait_for_all(ae, f, seq)]
+[heading Add an overloading for wait_for_all_in_sequence(ae, f, seq)]
 
This will be quite useful in recursive algorithms that asynchronously evaluate the same function on different parts.
 
@@ -46,53 +46,13 @@
             BOOST_AUTO(partition, partition_view(input));
             // evaluates asynchronously inplace_solve on each element of the partition
             // using the asynchronous executor as scheduler
- wait_for_all(ae, inplace_solve, partition);
+ wait_for_all_in_sequence(ae, inplace_solve, partition);
             // compose the result in place from subresults
             Composer()(partition);
         }
     }
 
 
-[heading Scoped forking]
-
-In [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2802.html N2802: A plea to reconsider detach-on-destruction for thread objects]
-Hans-J. Boehm explain why is detaching on destruction so dangerous and gives two solutions: 1) replace the call to detach by a join, 2) replace by a terminate call
-
-The library could provide
-
-* a RAII scoped_join which will join the assocaited act on the destructor if not already joined,
-
- basic_threader ae;
- BOOST_AUTO(act,bith::fork(ae, simple_thread));
- scoped_join<BOOST_TYPEOF(act)> j(act);
-
-* a RAII scoped_terminate which will call to teminate on the destructor if not already joined,
-
- basic_threader ae;
- BOOST_AUTO(act,bith::fork(ae, simple_thread));
- scoped_terminate<BOOST_TYPEOF(act)> j(act);
-
-* a RAII scoped_fork_join which will fork on construction and join the assocaited act on the destructor if not already joined,
-
- basic_threader ae;
- scoped_fork_join<BOOST_TYPEOF(bith::fork(ae, simple_thread)) > act(ae, simple_thread);
-
-* a RAII scoped_fork_terminate which will fork on construction and call to teminate on the destructor if not already joined,
-
- basic_threader ae;
- scoped_fork_terminate<BOOST_TYPEOF(bith::fork(ae, simple_thread) > act(ae, simple_thread);
-
-* an __AE__ adapter which will return an __ACT__ that will join the assocaited act on the destructor if not already joined.
-
- ae_scoped_fork_join<basic_threader> ae;
- BOOST_AUTO(act,bith::fork(ae, simple_thread));
-
-* an __AE__ adapter ae_scoped_fork_terminate which will return an__ACT__ that will join the assocaited act on the destructor if not already joined.
-
- ae_scoped_fork_terminate<basic_threader> ae;
- BOOST_AUTO(act,bith::fork(ae, simple_thread));
-
-
 [heading Add polymorphic act and adapters]
When we need to chain __ACT__s using fork_after, the nature of the __ACT__ can change over time, and why not also change its
template parameter. So at least we need to make every function used by fork_after polymorphic.

Modified: sandbox/interthreads/libs/interthreads/doc/case_studies.qbk
==============================================================================
--- sandbox/interthreads/libs/interthreads/doc/case_studies.qbk (original)
+++ sandbox/interthreads/libs/interthreads/doc/case_studies.qbk 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
@@ -16,41 +16,82 @@
 [section Parallel sort]
 [/==================================]
 
+Next follows a generic algorithm based on partitioning a given problem into smaller problems and composing a solution from the solutions of the smaller problems.
+
+ template <
+ typename DirectSolver,
+ typename Composer,
+ typename AE,
+ typename Range
+ >
+ void inplace_solve( AE & ae,
+ boost::iterator_range<typename boost::range_iterator<Range>::type> range,
+ unsigned cutoff );
+
     template <
         typename DirectSolver,
         typename Composer,
- typename AsynchronousExecutor,
- typename Input>
- void inplace_solve(AsynchronousExecutor& ae, Input& input) {
- // if (problem is small)
- if (size(range) < concurrency_threshold) {
- // directly solve problem
- DirectSolver()(input);
- } else {
- // split problem into independent parts
- BOOST_AUTO(partition, partition_view(input));
- // evaluates asynchronously inplace_solve on each element of the partition
- // using the asynchronous executor as scheduler
- wait_for_all(ae, inplace_solve, partition);
- // compose the result in place from subresults
- Composer()(partition);
+ typename AE,
+ typename Range
+ >
+ void inplace_solve( AE & ae,
+ boost::iterator_range<typename boost::range_iterator<Range>::type> range,
+ unsigned cutoff )
+ {
+ unsigned size = boost::size(range);
+ //std::cout << "<<par_ " << size;
+ if ( size <= cutoff) DirectSolver()(range);
+ else {
+ partition<Range> parts(range, BOOST_PARTS);
+
+ // wait_for_all_in_sequence(ae, &inplace_solve<DirectSolver,Composer,AE,Range>, parts);
+ std::list<task_type> tasks;
+ for (unsigned i=0;i < BOOST_PARTS-1; ++i) {
+ task_type tmp(ae.submit(
+ boost::bind(
+ &inplace_solve<DirectSolver,Composer,AE,Range>,
+ boost::ref(ae),
+ parts[i],
+ cutoff
+ )));
+ tasks.push_back(tmp);
+ }
+ inplace_solve<DirectSolver,Composer,AE,Range>(ae, parts[BOOST_PARTS-1], cutoff);
+ boost::for_each(tasks, &boost::interthreads::wait_act<task_type>);
+ // wait_for_all_in_sequence
+
+ Composer()(range);
         }
- }
+ }
+
 
 So parallel sort could be
 
- template <typename Range>
- void parallel_sort(range& range) {
- boost::tp::pool<> ae;
- parallel::inplace_solve<sort, merge>(ae, input);
- }
+ struct sort_fct {
+ template<class RandomAccessRange>
+ RandomAccessRange& operator()(RandomAccessRange rng) {
+ return boost::sort(rng);
+ }
+ };
 
+ struct inplace_merge_fct {
+ template<class BidirectionalRange>
+ BidirectionalRange&
+ operator()( BidirectionalRange rng) {
+ return boost::inplace_merge(rng, boost::begin(rng)+(boost::size(rng)/2));
+ }
+ };
+ template <typename AE, typename Range>
+ void parallel_sort(AE& ae, Range& range, unsigned cutoff=10000) {
+ boost::iterator_range<typename boost::range_iterator<Range>::type> rng(range);
+ inplace_solve<sort_fct,inplace_merge_fct,AE,Range>( ae, rng, cutoff);
+ }
 
 
 [endsect]
 
 [/==================================]
-[section From a single to a multi threaded appliation]
+[section From a single to a multi threaded application]
 [/==================================]
 
 
@@ -61,7 +102,7 @@
 [section Thread safe deferred traces]
 [/==================================]
 
-When executing on a multi thread environment, the outputs lines on
+When executing on a multi thread environment, the output lines on
 std::cout could interleave. We can synchronize these outputs with a
 global mutex
 
@@ -86,10 +127,10 @@
 
 [/$../images/star.png]
 
-Another approach could be to use a queue of output stream buffers for each thread.
-Each buffer is timestamped with the creation date and there is a concentrator that takes one by one the elements ordered by the timestamp.
-Only the current thread can push on this queue because is specific to the thread.
-There is a single thread, the concentrator that pops from these queue.
+Another approach could be using a queue of output stream buffers for each thread.
+Each buffer is timestamped with the creation date and there is a concentrator that takes one by one the elements ordered by their timestamp.
+Only the current thread can push on this queue because it is specific to the thread.
+There is a single thread, the concentrator, that pops from these queues.
 In this context we can ensure thread safety without locking as long as
 the queue has at least two messages.
 
@@ -144,8 +185,8 @@
         };
     }
 
-This class declares the just minimum in order to model a sink. In addition as in order to mask the implementation the PImpl idiom is used.
-The implementation of these function is straiforward:
+This class declares just the minimum needed to model a sink. In addition, in order to mask the implementation, the Pimpl idiom is used.
+The implementation of these functions is straightforward:
 
     async_ostream::async_ostream(std::ostream& os)
         : base_type(os) {}
@@ -225,8 +266,8 @@
         , thread_(boost::bind(loop, this))
         {}
 
-The terminate cleanup function is used to ensure that the queue is empty before the thread finish.
-To avoid optimizations a non const call inc is done while waitig the queue empties.
+The terminate cleanup function is used to ensure that the queue is empty before the thread finishes.
+To avoid optimizations, a non-const call to inc is done while waiting for the queue to empty.
 
         void async_ostream_sink::impl::terminate(shared_ptr<async_ostream_thread_ctx> that) {
             while (!that->empty()) {
@@ -235,13 +276,13 @@
         }
 
 The central sink function is write. Here, instead of locking a mutex, the function forwards to
-the thread specific shared pointer. We will see above the how `async_ostream_thread_ctx` handles this call.
+the thread specific shared pointer. We will see below how `async_ostream_thread_ctx` handles this call.
 
         std::streamsize write(const char* s, std::streamsize n) {
             return tsss_->write(s, n);
         }
 
-It is time to analyze the thread specific context before seen how the concentrator is implemented.
+It is time to analyze the thread specific context before seeing how the concentrator is implemented.
 
     struct async_ostream_thread_ctx {
         async_ostream_thread_ctx();
@@ -265,7 +306,7 @@
             return n;
         }
 
-Once the user do a flush the current element is enqueued on the queue. The `sec_` integer is used as monotonic sequence in conjuntion with the timestamp.
+Once the user does a flush, the current element is pushed on the queue. The `seq_` integer is used as a monotonic sequence in conjunction with the timestamp.
 
         void flush() {
             current_->reset_date(seq_);
@@ -279,9 +320,10 @@
             current_ = new element_type();
         }
 
-As stated in the introduction, we don't need to lock the mutex if the number of elements in the queue are enough.
+As stated in the introduction, we don't need to lock the mutex if there are enough elements in the queue.
 
 These queue elements will be read by the concentrator using the get function.
+
         element_type* get() {
             if (queue_.size()>1) {
                 return get_i();
@@ -334,19 +376,19 @@
 [section:stm STM]
 [/========================]
 
-This section do not includes a complete example using the library, but a case study that could use in some way the library. I'm curently working on this.
+This section does not include a complete example using the library, but only a case study that could use the library in some way. I'm currently working on this.
 
 Transactional memory (TM) is a recent parallel programming concept which reduces challenges found in parallel programming.
 TM offers numerous advantages over other synchronization mechanisms.
 
-This case study contains some thoughts on how I see a boostified version of DracoSTM, a software transactional memory (STM) system.
+This case study contains some thoughts on how I see a "boostified" version of DracoSTM, a software transactional memory (STM) system.
 DracoSTM is a high performance lock-based C++ STM research library.
 DracoSTM uses only native object-oriented language semantics, increasing its intuitiveness for developers while maintaining
 high programmability via automatic handling of composition, locks and transaction termination.
 
-The example will show only the part concerning how the different context are stored.
+The example will show only the part concerning how the different contexts are stored.
 
-Let me start of a typical use of this library with the Hello World! of transactional concurrent programming, Banck accounts and transfer.
+Let me start with a typical use of this library with the Hello World! of transactional concurrent programming, Bank accounts and transfers.
 Let BankAccount be a simple account.
 
     class BankAccount {
@@ -369,12 +411,12 @@
         }
     };
 
-And here a little programm that emulates an employer and two employeeds behabior
-The employee has requested to its employer to transfer its salary to its checking account every month its salary.
-The employer do the transfer the 28th of each month.
-Employee do some withdrawals and query its accounts from an ATM.
-Some people has requested to the Back automatic periodic transfers from its checking account to its saving account.
-The transfer is done 3th of each month.
+And here is a little program that emulates the behavior of an employer and two employees.
+The employees have requested the employer to transfer their salaries to their checking accounts every month.
+The employer does the transfer on the 28th of each month.
+The employees perform withdrawals and queries from their accounts using an ATM.
+Some people have requested the Bank for automatic periodic transfers from their checking accounts to their saving accounts.
+The transfer is done on the 3rd of each month.
 
 
     BankAccount *emp;
@@ -428,7 +470,7 @@
 If nothing is said the transaction will be aborted at `_` destruction.
 When everything is ok we need to do a `_.commit()`.
 
-When there are a lot of uses of this we can write instead
+If make_transactional_ptr is used in many places we can write instead
 
         {
             stm::this_tread::atomic _;
@@ -448,7 +490,7 @@
         }
 
 The other `BankAccount` functions are coded as expected. Here is the code introducing a `using stm::this_tread;`
-which make it mush more readable.
+which makes it much more readable.
 
     class BankAccount {
         int balance_;
@@ -483,7 +525,7 @@
     }
 
 The core of all this stuff is `stm::this_tread::atomic` and `stm::transactional_ptr<>`.
-`stm::make_transactional_ptr()` and `stm::this_tread::atomic_ptr<>` are defined in terms of them.
+`stm::make_transactional_ptr()` and `stm::this_tread::atomic_ptr<>` are defined in terms of `stm::this_tread::atomic` and `stm::transactional_ptr<>`.
 
 Next follows the interface of the atomic class.
 
@@ -499,7 +541,7 @@
     } // stm
 
 The atomic constructor will construct a
-transaction on the current thread and pust it to the stack of nested transactions.
+transaction on the current thread and push it to the stack of nested transactions.
 The atomic destructor will roll back the transaction if not committed and pop the stack of nested transactions.
 We will see later the transaction class.
 
@@ -534,13 +576,13 @@
             void delete_ptr();
     };
 
-Let me start with the simpler constructor:
+Let me start with the simple constructor:
 
         transactional_ptr(T* p);
 
 This creates a smart pointer pointing to a specific transaction memory of the current transaction.
 
-It contains the clasic functions of a smart pointer overloaded with `const` or non `const`.
+It contains the classic functions of a smart pointer overloaded with `const` or non `const`.
 
             const T* operator->() const;
             const T& operator*() const;
@@ -555,7 +597,7 @@
 
             this_ptr->balance_ += amount;
 
-The use of `this_ptr->balance_on` the left hand side of the assignement operator requires a non const access,
+the use of `this_ptr->balance_` on the left hand side of the assignement operator requires a non const access,
 so the upgrade to writable is done.
 
 When we know a priori that the pointer contents will be modified we can create it as follows:
@@ -567,18 +609,18 @@
             _.commit();
         }
 
-Every `new`/`delete` operation on a transaction must be in some way be signaled to the transaction service.
-The new created objects would be wrapper by a `transactional_ptr<>` initialized like that;
+Every `new`/`delete` operation on a transaction must be in some way signaled to the transaction service.
+The newly created objects would be wrapped by a `transactional_ptr<>` initialized like this:
 
    transactional_ptr<BankAccount> this_ptr(new BankAccount(), is_new);
 
-When we want ot delete a pointer in a transaction we use `transactional_ptr::delete_ptr`
+When we want to delete a pointer in a transaction we use `transactional_ptr::delete_ptr`
 
     transactional_ptr<BankAccount> p_ptr(p, writable);
     // ...
     p_ptr.delete_ptr();
 
-Before to finish with the `transaction` class le me show you the
+Before finishing with the `transaction` class let me show you the
 `transactional_object_cache<T>` and its base class `transactional_object_cache_base`.
 
     class transaction {

Modified: sandbox/interthreads/libs/interthreads/doc/changes.qbk
==============================================================================
--- sandbox/interthreads/libs/interthreads/doc/changes.qbk (original)
+++ sandbox/interthreads/libs/interthreads/doc/changes.qbk 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
@@ -7,7 +7,60 @@
 
 [section:changes Appendix A: History]
 
-[section [*Version 0.4, January 31, 2009] bug fixes]
+[section [*Version 0.4.1, March 1, 2009] Adaptation to the Boost.ThreadPool Version 0.21 + Scoped forking + Parallel sort]
+
+[*New Features:]
+
+* Adaptation to the Boost.ThreadPool Version 0.21
+
+* Scoped forking: In [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2802.html N2802: A plea to reconsider detach-on-destruction for thread objects]
+Hans-J. Boehm explains why detaching on destruction is so dangerous and gives two solutions: 1) replace the call to detach by a join, 2) replace it by a terminate call. The library provides the following classes:
+
+* a RAII scoped_join which will join the associated act in the destructor if not already joined,
+
+ basic_threader ae;
+ BOOST_AUTO(act,bith::fork(ae, simple_thread));
+ scoped_join<BOOST_TYPEOF(act)> j(act);
+
+* a RAII scoped_terminate which will call terminate in the destructor if not already joined,
+
+ basic_threader ae;
+ BOOST_AUTO(act,bith::fork(ae, simple_thread));
+ scoped_terminate<BOOST_TYPEOF(act)> j(act);
+
+* a RAII scoped_fork_join which will fork on construction and join the associated act in the destructor if not already joined,
+
+ basic_threader ae;
+ scoped_fork_join<BOOST_TYPEOF(bith::fork(ae, simple_thread)) > act(ae, simple_thread);
+
+* a RAII scoped_fork_terminate which will fork on construction and call terminate in the destructor if not already joined,
+
+ basic_threader ae;
+ scoped_fork_terminate<BOOST_TYPEOF(bith::fork(ae, simple_thread))> act(ae, simple_thread);
+
+In addition, unique_joiner/shared_joiner have an on_destruction parameter allowing this behavior to be parameterized.
+
+
+[/*Tests:
+
+Add tests for move only __ACT__ basic_threader, unique_threader and unique_launcher)
+Change the implementation of the queue on the async_ostream.
+]
+[*Examples:]
+
+* Parallel sort
+
+[/*Documentation:
+
+* Complete ae/act framework.
+]
+[/*Fixed Bugs:]
+
+*]
+
+[endsect]
+
+[section [*Version 0.4.0, February 8, 2009] Improvements + bug fixes]
 
 [*New Features:]
 
@@ -29,7 +82,7 @@
 
 [*v0.2#1: `ae::get_all` does not work yet.]
 `get_all()` does not work because the fusion transform sequence function cannot take non-const sequences.
-I have emulated it using `set_all()` and a transformation for a tuple of __ACT_ to a tuple of result_type.
+I have emulated it using `set_all()` and a transformation for a tuple of __ACT__ to a tuple of result_type.
 
 [*v0.3.1#1: keep alive mechanism crash when setting `set_on_dead_thread()` before enabling the mechanism.]
 This was due to the fact that there were no default data for the backup.
@@ -66,7 +119,7 @@
 
 [endsect]
 
-[section [*Version 0.3, January 19, 2009] fork after dependant act completes]
+[section [*Version 0.3.0, January 19, 2009] fork after dependent act completes]
 
 [*New Features:]
 
@@ -75,7 +128,7 @@
 
 [endsect]
 
-[section [*Version 0.2, January 14, 2009] Asynchronous execution and test on more toolsets]
+[section [*Version 0.2.0, January 14, 2009] Asynchronous execution and test on more toolsets]
 
 [*New Features:]
 
@@ -104,7 +157,7 @@
 
 [endsect]
 
-[section [*Version 0.1, November 30, 2008] ['Announcement of Interthreads]]
+[section [*Version 0.1.0, November 30, 2008] ['Announcement of Interthreads]]
 
 [*Features:]
 

Modified: sandbox/interthreads/libs/interthreads/doc/getting_started.qbk
==============================================================================
--- sandbox/interthreads/libs/interthreads/doc/getting_started.qbk (original)
+++ sandbox/interthreads/libs/interthreads/doc/getting_started.qbk 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
@@ -46,7 +46,7 @@
     }
 
 When `th` is created with the `bith::thread_decorator` wrapper, it will initialize all the decorations before calling `my_thread`.
-This `my_cleanup` will be registered with the `boost:this_thread::at_thread_exit` if the `my_setup` function succeeds i.e. do not throws.
+This `my_cleanup` will be registered with `boost::this_thread::at_thread_exit` if the `my_setup` function succeeds, i.e. does not throw.
 Then the thread function `my_thread` is called. At thread exit, the `my_cleanup` function is called. This results in the following output
 
 [pre
@@ -74,9 +74,9 @@
 
 The monotonic thread identifier is managed by the mono_thread_id class.
 There is a mutex protecting the access to the monotonic counter.
-The main difference between a thread_specific_shared_ptr and thread_specific_ptr is that we can get the specific pointer of another thread (*)
-Whith the help of bith::thread_decoration the setting of the thread specific shared pointer is done transparently as far as the thread
-is created using a thread decorator. This setup function reset the specific pointer with the value of the monotonic counter which will be self increased.
+The main difference between a thread_specific_shared_ptr and a thread_specific_ptr is that we can get the specific pointer of another thread (*).
+With the help of bith::thread_decoration, the setting of the thread specific shared pointer is done transparently, as long as the thread
+is created using a thread decorator. This setup function resets the specific pointer with the value of the monotonic counter, which is then incremented.
 
     #include <boost/interthreads/thread_decorator.hpp>
     #include <boost/interthreads/thread_specific_shared_ptr.hpp>

Modified: sandbox/interthreads/libs/interthreads/doc/introduction.qbk
==============================================================================
--- sandbox/interthreads/libs/interthreads/doc/introduction.qbk (original)
+++ sandbox/interthreads/libs/interthreads/doc/introduction.qbk 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
@@ -14,22 +14,22 @@
 [/=======================================================================]
 
 In [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1883.pdf N1833 - Preliminary Threading Library Proposal for TR2]
-Kevlin Henney introduce the concept of `threader` an asynchronous executor and a function `thread()` that evaluate a function
-asynchronously and returns an asynchronous completion token `joiner`, able to join but also to to get the value of the function result.
+Kevlin Henney introduces the concept of `threader`, an asynchronous executor, and a function `thread()` that evaluates a function
+asynchronously and returns an asynchronous completion token `joiner`, able to join but also to get the value of the function result.
 
 In [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2185.html N2185 - Proposed Text for Parallel Task Execution]
-Peter Dimov introduce a `fork()` function able to evaluate a function asynchronously and returns a `future` handle.
+Peter Dimov introduces a `fork()` function able to evaluate a function asynchronously and returns a `future` handle.
 
 In [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2276.html N2276 - Thread Pools and Futures]
-Anthony William introduce `launch_in_thread` and `launch_in_pool` function templates which
-evaluate a function asynchronously either in a specific `thread` or a thread pool and
+Anthony Williams introduces `launch_in_thread` and `launch_in_pool` function templates which
+evaluate a function asynchronously either in a specific `thread` or a thread pool and
 returns a `unique_future` handle.
 
 In [@http://www.boostpro.com/vault/index.php?action=downloadfile&filename=boost-threadpool.3.tar.gz&directory=Concurrent%20Programming& Boost.ThreadPool]
-Oliver Kowalke propose a complete implementation of a thread `pool` with a `submit()` function
-which evaluate a function asynchronously and returns a `task` handle.
+Oliver Kowalke proposes a complete implementation of a thread `pool` with a `submit()` function
+which evaluates a function asynchronously and returns a `task` handle.
 
-Behind all these proposal there is a concept of asynchronous executor, fork-like function and
+Behind all these proposals there is a concept of asynchronous executor, fork-like function and
 the asynchronous completion token handle.
 
 [table AE/ACT/fork-like relationship
@@ -56,13 +56,13 @@
     ]
 ]
 
-The asynchronous completion token models can follows two interfaces, the thread interface and
-the unique_future interface. Some asynchronous completion token handle allows to recover the result of the evaluation of
-the function, other allows to manage the underlying thread of execution.
+The asynchronous completion token models can follow two interfaces, the thread interface and
+the future interface. Some asynchronous completion token handles allow recovering the result of the evaluation of
+the function, others allow managing the underlying thread of execution.
 
 It seems natural to make a generic __fork__ function that will evaluate a function asynchronously
 with respect to the calling thread and returns an ACT handle. The following metafunction
-associated an ACT handle to a asynchronous executor.
+associates an ACT handle to an asynchronous executor.
 
     template <typename AE, typename T>
     struct asynchronous_completion_token {
@@ -86,7 +86,7 @@
         return ae.fork(fn);
     }
 
-Forking n-ary functions relies on the nullary version and bind.
+Forking n-ary functions relies on the nullary version and bind.
 
     template< typename AE, typename F, typename A1, ..., typename An >
     typename asynchronous_completion_token<AE,
@@ -157,7 +157,7 @@
         std::cout << m2 - m1 + m3 - m4 << std::endl;
     }
 
-this library allows a programmer to switch to parallel execution as follows:
+The library allows a programmer to switch to parallel execution as follows:
 
     int main()
     {
@@ -171,9 +171,9 @@
     }
 
 
-The question now is how we can adapt it to an existing asynchronous executor such as
+The question now is how we can adapt the example to an existing asynchronous executor such as
 the Boost.ThreadPool library. We need to specialize the template class
-asynchronous_completion_token to states which is the __ACT__ associate to the __tp_pool__.
+asynchronous_completion_token to state which __ACT__ is associated to the __tp_pool__.
 
     namespace boost { namespace interthreads {
 
@@ -184,7 +184,7 @@
 
     }}
 
-and also to specialize the fork function as the default requires a form member function and __tp_pool__ provides a `submit()` member function`
+and also to specialize the fork function, as the default one requires a fork member function while __tp_pool__ provides a `submit()` member function.
 
     namespace boost { namespace interthreads {
 
@@ -196,9 +196,9 @@
     }
     }
 
-Evidently these specialization must be done on the `boost::interthreads` namespace.
+Evidently these specializations must be done in the `boost::interthreads` namespace.
 
-As the preceding is ilegal in C++03 we need to use an auxiliary class to define the default behaviour of fork
+As the preceding is illegal in C++03, we need to use an auxiliary class to define the default behaviour of the fork function
 
     namespace partial_specialization_workaround {
         template< typename AE, typename F >
@@ -214,7 +214,7 @@
         return partial_specialization_workaround::fork<AE,F>::apply(ae,fn);
     }
 
-And specialize partially the fork_auc class
+And partially specialize the partial_specialization_workaround::fork class
 
     namespace boost { namespace interthreads {
         namespace partial_specialization_workaround {
@@ -228,7 +228,7 @@
         }
     }}
 
-Note that only the __fork__ function needs to be specialized. The library provides he other overloadings.
+Note that only the __fork__ function needs to be specialized. The library provides the other overloads.
 
 We can write the preceding main function in a more generic way
 
@@ -258,13 +258,13 @@
         do(ae);
     }
 
-Instead of definng a type the user can make use of BOOST_AUTO once she includes the
-associated files on the threadpool sub-directory.
+Instead of defining a type, the user can make use of BOOST_AUTO once the
+associated files in the threadpool sub-directory are included.
 
         BOOST_AUTO(fm1, bith::fork(ae, f, 1.0, 1000000 ));
 
 
-The library allows also to fork several functions at the same time
+The library also allows forking several functions at one time
 
     result_of::fork_all<AE, int(*)(), int(*)(), int(*)()>::type handles = bith::fork_all(ae, f, g, h);
     std::cout << get<1>(res).get() - get<0>(res).get() + get<2>(res).get() << std::endl;
@@ -278,7 +278,7 @@
 The asynchronous completion token models follows two interfaces, the thread interface and the
 unique_/shared_future interface.
 
-To make common tasks easier the library provide some functors in the name space fct:
+To make common tasks easier the library provides some functors in the namespace fct:
 for the thread interface as
 
 * fct::join
@@ -375,7 +375,7 @@
     std::cout << get<1>(res) - get<0>(res) + get<2>(res) << std::endl;
 
 and wait_for_any, which works only with functions that return the same type or are convertible to the same
-type, and return the index and the value of the any of the completed functions.
+type, and return the index and the value of any of the completed functions.
 
     result_of::wait_for_any<AE, int(*)(), int(*)(), int(*)()>::type res = bith::wait_for_any(ae, f, g, h);
    std::cout << "function " << res.first << " finished first with result=" << res.second << std::endl;
@@ -416,10 +416,10 @@
 [/=============================================================================]
 
 See the [@http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1883.pdf N1833 - Preliminary Threading Library Proposal for TR2]
-where Kevlin Henney introduce the concept of threader as an asynchronous executor and a function thread that evaluate a function
-asynchronously and returns an asynchronous completion token joiner, able to join but also to to get the value of the function result.
+where Kevlin Henney introduces the concept of threader as an asynchronous executor and a function thread that evaluates a function
+asynchronously and returns an asynchronous completion token joiner, able to join but also to get the value of the function result.
 
-The main specifcities is that here we make a difference between unique_joiner (move-only) and shared_joiner and as consequence unique_threader and shared_threader.
+The main specificity here is that we make a difference between unique_joiner (move-only) and shared_joiner and, as a consequence, between unique_threader and shared_threader.
 
 
 [endsect]
@@ -431,7 +431,7 @@
 program without data races or deadlocks.
 `boost::this_thread::at_thread_exit` allows executing a cleanup function at thread exit.
 
-If we want a setup function be executed once at the begining on the threads and a cleanup at thread exit we need to do
+If we want a setup function to be executed once at the beginning of the thread and a cleanup at thread exit, we need to do
 
     void thread_main() {
         setup();
@@ -445,7 +445,7 @@
         //...
     }
 
-Of course we can define an init function that call setup and do the registration.
+Of course we can define an init function that calls setup and does the registration.
 
     void init() {
         setup();
@@ -506,8 +506,8 @@
 [/=============================================================================]
 
 Thread local storage allows multi-threaded applications to have a separate instance of a given data item for
-each thread. But do not provide any mechanism to access this data from other threads. Although this seems to
-defeat the whole point of thread-specific storage, it is useful when these contexts needs some kind of
+each thread, but it does not provide any mechanism to access this data from other threads. Although this seems to
+defeat the whole point of thread-specific storage, it is useful when these contexts need some kind of
 communication between them, or some central global object needs to control them.
 
 The intent of the `boost::thread_specific_shared_ptr` class is to allow two threads to establish a shared memory
@@ -518,8 +518,8 @@
 value.
 
 Only the current thread can modify the thread specific shared pointer using the non const functions reset/release
-functions. Each time these functions are used a synchronization must be ensured to update the mapping.
-The other threads have only read access to the shared_ptr<T>. It is worh saying that the shared object T must be
+functions. Each time these functions are used, a synchronization must be ensured to update the mapping.
+The other threads have only read access to the shared_ptr<T>. It is worth saying that the shared object T must be
 thread safe.
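A minimal sketch of the mechanism described above, using only the standard library (the class name and the simplifications are ours, not the library's): a mutex-protected map from thread::id to shared_ptr gives the owning thread write access and other threads read-only access:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <thread>

// Simplified model of the thread::id -> shared_ptr<T> mapping.
template <class T>
class tss_shared_model {
    std::map<std::thread::id, std::shared_ptr<T>> map_;
    mutable std::mutex mtx_;
public:
    // Called by the owning thread; the synchronization updates the mapping.
    void reset(std::shared_ptr<T> p) {
        std::lock_guard<std::mutex> lock(mtx_);
        map_[std::this_thread::get_id()] = std::move(p);
    }
    // Read-only access from any thread, given the owner's id.
    std::shared_ptr<T> get(std::thread::id id) const {
        std::lock_guard<std::mutex> lock(mtx_);
        auto it = map_.find(id);
        return it == map_.end() ? std::shared_ptr<T>() : it->second;
    }
};
```

As the text notes, the shared object T itself must still be thread safe; the mutex only protects the map.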
 
 [endsect]
@@ -554,7 +554,7 @@
 
 
 The __thread_tuple__ class is responsible for launching and managing a static collection of threads
-that are related in some fashion. No new threads can be added to the tuple once constructed. So we can write
+that are related in some way. No new threads can be added to the tuple once constructed. So we can write
 
     {
         bith::thread_tuple<2> tt(thread1, thread2);
@@ -562,6 +562,7 @@
     }
 
 As this
+
     bith::conc_join_all(thread1, thread2);
 
 In addition the user can join the first finishing thread.
@@ -569,7 +570,7 @@
     unsigned i = bith::conc_join_any(thread1, thread2);
 
 
-Evidently, thread_tuple can not be used when we needs dynamic creation or deletion. The __thread_group__ class allows to group dynamically threads.
+Evidently, thread_tuple cannot be used when we need dynamic creation or deletion. The __thread_group__ class allows grouping threads dynamically.
 
     {
         boost::thread_group tg;

Modified: sandbox/interthreads/libs/interthreads/doc/tutorial.qbk
==============================================================================
--- sandbox/interthreads/libs/interthreads/doc/tutorial.qbk (original)
+++ sandbox/interthreads/libs/interthreads/doc/tutorial.qbk 2009-03-01 18:14:49 EST (Sun, 01 Mar 2009)
@@ -57,7 +57,7 @@
 [section Key initialization]
 [/==========================================================================================]
 
-As the curent implementation use the address of the thread_specific_shared_ptr<> object, there is no need to do whatever to get the key.
+As the current implementation uses the address of the thread_specific_shared_ptr<> object, there is no need to do anything to get the key.
 
     bith::thread_specific_shared_ptr<myclass> ptr;
 
@@ -66,11 +66,11 @@
 [section Context initialization]
 [/==========================================================================================]
 
-Initially the pointer has a value of `NULL` in each thread, but the value for the
+Initially the pointer has a value of `0` in each thread, but the value for the
 current thread can be set using the `reset()` member functions.
 
 If the value of the pointer for the current thread is changed using `reset()`, then the previous value is destroyed by calling the
-deleter routine. Alternatively, the stored value can be reset to `NULL` and the prior value returned by calling the `release()`
+deleter routine. Alternatively, the stored value can be reset to `0` and the prior value returned by calling the `release()`
 member function, allowing the application to take back responsibility for destroying the object.
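The reset()/release() contract described here is the same one std::unique_ptr follows; a small illustration with the standard type (not the library's class):

```cpp
#include <memory>

// release() hands ownership back to the caller instead of destroying
// the stored object; the smart pointer itself becomes null.
int* release_demo() {
    std::unique_ptr<int> p(new int(5));
    int* raw = p.release();  // p no longer owns the int
    return raw;              // the caller is now responsible for deleting it
}
```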
 
 Initialization can be done
@@ -103,8 +103,7 @@
 [section Obtain the pointer to the thread-specific object on the current thread]
 [/==========================================================================================]
 
-All functions known from boost::thread_specific_ptr are available except release, and so its
-semantics from inside the thread.
+All functions known from boost::thread_specific_ptr are available except the release function.
 The value for the current thread can be obtained using the `get()` member function, or by using
 the `*` and `->` pointer deference operators.
 
@@ -123,7 +122,7 @@
 [section Obtain the pointer to the thread-specific object of another thread]
 [/==========================================================================================]
 
-Besides this another thread can get access to the data when it can get the thread::id by:
+Besides, another thread can get access to the data, given the thread::id, by:
 
     boost::thread th=bith::make_decorated_thread(func);
 
@@ -131,8 +130,8 @@
 
 where `foo()` is a function of `myclass`.
 
-This could work or not. the issue apears as we can get a reference to a thread before the threads has started,
-so the setting of the threads specific context could be not done yet. One way to manage with this error is to
+This could work or not. The issue appears because we can get a reference to a thread before the thread has started,
+so the setting of the thread-specific context may not have been done yet. One way to manage this error is to
 get the shared pointer and check if it contains something or not.
 
 
@@ -159,7 +158,7 @@
     ptr.wait_and_get(th->get_id())->foo();
 
 
-In order to ensure that the decorations have been called a cleaner and safer option is don't retunr the thread until it has been started.
+In order to ensure that the decorations have been called, a cleaner and safer option is not to return the thread until it has been started.
 This behavior is obtained each time the thread is created with an __AE__ decorator, as
 
     bith::basic_threader_decorator ae;
@@ -168,7 +167,7 @@
     // so we can access any thread_specific_shared_ptr of the created thread.
 
 The lifetime of the myclass instance is managed by a shared_ptr. One reference is held by the thread (by means of a tss), a second is
-held by the thread::id to shared_ptr<T> map and additional references might be held by other threads that obtained it by `*pmyclass[th]`.
+held by the thread::id to shared_ptr<T> map and additional references might be held by other threads, obtained by `*pmyclass[th]`.
 
 [endsect]
 
@@ -222,13 +221,13 @@
 [/==========================================================================================]
 
 We will use the implementation of the keep alive mechanism as tutorial for the thread decorators,
-thread specific shared pointers and the kepp alive mechanism itself.
+thread specific shared pointers and the keep alive mechanism itself.
 
 We want to detect situations in which a thread is looping or blocked on some component.
 The user needs to state when this mechanism is enabled or disabled.
 
 Since the only purpose is to find threads that don't work, the thread needs to say if it is alive to a controller.
-The controler request at predefined intervals if the thread is dead, and in this case it will call a user specific function
+The controller checks at predefined intervals whether the thread is dead, and in this case it will call a user-specific function
 which by default aborts the program.
 
 A thread is considered dead if, during a given period, the number of checkins is below a given threshold.
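The dead-thread rule can be sketched as a small state machine (hypothetical names; the real library keeps this state in thread_keep_alive_internal):

```cpp
#include <cstddef>

// Per-thread keep-alive state: check_point() bumps checkins;
// control() runs once per period and declares the thread dead when,
// over `periods` consecutive controls, fewer than `threshold`
// checkins were seen.
struct keep_alive_state {
    std::size_t periods;       // length of the observation window
    std::size_t threshold;     // minimum checkins required per window
    std::size_t periods_left;
    std::size_t checkins;

    keep_alive_state(std::size_t p, std::size_t t)
        : periods(p), threshold(t), periods_left(p), checkins(0) {}

    void check_point() { ++checkins; }

    // Returns true if the thread is declared dead at the end of a window.
    bool control() {
        if (--periods_left != 0) return false;
        bool dead = checkins < threshold;
        periods_left = periods;  // start a new window
        checkins = 0;
        return dead;
    }
};
```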
@@ -314,8 +313,8 @@
         instance_.reset(new thread_keep_alive_ctx());
     }
 
-The `keep_alive_mgr::initialize` function ensure just that the init function is called once using the `boost::call_once`.
-This `init` function create the instance of the `keep_alive_mgr` singleton.
+The `keep_alive_mgr::initialize` function just ensures that the init function is called once, using `boost::call_once`.
+This `init` function creates the instance of the `keep_alive_mgr` singleton.
 
     void keep_alive_mgr::initialize() {
         boost::call_once(flag_, init);
@@ -329,9 +328,9 @@
 [section:keep_alive_threads Which threads can be controlled?]
 [/==========================================================================================]
 
-As the keep alive mechanism use a thread decoration, the user needs to explicit calls the
+As the keep alive mechanism uses a thread decoration, the user needs to explicitly call the
 `bith::decorate` function at the beginning of the thread function or by wrapping the thread function.
-Instead of having a specific function to call or thread function wrapper the keep alive uses the functions
+Instead of having a specific function to call or a thread function wrapper to use, the keep alive mechanism uses the functions
 provided by the thread decorator (`bith::decorate` and `bith::thread_decorator`).
 So we must either call `bith::decorate` explicitly on the thread function
 
@@ -396,10 +395,10 @@
         }
     }
 
-If on the contrary we don't want to disable the keep alive mechanism, it will be interesting to do a
+If, on the contrary, we don't want to disable the keep alive mechanism, it can be useful to do a
 `boost::interruption_check_point()` just after the blocking task. In this way if the task takes too much time and
-the thread is declared dead, you let the possibility to manage the keep alive error by interrupting
-the dead thread, once the task is finished.
+the thread is declared dead, you keep the possibility of handling the keep alive error by interrupting
+the dead thread once the task is finished.
 
     void fct() {
         using boost::this_thread;
@@ -428,9 +427,9 @@
 [section:keep_alive_persistent Configuring the dead persistency]
 [/==========================================================================================]
 
-The default enabling parameters could be too restrictive in some cases. But the `enable_keep_alive` configure that with the two parameters.
+The default enabling parameters could be too restrictive in some cases, but `enable_keep_alive` can configure this with its two parameters.
 We can declare a thread dead when the thread has not done a number of checkins in a given period.
-This can be useful when you know the time a given task should take.
+This can be useful when one knows the time a given task should take.
 
     void fct() {
         using bith::this_thread;
@@ -463,7 +462,7 @@
 But how does all this work? We start with enablers/disablers.
 Enablers/disablers use RAII, so they can be nested and the context be restored on the destructor.
 At the construction they store the current state of the keep alive of this thread using the backup
-function and then they enable/disable the KA mechanism. On destruction they restore the backuped context.
+function and then they enable/disable the KA mechanism. On destruction they restore the backed up context.
 
     enable_keep_alive::enable_keep_alive(
             std::size_t periods, std::size_t checkins)
@@ -485,7 +484,7 @@
         detail::thread_keep_alive_ctx::instance()->restore(backup_);
     }
 
-These function are quite simple
+These functions are quite simple
 
     thread_keep_alive_internal* thread_keep_alive_ctx::backup(thread_keep_alive_internal* new_data) {
         thread_keep_alive_internal* the_backup=data_;
@@ -505,11 +504,11 @@
     }
 
 Note that there is no need to check if the `detail::thread_keep_alive_ctx::instance_`
-contains a poiter because we have ensured that at initialization time.
+contains a pointer because we have ensured that at initialization time.
 
 Next there is the central function `keep_alive_point()`. This function
 does nothing more than relaying the request to the specific context of this thread.
-This function just increase the number of `checkins_`.
+This function just increases the number of `checkins_`.
 
     void keep_alive_point() {
         detail::thread_keep_alive_ctx::instance()->check_point();
@@ -519,7 +518,7 @@
         ++data_->checkins_;
     }
 
-The `set_on_dead_thread()` do the same. This function just store the on dead action.
+The `set_on_dead_thread()` does the same. This function just stores the on-dead action.
 
     void set_on_dead_thread(on_dead_thread_type fct, thread* th) {
         detail::thread_keep_alive_ctx::instance()->set_on_dead_thread(fct, th);
@@ -535,19 +534,19 @@
 [section Access from the controller thread]
 [/==========================================================================================]
 
-Up to now we have see the use of `bith::thread_keep_alive_ctx` as a `boost::thread_specific_ptr`, i.e. it is used
+Up to now we have seen the use of `bith::thread_keep_alive_ctx` as a `boost::thread_specific_ptr`, i.e. it is used
 from the current thread.
 
 We will now see how the controller behaves. The single instance of the keep_alive_mgr has been created in the
 init function.
 
-The constructor just construct a thread with the loop function.
+The constructor just constructs a thread with the loop function.
 
     keep_alive_mgr() : end_(false), thread_(boost::bind(loop, boost::ref(end_))) {}
 
 
-The loop function will every second iterate over all the thread_keep_alive_ctx threads specific contexts asking them to control itselfs.
-Note that as the map can be modified when threads are created or finish we nee to protect the iteration externally with a lock on the
+The loop function will iterate, every second, over all the thread_keep_alive_ctx thread-specific contexts, asking them to control themselves.
+Note that as the map can be modified when threads are created or finished, we need to protect the iteration externally with a lock on the
 protecting mutex.
 
     static void loop(bool& end) {
@@ -568,7 +567,7 @@
 
 The thread loops until the end variable is true. In order to stop this thread properly, we will use the destructor of the singleton instance.
 This end variable is a reference to a variable stored on the keep_alive_mgr context which
-has been initialized staticly. So its destrcutor will be called when the program finish.
+has been initialized statically, so its destructor will be called when the program finishes.
 So it is up to the destructor to set this variable and wait for the thread completion
 
     ~keep_alive_mgr() {
@@ -576,9 +575,9 @@
         thread_.join();
     }
 
-The thread_keep_alive_ctx::control function behaves as follows: if it is enabled decrease
-the number of remaining periods and if the thread is declared dead execute the on dead
-action and reset the checkings and periods.
+The thread_keep_alive_ctx::control function behaves as follows: if it is enabled, it decreases
+the number of remaining periods and if the thread is declared dead it executes the on dead
+action and resets the check-ins and periods.
 
     void control(thread::id id) {
         if (data_->enabled_) {
@@ -601,7 +600,7 @@
 [section:thread_tuple_launching Launching thread tuple]
 [/==========================================================================================]
 
-A new thread tuple is launched by passing a collection of object of some callable type that can be invoked with no parameters to the constructor.
+A new thread tuple is launched by passing to the constructor a collection of objects of some callable type that can be invoked with no parameters.
 These objects are then copied into internal storage, and invoked on the newly-created threads of execution.
 If the objects must not (or cannot) be copied, then `boost::ref` can be used to pass in a reference to the function object.
 In this case, the user of __thread_tuple__ must ensure that the referred-to object outlives the newly-created thread of execution.
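The same by-reference idiom exists in standard C++ with std::ref and std::thread; a sketch of the lifetime obligation the paragraph describes:

```cpp
#include <functional>
#include <thread>

struct counter {
    int n;
    counter() : n(0) {}
    void operator()() { ++n; }
};

// std::ref makes the new thread invoke the caller's object, not a copy,
// so the caller must keep `c` alive until the thread is joined.
int run_by_ref() {
    counter c;
    std::thread t(std::ref(c));
    t.join();
    return c.n;  // the thread incremented our object, not a copy of it
}
```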
@@ -627,7 +626,7 @@
       // this leads to undefined behaviour
 
 If you wish to construct an instance of __thread_tuple__ with a function or callable object that requires arguments to be supplied,
-this can NOT be done by passing additional arguments as is the case for threads to the __thread_tuple__ constructor, and you will need to use bind explicitlly.
+this can NOT be done by passing additional arguments as is the case for threads to the __thread_tuple__ constructor, and you will need to use bind explicitly.
 
     void find_the_question(int the_answer);
 
@@ -643,18 +642,18 @@
 [section:thread_tuple_exceptions Exceptions in thread functions]
 [/==========================================================================================]
 
-If the function or callable objects passed to the __thread_tuple__ constructor propagates an exception when invoked that is not of type
-__thread_interrupted__, `std::terminate()` is called.
+If the function or callable object passed to the __thread_tuple__ constructor propagates, when invoked, an exception that is not of type
+__thread_interrupted__, `std::terminate()` is called.
 
 [endsect]
 
 [section:thread_tuple_joining Joining and detaching]
 [/==========================================================================================]
 
-When the __thread_tuple__ object that represents a collection of threads of execution is destroyed the threads become ['detached].
-Once a threads are detached, they will continue executing until the invocation of the functions or callable objects supplied on construction have completed,
+When the __thread_tuple__ object that represents a collection of threads of execution is destroyed, the threads become ['detached].
+Once threads are detached, they will continue executing until the invocations of the functions or callable objects supplied on construction complete,
 or the program is terminated. The threads of a __thread_tuple__ can also be detached by explicitly invoking the detach member function on the __thread_tuple__
-object. In this case, all the threads of the __thread_tuple__ object ceases to represent the now-detached thread, and instead represents 'Not-a-Thread.
+object. In this case, the __thread_tuple__ object ceases to represent the now-detached threads, and instead represents __not_a_thread__.
 
 In order to wait for a tuple of threads of execution to finish, the __join__ or __timed_join__ member functions of the __thread_tuple__ object must be
 used.
@@ -662,16 +661,16 @@
 If the threads of execution represented by the __thread_tuple__ object have already completed, or
 the __thread_tuple__ objects represents __not_a_thread__, then __join__ returns immediately.
 __timed_join__ is similar, except that a call to __timed_join__ will also return if the threads being waited for
-does not complete when the specified time has elapsed.
+do not complete before the specified time has elapsed.
 
-There is also a the possibility to wait until the first thread completes, interrupting the rest of the threads.
+There is also a possibility to wait until the first thread completes, interrupting the rest of the threads.
 [endsect]
 
 [section:thread_tuple_interruption Interruption]
 [/==========================================================================================]
 
 A tuple of running threads can be ['interrupted] by invoking the __interrupt__ member function of the corresponding __thread_tuple__ object.
-When the interrupted threads next executes one of the specified __interruption_points__ (or if it is currently __blocked__ whilst executing one)
+When the interrupted threads next execute one of the specified __interruption_points__ (or if they are currently __blocked__ whilst executing one)
 with interruption enabled, then a __thread_interrupted__ exception will be thrown in the interrupted thread. If not caught,
 this will cause the execution of the interrupted thread to terminate. As with any other exception, the stack will be unwound, and
 destructors for objects of automatic storage duration will be executed.


Boost-Commit list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk