Subject: [Threads-devel] How to implement a sequential, dual condition multi-thread algorithm
From: David White (dawhite32_at_[hidden])
Date: 2010-08-16 03:36:14


Dear boost-thread and std::queue experts,

I would be grateful for your advice on the best way to implement a
sequential, dual condition multi-thread algorithm.

For computationally demanding sequential processing tasks, I have
developed a multi-threaded class which handles the processing in three
threads. Leaving aside the details of the algorithm, the three threads
are required as follows:

thread A: performs processing of 'n' blocks in a forward run (e.g. 1, 2, 3, 4, 5)
thread B: performs processing of 'n' blocks in a reverse run (e.g. 5, 4, 3, 2, 1)
thread C: performs processing of each of the 'n' blocks once both its
forward and reverse runs are complete.
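
For concreteness, the three runs are launched roughly as follows (a
sketch only; 'block_processor' and its run methods are placeholders
for my actual class):

#include <boost/thread.hpp>
#include <boost/bind.hpp>

// Sketch only: 'block_processor' stands in for the real class.
struct block_processor {
  void forward_run();  // thread A: blocks 1, 2, ..., n
  void reverse_run();  // thread B: blocks n, ..., 2, 1
  void combine_run();  // thread C: blocks whose fwd and rev runs are done
};

void run_all(block_processor& proc)
{
  boost::thread a(boost::bind(&block_processor::forward_run, &proc));
  boost::thread b(boost::bind(&block_processor::reverse_run, &proc));
  boost::thread c(boost::bind(&block_processor::combine_run, &proc));

  a.join();
  b.join();
  c.join();
}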

Using vector, I maintain three copies of 'n' blocks so as to enable
threads A, B and C to operate on each block independently. Again, I
will omit details as to why this is done this way.

The problem for which I am seeking your advice is this: thread C
can commence processing block 'i' only when both the forward and reverse
processing runs (threads A and B) have completed block 'i'.

Hence, in the above example, one can intuitively see that block 3 will
(most likely) be the first to be processed, then 2, 4, 1 and 5. Of
course, the order is dependent upon thread execution speed.

I have implemented this to date as follows:

Using std::vector<int>, I manage a forward and a reverse list of
processed blocks. Each time the forward process has completed
processing block 'i', 'i' is added to the forward vector; likewise for
the reverse process. A thread-safe class (with mutex::scoped_lock) is used to
handle vector manipulation from threads A and B.

...
private:
  // Block indices completed by the forward (A) and reverse (B) runs.
  std::vector<int> processed_blocks_fwd;
  std::vector<int> processed_blocks_rev;
  mutable boost::mutex forward_mutex, reverse_mutex;

public:
  // Called by thread A after it finishes block 'i'.
  void add_block_forward(int block) {
    boost::mutex::scoped_lock lock(forward_mutex);
    processed_blocks_fwd.push_back(block);
  }
  // Called by thread B after it finishes block 'i'.
  void add_block_reverse(int block) {
    boost::mutex::scoped_lock lock(reverse_mutex);
    processed_blocks_rev.push_back(block);
  }
...

Immediately after adding block 'i' to the forward vector in thread A,
the reverse vector is searched for block 'i' to see whether thread C
can process block 'i'. The same happens for reverse in thread B, except that
the forward vector is searched. binary_search is used since both vectors
will always be sorted (the reverse vector in descending order, hence the
greater<> comparator below).

...
  // Has thread A already completed this block? (fwd vector is ascending)
  bool forward_block_processed(int block) const {
    boost::mutex::scoped_lock lock(forward_mutex);
    return std::binary_search(processed_blocks_fwd.begin(),
                              processed_blocks_fwd.end(), block);
  }
  // Has thread B already completed this block? (rev vector is descending)
  bool reverse_block_processed(int block) const {
    boost::mutex::scoped_lock lock(reverse_mutex);
    return std::binary_search(processed_blocks_rev.begin(),
                              processed_blocks_rev.end(), block,
                              std::greater<int>());
  }
...

Thus, if thread C can commence processing of block 'i' (such as the
case when 'i' has just been processed in thread A and
reverse_block_processed(i) returns true), then 'i' is pushed onto a
queue and a condition variable is notified with notify_one. Otherwise
thread C waits on the condition variable. Thread C also
waits if the queue is empty.
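
In code, the hand-off from thread A looks roughly like this (a sketch
only; 'ready_queue' is the thread-safe queue described below, and
thread B does the mirror image of it):

// Sketch only: called by thread A when it has finished block 'i'.
// Thread B does the mirror image using add_block_reverse() and
// forward_block_processed().
void forward_block_done(int i) {
  add_block_forward(i);
  if (reverse_block_processed(i)) // thread B has already finished 'i'
    ready_queue.push(i);          // wakes thread C via notify_one
}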

To notify execution of thread C, I use a thread-safe queue as
described in the excellent article posted here:
http://www.justsoftwaresolutions.co.uk/threading/implementing-a-thread-safe-queue-using-condition-variables.html
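
For reference, that queue has roughly the following shape (a simplified
sketch of the condition-variable approach from the article; see the
link for the full version):

#include <queue>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

// Simplified sketch of the condition-variable queue from the article above.
template<typename Data>
class concurrent_queue
{
  std::queue<Data> the_queue;
  mutable boost::mutex the_mutex;
  boost::condition_variable the_condition_variable;

public:
  void push(const Data& data)
  {
    boost::mutex::scoped_lock lock(the_mutex);
    the_queue.push(data);
    lock.unlock();
    the_condition_variable.notify_one(); // wake thread C if it is waiting
  }

  bool empty() const
  {
    boost::mutex::scoped_lock lock(the_mutex);
    return the_queue.empty();
  }

  bool try_pop(Data& popped_value)
  {
    boost::mutex::scoped_lock lock(the_mutex);
    if (the_queue.empty())
      return false;
    popped_value = the_queue.front();
    the_queue.pop();
    return true;
  }

  void wait_and_pop(Data& popped_value)
  {
    boost::mutex::scoped_lock lock(the_mutex);
    while (the_queue.empty())
      the_condition_variable.wait(lock); // releases the lock while waiting
    popped_value = the_queue.front();
    the_queue.pop();
  }
};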

Thread C terminates when the queue is empty, or when thread A or B throws
an exception. In the empty-queue case, thread C can only exit once
execution of both threads A and B has ended.
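
Roughly, thread C's loop looks like this (a sketch only;
producers_finished() stands for a thread-safe check, e.g. a
mutex-guarded counter, that becomes true once both threads A and B have
returned or thrown, and process_combined() is thread C's per-block work):

// Sketch only: thread C's main loop. producers_finished() and
// process_combined() are placeholders as described above.
void combine_run() {
  for (;;) {
    bool done = producers_finished(); // read the flag *before* draining,
                                      // so no block pushed earlier is lost
    int i;
    while (ready_queue.try_pop(i))
      process_combined(i);            // both fwd and rev runs of 'i' are complete
    if (done)
      break;                          // queue drained and nothing more can arrive
    boost::this_thread::yield();      // (or block on the queue's condition variable)
  }
}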

Whilst the vector arrangement is fast and seems to work rather
well, I can't help feeling that the vector push_back and
binary_search bookkeeping in threads A and B is a little cumbersome,
"mechanical" and potentially prone to deadlock.

My question is therefore: can anyone suggest a better way of managing
the conditional execution of thread C for block 'i' once threads A
and B have both completed processing block 'i'?

In advance, thank you for your thoughts and suggestions.

Kind regards, David

