
Subject: [Boost-users] [interprocess] theoretical questions about interprocess conditions
From: Kishalay Kundu (kishalay.kundu_at_[hidden])
Date: 2009-09-15 07:55:03


Hi,

I am designing a multithreaded application that uses
interprocess_condition variables as part of my synchronization
process. I have some questions about the internal workings of
interprocess_condition and would be grateful if someone could provide
feedback on my general design. My application is designed such that
there are several threads (say A, B, C & D) that all access different
chunks of data asynchronously, based on individual conditions (say cA,
cB, cC, cD). However, each chunk needs to be accessed in sequence. So,
for a single chunk of data, the sequence would be as follows (a rough
sketch of the shared state is given after the list):
1. A waits on condition cA, executes its code, then issues a
notification for condition cB.
2. B waits on condition cB, executes its code, then issues a
notification for condition cC.
3. C waits on condition cC, executes its code, then issues a
notification for condition cD.
4. D waits on condition cD, executes its code, then issues a
notification for condition cA.
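
For reference, the shared state I have in mind looks roughly like this
(the struct and member names are purely illustrative):

#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>

// one mutex guarding a chunk, plus one condition per pipeline stage
struct chunk_sync
{
  boost::interprocess::interprocess_mutex     my_mutex;
  boost::interprocess::interprocess_condition cA, cB, cC, cD;
};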

From the examples on the website, it looks like one can use
scoped_lock and interprocess_condition in the following manner:
// example code for thread B
void thread_func( ... )
{
  // wait until A signals that this chunk is ready for B
  boost::interprocess::scoped_lock< boost::interprocess::interprocess_mutex >
      lock( my_mutex );
  cB.wait( lock );

  // work code

  // notify next thread (C)
  cC.notify_all( );
}

In this case, thread B will queue up on my_mutex, wait for thread A to
notify it that condition cB is fulfilled, and then take ownership of
the mutex.

Question 1: If thread B locks my_mutex after thread A has already
issued the cB notification, will B still be able to recognize this and
take ownership of the mutex? My design is such that no other thread is
actually waiting on cB. If B does not recognize the condition, the
whole pipeline will stall (since B will never issue the notification
on cC). Is this a correct assessment?
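
To make the scenario concrete, this is the interleaving on thread A's
side that I'm worried about (same my_mutex and cB as above):

// thread A (illustrative)
void thread_A_func( ... )
{
  boost::interprocess::scoped_lock< boost::interprocess::interprocess_mutex >
      lock( my_mutex );
  // ... A's work on the chunk ...

  cB.notify_all( );  // what if B has not yet reached cB.wait() at this point?
                     // Is the notification remembered, or is it simply lost?
}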

Question 2: Below is an example of the code I'm using, and I'd like to
know if it will produce the behavior I'm looking for. The design is
based on the fact that different threads perform different operations
on the data, and they do these operations in a particular order (e.g.
A, then B, then C, then D). I divide the data up into chunks so that
every thread is working on some part of the data instead of being held
up. I use try_to_lock so that if a thread doesn't get an immediate
lock, it just moves on to the next chunk of data and sees if it can
work on that. Is this an efficient way to make use of my resources?

// example function for thread B
void thread_func( ... )
{
  // try to lock this chunk's mutex; if it is already held, skip the chunk
  boost::interprocess::scoped_lock< boost::interprocess::interprocess_mutex >
      my_lock( my_mutex, boost::interprocess::try_to_lock );
  if( my_lock ){
    cB.wait( my_lock );
    // do some work

    cC.notify_all( );
  }
}
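
For completeness, the per-chunk loop I have in mind looks roughly like
this (it reuses the illustrative chunk_sync struct from earlier; the
vector of chunks is just a placeholder for my real bookkeeping):

#include <boost/interprocess/sync/scoped_lock.hpp>
#include <cstddef>
#include <vector>

// illustrative loop for thread B: skip chunks whose mutex is already taken
void thread_B_loop( std::vector< chunk_sync* > & chunks )
{
  for( std::size_t i = 0; i < chunks.size( ); ++i ){
    boost::interprocess::scoped_lock< boost::interprocess::interprocess_mutex >
        my_lock( chunks[i]->my_mutex, boost::interprocess::try_to_lock );
    if( my_lock ){
      chunks[i]->cB.wait( my_lock );
      // do B's work on chunk i

      chunks[i]->cC.notify_all( );
    }
    // otherwise another thread holds this chunk; move on to the next one
  }
}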

Thanks in advance. I truly appreciate the help.

Kish

