

Subject: Re: [Boost-users] Several novice questions concerning threading
From: Ovanes Markarian (om_boost_at_[hidden])
Date: 2012-04-11 16:22:58


Hi!

Here is a solution to your question. It could have been shorter with lambda
expressions, but this way has a better learning effect (a short lambda-based
sketch follows the code below).

On Wed, Apr 11, 2012 at 9:09 PM, Master <master.huricane_at_[hidden]> wrote:

> thank you very much for all the info. i really do appreciate it :)
> please have a look at this link:
> http://docs.wxwidgets.org/trunk/classwx_thread.html
> these are some of the features i would like to have when using boost.
> unfortunately i think boost threading has yet to be completed, especially
> for use by novices like me.
> all that management is really hard, and i wanted to use boost to avoid
> digging into any os-related apis myself :( it seems i need to get all i
> need from the OS API.
> i have a C#.net background; working with threads in c# was indeed simple,
> but this much complexity and management work to be done is really
> a lot for me.
> i need to reorganize my thoughts on how i should go about it.
> if only i had examples showing the real-world problems and issues of using
> threads, i would be fine.
> anyway, about the program and why i tried mixing the stuff: please have a
> look here: http://en.highscore.de/cpp/boost/ (the exercise section).
> i was asked to speed things up, and there i said to myself: imagine that
> you faced a situation in which you couldn't simply separate the critical
> region. so i tried to teach myself ways of coping with such situations.
>

#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/cstdint.hpp>
#include <iostream>

#include <boost/thread.hpp>
#include <boost/ref.hpp>

void sum_single_threaded(boost::uint64_t iterations)
{
  boost::posix_time::ptime start =
    boost::posix_time::microsec_clock::local_time();

  boost::uint64_t sum = 0;
  for (boost::uint64_t i = 0; i < iterations; ++i)
    sum += i;

  boost::posix_time::ptime end =
    boost::posix_time::microsec_clock::local_time();
  std::cout << end - start << std::endl;

  std::cout << sum << std::endl;
}

// calculate the sum of the half-open range [begin, end)
struct calc_sum
{
  typedef boost::uint64_t result_type;

  calc_sum(boost::uint64_t begin, boost::uint64_t end)
    : begin_(begin)
    , end_(end)
  {}

  result_type operator()()
  {
    result_type sum=0;
    for(; begin_<end_; ++begin_)
      sum+=begin_;

    return sum;
  }

private:
  boost::uint64_t begin_;
  const boost::uint64_t end_;
};

void sum_multi_threaded(boost::uint64_t iterations)
{
  using namespace boost;
  posix_time::ptime start = posix_time::microsec_clock::local_time();

  // create 2 tasks to be executed in the threads
  packaged_task<uint64_t>
    first_half(calc_sum(0, iterations/2))
  , second_half(calc_sum(iterations/2, iterations))
  ;

  //create 2 threads which are going to execute the tasks
  thread t1(ref(first_half)), t2(ref(second_half));

  // retrieve 2 future objects to wait for results
  unique_future<uint64_t>
    f1(first_half.get_future())
  , f2(second_half.get_future())
  ;

  // will automatically wait if sums are not ready
  uint64_t sum = f1.get()+f2.get();

  posix_time::ptime end = posix_time::microsec_clock::local_time();
  std::cout << end - start << std::endl;

  std::cout << sum << std::endl;
}

int main()
{
  const boost::uint64_t iterations = 1000000000;
  sum_single_threaded(iterations);
  sum_multi_threaded(iterations);
}
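
As mentioned above, this could be written more compactly with C++11 lambdas
instead of the hand-written calc_sum function object. A rough, untested sketch
(assuming a C++11-capable compiler; the Boost calls are the same as above):

boost::uint64_t sum_range(boost::uint64_t begin, boost::uint64_t end)
{
  boost::uint64_t sum = 0;
  for (; begin < end; ++begin)
    sum += begin;
  return sum;
}

void sum_multi_threaded_lambda(boost::uint64_t iterations)
{
  using namespace boost;

  // the lambdas replace the calc_sum function object
  packaged_task<uint64_t>
    first_half([=] { return sum_range(0, iterations / 2); })
  , second_half([=] { return sum_range(iterations / 2, iterations); })
  ;

  thread t1(ref(first_half)), t2(ref(second_half));

  unique_future<uint64_t>
    f1(first_half.get_future())
  , f2(second_half.get_future())
  ;

  // blocks until both halves are available
  std::cout << f1.get() + f2.get() << std::endl;

  t1.join();
  t2.join();
}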

> again i imagined a situation in which there are a couple of reader threads
> reading a buffer's output while, simultaneously, a writer thread writes
> something to the buffer; that is where i thought of thread priorities.
> again i imagined a situation in which there are several inputs (messages)
> which need to be decoded and then taken care of, and then i needed to
> make sure that those messages are delivered successfully or their respective
> goals are carried out. that's where i tried to learn something about thread
> status, to check for their success or failure.
> and about the sequential execution, i should say that's because i
> imagined a situation where there are two groups of threads, one doing the
> reading chores and the other one doing the writing chores;
> i wanted to make sure that no reader gets ahead of a writer thread, so that
> it avoids any failure.
>
> again thank you for your time :)
>
> On Wed, Apr 11, 2012 at 10:05 PM, Ovanes Markarian <om_boost_at_[hidden]
> > wrote:
>
>>
>>
>> On Wed, Apr 11, 2012 at 6:39 PM, Master <master.huricane_at_[hidden]> wrote:
>>
>>> Thank you very much :)
>>> to keep track of my created threads, i decided to use thread_group
>>> and accumulate them there. i came up with something like this:
>>>
>>> boost::thread_group threadstore;
>>>
>>> threadstore.add_thread(&thread);
>>> threadstore.add_thread(&thread2);
>>> BOOST_FOREACH(boost::thread t ,threadstore.threads)
>>> {
>>> cout<<t.get_id();
>>> }
>>>
>> As far as I can see from the docs, thread_group (
>> http://www.boost.org/doc/libs/1_49_0/doc/html/thread/thread_management.html#thread.thread_management.threadgroup) does
>> not expose a public member threads. Even if it is there, it is an
>> implementation detail which can change (or be made private) with any release
>> of Boost, and your code will break. I suggest you implement the approach
>> suggested by Vicente.
>>
>>
>>> well, it didn't compile! before that, i tried using something like this,
>>> which failed nonetheless.
>>>
>>> vector<boost::thread> threadstore;
>>>
>>> threadstore.push_back(thread);
>>> threadstore.push_back(thread2);
>>> BOOST_FOREACH(boost::thread t ,threadstore)
>>> {
>>> cout<<t.get_id();
>>> }
>>>
>> Thread group is just a management construct; it does not allow you to
>> iterate over the threads. BOOST_FOREACH supports STL-like containers, and
>> thread_group does not provide begin/end iterators.
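
For reference, the container approach could look roughly like this (a sketch;
boost::thread is not copyable, which is also why your vector<boost::thread>
version failed, so store smart pointers instead):

#include <iostream>
#include <vector>
#include <boost/thread.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>
#include <boost/foreach.hpp>

void work() { /* ... */ }

int main()
{
  // boost::thread is noncopyable, so keep shared_ptrs in the container
  std::vector<boost::shared_ptr<boost::thread> > threadstore;

  threadstore.push_back(boost::make_shared<boost::thread>(work));
  threadstore.push_back(boost::make_shared<boost::thread>(work));

  BOOST_FOREACH(boost::shared_ptr<boost::thread> const& t, threadstore)
    std::cout << t->get_id() << std::endl;

  BOOST_FOREACH(boost::shared_ptr<boost::thread> const& t, threadstore)
    t->join();
}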
>>
>>
>>>
>>> i'm clueless about the cause.
>>> the threadstore has a member called threads, which i don't know how to
>>> work with; there is also another member named m,
>>> which i have no clue what it is, nor do i know where it came from or
>>> what i can do with it!
>>> i couldn't find any information on these members in the boost::thread
>>> documentation either.
>>>
>> This is an implementation detail. It should be transparent to you; don't
>> use it.
>>
>>
>>> for that example i posted, i moved the mutex inside the loop and then
>>> used a sleep() call for a couple of microseconds and got it working (i
>>> mean now both threads seem to work as i expected them to).
>>> but i want to know if we have something like, let's say, a timed_lock kind
>>> of lock, so that a thread would only hold a mutex for a specified time, and
>>> when the time slice given to timed_lock() passes, the
>>> aforementioned thread releases the mutex,
>>> and thus other thread(s) can get that mutex, and so there would be no need
>>> for a sleep() for a thread to wait till its time slice finishes and thus
>>> releases the mutex. the current timed_lock tries to obtain the mutex within
>>> a specified time, which is not what i need.
>>>
>> I think you are mixing something up here. Generally speaking, there are
>> critical regions in a parallel application. These regions must be protected
>> by synchronization objects so that only a single thread may modify that
>> region at a time. Now what you ask for is: I know there is a long critical
>> region, but in the middle of the critical region I want a break, and another
>> thread should run instead. If that is so, make two regions; there is no
>> simple way to implement something like that. Maybe with transactions: you
>> interrupt the execution, roll back the calculated state, let others run,
>> and start the calculation again.
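
To make the timed_lock point concrete: it bounds the time a thread spends
*waiting to acquire* the mutex, it never takes the mutex away from the thread
currently holding it. A minimal sketch:

#include <iostream>
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

boost::timed_mutex m;

void worker()
{
  // wait at most 100 ms to acquire the mutex, then give up
  if (m.timed_lock(boost::posix_time::milliseconds(100)))
  {
    // ... critical section ...
    m.unlock();
  }
  else
  {
    std::cout << "could not acquire the mutex in time" << std::endl;
  }
}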
>>
>>
>>> i remember i tried to use yield() to achieve such a possibility
>>> (releasing the mutex as soon as possible). i think it kind of worked; it
>>> gave a similar result to when i used sleep().
>>>
>> How should that help if the other thread wants to enter exactly the critical
>> section which is locked? Yield just gives the remaining CPU time slice of
>> the thread back to the scheduler, and the scheduler decides who runs next. It
>> can even happen that the same thread runs again, if its priority is the highest.
>>
>>
>>
>>> here is the code which i wrote to actually speed up the sum computation,
>>> which i think didn't give any speedup! can you see where the problem is?
>>> //in the name of GOD
>>> //Seyyed Hossein Hasan Pour
>>> //Working with Boost::threads
>>> #define BOOST_THREAD_USE_LIB
>>> #include <iostream>
>>> #include <boost/date_time/posix_time/posix_time.hpp>
>>>
>>> #include <boost/thread.hpp>
>>> using namespace std;
>>>
>>> boost::uint64_t i = 0;
>>> boost::uint64_t sum=0;
>>> boost::mutex mutex;
>>>
>>> void IteratorFunc()
>>> {
>>>
>>> for (i ; i<100000; i++)
>>> {
>>> mutex.lock();
>>> sum+=i;
>>> cout<<i<<"\t"<<boost::this_thread::get_id()<<endl;
>>> mutex.unlock();
>>>
>>> //boost::this_thread::sleep(boost::posix_time::microseconds(200));
>>> boost::this_thread::yield();
>>> }
>>>
>>> }
>>>
>>> int main()
>>> {
>>>
>>> boost::posix_time::ptime start =
>>> boost::posix_time::microsec_clock::local_time();
>>> boost::thread thread(IteratorFunc);
>>> boost::thread thread2(IteratorFunc);
>>>
>>> // boost::thread_group threadstore;
>>> //
>>> // threadstore.add_thread(&thread);
>>> // threadstore.add_thread(&thread2);
>>> //
>>> // BOOST_FOREACH(boost::thread t ,threadstore.threads)
>>> // {
>>> // cout<<t.get_id();
>>> // }
>>>
>>> boost::posix_time::ptime end =
>>> boost::posix_time::microsec_clock::local_time();
>>>
>>> thread.join();
>>> thread2.join();
>>>
>>> cout << "sum =\t" << sum<< "\t"<<end-start<<endl;
>>> return 0;
>>> }
>>>
>> The speedup would rely on a lock-free implementation and on dividing the
>> work into independent pieces. How is sleep supposed to speed anything up if
>> nothing is calculated in that time period? The CPU just stands still. The
>> sum can be sped up greatly. The keyword here is work stealing, first proposed in Cilk (
>> http://en.wikipedia.org/wiki/Cilk). Intel acquired Cilk. For C++ you can
>> use Intel Threading Building Blocks, which has a parallel_reduce algorithm.
>> Just download the tutorial:
>> http://threadingbuildingblocks.org/uploads/81/91/Latest%20Open%20Source%20Documentation/Tutorial.pdf
>> and take a look at chapter 3.3.
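
To give an idea what that looks like, the sum from above written with TBB's
parallel_reduce might look roughly like this (a sketch, assuming TBB is
installed and a C++11 compiler for the lambdas; see the tutorial for details):

#include <iostream>
#include <boost/cstdint.hpp>
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>

int main()
{
  const boost::uint64_t n = 1000000000ULL;

  // TBB splits [0, n) into chunks, sums them in parallel (work stealing)
  // and combines the partial sums with the second lambda
  boost::uint64_t sum = tbb::parallel_reduce(
    tbb::blocked_range<boost::uint64_t>(0, n),
    boost::uint64_t(0),
    [](tbb::blocked_range<boost::uint64_t> const& r, boost::uint64_t partial) {
      for (boost::uint64_t i = r.begin(); i != r.end(); ++i)
        partial += i;
      return partial;
    },
    [](boost::uint64_t a, boost::uint64_t b) { return a + b; });

  std::cout << sum << std::endl;
}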
>>
>>
>>>
>>> i also want to know if we have a capability where i can specify the
>>> priority of a thread or a group of threads relative to the other threads.
>>>
>> You need to get the native handle and use the native OS functions.
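
For example, on Windows that would be something along these lines (a sketch,
untested; on POSIX you would use pthread_setschedparam on the native handle
instead):

#include <boost/thread.hpp>
#include <windows.h>

void work() { /* ... */ }

int main()
{
  boost::thread t(work);

  // Boost does not expose priorities, so go through the native handle
  ::SetThreadPriority(t.native_handle(), THREAD_PRIORITY_ABOVE_NORMAL);

  t.join();
}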
>>
>>
>>> let me explain it a little more. by that i mean: suppose we have a couple
>>> of reader threads and one or two writer threads. is there any kind of
>>> possibility that i can grant the writer thread(s) more priority in terms of
>>> accessing a resource (by obtaining the mutex more often? or the writer
>>> thread(s) denying access to readers in some cases?) or a part of memory
>>> (e.g. an array of some kind)?
>>> if it is possible, how can i achieve it?
>>>
>> Boost provides basic reader/writer lock concepts. For finer-grained
>> concepts I think you will need to implement them yourself. But usually a
>> reader/writer lock is implemented so that if a writer wants to enter the
>> critical section, no further readers will enter it before the writer is
>> done, and the writer has to wait until all currently active readers are done.
>> Actually, the speedup here would be to raise the priority of the readers to
>> the priority of the writer.
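
The reader/writer primitive in Boost is boost::shared_mutex; a minimal sketch
of how readers and a writer would use it:

#include <vector>
#include <boost/thread.hpp>
#include <boost/thread/shared_mutex.hpp>

boost::shared_mutex rw_mutex;
std::vector<int> buffer;

int read_last()
{
  // many readers may hold the shared lock at the same time
  boost::shared_lock<boost::shared_mutex> lock(rw_mutex);
  return buffer.empty() ? 0 : buffer.back();
}

void write(int value)
{
  // the writer needs exclusive access; readers are kept out while it holds the lock
  boost::unique_lock<boost::shared_mutex> lock(rw_mutex);
  buffer.push_back(value);
}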
>>
>>
>>> is it possible that i can know if a thread was successful in doing what
>>> it was sent to do?
>>>
>> Actually, you should decouple threads from work and consider using
>> futures and thread pools. You are not interested in the thread's state, but
>> in the result which was calculated in parallel. So you wait until the result
>> is available and verify it.
>>
>>
>>> can i specify that a thread or a group of threads executes in a specific
>>> order? for example, thread one must always execute first and thread two
>>> must always follow thread one. do we have such a thing?
>>>
>> Again, the future pattern would help. You put work items into a queue;
>> they are calculated in parallel but taken from the queue in a predefined
>> order. You can also use a priority queue to sort the items according to
>> their priority when they are queued. On the other hand, you are asking here
>> for sequential execution. Why do you need threads then?
>>
>>
>>> do i have any means of talking to threads? checking the status of a
>>> specific thread? or a group of threads?
>>>
>> What status would you like to check?
>>
>>
>>> can i know by any means which threads are blocked and which are
>>> not?
>>>
>> Blocked in terms of what? Waiting to enter a critical section, or
>> currently not running?
>>
>>
>>> Thank you so much for your time, and please excuse me for such newbie
>>> and yet long questions.
>>> i really do appreciate your help and time :)
>>> Regards
>>> Hossein
>>>
>>>
>>>
>>> On Wed, Apr 11, 2012 at 7:38 PM, Vicente J. Botet Escriba <
>>> vicente.botet_at_[hidden]> wrote:
>>>
>>>> On 11/04/12 13:31, Master wrote:
>>>>
>>>> Hello all.
>>>> i am a newbie to the boost community. i recently started learning
>>>> about threads in boost. now there are some questions i would like to ask:
>>>>
>>>> Welcome.
>>>>
>>>> 1. where can i find examples showing practical uses of boost::thread
>>>> features?
>>>>
>>>> The documentation doesn't contain many examples. You can take a
>>>> look at the libs/thread/example and tutorial directories :(
>>>>
>>>> 2. how can i get all the thread IDs issued by me in my app?
>>>>
>>>> No direct way other than storing them in a container. What is your use
>>>> case?
>>>>
>>>> 3.how can i iterate through running threads in my app ?
>>>>
>>>> No direct way other than storing a thread pointer in a container. What
>>>> is your use case?
>>>>
>>>> 4. is there any kind of means to get all the running threads using the
>>>> boost library? if so, what's the class? if not, how can i do that?
>>>>
>>>> See above. I think that you need to specialize the thread class so that
>>>> it inserts a handle to the created thread into a container at construction
>>>> time and removes it at destruction time.
>>>>
>>>> 5.can i resume a thread after pausing it ? ( how can i pause a
>>>> thread? )
>>>>
>>>> Boost.Thread doesn't provide fibers or resumable threads. There is
>>>> Boost.Fiber for that purpose (not yet in Boost).
>>>>
>>>> 6. how can i share a variable between two or more threads? suppose i
>>>> have a loop and i want two threads to simultaneously iterate through it:
>>>> if thread1 counted to 3, thread2 continues from 4, and so on.
>>>> i already tried
>>>>
>>>> You need to protect the access to the loop index variable 'i' with a
>>>> mutex, as you did with sum (a sketch of this follows the quoted code below).
>>>>
>>>> HTH,
>>>> Vicente
>>>>
>>>> ------
>>>>
>>>>> what is wrong with my sample app ?
>>>>> #include <iostream>
>>>>> #include <boost/thread.hpp>
>>>>> using namespace std;
>>>>> using namespace boost;
>>>>>
>>>>> mutex bmutex;
>>>>> int i=0;
>>>>> int sum=0;
>>>>> void IteratorFunc(int threadid)
>>>>> {
>>>>> for ( ; i<25 ; i++)
>>>>> {
>>>>> lock_guard<mutex> locker(bmutex);
>>>>>
>>>>> cout<<"\t"<<threadid<<"\t"<<this_thread::get_id()<<"\t"<<i<<"\n";
>>>>> sum+=i;
>>>>> }
>>>>> }
>>>>>
>>>>> int main()
>>>>> {
>>>>> //boost::posix_time::ptime start =
>>>>> boost::posix_time::microsec_clock::local_time();
>>>>>
>>>>> thread thrd(IteratorFunc,1);
>>>>> thread thrd2(IteratorFunc,2);
>>>>>
>>>>> cout<<sum;
>>>>> thrd.join();
>>>>> thrd2.join();
>>>>> }
>>>>
>>>>
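
Following Vicente's last point, protecting the shared loop index together with
the sum would look roughly like this (a sketch, only the thread function shown;
main should also join both threads before printing sum):

void IteratorFunc(int threadid)
{
  for (;;)
  {
    lock_guard<mutex> locker(bmutex);
    if (i >= 25)              // test the shared index while holding the lock
      break;
    cout << "\t" << threadid << "\t" << this_thread::get_id() << "\t" << i << "\n";
    sum += i;
    ++i;                      // advance the shared index under the same lock
  }
}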
Regards,
Ovanes


