Boost Users :
Subject: Re: [Boost-users] [Threads] How to keep number of running thread constant, performing different operations
From: Lars Viklund (zao_at_[hidden])
Date: 2011-06-17 05:10:43
On Fri, Jun 17, 2011 at 10:12:47AM +0200, Ovanes Markarian wrote:
> On Fri, Jun 17, 2011 at 9:05 AM, Alessandro Candini <candini_at_[hidden]> wrote:
> > I have different threads which have to work on completely different input
> > and output data (no critical sections): one atomic operation per thread,
> > each with a different execution time, but all making intense use of the
> > CPU.
> > Let's say I have 10 operations to perform (10 threads): I would like to
> > run only 2 threads concurrently because of resource consumption.
> >
> > My problem is that when a thread ends its execution, I would like to
> > immediately start another thread performing operation 3, so that 2 threads
> > are constantly working, and so on until all operations are done.
> >
> > How can I achieve this? I thought of inserting my threads into a
> > vector...but I have no idea how to start and join them to obtain what is
> > described above.
> >
> > Can anyone post me a little example?
> >
> > Thanks in advance.
> >
>
> I think you should use a thread pool. You can extend the pool with more
> threads without impacting other parts of the application. Take a look at
> this thread pool implementation:
> http://threadpool.sourceforge.net/tutorial/intro.html
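For reference, the basic usage pattern from that tutorial looks roughly like
this (an untested sketch; first_task and second_task are placeholders for
your own operations, and the exact header may vary by version):

    #include "threadpool.hpp"
    using namespace boost::threadpool;

    void first_task()  { /* operation 1 */ }
    void second_task() { /* operation 2 */ }

    int main()
    {
        // A FIFO pool with two worker threads.
        pool tp(2);

        // Schedule tasks; at most two execute at any one time.
        tp.schedule(&first_task);
        tp.schedule(&second_task);

        // Block until all scheduled tasks have finished.
        tp.wait();
    }
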
Or you could stick to the things that are in Boost, like Boost.Asio.
Create an asio::io_service;
create an asio::io_service::work to keep the workers alive;
start N threads, each running asio::io_service::run();
enqueue work by invoking asio::io_service::post() with your tasks.
When you're done, destroy the 'work' object; the run() calls will then
return once the remaining queued tasks have finished.
For managing the N threads, you can use a thread_group.
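Putting the pieces together, a minimal sketch (untested; uses the classic
io_service API of that era, and do_operation is a placeholder for your real
work):

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/scoped_ptr.hpp>
    #include <boost/thread.hpp>
    #include <iostream>

    // Placeholder for one of the ten CPU-bound operations.
    void do_operation(int id)
    {
        std::cout << "operation " << id << " finished\n";
    }

    int main()
    {
        boost::asio::io_service io;

        // The work object keeps the workers' run() calls from returning
        // while the queue is momentarily empty.
        boost::scoped_ptr<boost::asio::io_service::work>
            work(new boost::asio::io_service::work(io));

        // Start N = 2 worker threads, each pumping the io_service.
        boost::thread_group workers;
        for (int i = 0; i != 2; ++i)
            workers.create_thread(
                boost::bind(&boost::asio::io_service::run, &io));

        // Enqueue the 10 operations; at most 2 run concurrently, and a
        // freed worker immediately picks up the next queued task.
        for (int i = 0; i != 10; ++i)
            io.post(boost::bind(&do_operation, i));

        // Destroy 'work' so run() returns once the queue drains, then
        // join the workers.
        work.reset();
        workers.join_all();
    }

The same pattern scales to any N: only the worker-thread loop bound changes.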
-- Lars Viklund | zao_at_[hidden]