From: Dean Michael Berris (mikhailberis_at_[hidden])
Date: 2006-11-16 12:15:49
On 11/16/06, Pedro Lamarão <pedro.lamarao_at_[hidden]> wrote:
> berserker_r wrote:
> > Session::very_very_long_operation() takes a lot of time :) In the
> > meanwhile I need the server to be able to accept new connections; how
> > can I do that?
> You can rework your read handler to do all I/O necessary, and to submit
> the obtained data to a work queue somewhere else.
> Another thread would then dispatch work items from this work queue to
> worker threads.
> After the work is finished, the item would be moved to an exit queue,
> with a response for the peer, or something.
This seems like the most design-wise "appropriate" solution. However,
recent experiments with this approach, as opposed to an ActiveObject
pattern for the cache/file manager (using Futures for return values),
have shown -- at least in our server application -- that the "single
io_service for the processing" approach -- or the ActiveObject
implementation -- performs better, especially for IO bound operations.
For CPU bound operations OTOH, the trick seems to be to break that CPU
bound operation off as an asynchronous operation, or to put it into a
processor queue as an Asynchronous Completion Token (ACT) which is then
executed by different threads.
The next approach would be to create a pool of io_service objects,
each io_service::run being invoked by at least 2 threads.
Why create many io_service objects? Because this gives your processors
more chances to deal with more operations, and lets you perform some
"load balancing" among your io_services. When you have a new connection
object, you can bind the encapsulated socket to an existing io_service
chosen either in a round-robin fashion or via a load-balancing
algorithm that uses the pigeonhole principle to keep the number of
sockets per io_service roughly level. You can even make new connections
wait until the number of sockets subsides to a lower level, to prevent
the server from spawning too many sockets. This approach makes sense
for multi-processor systems. Otherwise, you're relying on the operating
system's scheduling algorithm to do the time sharing for you as soon as
you spawn a good number of threads.
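The round-robin binding above is a one-liner once you keep the pool in
a container. A minimal sketch, with a plain per-service connection
counter standing in for the asio::io_service objects (each of which
you would run with at least two threads); service_pool and its members
are my names, not asio's:

```cpp
#include <cstddef>
#include <vector>

// Round-robin selection over a pool of services: each new connection
// is bound to the next service in turn, which keeps socket counts per
// service roughly level (the pigeonhole idea). In a real server the
// vector would hold io_service objects rather than bare counters.
class service_pool {
public:
    explicit service_pool(std::size_t n) : connection_counts_(n, 0) {}

    // Returns the index of the service the next connection should use.
    std::size_t next_service() {
        std::size_t index = next_++ % connection_counts_.size();
        ++connection_counts_[index];
        return index;
    }

    std::size_t count(std::size_t i) const { return connection_counts_[i]; }

private:
    std::size_t next_ = 0;
    std::vector<std::size_t> connection_counts_;
};
```

A smarter load-balancing variant would pick the index with the lowest
current count instead of cycling blindly.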
FWIW, although a single io_service object run by a pool of threads
seems like a sufficient approach, it leads to problems, especially if
you have operations that need to be scheduled in a particular order
(which is where asio::strand comes in). It may be a good idea to have
more io_service objects run by different threads, to make sure that no
one thread's running time dominates the application's time slice
because it's waiting on another operation encapsulated in a strand. The
magic number of io_service objects will vary on a case-by-case basis;
this is the kind of thing that should come out of profiling and
performance analysis tools.
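For anyone unfamiliar with the guarantee a strand provides: handlers
posted to it never run concurrently and run in the order posted, no
matter which thread calls post(). A minimal illustrative sketch of
that guarantee (this is my toy implementation, not asio's actual one):

```cpp
#include <deque>
#include <functional>
#include <mutex>
#include <utility>

// Serializes handler execution: whichever thread posts while no other
// thread is draining becomes the drainer and runs handlers in FIFO
// order; posts from other threads (or from inside a handler) just
// enqueue and return.
class strand {
public:
    void post(std::function<void()> handler) {
        std::unique_lock<std::mutex> lock(mutex_);
        handlers_.push_back(std::move(handler));
        if (running_) return;          // another invocation is draining
        running_ = true;
        while (!handlers_.empty()) {   // drain in FIFO order
            auto h = std::move(handlers_.front());
            handlers_.pop_front();
            lock.unlock();             // never run a handler under the lock
            h();
            lock.lock();
        }
        running_ = false;
    }

private:
    std::deque<std::function<void()>> handlers_;
    std::mutex mutex_;
    bool running_ = false;
};
```

The downside mentioned above follows directly: while one thread is
draining a strand's queue, work posted to that strand by other threads
piles up behind it, so a thread-starved io_service can stall.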
Of course, YMMV.
--
Dean Michael C. Berris
C++ Software Architect
Orange and Bronze Software Labs, Ltd. Co.
web: http://software.orangeandbronze.com/
email: dean_at_[hidden]
mobile: +63 928 7291459
phone: +63 2 8943415
other: +1 408 4049532
blogs: http://mikhailberis.blogspot.com http://3w-agility.blogspot.com http://cplusplus-soup.blogspot.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk