

From: Dean Michael Berris (mikhailberis_at_[hidden])
Date: 2008-02-28 23:48:07


On Fri, Feb 29, 2008 at 3:52 AM, Mathias Gaunard
<mathias.gaunard_at_[hidden]> wrote:
> Dean Michael C. Berris wrote:
>
> > In the case of an io service per core design,
> > unbalanced work from connections being serviced by one io service
> > instance over the others will cause that one thread (presumed to be
> > running on just one processor) to do most of the work leaving the other
> > threads idle.
>
> But in that case, you run n demultiplexers (select, epoll, whatever)
> instead of one...
>

This should exploit the available processors better (assuming each
thread runs on a different processor), and should even improve the
system's responsiveness while the demultiplexers sleep or wait for a
condition to occur.

The only problem I see with this design arises when almost every
connection bound to one io_service has pending work (reads, writes,
processing, etc.) while the connections on the other io_service
instances are doing very little. For instance, say sockets 1, 2, 3 are
bound to io_service A and sockets 4, 5, 6 to io_service B -- if 1, 2, 3
see a lot of activity, the thread running io_service A's run() method
gets swamped dealing with everything 1, 2, 3 need to accomplish, while
the thread running io_service B's run() method sits essentially idle.
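A toy model of that imbalance (plain standard C++ standing in for
Asio; `work_per_service` and the socket/work numbers are made up for
illustration): each socket is pinned to one service, so all of a
socket's work lands on the single thread running that service.

```cpp
#include <map>
#include <utility>
#include <vector>

// Toy model of the io_service-per-thread design: every socket is
// pinned to one service, so all of a socket's pending work is handled
// by the one thread running that service's run() loop.
// Returns total work units per service, given the socket->service
// assignment and a list of (socket, work_units) events.
std::map<char, int> work_per_service(
    const std::map<int, char>& socket_to_service,
    const std::vector<std::pair<int, int>>& events) {
  std::map<char, int> load;
  // Start every service at zero so idle services still show up.
  for (const auto& kv : socket_to_service) load[kv.second] = 0;
  for (const auto& ev : events)
    load[socket_to_service.at(ev.first)] += ev.second;
  return load;
}
```

With sockets 1, 2, 3 assigned to 'A' and 4, 5, 6 to 'B', feeding in
events only for sockets 1-3 leaves 'B' at zero load no matter how
backed up 'A' gets -- there is no way for B's thread to steal the work.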

I'm curious whether the cost of synchronization between threads all
running the same io_service's run() method is greater than the
possible performance hit of having multiple sockets (de)multiplexed in
a single thread.

Any ideas from the threading experts on whether contention on a mutex
locked from multiple threads is worse than giving a single thread
exclusive access to the resource (in this case, the dispatch queue
inside boost::asio::io_service)?
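For concreteness, here is a minimal sketch of the shared-queue
alternative, using only standard C++ (the `SharedQueue` and `run_all`
names are mine, and this is a bare model, not io_service's actual
reactor): every worker locks the same mutex to pop a handler, which is
the contention being asked about, but in exchange any idle thread can
pick up any pending handler.

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal shared dispatch queue: all worker threads lock the same
// mutex, mirroring several threads calling run() on one io_service.
class SharedQueue {
 public:
  void post(std::function<void()> h) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(h)); }
    cv_.notify_one();
  }
  void stop() {
    { std::lock_guard<std::mutex> lk(m_); done_ = true; }
    cv_.notify_all();
  }
  void run() {  // worker loop; returns once stopped and drained
    for (;;) {
      std::function<void()> h;
      {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return done_ || !q_.empty(); });
        if (q_.empty()) return;  // done_ set and nothing left
        h = std::move(q_.front());
        q_.pop();
      }
      h();  // execute the handler outside the lock
    }
  }
 private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> q_;
  bool done_ = false;
};

// Posts n_tasks handlers, runs n_threads workers over the one queue,
// and returns how many handlers actually executed.
int run_all(int n_tasks, int n_threads) {
  SharedQueue q;
  std::atomic<int> handled{0};
  for (int i = 0; i < n_tasks; ++i) q.post([&] { ++handled; });
  std::vector<std::thread> pool;
  for (int i = 0; i < n_threads; ++i) pool.emplace_back([&] { q.run(); });
  q.stop();
  for (auto& t : pool) t.join();
  return handled.load();
}
```

Note that no handler can strand a busy service here -- whichever thread
is free takes the next one -- but every pop pays for the mutex, which
is exactly the trade-off in question.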

-- 
Dean Michael C. Berris
Software Engineer, Friendster, Inc.
[http://blog.cplusplus-soup.com]
[mikhailberis_at_[hidden]]
[+63 928 7291459]
[+1 408 4049523]

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk