Subject: Re: [Boost-bugs] [Boost C++ Libraries] #11895: Strand service scheduling is hurting ASIO scalability
From: Boost C++ Libraries (noreply_at_[hidden])
Date: 2016-01-09 18:47:12
#11895: Strand service scheduling is hurting ASIO scalability
-------------------------------------+-------------------------------------
Reporter: Chris White | Owner: chris_kohlhoff
  <chriswhitemsu@…>       |      Status:  new
Type: Bugs | Component: asio
Milestone: To Be Determined | Severity: Optimization
Version: Boost 1.60.0 | Keywords: strand scheduling
Resolution: | priority
-------------------------------------+-------------------------------------
Comment (by Chris White <chriswhitemsu@…>):
To shed a little more light: if I post 10 operations to a strand in quick
succession, the first operation goes into the ready_queue_, the strand gets
scheduled, and the other 9 operations go into the waiting_queue_. When
strand_service::do_complete() is called, it services the ready_queue_,
then appends the waiting_queue_ to it and reschedules the strand for the
remaining work. In other words, whenever multiple operations are posted to
a strand, the strand has to be scheduled at least twice to perform all of
the work.
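This behaviour can be modelled outside of asio. Below is a self-contained
toy sketch (plain std::queue, no real asio types; the names only mirror
strand_service's members) that counts how many times the strand must be
scheduled to run 10 posted handlers under the current ready/waiting split:
{{{
#include <cassert>
#include <cstdio>
#include <queue>

// Simplified stand-in for strand_service's two queues (not real asio code).
struct ModelStrand
{
  std::queue<int> ready_queue_;
  std::queue<int> waiting_queue_;
  bool scheduled_ = false;
  int schedule_count_ = 0;

  void post(int op)
  {
    if (!scheduled_)
    {
      ready_queue_.push(op);   // first op goes straight to ready_queue_
      scheduled_ = true;
      ++schedule_count_;       // strand is scheduled for execution
    }
    else
      waiting_queue_.push(op); // later ops wait
  }

  // Mirrors the current do_complete(): drain ready_queue_, then move
  // waiting_queue_ over and RESCHEDULE instead of draining again.
  void do_complete()
  {
    while (!ready_queue_.empty())
      ready_queue_.pop();      // "run" the ready handlers
    while (!waiting_queue_.empty())
    {
      ready_queue_.push(waiting_queue_.front());
      waiting_queue_.pop();
    }
    if (!ready_queue_.empty())
      ++schedule_count_;       // strand rescheduled for remaining work
    else
      scheduled_ = false;
  }
};

int main()
{
  ModelStrand s;
  for (int i = 0; i < 10; ++i)
    s.post(i);
  s.do_complete();             // services op 0, moves ops 1..9, reschedules
  s.do_complete();             // services ops 1..9
  assert(s.schedule_count_ == 2);
  std::printf("schedules = %d\n", s.schedule_count_); // prints 2
  return 0;
}
}}}
Even with no contention at all, the model needs two trips through the
scheduler for one burst of posts.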
Instead, the waiting_queue_ should be drained at least once when
do_complete is called, so that all the operations posted between the time
the strand was scheduled and the time it was serviced get performed. Here
is one possible solution; I inlined some code and added comments for
clarity:
{{{
void strand_service::do_complete(io_service_impl* owner, operation* base,
    const boost::system::error_code& ec, std::size_t /*bytes_transferred*/)
{
  if (owner)
  {
    strand_impl* impl = static_cast<strand_impl*>(base);
    call_stack<strand_impl>::context ctx(impl);

    // ONLY THE FIRST POSTED OPERATION IS IN THE READY QUEUE.
    // SERVICE IT NOW WITHOUT HAVING TO LOCK THE MUTEX.
    while (operation* o = impl->ready_queue_.front())
    {
      impl->ready_queue_.pop();
      o->complete(*owner, ec, 0);
    }

    // SUBSEQUENT OPERATIONS ARE IN THE WAITING QUEUE.
    // SERVICE THEM NOW INSTEAD OF RESCHEDULING!
    impl->mutex_.lock();
    impl->ready_queue_.push(impl->waiting_queue_);
    bool more_handlers = impl->locked_ = !impl->ready_queue_.empty();
    impl->mutex_.unlock();
    if (!more_handlers)
      return;
    while (operation* o = impl->ready_queue_.front())
    {
      impl->ready_queue_.pop();
      o->complete(*owner, ec, 0);
    }

    // ONLY NOW DO WE RESCHEDULE, IF MORE WORK HAS SINCE BEEN POSTED.
    impl->mutex_.lock();
    impl->ready_queue_.push(impl->waiting_queue_);
    more_handlers = impl->locked_ = !impl->ready_queue_.empty();
    impl->mutex_.unlock();
    if (more_handlers)
      owner->post_immediate_completion(impl, true);
  }
}
}}}
I was originally thinking that this could loop as long as there is work in
either of the queues, but that might raise concerns about a single strand
monopolizing a thread, so I backed off from that approach. The
implementation above is enough to address the problem in my attached
sample application.
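To see the effect of the proposed change in isolation, here is a
self-contained toy model (plain std::queue, no real asio types; the names
only mirror strand_service's members) whose do_complete() drains the
waiting queue once more before deciding whether to reschedule, as the
patch above does. The same burst of 10 posts now needs a single schedule:
{{{
#include <cassert>
#include <cstdio>
#include <queue>

// Toy model of a strand's two queues with the PROPOSED do_complete().
struct PatchedModelStrand
{
  std::queue<int> ready_queue_;
  std::queue<int> waiting_queue_;
  bool scheduled_ = false;
  int schedule_count_ = 0;

  void post(int op)
  {
    if (!scheduled_)
    {
      ready_queue_.push(op);   // first op: schedule the strand
      scheduled_ = true;
      ++schedule_count_;
    }
    else
      waiting_queue_.push(op); // later ops wait
  }

  void do_complete()
  {
    while (!ready_queue_.empty())
      ready_queue_.pop();      // "run" the ready handlers
    // Proposed change: also run the handlers that were waiting,
    // instead of rescheduling the strand to run them later.
    while (!waiting_queue_.empty())
      waiting_queue_.pop();
    // Reschedule only if even more work was posted meanwhile
    // (never happens in this single-threaded model).
    if (waiting_queue_.empty())
      scheduled_ = false;
    else
      ++schedule_count_;
  }
};

int main()
{
  PatchedModelStrand s;
  for (int i = 0; i < 10; ++i)
    s.post(i);
  s.do_complete();             // all 10 handlers run in a single pass
  assert(s.schedule_count_ == 1);
  std::printf("schedules = %d\n", s.schedule_count_); // prints 1
  return 0;
}
}}}
One drain of the waiting queue per service pass also keeps the fairness
bound: a strand runs at most the work posted before its pass finished.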
--
Ticket URL: <https://svn.boost.org/trac/boost/ticket/11895#comment:1>
Boost C++ Libraries <http://www.boost.org/>
Boost provides free peer-reviewed portable C++ source libraries.
This archive was generated by hypermail 2.1.7 : 2017-02-16 18:50:19 UTC