Boost Users :

Subject: Re: [Boost-users] [asio] sync operations with timeout
From: Gennady Proskurin (gprspb_at_[hidden])
Date: 2011-08-31 07:02:14


On Tue, Aug 30, 2011 at 05:03:43PM +0300, Igor R wrote:
> > I'm writing a network library that offers blocking socket operations (connect,
> > read, write, ...) with timeouts, using boost::asio asynchronous operations as
> > the implementation. It is effectively a wrapper around boost::asio.
> > The implementation does something like this (read, for example):
> >
> > do_read()
> > {
> >        timer.async_wait(&handle_timer)
> >        socket.async_read(..., handle_read);
> >        wait_for_all_handlers();
> >        deal_with_ec();
> > }
> >
> > handle_timer() { socket.cancel(); }
> > handle_read() { timer.cancel(); }
> >
> > I have one io_service for all the sockets/timers, with one thread doing
> > io_service::run(). The code is multithreaded, so many threads run functions
> > like do_read() simultaneously (on different sockets, of course).
> >
> > I have two questions:
> >
> > 1. How do I wait for completion of all handlers? Currently I use a shared_ptr
> > with a mutex+condvar (for each library call); it works fine, but seems too
> > heavy-weight.
> >
> > 2. There is a chance (with small timeouts) that the timer expires before
> > async_read() is started, so socket.cancel() is a no-op in handle_timer(), and the
> > timer is effectively disabled. Currently I just ignore this. I could use another
> > mutex (or a strand) to protect against this, but that is also heavy-weight.
>
>
> It probably isn't an answer to your questions, but perhaps you'll find
> the following approach useful:
> http://lists.boost.org/Archives/boost/2007/04/120339.php

I saw that message, but in order to work with multiple threads, that approach
requires a separate io_service for each socket.

Currently, I see no easy (lightweight) way to use boost::asio in synchronous
mode with timeouts.


Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net