Subject: Re: [Boost-users] [asio] continuous reads into streambuf
From: Stephan Menzel (stephan.menzel_at_[hidden])
Date: 2018-05-25 09:40:14
On Fri, May 25, 2018 at 1:44 AM, Gavin Lambert via Boost-users <
boost-users_at_[hidden]> wrote:
>
> Mixing read_until and read on the same socket is problematic, because the
> way they actually behave -- while not actually wrong -- is not what you
> first expect.
>
> There are two key bits of information you need in order to understand the problem:
>
> 1. async_read_until actually calls async_read_some under the hood to read
> an arbitrary amount of data from the socket. All the data is stored in the
> streambuf and only a *subset* of that length is returned to indicate where
> the terminator was found.
>
> 2. async_read just calls async_read_some directly with the specified
> length request.
>
> In particular, even if #1 already read the entire message into the
> streambuf, #2 will ignore that and still wait for the specified number of
> new incoming bytes -- even if you are reading into the same streambuf.
>
This!
That was the key right here. I had this hunch but failed to confirm it or
figure out how to prevent it. Thanks a bunch. I can't believe I have never
read this crucial bit of info anywhere in the asio docs. I took your and
Vinnie's advice and boiled this down to fewer async ops, still reading into
the streambuf, and scrapped the initial read_until, leaving only
async_read. And it behaves nicely now.
Looking at all my past failures with similar streambuf-based approaches, I
think it may very well be that I always had this mix of the two operations.
This should be prominently placed in the asio docs.
So, for anybody who comes across this, the key elements are:
* You can re-use the same streambuf for many reads and writes as long as
you DO NOT MIX async_read() and async_read_until().
* streambuf.size() gives you the number of bytes available for extraction.
* When extracting data, either read from streambuf.data() and then
consume() the bytes you have read, or use a std::istream attached to the
streambuf, which does the consume for you.
* When writing data into it, either write into the buffers given by
prepare() and then call commit(), or use a std::ostream, which does the
commit for you.
Thanks again,
Stephan
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net