
Subject: Re: [Boost-users] [iostreams] Devices and WOULD_BLOCK
From: Gavin Lambert (gavinl_at_[hidden])
Date: 2015-01-21 18:17:39

On 20/01/2015 11:14, David Hawkins wrote:
> The "problem" with the protocol I need to decode (which I cannot
> change) is that the data stream may contain escaped characters, so
> there is no way to know the read data length at the socket layer --
> it's up to the filtering_stream layers to request data from the
> device layer until a complete packet is decoded.
> This all sounds good in theory, but in practice the filter layers
> attempt to read in blocks, and the read size is often larger than
> the amount of data the device layer can supply. This leads to
> higher-level layers blocking in read(). I figured (perhaps
> incorrectly) that I could deal with this using non-blocking
> socket reads.

I can't really answer your specific questions about the Boost
implementations, but in general sockets (both blocking and non-blocking)
and the code dealing with them expect that read() will only block (or
return WOULD_BLOCK) if no data can be read -- if at least one byte is
available, read() returns it immediately, regardless of the amount
actually requested. (The read size acts only as a maximum.)

Serial ports in particular sometimes operate this way and sometimes
don't. Under Windows, the SetCommTimeouts API function selects (via
ReadIntervalTimeout and ReadTotalTimeoutConstant) whether a serial
port will return "early" as above or wait longer to see if more data
arrives, and whether there is an overall timeout or the call will
block forever. There may be a similar API call you need to make to the
FTDI library.
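For reference, a Windows-only configuration sketch of the COMMTIMEOUTS
settings that make ReadFile return "early" with whatever is already
buffered (hComm is a hypothetical handle assumed to come from CreateFile
on a COM port; this won't compile outside Windows):

```c
#include <windows.h>

BOOL make_reads_return_early(HANDLE hComm)
{
    COMMTIMEOUTS t = {0};
    /* A ReadIntervalTimeout of MAXDWORD combined with zero total
     * timeouts tells ReadFile to return immediately with the bytes
     * already received, even if that is zero bytes. */
    t.ReadIntervalTimeout        = MAXDWORD;
    t.ReadTotalTimeoutMultiplier = 0;
    t.ReadTotalTimeoutConstant   = 0;
    return SetCommTimeouts(hComm, &t);
}
```

Leaving all fields zero instead gives the opposite extreme: ReadFile
blocks until the full requested count arrives.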
