Subject: [Boost-users] [iostreams] Devices and WOULD_BLOCK
From: David Hawkins (dwh_at_[hidden])
Date: 2015-01-19 17:14:46
Hi all,
Boost newcomer here ...
I have an application where I need to encode/decode a serialized
data stream. I have used Boost.Iostreams filtering_stream filters
to implement the encoding and decoding functions of each layer.
The device interface is via a USB device, e.g., think of a
USB-to-serial device, but without the classic Virtual COM Port or
ttyUSB0 layer available. For example, the FTDI FT232H USB-to-MPSSE
interface is a suitable cable:
http://www.ftdichip.com/Products/Cables/USBMPSSE.htm
The cable can be used to implement either JTAG or SPI mode
access (the mode selection introduces different filters into
the filtering_stream stack).
I have the filters working, and am now getting a device working.
Rather than deal with the details of the final hardware, I figured
I'd simplify the design by creating a client/server design via
sockets; the client code matches what I would use with real
hardware, while the server code emulates the hardware.
I started with this example for a socket device
The "problem" with the protocol I need to decode (which I have
no choice but to use) is that the data stream can contain
escaped characters, so there is no way to know the read data
length at the socket layer - it's up to the filtering_stream layers
to request data from the device layer until a complete packet is
decoded.
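For illustration, here is a minimal sketch of the kind of decode loop
I mean, using a hypothetical HDLC/PPP-style escape scheme (0x7D
escapes the next byte, stored XOR 0x20; 0x7E terminates a packet).
The real protocol differs, but the point is the same: the decoded
length is unknown until the terminator arrives, so the decoder must
keep asking the device for more bytes.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical framing, for illustration only: 0x7E marks end-of-packet,
// 0x7D escapes the next byte (stored XOR 0x20), as in HDLC/PPP-style
// protocols. Returns the decoded packet when the terminator is seen,
// or nullopt when more bytes are needed from the device.
std::optional<std::vector<std::uint8_t>>
decode_packet(const std::uint8_t* data, std::size_t len, std::size_t& consumed)
{
    std::vector<std::uint8_t> out;
    bool escaped = false;
    for (std::size_t i = 0; i < len; ++i) {
        std::uint8_t b = data[i];
        if (escaped) {
            out.push_back(b ^ 0x20);    // restore the escaped byte
            escaped = false;
        } else if (b == 0x7D) {
            escaped = true;             // next byte is escaped
        } else if (b == 0x7E) {
            consumed = i + 1;
            return out;                 // complete packet decoded
        } else {
            out.push_back(b);
        }
    }
    return std::nullopt;                // incomplete; need more input
}
```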
This all sounds good in theory, but in practice the filter layers
attempt to read in blocks, and the requested read size is often
larger than the data the device layer can supply. This led to
higher-level layers blocking in read(). I figured (perhaps
incorrectly) that I could deal with this using non-blocking
socket reads.
After reading about Boost.Iostreams non-blocking support in Section 3.6
http://www.boost.org/doc/libs/1_57_0/libs/iostreams/doc/index.html
http://www.boost.org/doc/libs/1_57_0/libs/iostreams/doc/guide/asynchronous.html
I modified the socket device example in the link above to:
* put the socket in non-blocking mode before constructing the device
* change the device 'read' procedure to return WOULD_BLOCK rather
than throw an exception
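To make the second change concrete, here is a self-contained sketch of
the shape of the device read() I mean. WOULD_BLOCK is Boost's constant
(-2); the socket is replaced by a mock queue here so the control flow
can be exercised without real sockets - the names mock_socket and
socket_device are illustrative, not Boost's.

```cpp
#include <cassert>
#include <deque>
#include <ios>

// Boost.Iostreams defines WOULD_BLOCK as -2; reproduced here so the
// sketch is self-contained.
const std::streamsize WOULD_BLOCK = -2;

struct mock_socket {
    std::deque<char> pending;   // bytes the emulated hardware has produced
    bool closed = false;

    // Returns bytes read, 0 on orderly shutdown, or -1 when no data is
    // available (standing in for EWOULDBLOCK on a non-blocking socket).
    int recv(char* s, int n) {
        if (pending.empty()) return closed ? 0 : -1;
        int i = 0;
        for (; i < n && !pending.empty(); ++i) {
            s[i] = pending.front();
            pending.pop_front();
        }
        return i;
    }
};

struct socket_device {          // models the Source concept's read()
    mock_socket& sock;
    std::streamsize read(char* s, std::streamsize n) {
        int r = sock.recv(s, static_cast<int>(n));
        if (r > 0)  return r;   // some data was available
        if (r == 0) return -1;  // EOF
        return WOULD_BLOCK;     // no data yet; do not block or throw
    }
};
```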
This does not work, and it's not due to the filter layers. There
is an issue with the device layer.
Here's the issue ... given a filtering_stream created with only a
socket_device and no filters, if I trace the code in a debugger
(Boost 1.57.0 source, Visual Studio 2012 under Win7), the
socket_device read call/return sequence is
boost/iostreams/read.hpp
- read_device_impl read template at line 187
- read at line 52
boost/iostreams/detail/adapter/concept_adapter.hpp
- device_wrapper_impl read at line 169
- read at line 77
boost/iostreams/detail/streambuf/indirect_streambuf.hpp
- line 258
i.e., this source file
and this particular block of code
    // Read from source.
    std::streamsize chars =
        obj().read(buf.data() + pback_size_, buf.size() - pback_size_, next_);
    if (chars == -1) {
        this->set_true_eof(true);
        chars = 0;
    }
    setg(eback(), gptr(), buf.data() + pback_size_ + chars);
The code tests for EOF (-1), but not WOULD_BLOCK (-2), so after this
point, since chars is -2, things go bad.
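To see the failure mode in isolation, here is a self-contained mirror
of that buffer-fill step, with the extra branch that appears to be
missing: without it, a chars value of -2 is folded straight into the
end pointer handed to setg(). The function name is mine, not Boost's.

```cpp
#include <cassert>
#include <ios>

// Boost's WOULD_BLOCK is -2 in 1.57.
const std::streamsize WOULD_BLOCK = -2;

// Mirrors the quoted underflow() step: returns how many characters the
// get area should gain. Both EOF (-1) and WOULD_BLOCK (-2) must
// contribute zero; the quoted code only tests for -1.
std::streamsize usable_chars(std::streamsize chars, bool& eof, bool& again)
{
    eof = again = false;
    if (chars == -1)          { eof = true;   return 0; }
    if (chars == WOULD_BLOCK) { again = true; return 0; } // the missing test
    return chars;
}
```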
So I guess my question now is:
Have I just bumped into the as-yet-unsupported part of Boost.Iostreams
support for asynchronous I/O?
Cheers,
Dave
PS. I can post example code if anyone wants to trace the code for
themselves, I just figured I'd post the question to start with.