Subject: [Boost-users] asio: cancelling a named pipe client
From: Stian Zeljko Vrba (vrba_at_[hidden])
Date: 2018-01-26 17:50:07
I have a client which connects to a named pipe as follows:
CreateFile(pipeName.c_str(), GENERIC_READ | GENERIC_WRITE, 0, nullptr,
           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
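The result of this call (call it handle) is assigned to a stream_handle, roughly like this (a sketch; _io and _pipe are illustrative member names):

// _pipe is a member of type boost::asio::windows::stream_handle constructed
// from the io_service _io; assign() transfers ownership of the native HANDLE
// to asio so it can be used with overlapped (asynchronous) I/O.
_pipe.assign(handle);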
I want the io_service to shut down in an orderly fashion, with run() returning because it has run out of work. To achieve this, I post a lambda that effectively calls cancel() on the handle (hidden inside Kill()):
_io.post([this]() {
    for (auto& w : _workers)
        w->Kill();
});
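Kill() itself does little more than cancel the outstanding operations on the pipe handle; roughly (again a sketch, with Worker as an illustrative name for my wrapper class):

void Worker::Kill()
{
    boost::system::error_code ec;
    _pipe.cancel(ec);   // request cancellation of the outstanding async reads
}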
However, cancel() has no effect: the callback for async_read_some keeps being invoked with an error code of zero and data read from the pipe. For reference, this is the read call with its handler:
template<typename T>
void read(T& ioh, const asio::mutable_buffer& buf)
{
    ioh.async_read_some(asio::buffer(buf),
        [this, self = shared_from_this()](const boost::system::error_code& ec, size_t sz) {
            if (ec == boost::system::errc::operation_canceled)
                return;
            if (ec)
                QUINE_THROW(Error(self->_ownersId, ec, "async_read_some"));
            self->ReadCompleted(sz);
        });
}
ReadCompleted() processes the received data and loops by calling read() again.
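In other words, the read loop looks roughly like this (sketch; Process and _buffer are illustrative names):

void ReadCompleted(size_t sz)
{
    Process(_buffer.data(), sz);          // consume the bytes just received
    read(_pipe, asio::buffer(_buffer));   // re-arm the asynchronous read
}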
If I call close() instead, the callback does get an error code and everything shuts down correctly, *except* that the error code is [invalid handle], which gets logged as an error (even though it isn't really one).
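That is, the close() variant of Kill() is roughly:

void Worker::Kill()
{
    boost::system::error_code ec;
    _pipe.close(ec);   // outstanding reads complete with an error, but the
                       // handler above reports that error as a hard failure
}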
Am I correct in assuming that cancel() appears to be a no-op here because of a race condition in which an I/O request completes successfully before cancel() is invoked?
If so, can you suggest a more elegant way (i.e., one that doesn't induce a hard error) of exiting a read loop like the one described here? Setting a member variable and checking it in the handler instead of calling cancel()?
Given the existence of this race condition, what are the use cases for cancel()? How can it be used correctly, if at all?
-- Stian