
Subject: [Boost-users] [asio] Infinite loops
From: Michael Chisholm (chisholm_at_[hidden])
Date: 2013-08-24 18:30:51


I have found that when (ab)using asio in a particular way, my app
hangs, which I think is caused by an infinite loop in asio. I'll state
up front that the usage is pretty underhanded, so I may not get much
sympathy here :) But it also seems reasonable for it not to hang. The
hang happens only on the two Linux platforms I tested, CentOS 5.9 and
6.4. I also built it on Win7 with MinGW 4.6.3 (with slight code
changes), and there it doesn't hang. The simplified test app
demonstrating the hang is pasted at the bottom of this email.

I basically just wanted an API that smooths over platform networking
differences and also gives me a nice iostream class backed by a socket.
So there's none of the fancy proactor stuff.
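
For context, the blocking iostream style I mean looks roughly like the
following (a minimal sketch in the spirit of the asio iostream client
example, not code from my real app):

#include <iostream>
#include <string>
#include <boost/asio.hpp>

int main()
{
   // Connect, write a request, and read the reply, all through the
   // ordinary blocking iostream interface.
   boost::asio::ip::tcp::iostream s("www.boost.org", "http");
   if (!s)
   {
     std::cout << "Unable to connect: " << s.error().message() << std::endl;
     return 1;
   }
   s << "GET / HTTP/1.0\r\nHost: www.boost.org\r\nConnection: close\r\n\r\n";
   std::string line;
   while (std::getline(s, line))
     std::cout << line << std::endl;
   return 0;
}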

There is a need to "reset" the app after socket connections have been
established, while one thread is blocked on a socket read, so I needed
a way to unblock that thread. Closing the socket seemed like a
reasonable way to do it, but I found that simply calling a close()
method from a different thread doesn't work; it causes race conditions
and crashes, since the asio classes don't appear to be thread-safe. So
I got more devious and decided to just grab the raw handle and call
close() directly on it. That's the underhanded part. I thought asio
would figure out that the socket is no longer valid and propagate an
error up to the caller, thus unblocking the thread. But it doesn't;
the thread just stays blocked. After doing some gdb tracing, it looks
to me like asio repeatedly polls the socket with epoll/select (I tried
the latter by defining BOOST_ASIO_DISABLE_EPOLL) and a timeout, and
never breaks out of its loop even though the socket is no longer valid.
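
For what it's worth, the select run was just the same program rebuilt
with the epoll reactor disabled, along these lines (assuming the macro
is defined before asio is pulled in anywhere in the translation unit):

// Must appear before any include of <boost/asio.hpp>, or pass
// -DBOOST_ASIO_DISABLE_EPOLL on the compile line instead.
#define BOOST_ASIO_DISABLE_EPOLL
#include <boost/asio.hpp>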

So, does it seem reasonable for asio to break out of its polling loop
when someone (underhandedly) closes the socket out from under it? :)

Andy

#include <cstdio>
#include <iostream>
#include <unistd.h>   // for close()
#include <boost/system/error_code.hpp>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <boost/asio.hpp>

using namespace std;

namespace bs = boost::system;
namespace bc = boost::chrono;
namespace ip = boost::asio::ip;

boost::asio::io_service iosvc;
ip::tcp::socket clientSocket(iosvc);
ip::tcp::iostream serverStream;

#define TEST_PORT 2000

void connectToServer()
{
   bs::error_code ec;
   clientSocket.connect(
     ip::tcp::endpoint(ip::address_v4::loopback(), TEST_PORT),
     ec);
   if (ec)
     cout << "Client couldn't connect: " << ec.message() << endl;
   else
     cout << "Client connected!" << endl;
}

void waitForClient()
{
   ip::tcp::acceptor acceptor(iosvc,
     ip::tcp::endpoint(ip::address_v4::loopback(), TEST_PORT));
   bs::error_code ec;
   acceptor.accept(*serverStream.rdbuf(), ec);
   if (ec)
     cout << "Server couldn't accept: " << ec.message() << endl;
   else
     cout << "Server accepted!" << endl;
}

void serverReadAByte()
{
   int b = serverStream.get();
   if (serverStream)
     cout << "Got byte: " << b << endl;
   else
     cout << "Error reading byte: " <<
       serverStream.error().message() << endl;
}

int main(int, const char **)
{
   boost::thread serverThread(waitForClient);

   // wait a second for the server socket to open...
   boost::this_thread::sleep_for(bc::seconds(1));

   connectToServer();

   serverThread.join();

   int sd = serverStream.rdbuf()->native_handle();

   // this will block, since we won't write anything
   // to the client socket.
   boost::thread readThread(serverReadAByte);

   // wait a second for the socket read to block...
   boost::this_thread::sleep_for(bc::seconds(1));

   // close the server socket out from under our stream!
   // Does readThread unblock?
   if (close(sd) == -1)
   {
     perror("close");
     return 1;
   }

   cout << "Waiting for read thread..." << endl;
   readThread.join();

   clientSocket.close();
   serverStream.close();

   return 0;
}

