Hi,


I’m jumping into the discussion because I noticed your concern here:


I’m in the process of implementing something similar, also with coroutines. On Windows, my plan for solving this is to create a (native) auto-reset event object and assign it to a windows::object_handle, then use async_wait on the object. When the event object is signaled, the waiting coroutine is resumed. So, in effect, this implements a non-blocking signal / “future”.
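
For concreteness, a rough sketch of that plan (the names and the SetEvent call site are illustrative; in practice the event would be signaled by whatever thread completes your operation):

#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <windows.h>

int main() {
    boost::asio::io_context io;

    // Native auto-reset event: waking one waiter resets it automatically.
    HANDLE ev = ::CreateEventW(nullptr, FALSE /*manual reset*/, FALSE /*signaled*/, nullptr);
    boost::asio::windows::object_handle signal(io, ev);

    boost::asio::spawn(io, [&](boost::asio::yield_context yield) {
        boost::system::error_code ec;
        signal.async_wait(yield[ec]);   // suspends this coroutine only
        // resumed here once some thread calls ::SetEvent(ev)
    });

    // From any other thread, e.g. the client library's completion callback:
    // ::SetEvent(ev);

    io.run();
}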


On POSIX there are two ways, and both are hack-ish. You could use signal_set to wait for a specific signal (but signals + threads = UGH!, many pitfalls), or you could create an anonymous pipe: reading a byte from the pipe is equivalent to waiting on a signal object, while writing a byte to it is equivalent to setting the signal. Such a pipe implements, in effect, an async-awaitable semaphore.
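
And a similar sketch of the pipe variant (the write shown in the comment would be issued by whichever thread produces the result):

#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <unistd.h>

int main() {
    boost::asio::io_context io;

    int fds[2];
    if (::pipe(fds) != 0) return 1;   // fds[0] = read end, fds[1] = write end
    boost::asio::posix::stream_descriptor wait_end(io, fds[0]);

    boost::asio::spawn(io, [&](boost::asio::yield_context yield) {
        char byte = 0;
        boost::system::error_code ec;
        // "Wait on the semaphore" without blocking the thread:
        boost::asio::async_read(wait_end, boost::asio::buffer(&byte, 1), yield[ec]);
        // resumed here once a byte has been written to fds[1]
    });

    // "Set the semaphore" from any thread:
    // char one = 1; ::write(fds[1], &one, 1);

    io.run();
}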


From: Boost-users <boost-users-bounces@lists.boost.org> On Behalf Of Stephan Menzel via Boost-users
Sent: Monday, December 17, 2018 08:00
To: Boost users list <boost-users@lists.boost.org>
Cc: Stephan Menzel <stephan.menzel@gmail.com>
Subject: Re: [Boost-users] Understanding fibers


Hi Gavin,


On Sun, Dec 16, 2018 at 11:42 PM Gavin Lambert via Boost-users <boost-users@lists.boost.org> wrote:


If you're already using Boost.Asio, then you can just use that, without
mixing in Boost.Fiber.

Asio already supports coroutines and a std::future interface -- although
note that these are thread-blocking futures and are intended only for
use for callers *outside* the main I/O thread(s).


Yes, I have been using asio all over the place for many years, but I have never used the coroutine interface. I only recently discovered it and plan to use it. However, I don't see how I can integrate it into my plans here.

First, this library uses asio, but with that one thread, and I cannot intrusively change the lib to use fibers because I can't impose that on every use-case scenario. I'd rather shield the internal workings from the user.

Second, the coroutine interface seems to work on the basis of special async operations within asio that make this possible. I don't have those. Consider a mocked-up asio coroutine usage like:


for (;;) {
   boost::system::error_code ec;
   asio::async_read( ..params.., yield[ec]);
   handle_error(ec);
   asio::async_write( ...params..., yield[ec]);
   handle_error(ec);
}


This works because asio offers those async ops that take the coroutine object and allow continuation. My code doesn't have that. At some point I do have to wait on those futures.


for (;;) {
   boost::future<int> result = my_redis.get("value");
   const int value = result.get();
   // ... continue
}


And even if they were fiber futures, that wouldn't change much:


for (;;) {
   boost::fibers::future<int> result = my_redis.get("value");
   const int value = result.get();
   // ... continue
}


Asio would still magically have to 'know' that it can switch to another fiber inside the get(). I have seen this page, which I assume talks about this very thing, but unfortunately it is way over my head: https://www.boost.org/doc/libs/1_69_0/libs/fiber/doc/html/fiber/callbacks/then_there_s____boost_asio__.html
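
As far as I can tell, the gist is to install a fiber scheduler that pumps the io_context whenever all fibers are suspended, so that get() only parks the calling fiber. A rough sketch of my understanding, using the round_robin.hpp that ships in Boost.Fiber's examples/asio directory (example code, not an installed header), with the redis call stubbed out by a promise fulfilled from a posted handler:

#include <boost/asio.hpp>
#include <boost/fiber/all.hpp>
#include "round_robin.hpp"   // from libs/fiber/examples/asio/

#include <memory>

int main() {
    auto io = std::make_shared<boost::asio::io_context>();
    // Pump the io_context whenever every fiber is suspended:
    boost::fibers::use_scheduling_algorithm<boost::fibers::asio::round_robin>(io);

    // Stand-in for my_redis.get("value"): a fiber future fulfilled
    // from an ordinary Asio handler.
    boost::fibers::promise<int> p;
    boost::fibers::future<int> result = p.get_future();
    boost::asio::post(*io, [&p] { p.set_value(42); });

    boost::fibers::fiber([&result, io] {
        const int value = result.get();   // suspends this fiber only
        (void)value;                      // ... continue with value
        io->stop();                       // end the demo
    }).detach();

    io->run();
}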


Inherently though an Asio io_context running on a single thread *is* a
kind of fiber scheduler for operations posted through that context,
including both actual async_* operations and arbitrary posted and
spawned work.


Yes, in a way I do see that, and I am investigating the use of asio here, if only because it is normally my go-to solution in such cases. That was my original approach before I started looking into fibers, but I got nowhere.

What I would need for this to work is the mock-up code above: I'd have to be able to post a handler into the io_context which can wait on those futures without blocking the io_context. I considered spawning a great many threads on this io_context so I could stomach a number of them blocking without bogging everything down too much, but this seems just wrong.
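
The closest thing I can come up with, assuming the futures are boost::future and Boost.Thread is built with continuation support (BOOST_THREAD_PROVIDES_FUTURE_CONTINUATION), is to chain a continuation instead of calling get() on the I/O thread, and hop back into the io_context from it. A sketch with the redis call stubbed out by a promise fulfilled on another thread:

#define BOOST_THREAD_PROVIDES_FUTURE
#define BOOST_THREAD_PROVIDES_FUTURE_CONTINUATION
#include <boost/asio.hpp>
#include <boost/thread/future.hpp>

#include <iostream>
#include <thread>

int main() {
    boost::asio::io_context io;
    auto work = boost::asio::make_work_guard(io);   // keep run() alive

    // Stand-in for my_redis.get("value"): fulfilled on some client thread.
    boost::promise<int> p;
    boost::future<int> result = p.get_future();
    std::thread client([&p] { p.set_value(42); });

    // No blocking get() on the I/O thread: the continuation fires once the
    // value is ready (on whatever thread made it ready) and posts back.
    auto chained = result.then([&](boost::future<int> f) {
        const int value = f.get();                  // ready here, non-blocking
        boost::asio::post(io, [&, value] {
            std::cout << "got " << value << '\n';   // back on the I/O thread
            work.reset();                           // let run() return (demo)
        });
    });

    io.run();
    client.join();
}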


However, I will continue to explore this option, as you are right: I think the solution is right there, I just have to see it.


Cheers,

Stephan