
From: Marcelo Zimbres Silva (mzimbres_at_[hidden])
Date: 2023-08-14 19:35:04


On Sun, 13 Aug 2023 at 10:49, Klemens Morgenstern
<klemensdavidmorgenstern_at_[hidden]> wrote:
> > Q1: Default completion token for other Boost libraries
> > ================================================================
> >
> > I guess it will be a very common thing for all apps using Boost.Async
> > to have to change the default completion token to use_op or use_task.
> > Like you do in the examples
> >
> > > using tcp_acceptor = async::use_op_t::as_default_on_t<tcp::acceptor>;
> > > using tcp_socket = async::use_op_t::as_default_on_t<tcp::socket>;
> >
> > Could this be provided automatically, at least for other Boost
> > libraries? I know this would be a lot of work, but it would make user
> > code less verbose and users would not have to face the token concept
> > at first.
>
> It could be, but I am also experimenting with how an async.io
> library could look, i.e. one that is async only (co_await
> stream.read()) and banishes the asio complexity to the translation
> units. You can look at my experiments here:
> https://github.com/klemens-morgenstern/async/pull/8

This would be great, but then why not get it merged before the Boost
review? One of the parts of Boost.Async that I find most valuable is
that it is a user-friendly frontend to Asio, and the PR above would
put even more weight on that. It looks like a very large PR that
should not be missed in the Boost review.

Also, merging at a later point loses an advertising opportunity since
some people might not feel compelled to come back to look at the
release notes.

> > Q3: async_ready looks great
> > ================================================================
> >
> > > We can however implement our own ops that can also utilize the
> > > async_ready optimization. To leverage this coroutine feature, async
> > > provides an easy way to create a skippable operation:
> >
> > I think I have a use case for this feature: my first implementation
> > of the RESP3 parser for Boost.Redis was based on Asio's
> > async_read_until, which, like every Asio async function, calls the
> > completion as if by post. The cost of this design is however high in
> > situations where the next \r\n delimiter is already in the buffer
> > when async_read_until is called again. The resulting rescheduling
> > with post is unnecessary and greatly impacts performance, so being
> > able to skip it has clear benefits. But what does *an easy way to
> > create a skippable operation* actually mean? Does it
> >
> > - avoid a suspension point?
> > - avoid a post?
> > - act like a regular function call?
>
> There are two mechanisms at work here:
>
> - if you provide a ready() function in a custom op, it will avoid
> suspension altogether, thus behaving like a normal function call
> - if immediate completion is available, the coroutine will suspend,
> but resume right away, thus avoiding a post.

I think this needs a more detailed example for each individual
optimization (acting like a function call, skipping a post). The docs
say

> Do the wait if we need to

but refers to

  void initiate(async::completion_handler<system::error_code> complete) override
  {
    tim.async_wait(std::move(complete));
  }

which clearly does not do any *if needed* check.
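
For reference, here is where I would expect that check to live: a
ready() overload that completes inline when the timer has already
expired, sitting next to the initiate() above. This is only my sketch
of the shape; the exact base class and ready() signature are my
guesses from the delay_op example:

  struct wait_op final : async::op<system::error_code>
  {
    asio::steady_timer & tim;
    explicit wait_op(asio::steady_timer & tim) : tim(tim) {}

    // The *if needed* part: if the deadline has already passed,
    // complete right here and skip suspension altogether, so the
    // await behaves like a plain function call.
    void ready(async::handler<system::error_code> h) override
    {
      if (tim.expiry() <= std::chrono::steady_clock::now())
        h(system::error_code{});
    }

    // Otherwise perform the actual wait; the completion resumes the
    // coroutine.
    void initiate(async::completion_handler<system::error_code> complete) override
    {
      tim.async_wait(std::move(complete));
    }
  };

An example along these lines, contrasted with one that only skips the
post, would make the two optimizations much easier to tell apart.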

> While the above is used with asio, you can also use these handlers
> with any other callback-based code.

IMO this statement also needs to be reformulated and perhaps given an example.
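
Something along these lines would already help. This is only a
hypothetical sketch of what I understand the statement to mean:
third_party::start_operation is made up for illustration, and I am
assuming the completion handler can be moved into and invoked from an
arbitrary callback:

  void initiate(async::completion_handler<system::error_code> complete) override
  {
    // Hand the handler to any callback-based API, not just asio's;
    // the only requirement would be that it is invoked exactly once.
    third_party::start_operation(
        [complete = std::move(complete)](bool ok) mutable
        {
          // Map the foreign result onto an error_code and resume the
          // coroutine.
          complete(ok ? system::error_code{}
                      : system::error_code{asio::error::operation_aborted});
        });
  }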

> > Q5: Boost.Async vs Boost.Asio
> > ================================================================
> >
> > I use C++20 coroutines whenever I can but know very little about their
> > implementation. They just seem to work in Asio and look very flexible
> > with use_awaitable, deferred, use_promise and use_coro. What advantage
> > will Boost.Async bring over using plain Asio in regards to coroutine?
>
> It's open to any awaitable, i.e. a user can just co_await whatever
> he wants. asio prevents this by design because it can assume much
> less about the environment. That is, asio::awaitable cannot
> await anything other than itself and an async op, not even
> asio::experimental::coro.

I am trying to make sense of this statement. Are you referring to what
is shown in example/delay_op.cpp? Do the docs teach how to write an
awaitable so that I can profit from using Boost.Async? How
often will users have to do that? Or are the default awaitables
provided by the library already good enough (as shown in the
benchmarks)?
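
For instance, is the expectation that a plain hand-written C++20
awaiter like the one below can be co_awaited directly from an
async::task, with no adaptation? The awaiter itself is standard C++
(it only needs <coroutine>); only its direct use inside the task
relies on the statement above:

  // A minimal standard C++20 awaiter; nothing Boost.Async-specific.
  struct always_ready
  {
    int value;
    bool await_ready() const noexcept { return true; }             // never suspends
    void await_suspend(std::coroutine_handle<>) const noexcept {}  // not reached
    int  await_resume() const noexcept { return value; }           // produces the result
  };

  async::task<int> use_it()
  {
    // If I read the statement above correctly, this should just work.
    co_return co_await always_ready{42};
  }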

IIUC, this would be the strongest selling point of this library in
comparison to using plain Asio, so the docs could put more weight on
it.

> Furthermore all of those are meant for a potentially threaded
> environment, so everything they do needs to be an async_op,
> i.e. operator|| internally does multiple co_spawns through a `parallel_group`.
>
> Because async can assume more things about how it's run, it can
> provide a loss-less select, which `operator||` cannot do.

What is a loss-less select? In any case, I find it great that we can
have so much performance improvement by being able to assume a
single-threaded environment.

I must say, however, that I am surprised that my plain-Asio code is
running slower than it could, even though I am using single-threaded
io_contexts.

> Likewise the channels work (mostly) without post, because they just
> switch from one coro to the other, whereas asio's channels need to
> use posting etc.

Symmetric transfer?
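
To make my question concrete: in a producer/consumer pair like the
sketch below (my reading of the channel API from the docs, untested),
is it symmetric transfer that lets write() resume the coroutine
pending in read() directly, without going through the executor queue?

  async::promise<void> producer(async::channel<int> & ch)
  {
    for (int i = 0; i < 10; i++)
      co_await ch.write(i);   // would this resume a waiting reader directly, no post?
    ch.close();
  }

  async::promise<void> consumer(async::channel<int> & ch)
  {
    while (ch.is_open())
    {
      int v = co_await ch.read();  // suspends until the producer writes
      (void)v;
    }
  }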

> I don't see the asio coroutines as competition; they just solve a
> different use case.

I don't see why it is not a competition. I only use single-threaded
contexts in Asio, which is exactly the setting Boost.Async assumes.
Please elaborate.

Marcelo

