
From: Vinícius dos Santos Oliveira (vini.ipsmaker_at_[hidden])
Date: 2022-04-12 17:57:12


On Tue, 12 Apr 2022 at 05:11, Marcelo Zimbres Silva <mzimbres_at_[hidden]> wrote:

> On Tue, 12 Apr 2022 at 06:28, Vinícius dos Santos Oliveira
> <vini.ipsmaker_at_[hidden]> wrote:
> >
> > This class detracts a lot from Boost.Asio's style. I'll
> > borrow an explanation that was already given before:
> >
> >> [...] makes the user a passive party in the design, who
> >> only has to react to incoming requests. I suggest that
> >> you consider a design that is closer to the Boost.Asio
> >> design. Let the user become the active party, who asks
> >> explicitly for the next request.
> >
> > -- https://lists.boost.org/Archives/boost/2014/03/212072.php
>
> IMO, he is confusing *Boost.Asio design* with *High vs low level
> design*. Let us have a look at this example from Asio itself
>
>
> https://github.com/boostorg/asio/blob/a7db875e4e23d711194bcbcb88510ee298ea2931/example/cpp17/coroutines_ts/chat_server.cpp#L84
>

That's not a library. That's an application. The mindset for libraries and
applications differ. An application chooses, designs and applies policies.
A library adopts the application's policies.

A NodeJS application, for instance, will have a http.createServer() and a
callback that gets called for each new request. How, then, do you answer
questions such as "how do I defer the acceptance of new connections during
high-load scenarios?"

Boost.Asio, on the other hand, never suffered from such problems.

And then we have Deno (NodeJS's successor), which gave up on the callback
model: https://deno.com/blog/v1#promises-all-the-way-down

It has nothing to do with high-level vs low-level. It's more like "policies
are built-in and you can't change them".

> The public API of the chat_session is
>
> class chat_session {
> public:
>   void start();
>   void deliver(const std::string& msg);
> };
>
> One could also erroneously think that the deliver() function above is
> "not following the Asio style" because it is not an async function and
> has no completion token. But in fact, it has to be this way for a
> couple of reasons
>

That's an application, not a library. It has hidden assumptions (policies)
on how the application should behave. And it's not even real-world, it's
just an example.

>   - At the time you call deliver(msg) there may be an ongoing write,
> in which case the message has to be queued and sent only after the
> ongoing write completes.
>

You can do the same with async_*() functions. There are multiple
approaches. As an example:
https://sourceforge.net/p/axiomq/code/ci/master/tree/include/axiomq/basic_queue_socket.hpp

>   - Users should be able to call deliver from inside other coroutines.
> That wouldn't work well if it were an async_ function. Think for
> example on two chat-sessions, sending messages to one another
>
> coroutine() // Session1
> {
>    for (;;) {
>       std::string msg;
>       co_await net::async_read(socket1, net::dynamic_buffer(msg), ...);
>
>       // Wrong.
>       co_await session2->async_deliver(msg);
>    }
> }
>
> Now if session2 becomes unresponsive, so does session1, which is
> undesirable. The read operation should never be interrupted by other
> IO operations.
>

Actually, it *is* desirable to block. I'll again borrow somebody else's
explanation:

> Basically, RT signals or any kind of event queue has a major fundamental
> queuing theory problem: if you have events happening really quickly, the
> events pile up, and queuing theory tells you that as you start having
> queueing problems, your latency increases, which in turn tends to mean that
> later events are even more likely to queue up, and you end up in a nasty
> meltdown scenario where your queues get longer and longer.
>
> This is why RT signals suck so badly as a generic interface - clearly we
> cannot keep sending RT signals forever, because we'd run out of memory just
> keeping the signal queue information around.
>
  --
http://web.archive.org/web/20190811221927/http://lkml.iu.edu/hypermail/linux/kernel/0010.3/0003.html

However, if you wish to use this fragile policy in your application, an
async_*() function following Boost.Asio style won't stop you. You don't
need to pass the same completion token to every async operation. At one
call (e.g. the read() call) you might use the Gor-routine token, and at
another point you might use the detached token:
https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/detached.html

std::string msg;
co_await net::async_read(socket1, net::dynamic_buffer(msg), use_awaitable);
session2->async_deliver(msg, net::detached);

You only need to check whether async_deliver() clones the buffer (if it
doesn't then you can clone it yourself before the call).

Right now you're forcing all your users to go through the same policy.
That's the mindset of an application, not a library.

> The explanation above should also make it clear why some way or
> another a high level api that loops on async_read and async_write will
> end up having a callback. Users must somehow get their code called
> after each operation completes.
>

It's clear, but the premises are wrong.

> > Now, why does it matter? Take a look at this post:
> > https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/
> > [...]
>
> I believe this has also been addressed above.

Not at all. So far I've only had to mention queueing theory, which is one of
the problems the aforementioned blog post touches on. There are more.

-- 
Vinícius dos Santos Oliveira
https://vinipsmaker.github.io/

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk