From: Marcelo Zimbres Silva (mzimbres_at_[hidden])
Date: 2022-04-09 17:07:19


On Sat, 9 Apr 2022 at 16:39, Ruben Perez <rubenperez038_at_[hidden]> wrote:
>
> Although I am not a Redis user, I believe it would be
> valuable in Boost.

Thanks.

>> - Was the documentation helpful to understand what Aedis
>> provides?
>
> I have had a quick glance at the docs and have some
> questions: which is the API you expect most of your users
> will be using? The higher level or the lower level?

Difficult to say. The low-level API is very useful for simple tasks,
for instance connecting to Redis, performing an operation and closing
the connection. For example

   - If a Redis server dies and the client wants to perform a failover,
it has to connect to one sentinel, ask for the master address and close
the connection. If that sentinel is also dead, it has to ask a second
one, and so on. This is very simple to implement with the low-level
API, especially if you are using coroutines (a rough sketch follows
below).
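
Here is a rough sketch of that failover loop with C++20 coroutines.
query_sentinel is a hypothetical helper (connect to one sentinel, send
"SENTINEL get-master-addr-by-name <name>" with the low-level calls,
read the reply, close), not an Aedis function; only the loop itself is
shown.

   #include <boost/asio.hpp>
   #include <optional>
   #include <stdexcept>
   #include <string>
   #include <vector>

   namespace net = boost::asio;
   using net::ip::tcp;

   // Hypothetical helper: asks one sentinel for the current master.
   net::awaitable<std::optional<tcp::endpoint>>
   query_sentinel(tcp::endpoint sentinel, std::string master_name);

   net::awaitable<tcp::endpoint>
   resolve_master(std::vector<tcp::endpoint> sentinels, std::string master_name)
   {
      // Ask each sentinel in turn; the first one that answers wins.
      for (auto const& ep : sentinels)
         if (auto master = co_await query_sentinel(ep, master_name))
            co_return *master;

      throw std::runtime_error("no sentinel reachable");
   }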

A scenario where users probably won't want to constantly open and
close connections is

   - An HTTP server with thousands of concurrent sessions, all of
which perform operations that require communication with Redis (e.g. a
chat server). You probably won't want one Redis connection per HTTP
session, much less open and close it on every operation. In other
words, you need a small number of long-lasting Redis sessions. Once
you do that, you also have to manage the message queue, as HTTP
sessions may send messages to Redis while the client is still waiting
for a pending response.

Add server pushes and pipelines to that, and you clearly need the
high-level API, which manages all of that for you.

> I would also like to know the advantages of one API vs the
> other, like when would I use the lower-level vs the
> higher-level and why?

Hope the comments above clarify that.

>> - Does the design look good?
>
> I would like to understand what are the client::async_run
> mechanics - when does that function complete and under
> what circumstances does it error?

async_run will

   - Connect to the endpoint (async_connect).

   - Loop around resp3::async_read to keep reading responses and
server pushes.

   - Start an operation that writes messages when they become
available (async_write + timer).

It will return only when an error occurs:

   - Any error that can occur on the Asio layer.
   - RESP3 errors:
https://mzimbres.github.io/aedis/group__any.html#ga3e898ab2126407e62f33851b31bee17a
   - Adapter errors. For example, receiving a Redis set into a
std::map. See https://mzimbres.github.io/aedis/group__any.html#ga0339088c80d8133b76ac4de633e9ddae

More info here:
https://mzimbres.github.io/aedis/classaedis_1_1generic_1_1client.html#ab096149d4d39df17f9c4609d142102d3
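
As an illustration, this is roughly how a reconnect loop around such a
call could look with coroutines. The Client type and the exact
async_run signature are assumptions made for this sketch; see the
reference above for the real interface.

   #include <boost/asio.hpp>
   #include <boost/system/system_error.hpp>
   #include <chrono>
   #include <iostream>

   namespace net = boost::asio;

   template <class Client>
   net::awaitable<void> run_with_reconnect(Client& cli)
   {
      for (;;) {
         try {
            // Completes only on error (Asio, RESP3 or adapter error).
            co_await cli.async_run(net::use_awaitable);
         } catch (boost::system::system_error const& e) {
            std::cerr << "Connection lost: " << e.what() << "\n";
         }

         // Back off a little before reconnecting.
         net::steady_timer timer{co_await net::this_coro::executor,
                                 std::chrono::seconds{1}};
         co_await timer.async_wait(net::use_awaitable);
      }
   }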

> It appears that client::send is using some sort of queue
> before the messages are sent to Redis, is that
> thread-safe?

Yes, queuing is necessary: the client can only send a message when
there is no pending response.

No, it isn't thread-safe. Users should use Asio facilities, e.g.
strands, for that.
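
For example, one could funnel all calls through a strand. The client
type and the send() signature below are assumptions for illustration
only; the point is that every access to the client happens on one
strand (async_run would have to be started on that same strand).

   #include <boost/asio.hpp>

   namespace net = boost::asio;

   template <class Client>
   class safe_sender {
   public:
      safe_sender(Client& cli, net::any_io_executor ex)
      : cli_{cli}, strand_{net::make_strand(ex)} {}

      // May be called from any thread; the actual send runs on the strand.
      template <class... Args>
      void send(Args... args)
      {
         net::dispatch(strand_, [this, args...]() { cli_.send(args...); });
      }

   private:
      Client& cli_;
      net::strand<net::any_io_executor> strand_;
   };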

> How can I know if a particular operation
> completed, and whether there was an error or not?

async_run will only complete when an error occurs. All other events
are communicated by means of the receiver's callbacks:

 - receiver::on_read: Called when async_read completes.
 - receiver::on_write: Called when async_write completes.
 - receiver::on_push: Called when async_read completes from a server push.
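
A receiver is then just a class with those member functions. The
parameter types below are assumptions, not the real Aedis signatures;
this is only meant to show the shape of the interface.

   #include <cstddef>

   enum class command { ping, get, set /* ... */ };  // placeholder

   struct my_receiver {
      // Called when async_read completes, i.e. a response has arrived.
      void on_read(command cmd) { /* inspect the response to cmd */ }

      // Called when async_write completes, i.e. a request was written.
      void on_write(std::size_t bytes_written) { }

      // Called when async_read completes for a server push (e.g. pub/sub).
      void on_push() { /* handle the out-of-band message */ }
   };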

> I would also like more info on when the callbacks of the
> receiver are invoked.
>
> In general, I feel that higher level interface is forcing
> me use callback based code, rather than following Asio's
> universal async mode. What is the design rationale behind
> that?

It does follow the Asio async model. Just as async_read is implemented
in terms of one or more calls to async_read_some, nothing prevents you
from implementing async_run in terms of calls to async_read,
async_write and async_connect; this is called a composed operation.

Notice this is not something specific to my library: regardless of
whether the protocol is HTTP, WebSocket, RESP3, etc., users will
invariably

   - Connect to the server.
   - Loop around async_read.
   - Call async_write as messages become available.

That means there is no way around callbacks when dealing with
long-lasting connections.
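
To make that concrete, here is a generic sketch of such a loop with
C++20 coroutines. This is not Aedis code, just plain Asio; the message
handling and the writer side are left as comments.

   #include <boost/asio.hpp>
   #include <string>

   namespace net = boost::asio;
   using net::ip::tcp;

   net::awaitable<void> reader(tcp::socket& socket)
   {
      std::string buf;
      for (;;) {
         // Loop around async_read: responses and server pushes land here.
         auto n = co_await net::async_read_until(
            socket, net::dynamic_buffer(buf), "\r\n", net::use_awaitable);
         // handle_message(buf.substr(0, n));  // dispatch to the application
         buf.erase(0, n);
      }
   }

   net::awaitable<void> run(tcp::socket& socket, tcp::endpoint ep)
   {
      co_await socket.async_connect(ep, net::use_awaitable);
      // A real composed operation would also start a writer that calls
      // async_write as messages become available.
      co_await reader(socket);
   }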

To save users from reinventing this every time, I would like to
provide this facility. A question that arises is whether async_run
should receive individual callbacks or a class that provides the
callbacks as member functions. I decided for the latter.

> Would it be possible to have something like
> client::async_connect(endpoint, CompletionToken)?

As said above, async_connect is encapsulated in the async_run call.

Regards,
Marcelo

