From: Marcelo Zimbres Silva (mzimbres_at_[hidden])
Date: 2022-04-10 10:49:38
Hi,
On Sat, 9 Apr 2022 at 20:45, Vinícius dos Santos Oliveira
<vini.ipsmaker_at_[hidden]> wrote:
>
> Keeping the message in the buffer is not as much of a
> problem as you think. The memory usage will not be greater
> (std::string, for instance, will hold not only the string
> itself, but an extra possibly unused area reserved for
> SSO). However the pattern here might indeed favour
> fragmented allocations more. It might be more important to
> stress the allocators than trying to avoid them.
I had a look yesterday at the implementation of asio::dynamic_buffer
and think it would be simple to make x.consume(n) consume less
eagerly. All that would be needed is an accumulate parameter that
would change the behaviour of x.consume(n) to add an offset instead of
std::string::erase'ing the buffer.
std::string buffer;
std::vector<std::string_view> vec;

// Hypothetical extra parameter selecting the accumulating consume().
auto dbuffer = dynamic_buffer(buffer, ..., accumulate);

resp3::read(socket, dbuffer, adapt(vec));
// buffer still contains the data.
and then, after using the response, users would clear all accumulated
data at once
dbuffer.clear();
I will think more about whether this is a good thing. It looks more
flexible anyway and doesn't force accumulation on users. It would be
an opt-in feature.
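To make the idea concrete, here is a rough sketch of what I mean (the
class name is made up and the remaining DynamicBuffer requirements such
as prepare/commit/data are omitted): consume() only advances an offset
and a separate clear() performs the actual erase.

#include <algorithm>
#include <cstddef>
#include <string>

// Sketch only: consume() no longer erases the front of the string, it
// just records how much has been parsed; clear() releases everything
// accumulated so far.
class accumulating_dynamic_buffer {
public:
   explicit accumulating_dynamic_buffer(std::string& s) : data_{s} {}

   // Bytes read from the socket that the user has not released yet.
   std::size_t size() const { return data_.size() - offset_; }

   // Called by resp3::read after parsing n bytes.
   void consume(std::size_t n) { offset_ += (std::min)(n, size()); }

   // Called by the user once the adapted response is no longer needed.
   void clear()
   {
      data_.erase(0, offset_);
      offset_ = 0;
   }

private:
   std::string& data_;
   std::size_t offset_ = 0;
};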
> Is there any redis command where anything resembles a need
> for a "streaming reply"? How does Aedis deal with it?
RESP3 has two stream data types
1. Streamed string.
2. Streamed aggregates.
Aedis supports 1. AFAIK, there is no Redis command that uses either of them.
> Is there support to deserialize directly to an object
> type? For instance:
>
> struct MyType
> {
> int x;
> std::string y;
> };
I have added an example, please see
https://github.com/mzimbres/aedis/blob/master/examples/low_level/sync_serialization.cpp.
To keep things simple it serializes the structure you asked for in a
binary format. I expect, however, that most users will use JSON.
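Something along these lines is what I mean by binary serialization of
that struct (illustrative only, not the code from the example, and
without error handling; the function names are just placeholders that
get wired into the serialization hooks):

#include <cstring>
#include <string>
#include <string_view>

struct MyType {
   int x;
   std::string y;
};

// Writes x as raw bytes followed by the characters of y.
void to_binary(MyType const& in, std::string& out)
{
   out.append(reinterpret_cast<char const*>(&in.x), sizeof in.x);
   out.append(in.y);
}

// Inverse of the above. Assumes in.size() >= sizeof(int).
void from_binary(MyType& out, std::string_view in)
{
   std::memcpy(&out.x, in.data(), sizeof out.x);
   out.y.assign(in.substr(sizeof out.x));
}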
> Also, you should add a page on the documentation comparing
> Aedis to other libraries (e.g. cpp-bredis).
Todo.
> Honestly I think the "abstraction" dynamic buffer
> abstracts too little (it's quite literally a region of
> buffered data + capacity which you can implement by
> declaring one or two members in your own class) to offer a
> value worth pursuing. What really offers ease of use to
> the final user is formatted IO (which must be done on top
> of buffered IO - be it external or internal). Your library
> abstracts formatted redis IO. You could just as well
> buffer the data yourself (as in wrapping the underlying
> socket in a new class that keeps the state machine plus
> buffer there). For instance, C++'s <iostream> (which
> usually is *not* a good place to look for inspirations
> anyhow) merges formatted and buffered IO in a single place
> (and so do the standard scanners for many other
> languages), and that's fine.
This is what the high-level client does:
https://mzimbres.github.io/aedis/intro_8cpp_source.html
> Do notice how the *protocol* dictates the usage pattern
> for the buffered data here. It doesn't always make sense
> to decouple buffered IO and formatted IO. Keep the buffer
> yourself and ignore what everyone else is doing. You don't
> need to use every Boost.Asio class just because the class
> exists (that'd be absurd).
Sure, Aedis follows a bottom-up approach that uses the building blocks
provided by Asio. At some point I may need my own concepts and so on;
at the moment, however, those building blocks are suiting me well.
> The layers are:
>
> raw/low-level IO: Boost.Asio abstractions are good and were refined little-by-little over its many years of existence
> buffered IO: far too little to worry about; and Boost.Asio never really pursued the topic beyond an almost-toy abstraction
> formatted IO: complex and it helps to have abstractions from external libraries
>
> Do notice as well that buffered IO is a dual: input and
> output. We only talked about buffered input. For buffered
> output, what you really want is either (1) improving
> performance by writing data in batches,
If I understand you correctly, this is also what Aedis does; in Redis
it is called pipelining: https://redis.io/docs/manual/pipelining/
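On the wire it amounts to nothing more than this (a deliberately
simplified sketch with a synchronous write and Redis' inline command
form, not the Aedis API):

#include <boost/asio.hpp>
#include <string>

namespace net = boost::asio;

// Three commands are batched into a single payload and written with
// one write; the three replies then arrive back to back and can be
// parsed in order.
void ping_three_times(net::ip::tcp::socket& socket)
{
   std::string payload;
   payload += "PING\r\n";
   payload += "PING\r\n";
   payload += "PING\r\n";

   net::write(socket, net::buffer(payload));
}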
> or (2) avoiding
> the problems that originate from composed operations and
> concurrent writers (and several approaches would exist
> anyway - from fiber-level mutexes to queueing sockets),
I use a queue to prevent concurrent writes, like everybody else, I'm afraid.
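In rough strokes it is the usual Asio pattern where only the front of
the queue is ever being written (a sketch, not the actual Aedis code;
it assumes all calls happen on the same implicit strand):

#include <boost/asio.hpp>
#include <deque>
#include <string>
#include <utility>

namespace net = boost::asio;

class connection {
public:
   explicit connection(net::ip::tcp::socket socket)
   : socket_{std::move(socket)} {}

   // Queues a request and starts a write only if none is in flight, so
   // async_write calls never overlap.
   void send(std::string request)
   {
      bool const idle = queue_.empty();
      queue_.push_back(std::move(request));
      if (idle)
         do_write();
   }

private:
   void do_write()
   {
      net::async_write(socket_, net::buffer(queue_.front()),
         [this](boost::system::error_code ec, std::size_t /*n*/) {
            if (ec)
               return;
            queue_.pop_front();
            if (!queue_.empty())
               do_write(); // Keep draining the queue.
         });
   }

   net::ip::tcp::socket socket_;
   std::deque<std::string> queue_;
};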
Thank you for all the input btw.
Regards,
Marcelo