Subject: Re: [boost] Boost.HTTPKit, a new library from the makers of Beast!
From: Vinnie Falco (vinnie.falco_at_[hidden])
Date: 2017-10-05 00:33:10
On Wed, Oct 4, 2017 at 4:18 PM, Niall Douglas via Boost
<boost_at_[hidden]> wrote:
> I reiterate from my Beast review that the best design for these above is
> to use:
>
> * span<T>
> * Ranges TS
First of all, thanks for investing the time to respond, your feedback
is appreciated. I think this might be one of those rare cases where I
am in partial agreement. A design that uses standard library types is
better than one which requires `std::experimental::net::const_buffer`
or `boost::asio::const_buffer`. However, a couple of points:
* Ranges TS is unnecessary, as ConstBufferSequence and
MutableBufferSequence are already modeled after ForwardRange.
* `span<byte>` and `span<byte const>` are not great choices because
`span` and `byte` are not part of Boost and not part of the standard
library. The closest we can get and still have something compatible
with C++11 is `pair<void const*, size_t>` and `pair<void*, size_t>`.
* Paul's suggestion to change the requirements of
ConvertibleToConstBuffer is even better. Change the definition of
`const_buffer` to include conversion constructors that can accept a
more general concept, such as any class which implements data() and
size().
If I were to re-implement Beast's buffer adapters, dynamic buffers,
parser, and serializer to use these ideas, then they would no longer be
compatible with Asio. Using them would be more cumbersome and less
composable with other standard components which use Asio's buffer
concepts.
Feedback from users is that they overwhelmingly prefer solutions that
work out of the box over adherence to some model of buffer
abstraction. I have no control over what happens in Boost.Asio, and I
do not control the evolution of the Networking TS, although I have
filed some additional issues against it.
For better or worse, Boost.Asio is the model that we have and what
people are coding against. Therefore, as with Beast, my design choice
here is pragmatic: use what exists, and what works. The approach
offered in Boost.Buffers allows libraries to be written which do not
depend on all of Boost.Asio, yet remain compatible with Asio's
buffer concepts.
There's a light at the end of the tunnel, though: if you can convince
the Asio author and influence the evolution of the Networking TS to make
changes to the buffer model that eliminate the custom types, then my
proposed Boost.Buffers library could be updated to reflect those
changes. The beneficiaries will at all times be the users - they get
something that works today, and the possibility of something that will
work even better tomorrow.
One thing you might consider is that span<T> does not have the
"buffer debugging" feature found in `boost::asio::const_buffer`:
<https://github.com/boostorg/asio/blob/b002097359f246b7b1478775251dfb153ab3ff4b/include/boost/asio/buffer.hpp#L109>
I don't see a sane way to add that feature to span<> without
burdening everyone using span, rather than just the people using it
for networking buffers.
The buffer debugging feature is really useful. Asynchronous
programming is already hard enough, having every additional advantage
helps.
> You chose, against my advice, to base Beast on the outdated and hard
> coded buffer sequence design in ASIO.
I'll say the same thing I said the last time you brought this up. If
you feel strongly about it, then you need to write a paper. Otherwise,
your advice to me effectively becomes "ignore existing and emerging
standards and invent a new, incompatible concept." That's way too much
risk for me to take on, and I have no evidence that my users want such
a thing. All the feedback I have received thus far cites compatibility
with Asio as a primary motivator for adoption of Beast. Ignoring this
seems...reckless.
> But for a standalone library, things are different. I would advise
> strongly against accepting a Boost.Buffers library unless it is based on
> span<T> and Ranges and is forward looking into C++ 20 design patterns,
> not backwards at C++ 98 era design patterns like ASIO's buffer sequences.
If you feel that span<T>, Ranges, and C++20 design patterns are
important, then you should be enthusiastic about my Boost.Buffers
proposal, because it physically separates the buffer concepts from the
networking algorithms. It effectively "factors the buffers out of
Asio." In other words, it completes the first step of achieving your
goal: it separates the buffers from the rest of Asio, where they can be
acted on independently.
Note that I am in favor of Paul's proposal to change the requirements
of ConvertibleToConstBuffer, in order to relax the dependency on a
concrete type. I believe this is in line with your goals as well.
> Regarding HTTPKit etc, I haven't looked into it much, but if I remember
> I had some issues with your implementation and design there too
> specifically the lack of use of zero copy views.
In the version of Beast that was accepted into Boost,
beast::http::basic_parser was already "zero-copy" (when presented with
input as a single contiguous buffer) and beast::http::serializer was
"zero-copy". They still are.
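As a hypothetical illustration of what "zero-copy" means here (this is not Beast's actual API; the names are invented), a parser can hand back `std::string_view`s that alias the caller's buffer instead of copying field data:

```cpp
#include <string_view>

// Invented types for illustration; the views below point into the
// caller's buffer, so parsing allocates and copies nothing.
struct header_view
{
    std::string_view name;
    std::string_view value;
};

// Parse a single "Name: value\r\n" line from a contiguous buffer.
// Assumes well-formed input containing a ':' (a real parser would
// report an error instead).
header_view parse_header(std::string_view line)
{
    auto colon = line.find(':');
    auto name  = line.substr(0, colon);
    auto value = line.substr(colon + 1);
    // Trim one leading space and any trailing CR/LF from the value.
    if (!value.empty() && value.front() == ' ')
        value.remove_prefix(1);
    while (!value.empty() && (value.back() == '\r' || value.back() == '\n'))
        value.remove_suffix(1);
    return {name, value};
}
```

The returned views remain valid only as long as the input buffer does, which is exactly the contract a zero-copy parser like beast::http::basic_parser imposes on its callers.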
> As a standalone library where a much wider use case applies,
> I think I would need to set a much higher bar in any review.
So, I think my comments earlier apply here as well. The primary
consumers of beast::http::basic_parser and beast::http::serializer are
using Asio buffer concepts (Beast stream algorithms being the best
examples). Factoring the buffer concepts out of Asio into a new
library, Boost.Buffers, and then factoring the parser and serializer
out of Beast into a new library, Boost.HTTPKit, which uses
Boost.Buffers without needing Boost.Asio as a dependency, seems like a
very pragmatic way forward. It gives stakeholders something they've
been asking for, doesn't reinvent established buffer concepts, doesn't
force Boost.Asio as a dependency, and yet is compatible with
Boost.Asio for the users that want it.
Thanks