Subject: [boost] [asio] RFC on new reliable UDP library
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2014-09-09 12:59:12
CC: boost_at_[hidden], asio-users_at_[hidden]
Dear Boost and ASIO users,
I am writing to ask for comments on the design of and interest in a
generic reliable messaging library based on ASIO. Such a library
could bring to ASIO very considerably improved data transfer
performance over TCP, plus the ability to ignore NAT firewalls. The
design imperatives for such a library are proving tricky to reach
consensus upon, so perhaps the community might be able to help.
Firstly, what features should such a reliable messaging library have?
Here are some for you to consider:
* Should it abstract away the wire transport behind interchangeable
backends, so that, for example, a UDT backend could be swapped for TCP
with just a recompile? Upper-level code would then not need to
consider what form the wire transport takes, only that messages are
reliably delivered or a connection failure is reported. Some mechanism
for dealing with out-of-order messages, and for message expiry in case
congestion control excessively delays outbound messages, would also be
included.
You should be aware that congestion control for a reliable messaging
library means one has no choice but to pace message construction
(i.e. block, or return EWOULDBLOCK, when handing out new message send
templates), otherwise one ends up buffering more data for sending than
can ever potentially be delivered. This is because we can neither drop
reliable messages nor pace writes, so pacing new message construction
is the only alternative.
* Should it be zero copy for both receipt and send? If so, we'll have
to break with the existing ASIO API somewhat - namely that
MutableBufferSequence will now need to become
MutableFileOffsetBufferSequence so one can supply scatter/gathers of
user space buffers and kernel side buffers. We'll also have to
templatise the buffer management so client code can supply the
virtualisation (e.g. an iterator) which remaps inbound message
fragments into apparently contiguous regions. By the time we have
done all of these, the API looks quite different.
* Should it be written using C++ 14 idioms instead of the existing 03
idioms ASIO uses? If so, basing read and write handlers on the C++ 17
proposed experimental::expected<T, E>
(http://www.hyc.io/boost/expected-proposal.pdf) makes much more sense
than the fixed ASIO handler prototype void (*handler)(const
boost::system::error_code& ec, std::size_t bytes_transferred) - it is
more powerful, more expressive, optionally integrates cleanly with
no-alloc future-promise and the rest of the C++ 11/14 STL, and lets
handlers use any mix of error reporting mechanisms they choose. It
also opens the door to a much cleaner and tighter integration of the
future C++ 17 Networking TS with Concurrency TS.
Equally, especially if combined with the earlier changes, it goes
further and further away from the present ASIO design.
* How much concurrency should it be capable of? Unlike TCP, where
concurrent writers make little sense, UDP allows enormous
parallelism, potentially into queue depths of thousands on decent
NICs - sufficiently so that a strand based design may simply not
scale. We believe that with some effort a completely lock free design
which can make full use of the batch datagram send and receive APIs
on Linux and Windows is possible - we believe that such a design
would scale close to linearly with the CPU cores available. Unfortunately,
as batch datagram send/receive is not supported by ASIO and on POSIX
the main dispatcher appears to be locked by a mutex, this would
require a custom code path and i/o service. If combined with all of
the above, it starts to look like a rewrite of the core of ASIO which
seems very big just to implement reliable messaging.
I guess that is enough for now to start discussion. My thanks in
advance for your time.
-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk