Subject: Re: [boost] [afio] Formal review of Boost.AFIO
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-08-24 17:10:08
On 24 Aug 2015 at 21:46, Antony Polukhin wrote:
> 1) "Hello World, asynchronously!"
>
> This looks really good and impressive. However, the
> `afio::current_dispatcher_guard h(afio::make_dispatcher().get());` code
> seems like boilerplate that is used all around the examples. Could
> it be simplified? For example, adding a default constructor to
> `current_dispatcher_guard` will significantly reduce typing.
Hmm, interesting idea. I didn't do that before because I believe it
could easily lead to bugs in end user code. I'll sleep on it.
> Also, `afio::future<>` misses a destructor description. This is essential,
> because unfortunately in C++ we have different behaviors for futures: a
> future returned by `std::async` blocks on destruction, while a default
> `std::future` abandons the shared state and does not block.
Logged as https://github.com/BoostGSoC13/boost.afio/issues/94.
For reference, no future in lightweight futures ever blocks on
destruction.
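For example (a quick sketch using async_file as in the hello world
example; treat the exact flags as from memory):
    {
      afio::future<> f = afio::async_file("helloworld.txt",
        afio::file_flags::create | afio::file_flags::read_write);
    } // f is destructed here without blocking: the shared state is
      // simply abandoned, unlike a future returned by std::async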
> I've failed to find a description of the `when_all_p` function in the
> reference section. Is it documented?
It's mentioned in the introduction as an error-propagating wait
composure convenience function; as it's part of Monad, it's not
documented in AFIO.
when_all_p() is pretty simple. If any input future goes errored, the
future for the wait composure gets the same error. I suppose you
could lose errors due to this design, but it sure saves typing out
code to check the errored state of every single future in the
composure every single time.
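To illustrate (a quick sketch; when_all_p is part of Monad, so check
its docs for the exact return type):
    afio::future<> a = afio::async_file("a.txt", afio::file_flags::read);
    afio::future<> b = afio::async_file("b.txt", afio::file_flags::read);
    auto all = afio::when_all_p(a, b); // completes when both complete
    all.get(); // if either a or b went errored, this surfaces that
               // error, so one check covers the whole composure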
> 2) "A less toy example: Concatenating files"
>
> That example is not nice in multiple ways. First of all, it combines
> explicit usage of `boost::afio::dispatcher` with implicit
> `current_dispatcher_guard`. It would be good to have a single way of
> dealing with dispatchers in the example. `current_dispatcher_guard`
> seems simpler (and that's what I like in examples :) )
I'm not sure what you mean here. That example creates a dispatcher.
It then sets the current thread-local dispatcher via RAII of the
guard variable.
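In other words, the same pattern as in the hello world example:
    auto d = afio::make_dispatcher().get();  // explicit dispatcher
    afio::current_dispatcher_guard guard(d); // RAII: d is now the
                                             // thread local dispatcher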
> `atomic<off_t> written(0);` caught my eye. After that, in the reference
> section, I found that `make_dispatcher` accepts a thread pool. Is it
> possible to follow the Boost.Asio convention with `io_service::run`-like
> calls? I often reuse the main thread for ASIO's async operations, so that
> feature could be useful for some resource/performance paranoid users.
This is no problem. You implement the abstract base class
afio::thread_source
(https://boostgsoc13.github.io/boost.afio/doc/html/afio/reference/classes/thread_source.html)
with some asio::io_service lvalue reference. AFIO will use that ASIO
io_service for that dispatcher.
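Roughly like this (a sketch from memory - check the reference page
above for the exact constructor and make_dispatcher overload):
    boost::asio::io_service ios;
    // derive from the abstract base, handing it our io_service
    struct main_thread_source : afio::thread_source
    {
      explicit main_thread_source(boost::asio::io_service &s)
        : thread_source(s) {}
    } src(ios);
    auto d = afio::make_dispatcher(src).get(); // assumed overload
    afio::current_dispatcher_guard guard(d);
    ios.run(); // the main thread now services AFIO's completions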
> Also, a thread
> count of 1 seems more correct and less error prone (by default this will
> require no thread synchronization from users).
I'm not sure which thread count you mean here.
> Please rename the `utils::file_buffer_allocator` into
> `utils::page_allocator`. In that way it's shorter and self-documenting in
> some way.
Good idea. Logged at
https://github.com/BoostGSoC13/boost.afio/issues/95.
> Now the biggest problem of the second example: it allocates a lot of memory
> and does a lot of data copies from kernel to userspace and back.
Actually it does no memory copying at all on Windows or FreeBSD
thanks to the page allocator. This stuff is exactly what AFIO is for.
The trick is to never touch a buffer from user space, and that
example doesn't. When you call async_read() on pages you have never
faulted into existence, the kernel will DMA the data straight into
the kernel file page cache and map those pages into userspace for
you. When you call async_write() on a set of pages, the kernel will
tag each as copy-on-write and schedule them for DMA straight to the
hard drive. So long as you never write into a page scheduled for
write with async_write(), no memory copying ever happens, not once -
on Windows and FreeBSD.
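In code, the pattern is roughly this (from memory; note the allocator
is still called file_buffer_allocator at present):
    // whole pages which have never been faulted into existence
    std::vector<char, afio::utils::file_buffer_allocator<char>>
      buffer(1024 * 4096);
    auto in = afio::async_file("input.dat", afio::file_flags::read);
    // the kernel DMAs into its page cache and maps the pages to us
    auto r = afio::async_read(in, buffer.data(), buffer.size(), 0);
    r.get();
    auto out = afio::async_file("output.dat",
      afio::file_flags::create | afio::file_flags::write);
    // pages get tagged copy-on-write and scheduled for DMA to disk;
    // don't touch buffer before w completes, or a copy happens
    auto w = afio::async_write(out, buffer.data(), buffer.size(), 0);
    w.get();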
On Linux the kernel does copy memory, unless the
afio::file_flags::os_direct flag is specified. Linus unfortunately
has strong views on "stupid MMU page tricks" so don't expect this to
change any time soon.
> Additionally, this example is hard to understand and requires a lot of
> code while solving a very common problem. Could AFIO reuse POSIX splice/vmsplice
> and provide an async_copy function? If yes, then amount of code could be
> significantly reduced and no memory allocations will be required.
splice/vmsplice are Linux only, not POSIX. FreeBSD implements a
subset of them for sockets only.
Unfortunately they have an annoyingly broken API which is incompatible
with the ASIO reactor. The problem is SPLICE_F_NONBLOCK: you cannot
feed such an fd into an alertable wait state for ASIO, which means you
need to dedicate a thread to block on the kernel-side copy. I really
wish Linux were more like BSD - on BSD you can have a kevent notify
you when an async i/o operation changes its state. Lovely API, really
efficient too, and it is ASIO compatible.
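The best you can do is block a dedicated thread on something like this
(Linux-only sketch, error handling elided; file-to-file splice has to
bounce through a pipe):
    #include <fcntl.h>
    #include <unistd.h>
    void blocking_copy(int fd_in, int fd_out, size_t len)
    {
      int p[2];
      pipe(p);
      while (len > 0)
      {
        // blocks: these fds cannot be fed to ASIO's reactor
        ssize_t n = splice(fd_in, nullptr, p[1], nullptr, len, 0);
        if (n <= 0) break;
        splice(p[0], nullptr, fd_out, nullptr, (size_t)n, 0);
        len -= (size_t)n;
      }
      close(p[0]);
      close(p[1]);
    }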
I don't discount looking into them again sometime in the future, but
I didn't find any benefit of splice() over read()/write() until 64Kb
blocks or bigger, and even then it is not a huge benefit. I suspect
the cost of copying memory is low in the single-threaded case relative
to the cost of the Linux kernel file page cache. Of course, at big
thread counts you end up evicting much useful data from the L1/L2/L3
caches with file i/o on Linux, but we're getting into premature
optimisation territory here, I think.
Niall
--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/