Subject: Re: [boost] [afio] Formal review of Boost.AFIO
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2015-08-28 04:16:36


On 08/28/2015 07:47 AM, Thomas Heller wrote:
> On 08/27/2015 04:04 AM, Niall Douglas wrote:
> <snip>
> A lot of things about future vs. shared_future...
> </snip>
>>
>> Thoughts?
>
> Ok, after reading all the other messages in this and other threads I am
> starting to understand what the real problem is (all this is IMHO, of
> course):
>
> The biggest mistake in your design is the functions taking
> "preconditions". This is what brought you all this mess. The problem is
> that those are not actually preconditions but arguments to the function;
> more specifically, your "precondition" is always (?) the handle (or a
> future to the handle, so to speak) of the operation being performed,
> whose value you need to get. All of that is a result of the missing
> single responsibility principle in your code. After getting rid of it,
> you will see that most of the points you brought up against using
> std::future (or boost::future) will actually go away.
> Here is my concrete suggestion:
> - Let your async functions return a future to their result! (Not a
> tuple to the handle and the result...)
> - Don't use futures as arguments to your async functions. Say what you
> want as input!
>
> Let's take on your read example... and suppose we have the following two
> function signatures:
>
> future<handle> open(std::string name);
> future<std::vector<char>> read(handle const & h, std::size_t bytes,
>                                std::size_t offset);
>
> See how easy, clear and succinct those functions are? Almost
> self-explanatory without any further documentation.
>
> Now, on to your example about a 100 parallel reads, so let's encapsulate
> it a bit to concentrate on the important part:
>
> future<void> read_items(handle const & h)
> {
>     vector<future<vector<char>>> reads;
>     reads.reserve(100);
>     for(std::size_t i = 0; i != 100; ++i)
>     {
>         reads.push_back(async_read(h, 4096, i * 4096));
>     }
>     // handle reads in some form ... for simplicity, just wait on all
>     // of them ...
>     return when_all(std::move(reads));
> }
>
> No continuations, cool, eh?
> (Well, that's a lie: when_all attaches continuations to the passed
> futures, and so does the conversion from the result of when_all to
> future<void>.)
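
To spell out that last conversion rather than rely on it implicitly, the
continuation on the when_all result could look roughly like this (only a
sketch against hypothetical standard-style futures, using the names from
above):

    // Sketch only: observe completion of the whole batch and drop the
    // results to obtain a future<void>; errors of individual reads stay
    // in the inner futures until someone calls .get() on them.
    return when_all(std::move(reads))
        .then([](future<vector<future<vector<char>>>> all)
        {
            all.get();
        });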

I took the effort to reimplement your hello world example as well:
https://gist.github.com/sithhell/f521ea0d818d168d6b35

Note that this hypothetical example uses standard-conformant futures (no
namespace specified ...). The intent should be clear. It gets a little
messy, of course, with all those continuations, but as you are a fan of
C++ coroutines, look at the end of the gist where the equivalent
coroutine solution has been posted. To further simplify the
non-coroutine example one would probably resort to the synchronous
version of the API. Note that any errors from the preceding operations
can be handled explicitly in the continuation where the next operation
is scheduled. Exceptions will be propagated naturally (as mandated by
the standard) through the chain of continuations. This should also be
more efficient than your afio::future<> based solution, as we don't have
to synchronize with the (shared_)future<handle> shared state every time
we attach a continuation (this synchronization also has to happen if the
future is already ready, btw).
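
To illustrate that error handling point, one step in such a chain could
look roughly like this (a sketch only, reusing the hypothetical open()
and async_read() from above):

    open("niall.txt")
        .then([](future<handle> fh)
        {
            // .get() rethrows any exception stored by open(); we could
            // handle it right here, otherwise it simply propagates into
            // the future returned by .then().
            handle h = fh.get();
            return async_read(h, 4096, 0);
        });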

>
> In order to call it, you now have several options, as already outlined
> in another mail:
> 1. read_items(open("niall.txt").get());
> 2. open("niall.txt").then([](future<handle> fh){
>        return read_items(fh.get()); });
> 3. read_items(await open("niall.txt"));
>
> As you can see, the question about shared_future vs. future doesn't even
> arise!
> Was it already mentioned that an rvalue (unique) future can be
> implicitly converted to a shared_future?
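
For illustration, with the hypothetical open() from above, that
conversion is simply:

    // An rvalue future binds to the non-explicit
    // shared_future(future&&) constructor; alternatively call .share().
    shared_future<handle> sh = open("niall.txt");
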
>
> Now I hear you saying, but what if one of my operations depend on other
> operations? How do I handle those preconditions?
> The answer is simple: Use what's already there!
> Utilities you have at your disposal are:
> - when_all, when_any, when_n, etc.
> - .then
>
> This allows for the greatest possible flexibility and a clean API!
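
For example, a dependency that AFIO would express as a "precondition"
argument could be written with the standard composition utilities
roughly like this (again only a sketch with the hypothetical names from
above):

    // "the read may only start once both opens have completed":
    when_all(open("a.txt"), open("b.txt"))
        .then([](auto both)
        {
            auto futures = both.get();          // tuple of ready futures
            handle a = std::get<0>(futures).get();
            handle b = std::get<1>(futures).get();
            return async_read(a, 4096, 0);      // b is available as well
        });
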
>
> So to repeat myself, as a guideline to write a future based API:
> 1. You should always return a single (unique) future that represents the
> result of the asynchronous operation.
> 2. Never return shared_future from your API functions.
> 3. Use the standard wait composition functions instead of trying to be
> clever and hide all that from the user inside your async functions.
>
>
> (NB: In HPX we have one additional function called dataflow; it models
> await on a library level and executes a passed function whenever all
> input futures are ready. This reminded me a lot of your "preconditions".)
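
A rough idea of how that dataflow reads in practice (sketch only, with
the hypothetical open() from above returning an hpx::future<handle>):

    // The lambda runs once both opens have completed and receives the
    // (already ready) futures as its arguments.
    auto r = hpx::dataflow(
        [](hpx::future<handle> fa, hpx::future<handle> fb)
        {
            // .get() does not block here, both futures are ready
            return do_something(fa.get(), fb.get()); // hypothetical
        },
        open("a.txt"), open("b.txt"));
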
>
>>
>> Niall
>

-- 
Thomas Heller
Friedrich-Alexander-Universität Erlangen-Nürnberg
Department Informatik - Lehrstuhl Rechnerarchitektur
Martensstr. 3
91058 Erlangen
Tel.: 09131/85-27018
Fax:  09131/85-27912
Email: thomas.heller_at_[hidden]
