Subject: Re: [boost] [afio] Formal review of Boost.AFIO
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2015-08-25 11:44:38
On 25.08.2015 3:24 PM, "Niall Douglas" <s_sourceforge_at_[hidden]> wrote:
>
> On 25 Aug 2015 at 8:22, Thomas Heller wrote:
>
> > >> As far as I can see, there is not a single function documented that
> > >> returns anything other than some equivalent to "shared_future<handle_ptr>".
> > >
> > > The synchronous single shot functions return an afio::handle_ptr or
> > > some type appropriate for that function (e.g. enumerate() returns a
> > > vector of directory_entry).
> > >
> > > The asynchronous single shot functions return an afio::future<T>
> > > where T is usually void. afio::future<T>.get_handle() returns an
> > > afio::handle_ptr, while afio::future<T>.get() returns whatever T is
> > > (which might be void).
> > >
> > > The batch functions return vectors of afio::future<T>.
> >
> > Ahh, I missed the enumerate (btw, the sync enumerate example only shows
> > the usage of the async version...) and stat functions (and maybe some
> > others). The question remains:
> > Why do they also need to return a handle?
> > [snip]
> > Why does it need to be shared_ptr<handle> in the first place? Why not
> > just handle? A shared_ptr always implies shared ownership; when, for
> > example, a boost::afio::file function returns, with whom is the ownership
> > shared?
> > [snip]
> > It is misdesigned in the way that it implies shared ownership from the
> > start on, that is something that should be left to the user.
> > [snip]
> > That's actually a very strange statement. shared_ptr implies shared
> > ownership, aka some shared state. Sharing state requires synchronization
> > to protect it from races. And yes, if you modify this shared state from
> > two threads without synchronization, you have accidentally introduced a race.
> > [snip]
> > That doesn't make a lot of sense ... Of course, if your state is not
> > shared, you don't introduce races, but if you don't have a shared state,
> > why use shared_ptr?
>
> I think, Thomas, you have missed something very fundamental about the
> AFIO design. I hope you don't mind me not replying to the rest of it
> (which was useful); I'll just dig into the above, as others may be
> where you are.
>
>
> 1. An afio::future<> is *always* a future handle to a file. You asked
> why all functions always return a future handle to a file instead of
> just the functions which open a file. I'm getting the feeling you
> didn't understand that:
>
>   future<vector<directory_entry>> async_enumerate(future<> precondition)
>
> ... expands, quite literally as you'll see in afio.hpp, into:
>
> return precondition.then(detail::async_enumerate());
>
> ... which is a Concurrency TS continuation. detail::async_enumerate
> might then be effectively implemented as:
>
>   struct async_enumerate
>   {
>     future<vector<directory_entry>> operator()(const future<> &precondition) const
>     {
>       return await do_async_directory_enumeration(precondition.get_handle());
>     }
>   };
>
> In other words, when the future precondition becomes ready, it will
> execute the continuation, which calls the call operator on the
> instance of detail::async_enumerate. That call operator will schedule
> (via await) the directory enumeration of the file handle that has just
> become ready in the future precondition.
Erm, no, await doesn't schedule the directory enumeration; it just continues
the control flow once the future returned from
do_async_directory_enumeration becomes ready. In fact, await would have been
better suited to waiting for the precondition.
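Roughly this is what I would have expected, reusing the names from your
snippet (an untested sketch in the same resumable-functions style, not a
claim about how AFIO is actually implemented):

   future<vector<directory_entry>> async_enumerate(future<> precondition)
   {
     // this is where await shines: suspend until the preceding operation
     // is done, no .then(detail::async_enumerate()) functor needed at all
     await precondition;
     // the call itself schedules the enumeration; the await below merely
     // resumes us once the returned future becomes ready
     return await do_async_directory_enumeration(precondition.get_handle());
   }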
>
> This is why afio::future<> *always* is a future to a handle to a
> file. Does this make sense?
No, not really, that doesn't answer my question at all.
>
>
> 2. The reason why it's shared_ptr<handle> rather than handle * is
> precisely because of how lifetime has to be managed across unknown
> continuations. If the user writes:
>
> future<> oh=async_file("niall.txt");
> future<> reads[100];
> for(size_t n=0; n<100; n++)
> reads[n]=async_read(oh, buffers[n], 10, n*4096);
>
> So this schedules niall.txt to be opened, and then adds 100
> continuations onto that file open each of which schedules the read of
> 10 bytes from a 4Kb multiple. All those 100 continuations occur in
> *parallel*, and could complete in *any* order.
>
> So how do you keep the handle to the niall.txt file around until the
> last continuation using that file has completed, which is at some
> unknown point in the future? The natural fit is shared_ptr. It's
> exactly what it was designed for.
>
> Does this make sense?
No, not at all, sorry. The file handle should be kept alive by the future
holding on to it, and probably also by the continuations attached by
async_read. Anything else should be handled by the user. Furthermore, if
I'm not completely mistaken, all OSes internally reference count open file
handles anyway, so why duplicate that in user space?
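To illustrate the "handled by the user" part: a handle with unique
ownership, where shared ownership is an explicit choice at the call site. A
minimal, untested sketch in plain POSIX terms, nothing AFIO-specific:

  #include <fcntl.h>
  #include <unistd.h>
  #include <memory>
  #include <utility>

  // Movable, uniquely owned file handle; nothing is shared unless the user
  // explicitly asks for it.
  struct unique_handle
  {
    explicit unique_handle(int fd) : fd_(fd) {}
    unique_handle(unique_handle &&o) noexcept : fd_(o.fd_) { o.fd_ = -1; }
    unique_handle &operator=(unique_handle &&o) noexcept
    {
      std::swap(fd_, o.fd_);
      return *this;
    }
    ~unique_handle() { if (fd_ != -1) ::close(fd_); }
    int fd_ = -1;
  };

  int main()
  {
    // Only the caller who actually needs shared ownership pays for it:
    auto shared = std::make_shared<unique_handle>(::open("niall.txt", O_RDONLY));
  }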
>
>
> 3. You have stated that you dislike the idea of afio::future<T>
> having shared future semantics for the future handle. But can you see
> that it is not possible to schedule 100 parallel data read
> continuations onto a file open unless the handle has shared future
> semantics?
>
> After all, if it had future semantics, you could only schedule
> exactly one read to each file open because then you would consume the
> future. It would not be possible to schedule parallel operations on
> any single preceding operation!
That's correct. However, you miss an important point:

   shared_future<T>::shared_future(future<T> &&)

Meaning, an rvalue future<T> can be implicitly converted to a
shared_future<T> (the same is true for unique_ptr and shared_ptr, BTW). With
that, no public-facing API function should ever return a shared_future<T>
in my book, as this doesn't make a lot of sense when a future is seen as a
return-value transport endpoint.
It should be the user's responsibility how to deal with that. Some
options:
- just ".get()" the handle and continue with it as a named variable. This
will save you the overhead of attaching continuations over and over again.
- just ".then()" and continue as above
- await it (my favorite actually) and continue as above
- put it into a shared_future and continue as you outlined it.
I understand, of course, that this might get clumsy in your current design,
hence my criticism.
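To make the last option concrete, with Concurrency TS / Boost.Thread style
futures (async_file and handle here stand in for whatever the library ends
up exposing; untested sketch):

   // The library returns a plain future<handle>; the *user* opts into
   // sharing via shared_future's converting constructor from future<T>&&:
   shared_future<handle> oh = async_file("niall.txt");

   // Fan-out is now an explicit, user-visible decision:
   auto r1 = oh.then([](shared_future<handle> h) { /* first read */ });
   auto r2 = oh.then([](shared_future<handle> h) { /* second read, in parallel */ });

   // A single consumer would instead just .get() or await the future<handle>.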
>
>
> Perhaps I am not understanding your concerns correctly?
Looks like it.
>
> Niall
>
> --
> ned Productions Limited Consulting
> http://www.nedproductions.biz/
> http://ie.linkedin.com/in/nialldouglas/