Subject: Re: [boost] [Concepts] Definition. Was [GSoC] [Boost.Hana] Formal review request
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2014-08-19 07:33:18


On 18 Aug 2014 at 9:54, Robert Ramey wrote:

> > You actually made some very valid points, points which were facepalm
> > obvious. Thank you. I tried hitting reply, but there appears to be
> > no mechanism for quoting, which would make a rebuttal confusing and
> > messy.
>
> OK - I'll try to figure out how to insert quoting.

I'll let you get the quoting reply thing working before I reply on
that site. You made some excellent, very valid points which I will fix
immediately, though many of them were already answered in the docs.

> To me this is the most valuable and interesting part of this
> discussion. I'm less interested in AFIO in particular here than in
> what I view as the "right (easiest and most useful)" way to create a
> complete Boost-quality C++ library. I see a lot of efforts with very
> clever and advanced coding, in which authors have invested a huge
> amount of effort, which fail to get traction for lack of all the
> rest of what makes for a good library. The whole
> goal behind the incubator is to sell my ideas on how to make the
> process easier so that motivated authors are more likely to avoid
> disappointment. Of course these are my personal ideas - but
> (lol) so far no one has posted any comments disputing them.

Where we diverge most strongly is in what we take "easier" to mean.
But I won't bother rehashing all that again.

> >> c) AFIO is much more complicated than I think it has to be - but I can't
> >> prove this because it's so hard to understand anything about it.
> >
> > On the last point I will say this: AFIO is clean, logical and simple
> > only when compared to other asynchronous i/o libraries. The Windows
> > overlapped i/o API, libuv and of course the POSIX aio API come to
> > mind, all of which have their own weird quirks too.
> [snip]
> I have used async i/o, both the Windows API and the Linux API
> versions, and have found them to be almost unusable and very, very,
> very difficult to get right. I am very much aware of the utility of
> such a library to improve performance and to help properly factor
> and decouple i/o bottlenecks from the rest of the application.
> I also believe that many users would find such a library very
> useful and would want to use it - if it were easy enough to use.
> This is the task of the library writer - to make the unusable usable.

You are already mistaken in the premise for a library such as
AFIO. Firstly, if you don't have a mission-critical i/o bottleneck
problem - one so serious that you are happy to throw your existing
design and codebase out the door - then you neither need nor want
async i/o. Modern kernels are truly excellent at hiding 80-90% of i/o
bottlenecks for you under synchronous usage, and bypassing the
kernel's algorithms and buffers so you can implement your own
directly atop the physical hardware really is only for the seriously
performance-hungry user. Most engineers will not design algorithms
or buffering systems which beat those of the kernel except in
specialised cases.
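
To illustrate what I mean, here's a rough sketch (illustrative only,
not AFIO code) of the plain synchronous approach, where the kernel's
page cache and readahead do all the heavy lifting for you:

  #include <fstream>
  #include <string>
  #include <vector>

  // Sequential buffered reads: the kernel spots the access pattern
  // and prefetches ahead of you; no application-level buffering
  // strategy is needed.
  std::vector<std::string> read_lines(const std::string& path)
  {
      std::ifstream in(path);
      std::vector<std::string> lines;
      std::string line;
      while (std::getline(in, line))
          lines.push_back(line);
      return lines;
  }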

This is your first misapprehension: async i/o is *supposed* to turn
your application design on its head, and completely convolute
everything in order to eke out that last bit of performance.

What AFIO hopefully does is hide the majority - not all - of the
platform-specific quirks. It intentionally does nothing, nor should
it, about the general design consequences of a fully asynchronous
i/o based solution for your code. As much as that might appear to
suck and make the library unfriendly, that's async i/o for you. It's
why hedge funds pay $300/hour and upwards to async i/o engineers:
most engineers aren't wired that way, and those who are are
especially rare.

> So I would say you're not thinking big enough. If AFIO is not simple
> enough to be used within an hour of looking over the docs, it's
> too complicated.
>
> Think bigger!!!!
>
> What would be an ideal interface ? is there a way to implement it?
> For example:
>
> main(..){
>
> // example one
> asyncio::istream ais("filename"); // works almost like a stream
> int x;
> ais >> x; // read an x
> ....
> ais.sync()

:)

Such a design would not make good use of hardware DMA, whereby use
of the CPU can be completely avoided for bulk data transfers. To
invoke hardware DMA, all i/o must be done from 4KB-aligned memory
buffers. Indeed, AFIO can enforce that for you because it's so
important to performance.
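
For example, here's a minimal sketch of what that looks like at the
POSIX level (illustrative only: these are the standard C calls, not
AFIO's API):

  #include <cstdlib>
  #include <new>

  // Allocate a 1MB buffer aligned to the 4KB page size so the kernel
  // can DMA straight into it (e.g. when the file is opened O_DIRECT).
  void* make_dma_buffer()
  {
      void* buf = nullptr;
      if (posix_memalign(&buf, 4096, 1 << 20) != 0)
          throw std::bad_alloc();
      return buf;  // release with free()
  }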

Generally, if you're bothering to use async i/o, you have no interest
in *ever* copying memory. That usually rules out all serialisation :)

You've got to remember, Robert, that the user of async i/o will spend
weeks squeezing microseconds of latency out of a routine whilst
keeping its stochastic variance within some bound. The maths guys
here will know what I mean. If you don't need that kind of control,
just use synchronous i/o; it's vastly easier.
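
The kind of measurement loop such a user lives inside looks something
like this (a toy sketch, nothing to do with AFIO itself):

  #include <chrono>
  #include <cmath>
  #include <cstdio>
  #include <vector>

  // Time a routine many times and report the mean latency and its
  // spread; it's the spread (the stochastic variance) these users
  // fight to keep bounded.
  template <class F>
  void profile(F&& f, int runs = 1000)
  {
      std::vector<double> us(runs);
      for (int i = 0; i < runs; ++i) {
          auto t0 = std::chrono::high_resolution_clock::now();
          f();
          auto t1 = std::chrono::high_resolution_clock::now();
          us[i] = std::chrono::duration<double, std::micro>(t1 - t0).count();
      }
      double mean = 0, var = 0;
      for (double x : us) mean += x;
      mean /= runs;
      for (double x : us) var += (x - mean) * (x - mean);
      std::printf("mean %.2f us, stddev %.2f us\n",
                  mean, std::sqrt(var / runs));
  }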

> // example two
> completion_function_object cfo = [](....);
> asyncio::istream ais("filename", cfo);
> ais >> x; // read an x, invoke cfo on completion
>
> // example three - cfo is created as an io manipulator
> ais >> x >> cfo;
>
> // example four - cfox and cfoy are created as io manipulators
> ais >> x >> cfox >> y >> cfoy;
> }

You can already schedule the invocation of completion callbacks using
completion() or call(). However, something like the nice terse DSEL
you just described is coming eventually in Vicente's monadic
continuations framework, where one will be able to schedule
zero-memory-copy transfers from socket to disk in a single sequence
of continuations. In other words, AFIO just "plugs in" seamlessly to
what will hopefully become the standard C++17 way of doing async
continuations for all C++ code. That will be the friendlier way of
programming AFIO, such that all the usual boilerplate you need to
write to get async libraries to interoperate goes away.
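
To give a flavour of the continuation style I mean, here's a toy
"then" in plain standard C++ (purely illustrative: none of these
names are AFIO's API, nor Vicente's):

  #include <future>
  #include <utility>

  // Chain f onto fut: when fut's value is ready, f runs with it and
  // f's result becomes the next future in the chain.
  template <class T, class F>
  auto then(std::future<T> fut, F&& f)
      -> std::future<decltype(f(std::declval<T>()))>
  {
      return std::async(std::launch::async,
          [fut = std::move(fut), f = std::forward<F>(f)]() mutable {
              return f(fut.get());
          });
  }

  int main()
  {
      auto a = std::async(std::launch::async, [] { return 21; });
      auto b = then(std::move(a), [](int x) { return x * 2; });
      return b.get() == 42 ? 0 : 1;  // continuations ran in order
  }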

Niall

-- 
ned Productions Limited Consulting
http://www.nedproductions.biz/ 
http://ie.linkedin.com/in/nialldouglas/


