Subject: Re: [boost] [gsoc-2013] Boost.Thread/ThreadPool project
From: Niall Douglas (ndouglas_at_[hidden])
Date: 2013-04-30 11:16:51
> On 29/04/13 21:35, Niall Douglas wrote:
> > No major changes to the C++11 futures API are needed. Rather, a
> > different, but compatible, optimized implementation of future<> gets
> > returned when used from a central asynchronous dispatcher. So the C++
> > standard remains in force; only the underlying implementation is more
> > optimal.
> Could you explain to me what a central asynchronous dispatcher is?
An asynchronous procedure call implementation. Here's Microsoft's:
px. Here's QNX's: http://www.qnx.com/developers/articles/article_870_1.html.
Boost.ASIO is the same thing for Boost and C++ but implemented by
application code. It is *not* like asynchronous POSIX signals which are a
nearly useless subset of APCs.
> If not, please could you elaborate what kind of optimizations can be achieved?
If you have transactional memory in particular, you gain multi-CAS and the
ability to (un)lock large ranges of CAS locks atomically, and a central
dispatcher design can create batch lists of threading primitive operations
and execute the batch at once as a transaction. Without TM, you really need
the kernel to provide a batch syscall for threading primitives to see large
performance gains.
> > Futures come in because you'd naturally use std::packaged_task<> with
> > Boost.ASIO. It returns a future.
> Could you point me to the Networking paper proposal that has
> packaged_task<> on its user interface. I would expect this to be an
> implementation detail.
You misunderstand me. *If* you want Boost.ASIO to dispatch unknown
std::function<>, *then* std::packaged_task<> is the most obvious route.
And the correct way to pass a value from one execution context to another is
std::future<>. This includes the single-threaded case, e.g. when going
through a C++ -> C -> C++ transition as a stack unwinds.
> > For reference, the AFSIO/AFIO project is *not* a threadpool. It's an
> > asynchronous execution engine based on Boost.ASIO that lets you chain
> > vast numbers, huge arrays of std::function<> whose returns are fetched
> > via std::future<>, to be executed asynchronously according to specified
> > dependencies e.g. if A and B and C, then D, then E-G. That sort of
> > thing.
> So the thread pool you need is an internal one that is adapted to your
> particular needs, isn't it?
I think you don't understand what Boost.ASIO is or how it works.
Boost.ASIO's core is boost::asio::io_service. That is its dispatcher
implementation, with each dispatch execution context being executed via
boost::asio::io_service::run(), which is effectively an event loop. Third
parties then enqueue items to be dispatched using
boost::asio::io_service::post(). You don't have to run Boost.ASIO using
multiple threads: it can be single threaded.
So is my thread pool an internal one adapted to my particular needs? In the
sense I need not use threads at all, yes. In the sense that
boost::asio::io_service's use scenario is inviolate, no. Boost.ASIO has its
API, and you have to use it. And Boost.ASIO's API will be similar to the TR2
networking proposal's.
There is nothing stopping a person merging a Boost.ASIO managed thread pool
with a traditional thread pool. I would struggle to see the use case, though
- this would actually be an excellent thought project, one which would
benefit any such design.
> > One thing presently implemented on that engine is asynchronous file i/o,
> > and in the next month or two you'll hopefully see batch parallel SHA256
> > (4-SHA256 SSE2 and NEON implementations) also added to the asynchronous
> > engine. The idea is that the engine is fairly generic for anywhere you
> > do need to chain lots of coroutine type items together (not that it uses
> > Boost.Coroutine yet). v1 isn't particularly generic nor optimal, but I'm
> > hoping with feedback from Boost that v2 in a few years' time would be
> > improved.
> As Oliver noted you could take a look at Boost.Fiber and Boost.Task.
Right now anything Boost.Context based can't have multiple contexts
simultaneously entering the kernel apart from on QNX and possibly Hurd.
Therefore, for the time being, full fat threads are the only thing being
considered. If kernel support ever includes coroutines or fibres, that will
be eagerly added (I like coroutines, I use them a lot in Python). Note that
on async i/o capable platforms multiple threads are solely used for
non-async APIs like batch directory creation and batch chmod. Normal i/o
gets multiplexed using completion handlers. Currently on Windows for
example, threads are barely used at all.
--- Opinions expressed here are my own and do not necessarily represent those of BlackBerry Inc.