Subject: Re: [boost] [gsoc-2013] Boost.Thread/ThreadPool project
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2013-05-01 04:47:24

On 30/04/13 17:16, Niall Douglas wrote:
>> On 29/04/13 21:35, Niall Douglas wrote:
>>> No major changes to the C++11 futures API is needed. Rather that a
>>> different, but compatible, optimized implementation of future<> gets
>>> returned when used from a central asynchronous dispatcher. So the C++
>>> standard remains in force, only the underlying implementation is more
>>> optimal.
>> Could you explain to me what a central asynchronous dispatcher is?
> An asynchronous procedure call implementation. Here's Microsoft's:
> [link]. Here's QNX's: [link].
> Boost.ASIO is the same thing for Boost and C++ but implemented by
> application code. It is *not* like asynchronous POSIX signals which are a
> nearly useless subset of APCs.
I don't see the term "central asynchronous dispatcher" used in any of
the links. Could you clarify what it is?
>> If not, please could you elaborate what kind of optimizations can be
>> obtained?
> If you have transactional memory in particular, you gain multi-CAS and the
> ability to (un)lock large ranges of CAS locks atomically, and a central
> dispatcher design can create batch lists of threading primitive operations
> and execute the batch at once as a transaction. Without TM, you really need
> the kernel to provide a batch syscall for threading primitives to see large
> performance gains.
I'm really lost.
>>> Futures come in because you'd naturally use std::packaged_task<> with
>>> Boost.ASIO. It returns a future.
>> Could you point me to the Networking paper proposal that has
>> packaged_task<> on its user interface. I would expect this to be an
>> implementation detail.
> You misunderstand me. *If* you want Boost.ASIO to dispatch unknown
> std::function<>, *then* std::packaged_task<> is the most obvious route
> forwards.
> And the correct way to pass a value from one execution context to another is
> std::future<>. This includes the single threaded case e.g. when going
> through a C++ -> C -> C++ transition as a stack unwinds.
This is the way a third-party library can do it using the external std
interfaces, but a C++1y proposal could define an interface that cannot
be implemented using the external future<> interface, and let the
library implementor use some internals of his future<> implementation.
>>> For reference, the AFSIO/AFIO project is *not* a threadpool. It's a batch
>>> asynchronous execution engine based on Boost.ASIO that lets you chain, in
>>> vast numbers, huge arrays of std::function<> whose returns are fetched using
>>> std::future<> to be executed asynchronously according to specified
>>> dependencies e.g. if A and B and C, then D, then E-G. That sort of thing.
>> So the thread pool you need is an internal one that is adapted to your
>> particular needs, isn't it?
> I think you don't understand what Boost.ASIO is or how it works.
I confirm.
> Boost.ASIO's core is boost::asio::io_service. That is its dispatcher
> implementation, with each dispatch execution context being executed via
> boost::asio::io_service::run(), which is effectively an event loop. Third
> parties then enqueue items to be dispatched using
> boost::asio::io_service::post(). You don't have to run Boost.ASIO using
> multiple threads: it can be single threaded.
> So is my thread pool an internal one adapted to my particular needs? In the
> sense I need not use threads at all, yes. In the sense that
> boost::asio::io_service's use scenario is inviolate, no. Boost.ASIO has its
> API, and you have to use it. And Boost.ASIO's API will be similar to the TR2
> networking API.
> There is nothing stopping a person merging a Boost.ASIO managed thread pool
> with a traditional thread pool. I would struggle to see the use case though
> - this would be an excellent thought experiment actually, one which would
> benefit any Boost.ThreadPool.
I cannot comment until I understand what Boost.ASIO provides and how it
can interact with thread pools :(


Boost list run by bdawes at, gregod at, cpdaniel at, john at