Subject: Re: [boost] [threadpool] relation with TR2 proposal
From: Anthony Williams (anthony.ajw_at_[hidden])
Date: 2008-09-26 16:31:32
Johan Torp <johan.torp_at_[hidden]> writes:
> viboes wrote:
>>
>>> My _guess_ is that the C++ standard committee is targeting a thread pool
>>> which can help to ease extraction of parallel performance, not a fully
>>> configurable templated thread pool with lots of neat features.
>>
>> It's my _guess_ also. I think that the Boost threadpool library must
>> integrate child-tasks and task stealing between worker threads.
>>
>
> I haven't thought about it much, but I think you might be able to separate
> scheduling from a thread pool library. You could even split it into three
> pieces: a generic passive thread pool, scheduling algorithms, and a
> launch_in_pool free function which employs a single static thread pool and
> some scheduling behind the scenes.
That sounds interesting.
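To make the split concrete, the three pieces might be declared roughly like
this. This is purely illustrative; none of these names come from any proposal,
and the scheduling piece in particular is just a placeholder:

    #include <functional>
    #include <future>

    // 1. A passive pool: owns worker threads and a task queue, nothing more.
    class thread_pool
    {
    public:
        explicit thread_pool(unsigned num_threads);
        void submit(std::function<void()> task);   // enqueue and return
    };

    // 2. A scheduling policy chosen independently of the pool itself,
    //    e.g. plain FIFO, priorities, or per-worker work-stealing.
    struct fifo_scheduling {};

    // 3. A free function that hides a single static pool plus a default
    //    scheduling policy and hands back a future for the result.
    template<typename F>
    auto launch_in_pool(F f) -> std::future<decltype(f())>;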
> viboes wrote:
>>
>>> I believe the standard efforts are inspired by what the Java standard
>>> library (see java.util.concurrent and the part called the fork-join
>>> framework), the .NET standard libraries (TPL, the Task Parallel Library),
>>> and Intel Threading Building Blocks (a C++ library) provide.
>>
>> Are you talking about the C++ standard efforts? The N2276 proposal is a
>> simple thread pool without the possibility of stealing tasks between worker
>> threads. Is there other work in progress?
>>
>
> Actually, I thought that the idea behind N2276 was to leave a lot of space
> in the definition of launch_in_pool so that library implementors could
> have sophisticated work-stealing behind the scenes.
Yes. The intention was that the free function launch_in_pool would use
an implementation-provided global thread pool that would be as smart
as the library implementor could manage.
For example, I have a working prototype for Windows that initially runs one
pool thread per CPU. If a pool thread blocks on a future for a pool task, it
suspends the current task and runs a new task from the pool.
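To pin down the shape of the interface, here is a deliberately naive stand-in
for launch_in_pool, written against the std::future / std::packaged_task
spelling just for concreteness. It simply spawns a thread per task rather than
using a real pool, and it does none of the suspend-and-run-another-task
handling described above; the interface is the point:

    #include <future>
    #include <thread>
    #include <utility>

    // Naive stand-in only: a real implementation would queue the task on an
    // implementation-provided global pool, and a pool thread blocking on a
    // future would pick up other queued tasks instead of just waiting.
    template<typename F>
    auto launch_in_pool(F f) -> std::future<decltype(f())>
    {
        std::packaged_task<decltype(f())()> task(std::move(f));
        auto result = task.get_future();
        std::thread(std::move(task)).detach();
        return result;
    }

    int main()
    {
        std::future<int> answer = launch_in_pool([]{ return 6 * 7; });
        return answer.get() == 42 ? 0 : 1;   // get() blocks until the task is done
    }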
> viboes wrote:
>>
>> BTW, is someone already working on the FJTask adaptation to C++, or
>> something like that?
>>
>
> I believe Intel TBB has come the furthest in extracting task-level
> parallelism via thread pools. I suspect a C++ solution will differ quite a
> lot from a Java implementation since the languages are so different.
My thread pool prototype is very similar in behaviour.
> viboes wrote:
>>
>>> In practice, just providing the interface to launch_in_pool has proven
>>> difficult as it returns a future value. The problem is nailing down a
>>> future
>>> interface which is both expressive and can be implemented in a
>>> lightweight
>>> manner.
>>
>> Could you elaborate more on which difficulties?
>> What is missing from the current future proposals, in your view?
>>
>
> There are at least two things which still need to be solved:
> 1. How to wait for multiple futures
My futures prototype at
<http://www.justsoftwaresolutions.co.uk/threading/updated-implementation-of-c++-futures-3.html>
handles that.
> 2. How to employ work-stealing when one thread waits on a future
>
> An expressive solution is to allow some callback hooks (for future::wait and
> promise::set), but that is quite hackish. IMO you should not be able to use a
> future object to inject arbitrary code that then runs inside promise::set on a
> completely different thread.
Yes. This needs to be internal to the implementation, which requires
the future and thread pool to cooperate.
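Just to sketch what "cooperate" could mean here. This is an illustration only:
the this_pool hook and worker_pool type are invented for the example, and a
real implementation would need proper locking around the queue:

    #include <atomic>
    #include <deque>
    #include <functional>
    #include <thread>

    // Invented for illustration: a minimal view of the pool from the
    // future's side. A real pool would protect the queue with a mutex.
    struct worker_pool
    {
        std::deque<std::function<void()>> pending;   // tasks queued on the pool

        bool run_one_pending_task()
        {
            if (pending.empty()) return false;
            std::function<void()> task = std::move(pending.front());
            pending.pop_front();
            task();
            return true;
        }
    };

    // Non-null only on pool threads; this is the cooperation point.
    thread_local worker_pool* this_pool = nullptr;

    // A pool-aware wait: a pool thread keeps executing queued work instead
    // of blocking, so the pool makes progress while it "waits".
    void pool_aware_wait(std::atomic<bool> const& ready)
    {
        while (!ready.load())
        {
            if (!this_pool || !this_pool->run_one_pending_task())
                std::this_thread::yield();   // nothing runnable: back off
        }
    }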
> I'm really not satisfied with the wait_for_any and wait_for_all proposal or
> the operators proposal either. Windows' traditional WaitForMultipleObjects
> and the POSIX select call are IMHO very flawed. You have to collect a lot of
> handles from all over your program and then wait for them in one place. This
> really inverts the architecture of your program. I'd like a solution which
> is expressive enough to "lift" an arbitrary function to futures. I.e.:
>
> R foo(int a, int b) should be easily rewritten as future<R> foo(future<int>
> a, future<int> b)
>
> You could say that I want futures to be as composable as the retry and
> orElse constructs of transactional memory (see my thesis if you are not
> familiar with transactional memory). You might also want to support waiting
> on a dynamic set of futures and returning as soon as one of them becomes ready.
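For what it's worth, that lifting can at least be written by hand today. Here
is a rough sketch, using std::async as a stand-in for launch_in_pool; the
obvious drawback is that it parks a whole thread on the argument futures,
which is exactly where the pool/future cooperation discussed above would help:

    #include <future>
    #include <utility>

    int foo(int a, int b) { return a + b; }   // placeholder for the real foo

    // Hand-written lifting of foo: the returned future becomes ready once
    // both inputs are ready and foo has been applied to them.
    std::future<int> foo_lifted(std::future<int> a, std::future<int> b)
    {
        return std::async(std::launch::async,
            [](std::future<int> a, std::future<int> b)
            {
                return foo(a.get(), b.get());   // blocks this thread on the inputs
            },
            std::move(a), std::move(b));
    }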
My futures prototype has wait_for_any/wait_for_all that work on iterator
ranges (e.g. a vector of shared_future).
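Usage looks roughly like this; treat the exact spelling as approximate and the
code at the link above as authoritative (work() is just a placeholder):

    // Approximate sketch of the iterator-range interface.
    std::vector<shared_future<int>> results;
    for (int i = 0; i != 4; ++i)
        results.push_back(launch_in_pool([i]{ return work(i); }));

    wait_for_any(results.begin(), results.end());  // returns as soon as one is ready
    wait_for_all(results.begin(), results.end());  // returns once every task is done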
Anthony
--
Anthony Williams            | Just Software Solutions Ltd
Custom Software Development | http://www.justsoftwaresolutions.co.uk
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL