
From: Anthony Williams (anthony_w.geo_at_[hidden])
Date: 2008-05-12 06:52:50

Johan Torp <johan.torp_at_[hidden]> writes:

> Anthony Williams-3 wrote:
>> I have read your comments. My primary reason for including this came from
>> thread pools: if you know that the current thread (from the pool) is
>> blocked
>> on a future related to a task in the pool, you can move it up the queue,
>> or
>> maybe even invoke on the blocked thread. Braddock suggested lazy futures,
>> and
>> I think that's also an important use case. For one thing, it shows that
>> futures are useful without thread pools: you can use them in
>> single-threaded
>> code.
> I don't quite understand you. What is the "current" thread in a thread pool?

In this case I mean the thread that called some_future.wait().

> If there are dependencies between tasks in a thread-pool, shouldn't
> prioritizing be the task of an external scheduler - and solved before the
> tasks are initiated? I'd like to know your thoughts on what the thread pool
> should be and what problems it should solve more specifically than what's
> explained in N2276.

Suppose you're using a thread pool to provide a parallel version of
quick-sort. The easiest way to do that is to partition the values into those
less than and those not-less-than the chosen pivot (as you would for a
single-threaded version), and submit tasks to the thread pool to sort each
half and then wait for them to finish. This doubles the number of tasks with
each level of recursion. At some point the number of tasks will exceed the
number of threads in the pool, in which case you have some tasks waiting on
others that have been submitted to the pool but not yet scheduled.

If you can arrange for the implementation to identify this scenario as it
happens, and thus schedule the task being waited for to run on the waiting
thread, you can achieve greater thread reuse within the pool, and reduce the
number of blocked threads.

One way to do this is have the pool use packaged_tasks internally, and set a
wait callback which is invoked when a thread waits on a future from a pool
task. When the callback is invoked by the waiting thread (as part of the call
to wait()), if that waiting thread is a pool thread, it can proceed as
above. If not, then it might arrange to schedule the waited-for task next, or
just do nothing: the task will get its turn in the end.

> I thought the most common use case for futures was the active object
> pattern.

That's one possible use. I wouldn't have pegged it as "most common" unless
you're considering all cases of a background thread performing operations for
a foreground thread as uses of active object.

> We should all try to agree what use cases/design patterns/higher
> level abstractions are most important and which we want to support. IMHO,
> this should be top priority for the "future ambition". Even though no higher
> level abstractions built on futures will make it to C++0x or boost anytime
> soon, it's important that the future interface needn't change to support
> them in the - future :)

I agree we should think about the higher-level abstractions we want to
support, to ensure the "futures" abstraction provides the necessary
baseline. I'd like higher-level stuff to be built on top of C++0x futures
without having to replace them with a different low-level abstraction that
provides a similar feature set.

> To me, being able to wait for any or all of a number of futures seems like
> an important use case. I'd use it to implement "i'm waiting on the result of
> a number of time-consuming commands and queries". Maybe this is implemented
> better in another way - any ideas?

Waiting for one of a number of tasks is an important use case. I'm not sure
how best to handle it. I've seen people talk about "future_or" and "f1 || f2",
but I'm not sure if that's definitely the way to go.

> Anthony Williams-3 wrote:
>> This was alongside a suggestion that we change the names for
>> condition_variable waits. The important part was the separation of
>> timed_wait(duration) vs timed_wait(absolute_time) with distinct names, so
>> it
>> was clear which you were calling, and you wouldn't accidentally pass a
>> duration when you meant a fixed time point.
>> We could go for timed_wait_for() and timed_wait_until(), but they strike
>> me as
>> rather long-winded. Maybe that's a good thing ;-)
> Yes, long names are a good thing :) duration_timed_wait/absolute_timed_wait
> are other alternatives.
> duration and absolute_time will have two different types, right? If so I
> don't think they should have different function names because:
> - IMO it doesn't increase code readability to repeat type information in
> symbol names
> - It reduces genericity. Function overloading can be used to implement LSP
> for generic functions.
> template<class TimeType>
> void foo_algorithm(future<void>& f, TimeType t)
> {
>     ... do stuff ...
>     f.timed_wait(t); // LSP for TimeType, more generic
> }
> I vote for 2 x time_limited_wait.

duration and absolute_time will have distinct types. In Boost at the moment,
for boost::condition_variable, duration is anything that implements the Boost
Date-Time duration concept, such as boost::posix_time::milliseconds, and
absolute_time is boost::system_time.

However, even though overload resolution would pick the right implementation,
relying on it is not necessarily desirable, as the two operations have
distinct semantics. The members of the LWG are discussing renaming
condition_variable::timed_wait to use distinct names for the duration and
absolute-time overloads, so that the user's intent is unambiguous:
wait_for(absolute_time) or wait_until(duration) won't compile.


Anthony Williams            | Just Software Solutions Ltd
Custom Software Development |
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
