
From: Anthony Williams (anthony_w.geo_at_[hidden])
Date: 2008-05-14 02:51:44

Johan Torp <johan.torp_at_[hidden]> writes:

> Anthony Williams-3 wrote:
>> If the thread "crashes", you've got a serious bug: all bets are off. It
>> doesn't matter whether that's the same thread that's performing another
>> task or not.
> I agree that something is seriously wrong and that we perhaps don't need to
> handle things gracefully. But if the threading API allows us to detect
> "crashing" threads somehow, we could avoid spreading a thread-local problem
> to the whole process. The client thread could even be notified with a
> thread_crash exception set in the future. I haven't had time to read up on
> what possibilities the C++0x threading API will supply here, but I suppose
> you know. Maybe there isn't even a notion of a thread crashing without
> crashing the process.

No, there isn't. A thread "crashes" as a result of undefined behaviour, in
which case the behaviour of the entire application is undefined.

> At the very least, I see a value in not behaving worse than if the
> associated client thread had spawned its own worker thread. That is:
>
>     std::launch_in_pool(&crashing_function);
>
> should not behave worse than
>
>     std::thread t(&crashing_function);

It doesn't: it crashes the application in both cases ;-)

> Anthony Williams-3 wrote:
>>> B might be useful. It can't detect waiting by periodic is_ready-polling -
>>> which with today's interface is needed to wait for more than one future.
>> I would use timed_wait() calls when waiting for more than one future: doing
>> a busy-wait with is_ready just consumes CPU time which would be better
>> spent actually doing the work that will set the futures to ready, and
>> timed_wait is more expressive than sleep:
>>
>> void wait_for_either(jss::unique_future<int>& a, jss::unique_future<int>& b)
>> {
>>     if(a.is_ready() || b.is_ready())
>>     {
>>         return;
>>     }
>>     while(!a.timed_wait(boost::posix_time::milliseconds(1)) &&
>>           !b.timed_wait(boost::posix_time::milliseconds(1)));
>> }
> It could as well have been implemented by:
>
> while (!a.is_ready() || !b.is_ready())
> {
>     a.timed_wait(boost::posix_time::milliseconds(1));
> }

This has a redundant check on a.is_ready(), and as you mention below, it
doesn't cause a wait callback on "b" to be called. Also, this is biased
towards waiting on a. By alternating the timed wait you're sharing the load.
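For readers following along with a modern toolchain, the alternating timed wait translates directly. A minimal sketch, assuming C++11 std::future (wait_for standing in for the draft-era timed_wait, and jss::unique_future replaced by std::future):

```cpp
#include <chrono>
#include <future>

// Wait until at least one of the two futures is ready, alternating 1ms
// timed waits so neither future monopolises the blocking.
template <typename T, typename U>
void wait_for_either(std::future<T>& a, std::future<U>& b)
{
    const auto slice = std::chrono::milliseconds(1);
    // Each wait_for both checks readiness and blocks for up to 1ms,
    // so no separate is_ready() pre-check is needed.
    while (a.wait_for(slice) != std::future_status::ready &&
           b.wait_for(slice) != std::future_status::ready)
    {
        // keep alternating
    }
}
```

The short time slice is a trade-off: smaller values react faster to the other future becoming ready but wake the waiting thread more often.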

> You can't detect that b is needed here. I would not implement dynamic wait
> by timed_waiting on every single future, one at a time. Rather I would have
> done something like:
>
> void wait_for_any(const vector<future<void>>& futures)
> {
>     while (1)
>     {
>         for (...f in futures...) if (f.is_ready()) return;
>         sleep(10ms);
>     }
> }

If it were a large list, I wouldn't /just/ do a timed_wait on each future in
turn. The sleep here lacks expression of intent, though. I would write a
dynamic wait_for_any like so:

void wait_for_any(const vector<future<void>>& futures)
{
    while (1)
    {
        for (...f in futures...)
        {
            for (...g in futures...) if (g.is_ready()) return;
            if (f.timed_wait(1ms)) return;
        }
    }
}

That way, you're never just sleeping: you're always waiting on a future. Also,
you share the wait around, but you still check each one every time you wake.
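A compilable rendering of that sketch, assuming C++11 std::future in place of the draft-era interface (a zero-length wait_for plays the role of is_ready, and the function returns the index of the first ready future):

```cpp
#include <chrono>
#include <cstddef>
#include <future>
#include <vector>

// Rotate a 1ms timed wait across the whole set, and re-check every future
// each time we wake, so we are always waiting on a future rather than
// sleeping blind.
std::size_t wait_for_any(const std::vector<std::future<void>>& futures)
{
    const auto slice = std::chrono::milliseconds(1);
    for (;;)
    {
        for (std::size_t i = 0; i < futures.size(); ++i)
        {
            // Poll the whole set: wait_for(0) is the is_ready check.
            for (std::size_t j = 0; j < futures.size(); ++j)
                if (futures[j].wait_for(std::chrono::seconds(0)) ==
                    std::future_status::ready)
                    return j;
            // Then spend this iteration's 1ms blocked on future i.
            if (futures[i].wait_for(slice) == std::future_status::ready)
                return i;
        }
    }
}
```

With N futures this still does O(N) readiness checks per wakeup, which is exactly the scaling concern raised further down the thread.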

> Anthony Williams-3 wrote:
>>> - Let the thread-pool be a predictable FIFO queue. Trust client code to do
>>> the scheduling and not submit too many tasks at the same time.
>> That's not appropriate for situations where a task on the pool can submit
>> more tasks to the same pool, as in my quicksort example.
> Ah - I knew I missed something. Agreed, child tasks should be prioritized.
> But that mechanism could be kept internal in the thread pool.

The pool can only do that if the pool knows you're waiting on a child task.
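One way the pool can "know" is for the wait itself to help. A toy sketch under heavy assumptions: inline_helping_pool is a hypothetical name, there are no worker threads at all, and a real pool would run the queue on a thread group. The point is only that waiting on a child future can run queued tasks instead of blocking, so a parent task never deadlocks waiting for a child stuck behind it in the queue:

```cpp
#include <cassert>
#include <chrono>
#include <deque>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <utility>

class inline_helping_pool
{
    std::deque<std::function<void()>> tasks;
    std::mutex m;
public:
    template <typename F>
    std::future<void> submit(F f)
    {
        // shared_ptr works around std::function requiring copyable targets.
        auto task = std::make_shared<std::packaged_task<void()>>(std::move(f));
        std::future<void> fut = task->get_future();
        std::lock_guard<std::mutex> lk(m);
        tasks.push_back([task] { (*task)(); });
        return fut;
    }

    // Wait for fut; while it is not ready, pop and run queued tasks
    // ("helping") rather than blocking the caller.
    void wait(std::future<void>& fut)
    {
        while (fut.wait_for(std::chrono::seconds(0)) !=
               std::future_status::ready)
        {
            std::function<void()> t;
            {
                std::lock_guard<std::mutex> lk(m);
                if (tasks.empty())
                    return; // nothing left to help with
                t = std::move(tasks.front());
                tasks.pop_front();
            }
            t(); // run outside the lock so the task may submit more work
        }
    }
};
```

Because the wait runs tasks inline, a task that spawns children and waits on them (the quicksort pattern) makes progress even though everything sits in one FIFO queue.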

> Anthony Williams-3 wrote:
>> My wait_for_either above could easily be extended to a dynamic set, and to
>> do wait_for_both instead.
> Still you don't really wait for more than one future at a time. Both your
> suggestion and mine above are depressingly inefficient if you were to wait
> on 1000s of futures simultaneously. I don't know if this will be a real use
> case or not. If the many-core prediction comes true and we get 1000s of
> cores, it might very well be.

You're right: if there are lots of futures, then you can consume considerable
CPU time polling them, even if you then wait/sleep. What is needed is a
mechanism to say "this future belongs to this set" and "wait for one of the
set". Currently, I can imagine doing this by spawning a separate thread for
each future in the set, which then does a blocking wait on its future and
notifies a "combined" value when done. The other threads in the set can then
be interrupted when one is done. Of course, you need /really/ lightweight
threads to make that worthwhile, but I expect threads to become cheaper as the
number of cores increases. Alternatively, you could do it with a
completion-callback, but I'm not entirely comfortable with that.
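The thread-per-future scheme described above can be sketched as follows, assuming C++11 primitives: each helper thread blocks on one std::shared_future and signals a shared condition variable when its future becomes ready. This version has no interruption, so the final joins block until every future is eventually satisfied; it only illustrates the shape of the mechanism:

```cpp
#include <condition_variable>
#include <cstddef>
#include <future>
#include <mutex>
#include <thread>
#include <vector>

// Returns the index of a future that became ready first (as observed by
// the helper threads). Every future must eventually become ready, since
// the helpers are joined rather than interrupted.
std::size_t wait_for_one_of(std::vector<std::shared_future<void>>& futures)
{
    std::mutex m;
    std::condition_variable cv;
    std::size_t winner = futures.size(); // sentinel: none ready yet

    std::vector<std::thread> helpers;
    for (std::size_t i = 0; i < futures.size(); ++i)
        helpers.emplace_back([&, i] {
            futures[i].wait(); // one blocking wait per helper thread
            std::lock_guard<std::mutex> lk(m);
            if (winner == futures.size())
                winner = i; // first ready future wins
            cv.notify_one();
        });

    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return winner != futures.size(); });
    }
    for (auto& t : helpers)
        t.join();
    return winner;
}
```

With OS threads this costs a stack per future, which is why the scheme only pays off if threads become much cheaper; a future-set primitive in the library could do the same registration without any threads at all.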

> Anthony Williams-3 wrote:
>>> My 5 cents is still that 2 x time_limited_wait is clear and readable
>>> enough, but it's no strong opinion. For good or bad you are forcing users
>>> to supply their intent twice - by both argument type and method name. Is
>>> this a general strategy for the standard library?
>> This is an important strategy with condition variables, and it is probably
>> sensible to do the same elsewhere in the standard library for consistency.
> I understand your point, even though I'm not sure it's the best strategy.
> Rather than arguing with more experienced people, I'll adapt whatever public
> code I write to this.

Currently the WP uses overloads of timed_wait for condition variables. I
expect we'll see whether the committee prefers that or wait_for/wait_until
after the meeting in June.
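As a historical footnote on how this question settled: C++11 as eventually published took the wait_for/wait_until route on std::condition_variable, with the duration/time_point argument types reinforcing the name. A small sketch of the resulting interface (the function name here is just for illustration):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Block for at most d waiting for 'ready' to become true.
// wait_for takes a relative duration; the sibling wait_until takes an
// absolute time point - intent is carried by both name and argument type.
bool wait_until_ready_for(std::condition_variable& cv, std::mutex& m,
                          bool& ready, std::chrono::milliseconds d)
{
    std::unique_lock<std::mutex> lk(m);
    return cv.wait_for(lk, d, [&] { return ready; });
    // equivalently:
    // cv.wait_until(lk, std::chrono::steady_clock::now() + d,
    //               [&] { return ready; });
}
```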


Anthony Williams            | Just Software Solutions Ltd
Custom Software Development |
Registered in England, Company Number 5478976.
Registered Office: 15 Carrallack Mews, St Just, Cornwall, TR19 7UL
