From: Johan Torp (johan.torp_at_[hidden])
Date: 2008-05-13 05:14:02


Anthony Williams-3 wrote:
>
> If the thread "crashes", you've got a serious bug: all bets are off. It
> doesn't matter whether that's the same thread that's performing another
> task or not.
>
>

I agree that something is seriously wrong and that we perhaps don't need to
handle things gracefully. But if the threading API lets us detect a
"crashing" thread somehow, we could avoid spreading a thread-local problem
to the whole process. The client thread could even be notified by a
thread_crash exception set in the future (see the sketch below). I haven't
had time to read up on what possibilities the C++0x threading API will
supply here, but I suppose you know. Maybe there isn't even a notion of a
thread crashing without crashing the process.

At the very least, I see value in not behaving worse than if the
associated client thread had spawned its own worker thread. That is:
  std::launch_in_pool(&crashing_function);
should not behave worse than
  std::thread t(&crashing_function);

Anthony Williams-3 wrote:
>
>> B might be useful. It can't detect waiting by periodic is_ready-polling,
>> which with today's interface is needed to wait for more than one future.
>
> I would use timed_wait() calls when waiting for more than one future: doing
> a busy-wait with is_ready just consumes CPU time which would be better spent
> actually doing the work that will set the futures to ready, and timed_wait
> is more expressive than sleep:
>
> void wait_for_either(jss::unique_future<int>& a, jss::unique_future<int>& b)
> {
>     if(a.is_ready() || b.is_ready())
>     {
>         return;
>     }
>     while(!a.timed_wait(boost::posix_time::milliseconds(1)) &&
>           !b.timed_wait(boost::posix_time::milliseconds(1)));
> }
>

It could as well have been implemented by:

    while (!a.is_ready() && !b.is_ready())
    {
        a.timed_wait(boost::posix_time::milliseconds(1));
    }

You can't detect that b is needed here. I would not implement a dynamic wait
by timed_waiting on every single future, one at a time. Rather I would have
done something like:

void wait_for_any(const std::vector<jss::shared_future<void> >& futures)
{
  // shared_future so the futures can be held (copied) in a vector
  for (;;)
  {
    for (std::size_t i = 0; i < futures.size(); ++i)
      if (futures[i].is_ready())
        return;
    boost::this_thread::sleep(boost::posix_time::milliseconds(10));
  }
}

Anthony Williams-3 wrote:
>
>> - Let the thread-pool be a predictable FIFO queue. Trust client code to
>> do
>> the scheduling and not submit too many tasks at the same time.
>
> That's not appropriate for situations where a task on the pool can submit
> more tasks to the same pool, as in my quicksort example.
>

Ah - I knew I missed something. Agreed, child tasks should be prioritized,
but that mechanism could be kept internal to the thread pool.
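
Something along these lines, as a rough sketch only; the class, the names
and the thread_specific_ptr trick are all made up for illustration and not
part of any proposed interface:

  #include <deque>
  #include <boost/function.hpp>
  #include <boost/thread/mutex.hpp>
  #include <boost/thread/tss.hpp>

  // Made-up pool-internal queue: tasks submitted from a pool worker
  // (i.e. child tasks) are pushed to the front so they run before older
  // top-level work, while client threads still see a plain FIFO submit().
  class task_queue
  {
  public:
      void submit(const boost::function<void()>& task)
      {
          boost::mutex::scoped_lock lock(mutex_);
          if (inside_worker_.get())       // flag set below by workers
              tasks_.push_front(task);    // child task: run it first
          else
              tasks_.push_back(task);     // client task: plain FIFO
      }

      // Each worker thread would call this once when it starts.
      void mark_current_thread_as_worker()
      {
          inside_worker_.reset(new bool(true));
      }

  private:
      std::deque<boost::function<void()> > tasks_;
      boost::mutex mutex_;
      boost::thread_specific_ptr<bool> inside_worker_;
  };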

Anthony Williams-3 wrote:
>
> My wait_for_either above could easily be extended to a dynamic set, and to
> do wait_for_both instead.
>

Still you don't really wait for more than one future at a time. Both your
suggestion and mine above are depressingly inefficient if you were to wait
on 1000s of futures simultaneously. I don't know if this will be a real use
case or not, but if the many-core prediction comes true and we get 1000s of
cores, it might very well be.
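
Just to illustrate what I mean by inefficient: if the producers could poke
one shared signal when they fulfil their promises, the waiter would block
once instead of polling (or timed_wait-ing) each future in turn. This is
only a sketch of the idea using plain Boost.Thread primitives, not
something the proposed future interface offers:

  #include <cstddef>
  #include <boost/thread/condition_variable.hpp>
  #include <boost/thread/locks.hpp>
  #include <boost/thread/mutex.hpp>

  // One shared "something became ready" channel. Each producer calls
  // notify_one_ready() right after fulfilling its promise; the client
  // blocks once in wait_for_any() regardless of how many futures exist.
  struct ready_signal
  {
      boost::mutex mutex;
      boost::condition_variable cond;
      std::size_t ready_count;

      ready_signal() : ready_count(0) {}

      void notify_one_ready()
      {
          boost::unique_lock<boost::mutex> lock(mutex);
          ++ready_count;
          cond.notify_all();
      }

      void wait_for_any()
      {
          boost::unique_lock<boost::mutex> lock(mutex);
          while (ready_count == 0)
              cond.wait(lock);
      }
  };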

Anthony Williams-3 wrote:
>
>> My 5 cents is still that 2 x time_limited_wait is clear and readable enough
>> but it's no strong opinion. For good or bad you are forcing users to supply
>> their intent twice - by both argument type and method name. Is this a
>> general strategy for the standard library?
>
> This is an important strategy with condition variables, and it is probably
> sensible to do the same elsewhere in the standard library for consistency.
>

I understand your point, even though I'm not sure it's the best strategy.
Rather than arguing with more experienced people, I'll adapt whatever public
code I write to this.

Johan

