
From: Frank Mori Hess (fmhess_at_[hidden])
Date: 2008-06-02 23:08:21


On Monday 02 June 2008 21:26, Peter Dimov wrote:

> The number of unready dependencies can be lower than N*P, but let's
> ignore that for now.

The big O means I don't care about constant factors :)

> > Then it will take O(M+N) to go through all the method requests and
> > execute the M ready tasks. That results in O(N*P/M+1) per method
> > request executed.
>
> I think that going through the method requests takes (N+M)*P, because of
> the "all dependencies are ready" part.

I think some kind of optimization could avoid having to check all P of
each method request's dependencies at that stage. For example, if the
earlier pass through the dependencies incremented a counter in a ready
dependency's associated method request, then only the counter's value
would need to be checked.
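
Roughly the kind of thing I have in mind (just an illustrative sketch
with made-up names, not libpoet code):

  #include <cstddef>
  #include <vector>

  struct method_request
  {
      std::size_t num_dependencies; // P for this request
      std::size_t num_ready;        // maintained by the dependency pass

      explicit method_request(std::size_t p):
          num_dependencies(p), num_ready(0)
      {}
  };

  // Dependency pass: credit each newly ready dependency to its
  // associated method request (counting a dependency only once, when it
  // first becomes ready, avoids double counting on later passes).
  void mark_dependency_ready(method_request &req)
  {
      ++req.num_ready;
  }

  // Execution pass: readiness is now an O(1) comparison per request
  // rather than a scan of all P dependencies.
  void run_ready_requests(std::vector<method_request*> &requests)
  {
      for(std::size_t i = 0; i < requests.size(); ++i)
      {
          if(requests[i]->num_ready == requests[i]->num_dependencies)
          {
              /* execute the method request ... */
          }
      }
  }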

> (I think that this is also true
> for your scheduler.)

Oh, the scheduler I implemented in libpoet is not optimal. But with,
for example, a completion callback on futures, you could in principle
implement an O(P) scheduler: P future complete signals delivered to a
method request make it emit its ready signal, and the slot connected to
that ready signal (which might have had an iterator to the method
request's location in a pending-requests list bound to it when it was
connected) then moves the request from the pending-requests list into a
ready-requests list.
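
For concreteness, a rough sketch of that shape (again made-up names,
not libpoet's actual API; the signals and slots are modelled here as
plain function calls):

  #include <cstddef>
  #include <list>

  struct method_request;
  typedef std::list<method_request*> request_list;

  struct method_request
  {
      std::size_t num_dependencies;    // P
      std::size_t num_complete;        // complete signals seen so far
      request_list::iterator position; // location in the pending list

      explicit method_request(std::size_t p):
          num_dependencies(p), num_complete(0)
      {}
  };

  struct scheduler
  {
      request_list pending;
      request_list ready;

      // Recording the pending-list position here plays the role of
      // binding an iterator to the ready-signal slot at connection time.
      void add(method_request *req)
      {
          req->position = pending.insert(pending.end(), req);
      }

      // Slot invoked by each of the request's P dependency futures'
      // complete signals.  The P'th one stands in for the request's
      // ready signal: the request is spliced from the pending list into
      // the ready list in O(1), for O(P) total work per request.
      void dependency_complete(method_request *req)
      {
          if(++req->num_complete == req->num_dependencies)
              ready.splice(ready.end(), pending, req->position);
      }
  };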

So my goal is to make waiting with future_selector in this scenario
O(P).

> So we have (2*N+M)*P total, or (2*N/M+1)*P per
> executed request.
>
> > Having M in the denominator is nice, as you would expect it to scale
> > with N,
> > and it means the scheduler probably won't break down under heavy load.
> >
> > On the downside, in a common case I'd expect M to be small (between 0
> > and 1),
> > with performance concerns about the scheduler starting to come into
> > play as M
> > approaches 1.
>
> If the scheduler performance is insufficient, it will not be able to
> maintain a low M. If it does manage to maintain a low M, then its
> performance is sufficient.
>
> Stated differently, in the additional N*P time taken by the first loop,
> some of the futures will become ready that wouldn't otherwise, and the
> second loop will execute more tasks, automatically diluting the
> overhead.

I'm not quite sure what you're driving at in the two paragraphs above.
Are you satisfied with your pseudocode scheduler's performance, or not?



