From: Frank Mori Hess (frank.hess_at_[hidden])
Date: 2008-06-02 20:56:59
On Monday 02 June 2008 19:17 pm, Peter Dimov wrote:
> Frank Mori Hess:
> >> wait_for_any( waiting_ );
> > A scheduler which takes O(N) to dispatch a method request is considered
> > inefficient. N being the number of pending method requests in the
> > scheduler.
> > The call to wait_for_any alone is O(N), thus my motivation for using
> > future_selector for a scheduler.
> Using algorithmic complexity to evaluate the performance of a call that
> blocks because you have nothing to do is not quite correct. Let's tidy up
> the pseudocode a bit:
> list< future<void> > deps_;
>
> wait_for_any( deps_ );
> for each in deps_:
>     if ready(): erase
> for each pending task:
>     if all dependencies are ready():
>         execute task
> I believe that the "if all dependencies are ready" part corresponds to your
> future_selector.
> One can reasonably assume that wait_for_any will return early if it finds a
> ready() future, but even if it doesn't, we can optimize the loop to only
> call wait_for_any when there was no activity on the last iteration.
> So we only enter wait_for_any when none of deps_ are ready() and it blocks.
> It doesn't really matter what the overhead of wait_for_any is (as long as
> it isn't pathological) since it's usually going to block anyway. The worse
> the overhead of wait_for_any, the less often we'll enter it. What matters,
> in the end, is whether the rest of the scheduler can keep up with the
> outside world submitting requests; if wait_for_any turns into a bottleneck,
> it simply won't be called at all.
Okay, let's try ignoring the wait_for_any call. Say you have on average N
method requests just sitting in your scheduler that are not ready, on average
M method requests that are ready to execute, and each method request has on
average P dependencies (I'd imagine P would usually be small). That means
each pass through your pseudocode will take O(N*P) to iterate through all the
unready dependencies of the unready method requests. Then it will take
O(M+N) to go through all the method requests and execute the M ready tasks.
That works out to O(N*P/M + 1) per method request executed.
Having M in the denominator is nice, as you would expect it to scale with N,
and it means the scheduler probably won't break down under heavy load.
On the downside, in a common case I'd expect M to be small (between 0 and 1),
with performance concerns about the scheduler starting to come into play as M
approaches 1. In my case, where the scheduler thread is executing the tasks
itself (as opposed to sending them to a thread pool for instance), if I were
in optimization mode I would start looking at breaking things up more finely
once M hit 1, to maximize use of available processors.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk