Subject: [boost] proposal - Statistically robust function timing for performance decisions.
From: Edward Grace (ej.grace_at_[hidden])
Date: 2009-07-17 13:36:55
Having developed what I think is an easy-to-use C++ library for the
robust and statistically meaningful timing of functions, I'm looking
to place it under the terrifying glare of peer review and to find a
home for it.
My motivation for its development was simply to be able to state:
"Function 'foo' is between 'x' percent and 'y' percent faster than
function 'bar' at the 95% confidence level."
since I wish to make reliable, automatic, informed decisions about
function choice at run-time.
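To make that goal concrete, here is a minimal sketch of how such a statement could be produced with a percentile bootstrap on raw timing samples. This is not the library's actual method (which calibrates clock overhead and constructs its intervals differently); every name below is illustrative, and `std::chrono` stands in for the `gettimeofday()` used in the demonstration further down.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Median of a sample (copy taken deliberately so the caller's data
// is left untouched by nth_element's partial sort).
double median(std::vector<double> v) {
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}

// Time a single call of f, in seconds.
template <class F>
double time_once(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

// Percentile-bootstrap confidence interval (roughly 95%) on the
// percentage speedup of A relative to B, given raw timing samples.
// A strictly positive interval means A is faster than B.
std::pair<double, double> speedup_ci(const std::vector<double>& a,
                                     const std::vector<double>& b,
                                     int resamples = 2000) {
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<std::size_t> ia(0, a.size() - 1);
    std::uniform_int_distribution<std::size_t> ib(0, b.size() - 1);
    std::vector<double> stats(resamples);
    std::vector<double> ra(a.size()), rb(b.size());
    for (auto& s : stats) {
        // Resample each timing set with replacement.
        for (auto& x : ra) x = a[ia(rng)];
        for (auto& x : rb) x = b[ib(rng)];
        // Percentage speedup of A relative to B for this resample.
        s = 100.0 * (median(rb) - median(ra)) / median(rb);
    }
    std::sort(stats.begin(), stats.end());
    return {stats[std::size_t(0.025 * resamples)],
            stats[std::size_t(0.975 * resamples)]};
}
```

Medians rather than means are used here because timing distributions are heavily right-skewed by interruptions from other processes; the bootstrap is one textbook way to attach an interval to such a robust statistic, though it says nothing about clock overhead or jitter, which the library addresses separately.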
While this looks like a trivial problem, it is not! Determining
confidence limits on the relative speedup, particularly when other
user processes are active, turns out to be surprisingly hard.
I am aware of code snippets such as the one based on wrap_code.cpp
and, while it's better than many approaches, I'm dissatisfied with
it. It's not as reliable as one might think at
first blush. I am also aware of the timing code used internally in
FFTW for selecting codelets - while it may be appropriate for this
type of application I do not view it as a reliable general solution
to the problem.
I strongly suspect that many people reinvent the wheel regarding this
problem and make decisions in an ad-hoc manner. There certainly
doesn't seem to be a standard and reliable solution.
I'm not sure whether Boost is an appropriate home for such a
(compile-time) library. Is this something anyone 'out there' would be
interested in? If it's not the right place - does anyone have any
suggestions other than sourceforge.net?
I look forward to your responses.
P.S. A typical example of its use is presented in the output below.
Here we time two simple and otherwise identical functions, one of
which (a) iterates 1000 times, the other (b) 1005 times. Once we
demand of the timer a nominal percentage precision approaching the
expected difference in run-time (0.5%), the two can be discriminated
successfully. It should be noted that this was carried out simply
using gettimeofday() while significant user processes were present!
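The two functions in this demonstration might look like the following. This is a hypothetical reconstruction (the actual test code is not shown in the post): identical loops, except that 'a' iterates 1000 times and 'b' 1005 times, so 'a' should be roughly 0.5% faster.

```cpp
#include <algorithm>
#include <chrono>
#include <vector>

// volatile sink so the compiler cannot optimise the loops away.
volatile int sink = 0;

void a() { for (int i = 0; i < 1000; ++i) sink += i; }  // 1000 iterations
void b() { for (int i = 0; i < 1005; ++i) sink += i; }  // 1005 iterations

// Median runtime of f over n samples, in seconds. The median is
// preferred over the mean because other processes inflate the tail.
template <class F>
double median_time(F f, int n = 1001) {
    std::vector<double> s(n);
    for (auto& x : s) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        auto t1 = std::chrono::steady_clock::now();
        x = std::chrono::duration<double>(t1 - t0).count();
    }
    std::nth_element(s.begin(), s.begin() + n / 2, s.end());
    return s[n / 2];
}
```

A single median comparison like `median_time(a) < median_time(b)` is exactly the kind of ad hoc decision the library is meant to replace: without an interval around the estimate, a 0.5% difference is easily swamped by noise.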
Calibrate overhead (0,1)? 1
Point estimate of clock overhead (t_c): 0.0975994
Potential jitter on t_c : 0.633371
Increasing precision until I can discriminate between function a and b.
The nominal confidence interval represents the region: 1-alpha=0.95
Timing at a nominal precision of: 50%     0         1.82141   <--- Uncertain.
Timing at a nominal precision of: 10%     0         2.00391   <--- Uncertain.
Timing at a nominal precision of: 2%      0.383285  0.593955  <--- Ok.
Timing at a nominal precision of: 0.4%    0.411541  0.62412   <--- Ok.
Timing at a nominal precision of: 0.08%   0.448847  0.582617  <--- Ok.
Nominal confidence bounds on percentage speedup of A relative to B:
0.448847 [0.516456] 0.582617
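On the "clock overhead" line in the output: one common way to obtain a point estimate of the clock-call cost t_c is to read the clock back-to-back many times and keep the smallest non-negative difference, which bounds the cost of the call itself. The sketch below is an assumption about the general technique, not the library's calibration routine, and it reports seconds rather than whatever units the output above uses.

```cpp
#include <chrono>

// Estimate the overhead of a single clock read: the minimum of many
// back-to-back differences approaches the cost of the call itself,
// since nothing else happens between the two reads.
double clock_overhead_estimate(int n = 10000) {
    using clock = std::chrono::steady_clock;
    double best = 1e9;  // sentinel, larger than any plausible overhead
    for (int i = 0; i < n; ++i) {
        auto t0 = clock::now();
        auto t1 = clock::now();
        double d = std::chrono::duration<double>(t1 - t0).count();
        if (d >= 0.0 && d < best) best = d;
    }
    return best;
}
```

Taking the minimum rather than the mean deliberately discards samples inflated by context switches; the "potential jitter" figure in the output suggests the library also characterises how much individual reads can exceed that minimum.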
"No more boom and bust." -- Dr. J. G. Brown, 1997
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk