Subject: Re: [boost] [chrono] steady_clock efficiency
From: Marsh Ray (marsh_at_[hidden])
Date: 2011-12-04 22:41:04
On 12/01/2011 10:23 PM, Kenneth Porter wrote:
>> Some of that may reflect quantization error.
>> E.g., the clock output might be truncated to microsecond precision
>> which introduces a 500 ns error on average and the actual read
>> overhead is something like 150 ns.
> The profiling program reads the clock a million times, storing the results
> in a pre-allocated array, and times the whole operation. (An initial run
> that constructs a million time_points is used to factor out the loop and
> array member constructor time.) How would microsecond jitter affect the
> overall operation to that degree?
Ah, well, not if you do it that way.

When you look at the array, does it reflect any particular quantization?
It might be interesting to run the test on bare metal and again in a
VMware-type VM. The high resolution counters tend to be virtualized in
VMs and could produce greatly different results.
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk