Subject: Re: [boost] [chrono] steady_clock efficiency
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2011-12-05 02:04:02
On 05/12/11 04:41, Marsh Ray wrote:
> On 12/01/2011 10:23 PM, Kenneth Porter wrote:
>>> Some of that may reflect quantization error.
>>> E.g., the clock output might be truncated to microsecond precision
>>> which introduces a 500 ns error on average and the actual read
>>> overhead is something like 150 ns.
>> The profiling program reads the clock a million times, storing the
>> results in a pre-allocated array, and times the whole operation. (An initial run
>> that constructs a million time_points is used to factor out the loop and
>> array member constructor time.) How would microsecond jitter affect the
>> overall operation to that degree?
> Ah, well not if you do it that way.
> When you look at the array, does it reflect any particular quantization?
> It might be interesting to run the test on bare metal and again in a
> VMware-type VM. The high resolution counters tend to be virtualized in
> VMs and could produce greatly different results.
The times in the array have the precision of the clock and store the
actual time, not a duration.
I have no access to a VM. Could others run the performance test?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk