Boost :
From: Philippe Vaucher (philippe.vaucher_at_[hidden])
Date: 2006-10-31 12:42:44
>
> Is that really the case? Microsoft's own documentation states:
>
> "The default precision of the timeGetTime function can be five
> milliseconds or more, depending on the machine. You can use the
> timeBeginPeriod and timeEndPeriod functions to increase [snipped]
microsec_clock doesn't use timeGetTime()... it uses GetSystemTime() if I
remember correctly.
QueryPerformanceCounter does indeed have better resolution than timeGetTime(),
and it also has less overhead... but unfortunately I don't know how it compares
to GetSystemTime(). I will have to run some tests to determine that.
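Something along these lines could work for those tests (a rough sketch only; the
loop count is arbitrary): count how many distinct microsec_clock timestamps show
up in a tight loop, and measure the per-call cost of QueryPerformanceCounter
against itself.

  #include <windows.h>
  #include <iostream>
  #include <boost/date_time/posix_time/posix_time.hpp>

  int main()
  {
      using boost::posix_time::ptime;
      using boost::posix_time::microsec_clock;

      // How many distinct microsec_clock values appear in N calls?
      // Few distinct values over many calls means coarse granularity.
      const int N = 100000;
      int distinct = 0;
      ptime prev = microsec_clock::universal_time();
      for (int i = 0; i < N; ++i)
      {
          ptime now = microsec_clock::universal_time();
          if (now != prev) { ++distinct; prev = now; }
      }
      std::cout << distinct << " distinct timestamps in " << N << " calls\n";

      // Per-call cost of QueryPerformanceCounter, measured with itself.
      LARGE_INTEGER freq, start, stop, tmp;
      QueryPerformanceFrequency(&freq);
      QueryPerformanceCounter(&start);
      for (int i = 0; i < N; ++i)
          QueryPerformanceCounter(&tmp);
      QueryPerformanceCounter(&stop);
      double secs = double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
      std::cout << secs / N * 1e9 << " ns per QueryPerformanceCounter call\n";
      return 0;
  }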
At the moment my code offers:
- microsec_timer, which uses boost::posix_time::microsec_clock, which is
itself based on GetSystemTime() on Windows and gettimeofday() on Linux. I
think that's the timer most users should use.
- second_timer, which uses boost::posix_time::second_clock; I forget what
that one uses underneath.
- qcp_timer, only available under Windows, which uses
QueryPerformanceCounter (see the sketch below for the general idea).
- tgt_timer, only available under Windows, which uses timeGetTime().
And then I plan to add clock_timer, which would use std::clock... as for
GetTickCount(), I don't think it'd be worth adding since it's the worst Win32
timer there is.
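For the qcp_timer, a minimal QueryPerformanceCounter-based wrapper could look
roughly like this (an illustrative sketch only, not the actual interface of my
code):

  #include <windows.h>

  class qcp_timer
  {
  public:
      qcp_timer() { QueryPerformanceFrequency(&freq_); restart(); }

      void restart() { QueryPerformanceCounter(&start_); }

      // elapsed seconds since construction or the last restart()
      double elapsed() const
      {
          LARGE_INTEGER now;
          QueryPerformanceCounter(&now);
          return double(now.QuadPart - start_.QuadPart) / double(freq_.QuadPart);
      }

  private:
      LARGE_INTEGER freq_;
      LARGE_INTEGER start_;
  };

The restart()/elapsed() interface is just borrowed from boost::timer for
illustration, not what the final design will necessarily be.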
I'll give the NVIDIA timer test a shot in the next few days.
Philippe