Boost :
From: Johan Nilsson (johan.nilsson_at_[hidden])
Date: 2003-03-27 06:40:40
"Jeff Garland" <jeff_at_[hidden]> wrote in message
news:LPBBLOEIMCKBCMMHJMGAMEHEENAA.jeff_at_crystalclearsoftware.com...
> > > I think this is a good addition, but we should probably make the
> > > addition for all Win32 compilers since I think this is actually
> > > part of the Win32 api.
> > >
> >
> > I agree with that. Would it be better to make it a millisec_clock, or
> > just use the microsec_clock but the resolution is only milliseconds?
>
> Hmm, I'm thinking that for consistency it would probably be better to
> call it millisec_clock.
Could be.
I might be a bit off here (coming in late into the discussion), but I'd
prefer consistency in my code: using microsec_clock for both Windows and
Unix code, even if the real 'resolution' depends on how often the system
time is updated on the Win platforms.
If you plan to timestamp events with low overhead, the easiest and fastest
way to get the system time is GetSystemTimeAsFileTime (assuming you can
defer the conversion from FILETIME to SYSTEMTIME until later). Just remember
that you'll never (?) see the system time updated more often than every 10
ms (or 15 ms on SMP systems).
Even though it is possible to raise the timer resolution, and thereby get
~1 millisecond Sleep() granularity, by using NtSetTimerResolution or the
multimedia timers, this doesn't seem to affect how often the system time is
updated (I've never seen it update more often than the standard 10/15 ms,
even though I've tried). Anyone else got comments on that?
I've got no experience on non-Intel or 64-bit Windows though.
// Johan
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk