

From: Don G (dongryphon_at_[hidden])
Date: 2005-04-14 22:11:00


Hi Bob,

>Caleb Epstein writes:
>> As far as the appropriate subseconds type goes, we
>> should probably pick the highest-possible resolution
>> that makes sense, which I'd contend is probably
>> microseconds. Some operating systems may be able to
>> slice time (and signal events) at resolutions below
>> milliseconds, but I doubt any can go deeper than
>> microseconds.
>
> I wouldn't take that bet. I know Mac OS X can measure
> time as finely as nanoseconds (but I have no idea how
> many services, i.e. sockets, actually work at
> nanosecond resolutions). It doesn't seem outside the
> realm of possibility that, given the way technologies
> advance, within a few short years microseconds simply
> won't be fine enough. One of the nice things about
> double-as-time-unit is that it avoids resolution
> issues altogether.

With processor speed basically stalled out around 4GHz, it is at least
theoretically possible to measure time to about 0.25ns. Not that a
scheduler would muck about at that level. For timeout purposes
(especially for networking), I think microseconds are fine; in
particular, that is what select() wants. epoll_wait() uses
milliseconds, and Windows generally uses milliseconds. The
kqueue/kevent folks do want nanoseconds, though the precision of the
implementation is not clear (just its interface).
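
To make the mismatch concrete, here is a rough sketch (not Boost code;
the helper names are mine) of how a single microsecond-based timeout
would have to be converted for each of those interfaces:

    #include <sys/time.h>   // struct timeval (select)
    #include <time.h>       // struct timespec (kevent)

    // select() takes a struct timeval: seconds + microseconds.
    struct timeval to_timeval(long usec)
    {
        struct timeval tv;
        tv.tv_sec  = usec / 1000000;
        tv.tv_usec = usec % 1000000;
        return tv;
    }

    // epoll_wait() (and most Windows waits) take an int of
    // milliseconds, so sub-millisecond detail is lost; round up
    // so we never wake earlier than asked.
    int to_milliseconds(long usec)
    {
        return static_cast<int>((usec + 999) / 1000);
    }

    // kevent() takes a struct timespec: seconds + nanoseconds.
    struct timespec to_timespec(long usec)
    {
        struct timespec ts;
        ts.tv_sec  = usec / 1000000;
        ts.tv_nsec = (usec % 1000000) * 1000;
        return ts;
    }

So a microsecond representation converts cleanly in every direction
except the millisecond APIs, where it just gets rounded.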

Having said that, I like Caleb's proposal to offer multiple ctors for
timeout purposes as long as they don't get ambiguous. That way,
floating point can be used as well as pure integer (if that kind of
thing still matters to anyone<g>).
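
As a purely hypothetical sketch of what that could look like (none of
these names come from an actual proposal), a tagged private ctor plus
a named factory keeps the integer and floating-point forms from
colliding in overload resolution:

    class time_out
    {
    public:
        // Floating-point seconds, e.g. time_out(0.25)
        explicit time_out(double seconds)
            : usec_(static_cast<long>(seconds * 1000000.0)) {}

        // Pure integer microseconds, e.g. time_out::microseconds(250000)
        static time_out microseconds(long usec) { return time_out(usec, 0); }

        long as_microseconds() const { return usec_; }

    private:
        time_out(long usec, int /*tag*/) : usec_(usec) {} // avoids ctor ambiguity
        long usec_;
    };

With that split, time_out(2) unambiguously means 2.0 seconds, while
time_out::microseconds(2) means two microseconds.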

Best,
Don


