Boost :
From: Max Motovilov (max_at_[hidden])
Date: 2004-03-02 17:50:30
"Beman Dawes" <bdawes_at_[hidden]> wrote in message
> Hum... The chart only shows highly significant differences for "Create
> sequential, Win32 Map" and "Write sequential, Win32 Map" compared to the
> similar operations for other approaches.
Almost 2X for read performance - it may not be immediately apparent
because the Y axis is logarithmic.
> What was being measured? CPU time or wall clock time?
System time, which, I imagine, would be wall clock time in this parlance.
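To illustrate the distinction (this is just a sketch, not the code used in the test): wall-clock time includes everything the user waits for, including time the process spends blocked on I/O, while CPU time counts only time actually spent executing. In Python terms, using `time.perf_counter` vs. `time.process_time`:

```python
import time

def timed(fn):
    """Measure both wall-clock and CPU time for one call.

    perf_counter() is a wall-clock timer: it includes time the process
    spends blocked (e.g. waiting on disk I/O), which is what a user of
    the application actually observes. process_time() counts only CPU
    time consumed by this process.
    """
    wall0 = time.perf_counter()
    cpu0 = time.process_time()
    fn()
    cpu = time.process_time() - cpu0
    wall = time.perf_counter() - wall0
    return wall, cpu

# Sleeping stands in for blocking I/O: it consumes wall-clock time
# but almost no CPU time, so the two measures diverge sharply.
wall, cpu = timed(lambda: time.sleep(0.2))
print(f"wall={wall:.3f}s cpu={cpu:.3f}s")
```

For an I/O benchmark like this one, the wall-clock number is the relevant one.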
> But for those two operations the timing differences are so great that they
> make me wonder if the data was actually written to disk on those tests.
I've theorized on that a bit in my follow-up message. Note that write and
read performance are rather close for file mappings and far apart for
regular file I/O. There's got to be a write-through vs. write-back issue
here somewhere.
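The distinction can be made explicit by forcing write-through when timing. A minimal Python sketch (an illustration only, not the original Win32 test; I'm assuming os.fsync() as the portable analogue of Win32's FlushFileBuffers or FILE_FLAG_WRITE_THROUGH):

```python
import os
import tempfile
import time

def time_write(path, data, write_through):
    """Time one file write, optionally forcing write-through.

    With write-back caching the OS acknowledges the write as soon as
    the data reaches the file cache; os.fsync() blocks until the kernel
    has pushed the data to the device, so the timing then includes the
    physical transfer.
    """
    t0 = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        if write_through:
            os.fsync(fd)  # force the data out of the write-back cache
    finally:
        os.close(fd)
    return time.perf_counter() - t0

data = os.urandom(16 * 1024 * 1024)  # 16 MiB payload
with tempfile.TemporaryDirectory() as d:
    cached = time_write(os.path.join(d, "a.bin"), data, write_through=False)
    synced = time_write(os.path.join(d, "b.bin"), data, write_through=True)
print(f"cached write: {cached:.4f}s, synced write: {synced:.4f}s")
```

On a system with a real disk behind the filesystem, the synced timing is typically much larger than the cached one, which is exactly the gap the benchmark numbers hint at.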
> Remember that with some
> operating systems writing can under some conditions be deferred past the
> point where the program which creates the data terminates. Thus timings
> generated by the program itself can be bogus.
I agree. However, the data were collected over multiple runs of the same
process as well as over multiple invocations of the test pass within one
run. Neither guarantees measuring the final write-through performance, of
course, but IMHO this mode of testing approximates the natural behavior of
an application running under this specific OS. After all, an application
developer would probably be interested in the observable performance
numbers, even if optimizations built into the OS may delay indefinitely the
moment the data actually end up on disk.
> Did you compute apparent
> disk-transfer rates to verify timings were reasonable?
Again, given the read-ahead and write-back behavior of the file cache, a
plausible transfer rate does not necessarily mean you are measuring the
low-level I/O operation. If this test really measures the overhead of the
different caching mechanisms within the OS, that is fine by me, as long as
it approximates what a real application would experience. Though I agree
that running it on a system with less RAM, or with larger files (I believe
I did try 1 GB files and got very similar results, but neglected to collect
enough data with that setting), would give a different, no less
interesting, insight.
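The sanity check itself is just arithmetic: divide bytes moved by elapsed time and compare against raw disk throughput. A sketch with hypothetical numbers (the file size and timing below are made up for illustration, not taken from the test):

```python
def apparent_rate_mb_s(bytes_transferred, seconds):
    """Apparent transfer rate in MB/s from one timed run."""
    return bytes_transferred / seconds / 1e6

# Hypothetical: a 1 GB file "read" in 0.5 s implies 2000 MB/s, far
# beyond the sustained throughput of a 2004-era disk (tens of MB/s),
# so such a result would indicate the data came from the file cache
# rather than from the platter.
rate = apparent_rate_mb_s(1_000_000_000, 0.5)
print(f"{rate:.0f} MB/s")
```

A rate well above the disk's raw throughput doesn't invalidate the measurement; it just tells you which layer of the I/O stack you measured.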
...Max...
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk