From: Beman Dawes (bdawes_at_[hidden])
Date: 2004-03-02 15:58:47
At 02:17 PM 3/2/2004, Max Motovilov wrote:
>The summary of multiple runs is on the attached chart. The chart uses a
>logarithmic scale, so the actual numbers should be irrelevant. To provide a
>reference point: the test was run on a dual Xeon 1.7 GHz machine with 2 GB
>of RAM under Windows 2003 Server, and the file was allocated on an
>NTFS-formatted 7200 RPM IDE drive connected to a non-RAID (and, as far as
>I can tell, non-cached) onboard IDE adapter. Typical run times for a 256 MB
>test file are 0.3-0.5 sec using memory-mapped files and 0.5-0.6 sec using
>file I/O (I'm not even gonna mention iostreams and stdio here; you'll see
>how atrocious they are from the chart).
>Short version - memory-mapped files are SO much faster, it is not even
>funny. I did not go the extra mile and test asynchronous I/O
>(scatter/gather reads and writes); my expectation is that they should
>fall behind memory mapping, but not by much. There apparently is a good
>reason to stick to Microsoft's old recommendation and use memory mapping
>of files.
Hmm... The chart only shows highly significant differences for "Create
sequential, Win32 Map" and "Write sequential, Win32 Map" compared with the
corresponding operations for the other approaches.
What was being measured? CPU time or wall clock time?
But for those two operations the timing differences are so great that they
make me wonder if the data was actually written to disk on those tests.
Did you verify the data was actually written, and that your timing actually
covered the period during which writing occurred? Remember that with some
operating systems writing can under some conditions be deferred past the
point where the program which creates the data terminates. Thus timings
generated by the program itself can be bogus. Did you compute apparent
disk-transfer rates to verify timings were reasonable?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk