From: Benjamin Collins (ben.collins_at_[hidden])
Date: 2007-09-03 19:08:15
On 9/3/07, Frank Birbacher <bloodymir.crap_at_[hidden]> wrote:
> Hi!
>
> How is that different from using "buffers"? The regular std::fstream
> shows just this behavior: when reading it fills a buffer; when the end
> of the buffer is reached it loads the next part of the file into memory,
> and so on. The only difference is that writing to a memory location does
> not implicitly change the file content. But do you need this kind of
> random access for writing?
I'm not concerned with random access; I'm concerned with doing really
fast reads and writes of large files, which can just be linear reads
and writes as far as I'm concerned.
The difference between std::fstream and what I'm proposing is
performance (hopefully). std::fstream, as I understand it, uses
read()/write(). mmap() provides better performance than read(), and
increasingly so as your file gets larger. See here for a
Solaris-oriented analysis
(http://developers.sun.com/solaris/articles/read_mmap.html).
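To make the comparison concrete, here is a minimal sketch of the mmap() approach for a sequential read. This is not a proposed Boost interface, just an illustration of the underlying POSIX calls; the helper name and the byte-summing workload are made up for the example, and error handling is abbreviated:

```cpp
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>

// Illustrative helper: sum every byte of a file through a read-only
// mapping. Instead of copying data into a user buffer with read(),
// the kernel pages the file in on demand as the loop walks through it.
unsigned long sum_bytes_mmap(const char* path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return 0;

    struct stat st;
    fstat(fd, &st);
    std::size_t len = static_cast<std::size_t>(st.st_size);

    unsigned long sum = 0;
    if (len > 0) {
        void* p = mmap(0, len, PROT_READ, MAP_SHARED, fd, 0);
        if (p != MAP_FAILED) {
            const unsigned char* bytes =
                static_cast<const unsigned char*>(p);
            for (std::size_t i = 0; i < len; ++i)
                sum += bytes[i];  // touching a page faults it in
            munmap(p, len);
        }
    }
    close(fd);
    return sum;
}
```

The read() equivalent would copy each block from the page cache into a user-space buffer; the mapping avoids that copy, which is where the advantage is supposed to come from as files get large.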
I can't find any recent benchmarks for Linux, but I think I would be
surprised if it were very much different from Solaris (which wasn't the
case in the old benchmarks I did find).
Of course, finding out if I'm wrong is part of why I posted this message
in the first place, but I don't *think* I'm wrong.
-- Benjamin A. Collins <ben.collins_at_[hidden]> http://bloggoergosum.us
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk