
From: Jeff Flinn (TriumphSprint2000_at_[hidden])
Date: 2007-09-04 07:39:58


Benjamin Collins wrote:
> On 9/3/07, Frank Birbacher <bloodymir.crap_at_[hidden]> wrote:
>> Hi!
>>
>> How is that different from using "buffers"? The regular std::fstream
>> shows just this behavior: when reading it fills a buffer; when the end
>> of the buffer is reached it loads the next part of the file into memory,
>> and so on. The only difference is that writing to a memory location does
>> not implicitly change the file content. But do you need this kind of
>> random access for writing?
>
>
> I'm not concerned with random access; I'm concerned with doing really
> fast reads and writes of large files, which can just be linear reads
> and writes as far as I'm concerned.
>
> The difference between std::fstream and what I'm proposing is
> performance (hopefully). std::fstream, as I understand it, uses
> read()/write(). mmap() provides better performance than read(), and
> increasingly so as your file gets larger. See here for a
> Solaris-oriented analysis
> (http://developers.sun.com/solaris/articles/read_mmap.html).
>
> I can't find any recent benchmarks for Linux, but I think I would be
> surprised if it were much different from Solaris (which wasn't the
> case for the older benchmarks I did find).

IIRC, at least under Windows, file I/O is implemented on top of
memory-mapped files for all file access. I think there were some threads
discussing this on the Spirit mailing list.

Have you looked to see whether the memory-mapping facilities in the
Interprocess library could meet your needs?
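
For example, something along these lines (an untested sketch from memory
of the Interprocess mapping classes, file_mapping and mapped_region;
"data.bin" is again a placeholder file name):

#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstddef>
#include <iostream>

int main()
{
    namespace bip = boost::interprocess;

    // Map the entire file read-only into this process's address space.
    bip::file_mapping  file("data.bin", bip::read_only);
    bip::mapped_region region(file, bip::read_only);

    const unsigned char* data =
        static_cast<const unsigned char*>(region.get_address());
    std::size_t size = region.get_size();

    // Linear pass over the mapped bytes, paged in on demand by the OS.
    unsigned long sum = 0;
    for (std::size_t i = 0; i < size; ++i)
        sum += data[i];
    std::cout << "bytes: " << size << " checksum: " << sum << '\n';
    return 0;
}

If the Interprocess mapping classes behave as I recall, the same code
should work on both Windows and POSIX systems.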

Jeff Flinn

