
From: Stewart, Robert (stewart_at_[hidden])
Date: 2002-03-20 08:37:00


From: Dylan Nicholson [mailto:dylan_nicholson_at_[hidden]]
>
> --- Rene Rivera <grafik666_at_[hidden]> wrote:
> > On 2002-03-19 at 04:37 PM, dylan_nicholson_at_[hidden] (Dylan Nicholson)
> > wrote:
> >
> > OK, how can you say "remarkably efficient" when you don't give us
> > numbers, and you only compare two methods. How about comparing it to
> > other methods like cstdio, raw io, or even (on some platforms) memory
> > mapped files?
> >
> Well quite clearly it is remarkable, given the number of people who have
> remarked on it :o) But I did do some other tests with bigger

It's funny how people forget that remarkable doesn't mean that something is
superior, but rather that it is worthy of notice.

> files, and using a network etc. etc. In no case did I get a result
> perceptibly slower than using, for instance, the cp command. The point is
> my assumption was that such a naive implementation would be horribly slow.
> It may simply be that the size of the buffer used by the underlying
> implementation is well-tuned to the filesystem. The fact that reading the
> whole file into one big buffer and writing it all out in one go was so
> slow was extremely surprising to me.

It's not surprising. Seeking to the end and then back to the beginning of a
file to learn its size is quite inefficient compared with a stat() or similar
call, which simply extracts information the filesystem already stores.

File I/O is already buffered by the OS, so you continually underflowed the
input buffers as you copied to your large buffer and continually overflowed
the output buffers as you copied from your large buffer. You left no
opportunity for the OS to read ahead/write behind you.

Depending upon the size of the file you're copying, you can cause virtual
memory paging, which can mean that parts of your buffer must be written to
and read from disk just in order to read from and write to disk!

Finally, you did all of the copying in user code. The transitions between
kernel and user code aren't cheap. Allowing the kernel to do all of the
work is almost always faster. Hence, using a native OS call to do the work
eliminates those transitions and gives rise to opportunities to exploit low
level optimizations based upon intimate knowledge of the OS implementation.

Rob
Susquehanna International Group, LLP
http://www.sig.com


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk