Boost Users :
From: Robert Ramey (ramey_at_[hidden])
Date: 2005-12-30 00:29:33
Jonathan Turkanis wrote:
> Robert Ramey wrote:
>> I'm looking at array_sink stream buffer as a possible speed
>> enhancement to binary_oarchive in some cases. My question is: what
>> happens when the capacity of the array used for the buffer is
>> exceeded? Is an exception thrown or what?
>
> Sorry I didn't respond sooner. I'm way behind on my Boost stuff.
>
> Yes, an exception is thrown. You can see the code at work in
> boost/iostreams/detail/streambuf/direct_streambuf.hpp.
>
<snip ..>
>
> There are no exceptions mentioned in the docs for array_sink, since
> it doesn't throw exceptions.
Hmmm - I'm still unclear as to whether or not it throws an exception.
> As far as using array_sink in serialization, if you tell me how you
> want to use it I may be able to judge whether it is a good idea.
Just to give a little background.
In the course of looking at binary_?archive performance I've examined a
"typical" stream implementation (Dinkumware). In pseudo code it looks like:
a) ar << t; // archive save operator
b) os.write((char *)& t, sizeof(t));
c) // acquire a mutex to prevent multiple threads from interlacing output
d) // get the stream buffer with rdbuf()
e) // convert characters using the code conversion facet
f) // copy the converted characters to the buffer
g) // if the buffer overflows, flush it and return when the characters have
been copied out of it
In the case of binary serialization we don't need all of the above steps,
and they add a lot to the execution time.
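Just to make the layering concrete, steps a) and b) for a primitive type
look something like this (my own illustration, not code lifted from the
library):

    #include <boost/archive/binary_oarchive.hpp>

    void save(boost::archive::binary_oarchive & ar, const double & t)
    {
        ar << t;  // a) archive save operator; for a primitive this
                  // boils down to
                  // b) os.write((const char *)& t, sizeof(t));
                  // with steps c) through g) then happening inside
                  // write() and the stream buffer
    }

Here the archive is assumed to have been constructed over a std::ofstream
opened in binary mode.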
a) By skipping the stream << operator we save the mutex, which is not
necessary in our context. I've altered my working copy of the library to
write to rdbuf() directly (roughly as sketched below).
b) I'm still stuck with the code conversion - another time waster in this
context.
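Roughly what that change amounts to (my own paraphrase - the helper name
is just for illustration, not the actual patch):

    #include <cstddef>
    #include <ostream>

    // push raw bytes straight into the stream buffer, bypassing
    // os.write() and its sentry/locking at the ostream layer
    inline void write_binary(std::ostream & os, const void * data,
                             std::size_t count)
    {
        os.rdbuf()->sputn(static_cast<const char *>(data),
                          static_cast<std::streamsize>(count));
        // note: the code conversion facet still runs inside the file
        // buffer when its internal buffer is flushed
    }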
So right away this suggests using a special high performance streambuf
implementation for high performance output. Such a streambuf would have the
following features:
a) skip the mutex
b) skip the code conversion
Right away this would help (a minimal sketch follows).
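Something along these lines - an assumption about how it might look, not
anything that exists in Boost.Iostreams today:

    #include <cstddef>
    #include <stdexcept>
    #include <streambuf>

    // unsynchronized output buffer over a caller-supplied array;
    // no locale machinery, no code conversion, no locking
    class raw_array_streambuf : public std::streambuf
    {
    public:
        raw_array_streambuf(char * begin, std::size_t size)
        {
            // put area sits directly on the array; sputn()/sputc()
            // just copy bytes into it
            setp(begin, begin + size);
        }
    protected:
        virtual int_type overflow(int_type)
        {
            // array is full - report the condition rather than
            // converting or flushing anywhere
            throw std::runtime_error("raw_array_streambuf: buffer full");
        }
    };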
If one had nothing else to do, then a "socket streambuf" would look like
this:
a) same interface as above
b) the streambuf constructor would take some more parameters - buffer size
and buffer count
c) when a buffer is flushed, either because it's full or because flush()
has been called, get another available internal buffer and start using it
d) launch the full buffer with async i/o
As an option, one could compress the buffer before sending it. This would
help when sending largish datasets between processors in a distributed
environment. A rough skeleton follows.
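Here is that skeleton (names and the async_send() stand-in are hypothetical
placeholders, not an existing API; a real implementation would also have to
wait for a buffer's previous async write to finish before reusing it):

    #include <cstddef>
    #include <streambuf>
    #include <vector>

    // stand-in for handing a filled buffer to the async i/o layer
    // (a socket, a worker thread, ...); here it deliberately does nothing
    void async_send(const char * /*data*/, std::size_t /*size*/) {}

    class multi_buffer_streambuf : public std::streambuf
    {
    public:
        // b) extra constructor parameters: buffer size and buffer count
        multi_buffer_streambuf(std::size_t buffer_size,
                               std::size_t buffer_count) :
            m_buffers(buffer_count, std::vector<char>(buffer_size)),
            m_current(0)
        {
            use_buffer(m_current);
        }
    protected:
        // c) buffer is full - launch it and switch to the next one
        virtual int_type overflow(int_type c)
        {
            launch_current();                           // d) async i/o
            m_current = (m_current + 1) % m_buffers.size();
            use_buffer(m_current);
            if(c != traits_type::eof())
                sputc(traits_type::to_char_type(c));
            return traits_type::not_eof(c);
        }
        // c) explicit flush also launches the current buffer
        virtual int sync()
        {
            launch_current();
            use_buffer(m_current);
            return 0;
        }
    private:
        void use_buffer(std::size_t i)
        {
            std::vector<char> & b = m_buffers[i];
            setp(&b[0], &b[0] + b.size());
        }
        void launch_current()
        {
            // optionally compress the buffer here before sending it
            async_send(pbase(),
                       static_cast<std::size_t>(pptr() - pbase()));
        }
        std::vector<char> m_buffers; // pool of internal buffers
        std::size_t m_current;       // index of the buffer in use
    };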
So I have started making things a little faster, and it has helped. But
making things as fast as I would like really has nothing to do with
serialization; it comes down to a special purpose stream buffer - and
that's your department.
Robert Ramey