From: Scott Woods (scottw_at_[hidden])
Date: 2005-04-26 15:59:56
----- Original Message -----
From: "Rob Stewart" <stewart_at_[hidden]>
To: <boost_at_[hidden]>
Cc: <boost_at_[hidden]>
Sent: Wednesday, April 27, 2005 4:57 AM
Subject: Re: [boost] Re: Re: Re: Re: (Another) socket streams library
[snip]
> > As the application protocol writer you have to know how much you have
> > written to the buffer *at all times* and you must know when an
> > overflow/flush will happen.
> >
> > Errors can *only* happen when the buffer writes to the socket
> > (overflow/flush); therefore, if a programmer does his/her job correctly
> > there will not be multiple operator<< traversing an overflow boundary.
> > This is irrespective of sync/async.
>
> According to this, there is no way to use streams with sockets.
> Otherwise, clients of the stream interface would need to keep
> track of the size of formatted output of each object inserted on
> the stream to avoid overflow between insertion operators. How
> can they do that?
>
Yeah, tricky stuff. After several attempts with varying levels of success,
the following is the most complete and efficient approach I have come up
with, specifically to deal with "streaming" over async sockets. This may
ramble a bit, but hopefully with purpose :-)
The insertion operator (<<) is pre-defined for all "standard" or "built-in"
types. Templates are defined for the standard containers. This is consistent
with the approach taken by many, including Boost serialization. Insertion
operators are defined for any application types. These have the appearance
of:
stream &
operator<<( stream &s, const application_type &a )
{
    s << a.member_1;
    s << a.member_2;
    s << a.member_3;
    return s;
}
Nothing new there :-)
But (!) rather than the expected conversion to a formatted byte stream, these
operators transform the application object to its generic form (i.e. a
variant) and place it on an outgoing queue of variants. This activity occurs
in the "application zone", a name coined to separate application object
activity from the behind-the-scenes processing of network/socket events.
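
To make that concrete, here is a minimal sketch of the "application zone"
side, assuming a hypothetical variant type and an outgoing queue owned by the
stream; the names and the encoding are illustrative only, not actual library
code:

#include <deque>
#include <string>

struct variant                     // placeholder for the generic form
{
    std::string encoded;           // e.g. a tagged, portable representation
};

struct stream
{
    std::deque<variant> outgoing;  // drained later in the "socket zone"
};

// Built-in types are converted to the generic form and queued; the
// application-type operator above then falls out of member insertions.
inline stream &operator<<( stream &s, int v )
{
    variant g;
    g.encoded = "i:" + std::to_string( v );   // illustrative encoding only
    s.outgoing.push_back( g );
    return s;
}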
In the "socket zone" a "WRITE" notification advises that its a "good time to
write".
The socket code manages the transfer of those queued variants to a byte
buffer suitable
for sending. This activity obviously requires the conversion of variants to
some
byte-by-byte encoding, or format.
The task of this "transfer" code is to present optimal blocks to the network
API, taking its raw material from the queue of outgoing variants. The
buffering strategy at this point is really the essence of this solution.
The underlying container for my buffering is a std::vector<char>. While my
buffer does not contain a block of optimal network size, I attempt to take a
variant off the queue. If the queue is non-empty I "stream" a variant onto
the end of the buffer, i.e. append the formatted representation of the
variant using vector<>::push_back.
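
A minimal sketch of that fill step, reusing the hypothetical variant shape
from the sketch above; optimal_block and encode() are assumed helpers, not
real API:

#include <cstddef>
#include <deque>
#include <string>
#include <vector>

struct variant { std::string encoded; };        // as in the earlier sketch

const std::size_t optimal_block = 8 * 1024;     // assumed tuning value

std::string encode( const variant &v )          // illustrative encoding only
{
    return v.encoded;
}

// Append queued variants until the unwritten part of the buffer holds at
// least one optimal block, or the queue runs dry.
void fill_buffer( std::deque<variant> &outgoing,
                  std::vector<char> &buffer,
                  std::size_t skip )
{
    while( buffer.size() - skip < optimal_block && !outgoing.empty() )
    {
        std::string bytes = encode( outgoing.front() );
        outgoing.pop_front();
        for( std::size_t i = 0; i < bytes.size(); ++i )
            buffer.push_back( bytes[i] );       // formatted representation
    }
}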
This specifically allows the buffer to temporarily contain more than the
network system requires. This may sound like a crippling design decision, but
in practice it works very well. It brings (IMO) the huge advantage of
allowing the complete "streaming" of any application object.
A block is written to the network. The amount accepted by the network is used
to adjust a "skip" value, the number of leading buffered bytes that don't
need to be written again.
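
A sketch of one such write and the skip adjustment, assuming a POSIX-style
send() on a non-blocking socket; the error handling is deliberately
simplified:

#include <sys/socket.h>
#include <cstddef>
#include <vector>

// Write whatever is pending and advance "skip" by the amount the network
// accepted. Returns false when the socket would block or reports an error.
bool write_block( int fd, std::vector<char> &buffer, std::size_t &skip )
{
    std::size_t pending = buffer.size() - skip;
    if( pending == 0 )
        return true;

    ssize_t n = send( fd, &buffer[skip], pending, 0 );
    if( n <= 0 )
        return false;              // try again on the next WRITE notification

    skip += static_cast<std::size_t>( n );   // no need to send these again
    return true;
}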
The last piece of this solution is a "wrap" operation that tidies the buffer
up at the end of a phase of writing (i.e. the socket code handling a "WRITE"
notification). If the remaining bytes in the buffer are less than an optimal
network block, they are all shifted down to the zero-th position and the skip
value is reset to zero.
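
A sketch of that wrap step under the same assumptions; optimal_block is again
an assumed tuning value:

#include <cstddef>
#include <vector>

const std::size_t optimal_block = 8 * 1024;     // assumed tuning value

// If less than an optimal block remains, slide it to the front of the
// buffer and reset the skip value.
void wrap( std::vector<char> &buffer, std::size_t &skip )
{
    std::size_t remaining = buffer.size() - skip;
    if( remaining < optimal_block )
    {
        for( std::size_t i = 0; i < remaining; ++i )
            buffer[i] = buffer[skip + i];       // shift down to position zero
        buffer.resize( remaining );
        skip = 0;
    }
}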
The underlying vector will (worst case) be as big as the largest streamed
application object plus odd bits and pieces totalling less than an optimal
network block, i.e. if application objects tend to "max out" at 8K then the
vector may approach 16K in size. Of course "reserve" can be used to minimize
some initial memory shuffling.
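
For instance, using only the illustrative 8K figure above, a reserve along
these lines avoids the early reallocations:

#include <vector>

int main()
{
    std::vector<char> buffer;
    // largest streamed object (~8K) plus less than one optimal block (~8K)
    buffer.reserve( 16 * 1024 );
}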
Hopefully this sketch is enough to describe the somewhat bizarre solution I
have arrived at to deal with the conflicting requirements in this area.
Trying to couple the application event of streaming an object to the network
event of writing a byte block is (only my opinion :-) doomed. I would even go
as far as saying that this is true for all streaming, i.e. to files. But
that's another story. And don't even start me on input streams.
Cheers.