From: Robert Ramey (ramey_at_[hidden])
Date: 2004-12-29 01:16:51
Jody Hagins wrote:
>> I believe you could easily achieve what you want to accomplish by
>> serializing to a memory buffer (e.g. a std::stringstream) and transmitting
>> that. On the other end, the inverse of this process would occur.
> If I understand Scott correctly, the problem still exists, if you want
> to use the lib in that way.
> Assume an object whose serialization is something like 4K. If reading
> from a file, or even a TCP stream with the socket in blocking mode,
> you just keep reading until you get all the data. However, for a socket
> in non-blocking mode, you will typically use select or poll or some
> other notification mechanism to be told when data is available. You
> will then read as much as is currently available, and then return to
> other tasks until more data is ready. Let's say the data arrives
> slowly, so reading the entire 4K takes 10 different "notifications"
> and 10 different "read" operations.
> I think Scott is saying that operator>>() is insufficient because it
> cannot do a partial read of what is there... it wants to snarf all 4K.
> I could be missing the boat, but this is the usual problem with
> serialization methods when using them with sockets. For this to work,
> operator>>() has to know that there is no more data (i.e.,
> correctly interpret the return code of read() when the fd is in
> non-blocking mode), and
> keep its current state so that the next call to operator>>() will
> continue where the last call left off.
> I do not see this as a protocol issue but as supporting non-blocking
> reads, where you can receive the data in many small chunks.
> Then again, it is possible that the serialization library already does
> support this in some way...
Not as far as I can see. I would say that one should serialize the data in
chunks of the size you want and not attempt to break up the chunks.
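One way to reconcile the two views above is to do the reassembly outside the serialization library: frame each serialized buffer with a length prefix, accumulate the bytes delivered by the non-blocking socket, and only hand a buffer to operator>>() once it is complete. The sketch below (all names here are hypothetical, and the socket reads are simulated with in-memory chunks) illustrates the idea; on the wire you would likely convert the length with htonl()/ntohl():

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <optional>
#include <string>

// Hypothetical helper: prepend a 4-byte length prefix to a serialized
// payload so the receiver knows where one message ends.
std::string frame(const std::string& payload) {
    std::uint32_t len = static_cast<std::uint32_t>(payload.size());
    std::string out(4, '\0');
    std::memcpy(&out[0], &len, 4);   // host byte order; use htonl() on the wire
    return out + payload;
}

// Accumulates bytes delivered in arbitrary small chunks (as a
// non-blocking socket would deliver them across many read() calls)
// and yields one complete payload at a time.
class FrameAssembler {
    std::string buf_;
public:
    // Call once per read()/recv() that returned data.
    void feed(const char* data, std::size_t n) { buf_.append(data, n); }

    // Returns a complete payload if one has fully arrived, else nullopt.
    std::optional<std::string> next() {
        if (buf_.size() < 4) return std::nullopt;        // prefix incomplete
        std::uint32_t len;
        std::memcpy(&len, buf_.data(), 4);
        if (buf_.size() < 4 + len) return std::nullopt;  // payload incomplete
        std::string payload = buf_.substr(4, len);
        buf_.erase(0, 4 + len);
        return payload;
    }
};
```

Only once next() returns a payload would you construct an input archive over it (e.g. via a std::stringstream) and deserialize, so operator>>() always sees a complete buffer and never needs to suspend mid-read.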
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk