
From: Giovanni P. Deretta (lordshoo_at_[hidden])
Date: 2005-05-04 04:16:11


Aaron W. LaFramboise wrote:
> Nathan Myers wrote:
>
>
>>After there's text in the
>>buffer, you can decide if it's enough to merit calling whatever is
>>supposed to extract and operate on it.
>
>
> It seems that if you already have code that does this by directly
> examining the buffer, there may be little point in dropping back a level
> of abstraction and then using operator>>. In particular, in a common
> case, verifying whether input is complete does most of the work of
> actually extracting it.
>

It does actually make a lot of sense if you have a simple way to detect
that the buffer has enough data to satisfy whatever extractions you are
going to do: maybe you know how big your data is going to be, maybe the
length is encoded in the buffer, or maybe the encoded data never crosses
a line boundary and you can detect an end of line. If by simple
examination of the buffer you can decide that an extraction step won't
block, you go ahead and call an upper layer that will do the extraction
using standard >> operators. This has the benefit that high level
code need not be aware of blocking/non-blocking I/O.
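To make the idea concrete, here is a minimal sketch (all names are made up for illustration): a low-level layer accumulates bytes from a non-blocking source into a string, and only when a complete line is present does it hand the data to high-level code that uses a plain operator>>:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch: extraction is attempted only once a full line is
// known to be in 'buffer', so operator>> itself can never block.
// 'buffer' stands in for whatever you fill from non-blocking reads.
bool try_extract_int(std::string& buffer, int& out) {
    std::string::size_type nl = buffer.find('\n');
    if (nl == std::string::npos)
        return false;               // not enough data yet; read more later

    std::istringstream line(buffer.substr(0, nl));
    line >> out;                    // high-level code: plain operator>>
    buffer.erase(0, nl + 1);        // consume the line from the buffer
    return static_cast<bool>(line);
}
```

The high-level extraction code stays ordinary iostream code; only the thin layer above knows anything about readiness.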

In fact I think that iostreams should be considered a way to convert a
sequence of characters to live objects, *not* a way to do input/output
(probably not even streambufs should be considered the lowest i/o
mechanism, as they take care of some locale conversion). The simplest
way to use them would be to read the whole byte stream into memory, build
an iostream on top, and then do the deserialization. Deserializing while
reading at well known boundaries should be considered an optimization
that lets you pipeline the two operations.
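The "read everything first, deserialize afterwards" approach might look like this (a sketch, with the I/O step faked by an in-memory string):

```cpp
#include <sstream>
#include <string>
#include <vector>

// The iostream only converts characters to objects; the actual I/O
// (represented here by the already-filled 'bytes' string) happened
// earlier, by whatever mechanism, blocking or not.
std::vector<int> deserialize_all(const std::string& bytes) {
    std::istringstream in(bytes);   // iostream built over in-memory data
    std::vector<int> objects;
    int v;
    while (in >> v)                 // cannot block: all data is present
        objects.push_back(v);
    return objects;
}
```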

> But still I think something special is needed here, because in the case
> of integers, for example, there's no particular way to tell whether an
> integer is complete or not without having some metadata.
>
> One thing I've never understood is how extractors are supposed to be
> written when they require reading two or more sub-objects from the input
> stream. If reading the first part succeeds, but the second part fails,
> what happens to the chunk of data that was read? And how do we prevent
> the stream from being in an indeterminate state due to not knowing how
> much was read? Perhaps the solution to this problem might present new
> ideas for solutions to the nonblocking extractor problem.
>

The simplest solution is to always make sure you can't read a partial
object; if you can't ensure that, don't pipeline. A more complex solution
would be to have some kind of rollback: save the whole stream state (i.e.
even the buffered data), deserialize, and if it fails (i.e. an exception
is thrown in the object constructor) roll back, read some more data from
the source and try again. As an optimization, instead of copying the
buffered data you might just remember the position that the
deserialization started from and never throw characters away until you
commit. You probably want to use a custom buffered stream. I don't
think working this way is worth it, though.
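A rough sketch of the position-based rollback, assuming the data is fully buffered in memory so seeking backwards is cheap ('Point' and its extractor are invented for the example):

```cpp
#include <istream>
#include <sstream>
#include <stdexcept>

// A two-part object: reading can fail between the two sub-objects.
struct Point { int x, y; };

std::istream& operator>>(std::istream& in, Point& p) {
    if (!(in >> p.x >> p.y))
        throw std::runtime_error("partial Point");  // second part missing
    return in;
}

// Remember where deserialization started; on failure, restore the
// position so no characters are lost and the caller can retry after
// more data has arrived.
bool try_read(std::istringstream& in, Point& p) {
    std::istringstream::pos_type mark = in.tellg();  // save position
    try {
        in >> p;
        return true;                                 // commit
    } catch (const std::runtime_error&) {
        in.clear();                                  // clear failbit/eofbit
        in.seekg(mark);                              // roll back
        return false;                                // retry with more data
    }
}
```

With a custom buffered stream the "mark" would pin the buffered characters instead of relying on seekability; the istringstream here just keeps the example self-contained.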

--
Giovanni P. Deretta

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk