From: Carlo Wood (carlo_at_[hidden])
Date: 2004-08-30 18:58:51
On Mon, Aug 30, 2004 at 05:00:46PM -0600, Jonathan Turkanis wrote:
> > On http://home.comcast.net/~jturkanis/iostreams/libs/io/doc/classes/alphabetically.html
> >
> > converting_stream and converting_streambuf do not have hyperlinks.
>
> They're not implemented. This is explained here -- http://tinyurl.com/4hkut --
> but I probably shouldn't have included them in the index. Thanks.
>
> > stream_facade and streambuf_facade link to non-existing pages.
>
> Which page contains the bad links?
Still http://home.comcast.net/~jturkanis/iostreams/libs/io/doc/classes/alphabetically.html
> Thank you for the detailed explanation. I don't think I understand all of it
> yet -- I'll have to look at your implementation. Perhaps you could elaborate on
> the concept of a 'message'. Are you thinking of something which is specific to
> network programming or is more general?
I am foremost an abstract thinker. Abstract comprehension and analysis are my
strongest points (according to official tests). So... more general ;).
My (abstract) way of looking at this was that a 'stream' is just that: a
line-up of bytes that arrive in sequence. You don't have random access to
the whole thing at once - only to the head of the stream that you currently
have in a buffer. This alone already indicates that chunks of data that are
to be processed (where 'processing' means that they may be removed from
that buffer afterwards) need to be more or less contiguous and more or
less close to each other (they have to fit in the buffer). I decided that
it was general enough to demand that such 'decodable chunks' HAD to be
contiguous (on the stream) and decodable independently of whatever comes
after them on the stream (where 'decodable' means that the application
must be able to process the chunk and then allow its removal from the
stream buffer).
This means that such a stream is to be cut into non-overlapping, contiguous
chunks, and that a single chunk should then be decodable on its own,
depending at most on the internal state of the state machine that decoded
the previous chunks. The size of these chunks is determined entirely by the
protocol of the stream; it is arbitrary. My own design is such that you can
'plug in' a protocol class (by using it as a template parameter) into a
'device class', and voila, it decodes the stream that comes in on that
device.
As you will have guessed, the above 'decodable chunks', which are made
contiguous on request (by the decoder, and only when needed), are the
'messages' that I introduced earlier. In libcw's code I actually call them
'msg_blocks'.
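To make the 'plug in a protocol class' idea concrete, here is a minimal
sketch (the names and interface are invented for this mail, not libcw's
actual code) of a device class that takes a protocol as a template
parameter and decodes complete, contiguous messages off the head of its
buffer:

#include <cstddef>
#include <iostream>
#include <string>

// Example protocol: a 'message' is a newline-terminated line.
struct line_protocol {
  // Return the length of the first complete message at the head of 'data',
  // or 0 when no complete message has arrived yet.
  static std::size_t end_of_message(std::string const& data) {
    std::string::size_type pos = data.find('\n');
    return pos == std::string::npos ? 0 : pos + 1;
  }
  // Process one complete, contiguous message.
  static void decode(std::string const& msg) {
    std::cout << "decoded: " << msg;
  }
};

// A 'device' class with the protocol plugged in as template parameter.
template <class Protocol>
class device {
  std::string buffer_;  // stands in for the real stream buffer
public:
  // Called whenever new bytes arrive on the device.
  void received(std::string const& bytes) {
    buffer_ += bytes;
    // Decode, and then discard, every complete message at the head
    // of the buffer; an incomplete tail simply stays buffered.
    while (std::size_t len = Protocol::end_of_message(buffer_)) {
      Protocol::decode(buffer_.substr(0, len));
      buffer_.erase(0, len);
    }
  }
};

int main() {
  device<line_protocol> dev;
  dev.received("hello ");                   // incomplete: nothing decoded yet
  dev.received("world\nsecond message\n");  // two complete messages decoded
}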
> BTW, this sounds vaguely like the descriptions I have read of the Apache 'bucket
> brigade' filtering framework. Any connections?
I had never heard of it before.
> At any rate, you are right about the limitations of the Direct concept. I
> introduced it originally to handle memory mapped files. Later I realized that it
> is really a degenerate case of a more general concept. In general, for output,
> when you run out of room in the array provided by the Direct resource you would
> be able to request a new array to write to. Similarly for input you could
> request a new array when you finish reading to the end of the current array.
I don't understand. Don't you only need to enlarge a buffer when you write
to it? For 'input' that means that a new array is needed when more data is
received from the device than fits in the current array.
> For
> random access, you might request an array of a certain length containing a given
> offset -- you might not get exactly what you requested, but you'd always get an
> explanation of how the returned array relates to the requested array. (All this
> would be handled internally by streambuf_facade, of course.)
I don't think that my approach (== using a buffer that actually consists of
a list of allocated memory blocks) is compatible with random access. The
problem of a message overlapping the edge of two blocks, and thus not
being contiguous, makes it impossible to treat the "stream" as anything
other than just that: a stream of messages. Well, I suppose you could allow
random access to messages. But you can do that with my implementation too,
by simply storing all the read msg_blocks in a vector (and never destroying
them). The streambuf would then grow indefinitely, but you'd have instant,
random access to each message received so far. :) What I mean is that
it is not compatible with mmap-ed access.
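To illustrate the 'made contiguous on request' part (again with names made
up for this mail, not libcw's real msg_block interface): a message that
lies inside one block costs nothing, and only a message that crosses a
block boundary is copied into contiguous scratch storage when the decoder
asks for it.

#include <algorithm>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <list>
#include <string>
#include <vector>

// One fixed-size block of the buffer; its bytes never move once written.
struct block {
  enum { capacity = 8 };   // tiny on purpose, so the example spans blocks
  char data[capacity];
  std::size_t used;
  block() : used(0) {}
};

// Return the bytes [offset, offset + len) of the block list as contiguous
// memory. If they lie within a single block no copy is made; if they cross
// a block boundary they are copied into 'scratch' on request.
const char* make_contiguous(const std::list<block>& blocks,
                            std::size_t offset, std::size_t len,
                            std::vector<char>& scratch) {
  std::list<block>::const_iterator it = blocks.begin();
  while (it != blocks.end() && offset >= it->used) {
    offset -= it->used;
    ++it;
  }
  if (it != blocks.end() && offset + len <= it->used)
    return it->data + offset;            // already contiguous: no copy
  scratch.clear();
  for (; it != blocks.end() && len > 0; ++it, offset = 0) {
    std::size_t n = std::min(it->used - offset, len);
    scratch.insert(scratch.end(), it->data + offset, it->data + offset + n);
    len -= n;
  }
  return scratch.empty() ? 0 : &scratch[0];
}

int main() {
  // Two blocks holding "hello wo" and "rld!".
  std::list<block> blocks(2);
  std::memcpy(blocks.front().data, "hello wo", 8); blocks.front().used = 8;
  std::memcpy(blocks.back().data,  "rld!",     4); blocks.back().used  = 4;

  std::vector<char> scratch;
  const char* msg = make_contiguous(blocks, 6, 5, scratch);  // "world"
  std::cout << std::string(msg, 5) << '\n';
}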
> I didn't implement this for four reasons:
>
> - I had limited time
duh
> - I wasn't sure there was a real need for it
there is always a need for everything(tm)
> - It would make the library harder to document and learn, and
duh
> - There are cases where resources have to be handled directly rather than
> indirectly through streambuf_facades (see, e.g., <boost/io/copy.hpp>);
> generalizing the Direct concept would lead to nightmares in those situations.
Yeah, 'nightmare' describes the libcw interface :).
But at least it is powerful *grin*.
[...snip rip...]
> > Well, this only makes sense for large servers with thousands of connections
> > that all burst data in huge quantities... exactly the kind of applications I
> > like to write ;).
>
> Just to clarify, what do you mean by 'this' here? Your lists of dynamically
> allocated memory blocks, or my idea of a buffering policy?
The first.
[...deleted finished discussion...]
> Why can't you use a streambuf_facade with mode inout, which buffers input and
> output separately? (http://tinyurl.com/6bjbl)
Oh, great! It is clear I didn't really study all of the documentation.
It gets better and better ;).
Well, then there are two points left that I wonder about.
1) Is it possible to use this library and somehow implement
a growable streambuf that never moves data (== one that consists
of a list of fixed-size allocated blocks)? A rough sketch of what
I mean follows at the end of this mail.
2) A new point :p. Does your library also have what I
call in libcw a 'link_buffer' (actually comparable
to your 'mode')? This isn't immediately clear from the docs
and well, I am lazy :/. The 'link' mode makes a streambuf
the input and the output for two different devices at the
same time, allowing you to link two devices together without
the need to copy the data (as would be the case when each
device always has its _own_ buffer). For example, I can create
a file device and a pipe_end device (one end of a UNIX pipe)
and tie them together (construct one from the other)
and they will share the same buffer - writing the file
to the pipe or vice versa (depending on the input/output
'mode' template parameters that are part of the device
classes). In your URL I see 'seekable', but that doesn't
seem to be the same as 'pass-through'.
Of course you can just create a std::istream and a
std::ostream and give them the same streambuf... but then,
is that an input-mode or an output-mode buffer? I fail
to see where you define a mode for that case.
Perhaps dual-seekable? *confused*
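Coming back to point 1 above: here is a rough sketch of the kind of
streambuf I mean (the class name and layout are invented for this mail,
not libcw's real implementation): an output-only streambuf whose
overflow() appends a fresh fixed-size block and never copies or moves
bytes that were already written.

#include <list>
#include <ostream>
#include <streambuf>
#include <vector>

class block_streambuf : public std::streambuf {
  enum { block_size = 4096 };
  std::list<std::vector<char> > blocks_;   // existing blocks never move

  void add_block() {
    blocks_.push_back(std::vector<char>(block_size));
    char* p = &blocks_.back()[0];
    setp(p, p + block_size);               // put area = the new block only
  }

protected:
  // Called by the base class when the current put area is full:
  // grow by one block instead of reallocating and copying.
  int_type overflow(int_type c) {
    add_block();
    if (c != traits_type::eof()) {
      *pptr() = traits_type::to_char_type(c);
      pbump(1);
    }
    return traits_type::not_eof(c);
  }

public:
  block_streambuf() { add_block(); }
};

int main() {
  block_streambuf buf;
  std::ostream out(&buf);
  for (int i = 0; i != 100000; ++i)
    out << "data that keeps growing the buffer\n";
  // Nothing written so far was ever copied to a new location.
}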
-- Carlo Wood <carlo_at_[hidden]>