From: Jonathan Turkanis (technews_at_[hidden])
Date: 2004-09-01 11:07:16
"Robert Ramey" <ramey_at_[hidden]> wrote in message
Thanks for the review!
> What is your evaluation of the documentation?
> I've been reading through the documentation and I have a couple of questions
> and observations. I'm not sure I understand all the details so feel free to
> correct me if I've misunderstood something.
> 1) Generally the quality of the documentation is very good.
> 2) I have a problem with the name of the library - I find it very
> misleading. I would characterize the library by its main functionality:
> a) simplify the creation of new stream buffers.
> b) permit stream buffers to be "chained" together to sequentially process
> input and output.
> So I think a better name would be "Stream Buffer Library"
This was the original name of the library. I decided that many people who might
benefit from using the library wouldn't be familiar enough with stream buffers
to look into the library. I also thought about "Filtering Library" -- but this
leaves out an important use case. "Filtering and Streaming Library" might be
okay, but it might bring to mind streaming media protocols.
> Additionally it includes some very "thin" classes to make streams with
> the indicated buffers. As I understand it, this basically replaces the
> rdbuf member of the stream in the normal way.
> In my view the tutorials don't make a clear distinction between stream
> buffers and streams. For example, I would prefer that the first one be:
> Defining a Stream Buffer for a New Data Source.
> Which would create a new stream buffer and then use in a stream created the
> old fashion way. Basically I think the tutorial is easier to understand if
> one can add one concept at a time - especially when (in my view) they are
> orthogonal concepts.
This is a good point. I believe I need to state in several prominent places that
streambuf_facade and filtering_streambuf are the main components, that
stream_facade and filtering_stream are provided for convenience, and that plain
streams from the standard library can be used instead.
I should give at least one example of how to do this. I'm not sure I like
rewriting all the examples this way, since using the wrappers seems more
convenient.
> Adaptors and Object Generators
> Hmmm - I'm not sure "Adaptors" should be in here. How about "Streambuffer
> Generator" as it seems just to be shorthand for the above.
Hmm. It sounds like you might be looking at an old version of the library. In
the new version, the relevant section is called 'STL sequence adapters.'
(http://tinyurl.com/7y98b) Furthermore, under 'Planned Changes' I explain how
the need for adapters will be eliminated entirely (http://tinyurl.com/6rtkz).
> Filtering Input/Output
> At first I was going to complain about this, but upon forming my complaint
> it became clear how it's supposed to be used. Again I was thrown by the
> mixing of streambuffer and stream. I would prefer to see this divided into
> two pieces - making a chain of streambuffers and a final layer to use the
> buffer in a standard stream.
> I would characterize the source/sink concepts as "helpers" for building
> streambufs by providing the end points of a chain of filters. But that
> leaves me with the question "What about a filter to a standard filebuf?
std::filebuf models Sink, so it can terminate a chain of filters:
std::filebuf file;
file.open("essay.z", std::ios::in | std::ios::binary);
in.push(file); // stored by reference; 'in' is the filtering_streambuf
// read from in
This is explained here, http://tinyurl.com/3qyl5, but obviously it should be
stated prominently in the introduction.
> 3) Reference
> Another very professional job.
> A few observations:
> a) I'm sort of intrigued with the sequence that things are explained. For
> example, it's hard to understand the concept of "chains" until the Filter
> Stream is described. These are relatively small points.
I was thinking that users would read the 'User's Guide' before the
reference. Chains are explained pretty well here: http://tinyurl.com/6kccz
(esp. figures 3-6). I guess everywhere I mention chains I should refer to this
page.
> b) I'm intrigued with the section "Code Converters". How does this contrast
> with using a codecvt facet?
The template boost::io::converter provides a generic implementation of code
conversion using a codecvt. If you have a stream buffer implementation, such as
std::filebuf, which uses a codecvt internally, then you don't need to use
boost::io::converter -- you simply imbue a locale with an appropriate codecvt
facet. However, many -- if not most -- stream buffer implementations do not
perform any code conversion. Furthermore, if you are writing a stream buffer and
want it to perform code conversion, it seriously complicates the implementation.
This is what boost::io::converter is for.
Say you want to write a tcpbuf that performs code conversion -- you simply write
a narrow-character tcp_resource which does no conversion, then wrap it with
boost::io::converter. This allows the library to interact with the codecvt
facet, so you don't have to.
> 4) Rationale
> a) The section Generic Design left me stumped. The selection of conditional
> verb tenses suggests that neither alternative was used.
Okay. In a previous version, I suggested several alternate designs, then
compared them. The main alternative to generic design would be providing some
base classes with virtual functions from which filters and resources have to
derive. This is the approach of the Crypto++ cryptographic library and the
java.io package, for instance.
> b) Interesting to me was that fact that the issue of streambuf vs stream is
> dealt with explicitly.
Could you elaborate?
> 5) What is your evaluation of the design?
> a) I've already stated my reservations about the mixing of streams and
> stream buffers.
> b) the concept of "chaining" is quite different than what I would have
> envisioned. As I understand it, if 10 filters are chained together, this
> system is going to require ten levels of function calls to get a character
> in/out. Someone is bound to object to this. On the other hand, this
> system permits filters to be composed "chained" at runtime which may be
> useful in some instances.
> I would have expected something like the filters be composed at construction
> time with templated constructors. This would permit the inlined member
> functions to be collapsed by the compiler to minimize copying. The Dataflow
> iterators in the serialization library manifest this expectation.
It's certainly possible to make the type of filtering_streambuf depend on the
types of all the filters and resources in the chain, so that there are no
virtual functions except at the beginning of the chain (basic_streambuf has
virtual functions) and so that all the calls to the i/o functions read, write,
etc. could conceivably be inlined.
However, no compiler I know of will actually inline 10 layers of non-trivial
filtering. Furthermore, the cost of the function calls is largely mitigated by
buffering. If buffers are large enough, the function-call overhead is minimal.
(I've verified the positive effect of buffering as buffer sizes increase. The
way to verify the full claim would be to write a full implementation using the
alternate design and compare it with the current implementation. I'm reluctant
to do this ;-) )
Finally, the principal existing implementation of filtering stream buffers, by
James Kanze, works essentially the same way mine does. (See
> c) I'm a little concerned as to where codecvt, wide character i/o etc fit
> into this.
I hope I've explained this above. I'll give one more example.
file_descriptor_source performs no internal code conversion; it simply forwards
calls to read to the appropriate low-level i/o function. If you want code
conversion, you can layer boost::io::converter on top, as described above.
> d) I would be curious as to whether this is suitable for something other
> than filebuf - e.g. a stream adaptor for socket i/o?
I've used (an earlier version of) the library for socket i/o. It's one of the
main envisioned use cases, and the principal reason for supporting the i/o mode
> 6) What is your evaluation of the implementation?
> I didn't build the library or use it in any tests, so I can't contribute
> to that assessment.
> 7) What is your evaluation of the potential usefulness of the library?
> This is very useful and necessary. Something like this is what I've hoped
> for to complement the serialization library. I envision something like this
> combined with the serialization library, combined with the multi-index set as
> making one great in memory database.
> Do you think the library should be accepted as a Boost library? Be sure to
> say this explicitly so that your other comments don't obscure your overall
> opinion.
> I have reservations as stated above. It is very useful and necessary, and
> seems (from the quality of the documentation) a high quality implementation.
> If my choices were to accept or reject it as is, I would accept it. Of course,
> if others who've studied it in more depth confirm my reservations above, I
> would like to see them addressed first.
I hope I have addressed some of your reservations.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk