From: Christopher Kohlhoff (chris_at_[hidden])
Date: 2005-12-31 04:17:49
--- "Giovanni P. Deretta" <gpderetta_at_[hidden]> wrote:
> This change is fundamental I think: basic_stream_socket should
> be a template that can be used generally to implement all types
> of protocols, simply by defining the protocol policy. It is more
> or less what I do in my network lib.
I believe there might be more similarity there already than you
think, so I'm going to do a bit of experimentation and see what
comes out of it. However the dual support for IPv4 and IPv6
raised by Jeff might be a bit of a problem -- is it something
you address in your network lib?
> Maybe I wasn't very clear. I'm not saying that the
> write_some/read_some functions should be free functions
> (although that wouldn't be that bad); I'm saying that these
> functions, instead of forwarding to non-static member
> functions of the underlying service, should forward to static
> member functions of the protocol class. This
> would decouple the demuxer from the I/O functions.
I see what you mean now; however, I don't think they can be
portably decoupled. Some platforms will require access to shared
state in order to perform the operation. The acceptor socket
caching you mentioned is also a case for having access to this.
I suspect that, if I adopt a type-per-protocol model, the
service will also be associated with the protocol in some way
(i.e. the protocol might be a template parameter of the service).
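To illustrate, a minimal sketch of what such a type-per-protocol policy might look like (the class and member names here are hypothetical, not the library's actual interface):

```cpp
#include <cassert>
#include <sys/socket.h>
#include <netinet/in.h>

// Hypothetical sketch: each protocol is a small policy class exposing
// the values needed to open a socket for that protocol.
namespace sketch {

class tcp
{
public:
  int family() const { return AF_INET; }        // address family
  int type() const { return SOCK_STREAM; }      // socket type
  int protocol() const { return IPPROTO_TCP; }  // protocol number
};

class udp
{
public:
  int family() const { return AF_INET; }
  int type() const { return SOCK_DGRAM; }
  int protocol() const { return IPPROTO_UDP; }
};

// A socket template parameterised on the protocol; the associated
// service could likewise take Protocol as a template parameter, which
// is how the service would become tied to the protocol type.
template <typename Protocol>
class basic_socket
{
public:
  explicit basic_socket(const Protocol& p = Protocol()) : protocol_(p) {}
  const Protocol& protocol() const { return protocol_; }
private:
  Protocol protocol_;
};

} // namespace sketch
```

Dual IPv4/IPv6 support could then in principle be a matter of the policy returning AF_INET6 instead, though that is exactly the open question raised above.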
> Of course non-blocking operations are useful, but as there is
> no public readiness notification interface (i.e. a reactor),
> its use is somewhat complicated.
What I mean is that the readiness notification isn't required,
since one way of interpreting an asynchronous operation is
"perform this operation when the socket is ready". That is, it
corresponds to the non-blocking operation that you would have
made when notified that a socket was ready. A non-blocking
operation can then be used for an immediate follow-up operation.
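As a rough POSIX-level sketch of that follow-up idea (illustrative only, not asio code): once a completion tells you the descriptor was ready, further reads can be issued in non-blocking mode, stopping as soon as the kernel buffer is drained.

```cpp
#include <cassert>
#include <cstddef>
#include <errno.h>
#include <fcntl.h>
#include <string>
#include <unistd.h>

// Drain a non-blocking descriptor: read until EAGAIN/EWOULDBLOCK
// (nothing more ready right now) or end of stream. This is the kind of
// immediate follow-up operation that needs no separate readiness
// notification, because the prior completion already implied readiness.
std::string drain_nonblocking(int fd)
{
  std::string data;
  char buf[4096];
  for (;;)
  {
    ssize_t n = ::read(fd, buf, sizeof(buf));
    if (n > 0)
      data.append(buf, static_cast<std::size_t>(n));
    else if (n == 0)
      break; // end of stream
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
      break; // no more data ready: stop rather than block
    else
      break; // real error; a fuller version would report it
  }
  return data;
}
```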
> What about parametrizing the buffered_stream with a container
> type, and providing an accessor to this container? The
> container buffer can then be swap()ed, splice()ed, reset()ed,
> fed to algorithms, and much more without any copying, while
> still preserving the stream interface. Instead of a buffered
> stream you can think of it as a stream adaptor for
> containers. I happily used one in my library, and it really
> simplifies code, along with a deque that provides segmented
> iterators to the contiguous buffers.
I think a separate stream adapter for containers sounds like a
good plan. BTW, is there a safe way to read data directly into a
deque? Or do you mean that the deque contains multiple buffers?
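For concreteness, a bare-bones sketch of such a container stream adaptor (hypothetical names, not a proposed interface): writes append to the container, reads consume from its front, and the container itself is exposed so it can be swap()ed out without copying.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <deque>

// Illustrative container stream adaptor. The container type is a
// template parameter, and the accessor allows zero-copy handoff of the
// buffered data via Container::swap().
template <typename Container>
class container_stream
{
public:
  // Direct access to the underlying container, e.g. for swap().
  Container& container() { return data_; }

  std::size_t write_some(const char* buf, std::size_t len)
  {
    data_.insert(data_.end(), buf, buf + len);
    return len;
  }

  std::size_t read_some(char* buf, std::size_t len)
  {
    std::size_t n = std::min<std::size_t>(len, data_.size());
    std::copy(data_.begin(), data_.begin() + n, buf);
    data_.erase(data_.begin(), data_.begin() + n);
    return n;
  }

private:
  Container data_;
};
```

With a std::deque<char> as the container, the swap gives the caller the whole buffered sequence without any element copying, which is the "much more without any copying" property described above.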
> Actually you don't want to integrate it with the socket
> functions. Just because you have a stack does not mean you
> want to limit yourself to sync functions. For example, here
> is the code of a forwarding continuation; it reads from a
> stream and writes to another:
> It might look somewhat complicated, but the implementation is
> straightforward, about a thousand lines of code.
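The continuation code itself is not reproduced in the thread; a toy model of the pattern being described (with a stand-in stream that completes operations immediately, all names hypothetical) might look like:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Stand-in for an async stream: completes reads/writes immediately.
struct toy_stream
{
  std::string data; // source or sink
  void async_read(std::string& buf, std::function<void(bool)> h)
  {
    std::size_t n = std::min<std::size_t>(data.size(), 4);
    buf = data.substr(0, n);
    data.erase(0, n);
    h(n != 0); // false signals end of stream
  }
  void async_write(const std::string& buf, std::function<void(bool)> h)
  {
    data += buf;
    h(true);
  }
};

// Forwarding continuation: each read completion starts a write, and
// each write completion starts the next read, reusing one buffer.
void forward(toy_stream& in, toy_stream& out, std::string& buf)
{
  in.async_read(buf, [&](bool more) {
    if (!more) return; // source exhausted
    out.async_write(buf, [&](bool) { forward(in, out, buf); });
  });
}
```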
> I'm already thinking of possible extensions... shared_buffers,
> gift_buffers (I need a better name for the last one) and more.
I take it that by "shared_buffers" you mean reference-counted
buffers.
If so, one change I'm considering is to add a guarantee that a
copy of the Mutable_Buffers or Const_Buffers object will be made
and kept until an asynchronous operation completes. At the
moment a copy is only kept until it is used (which for Win32 is
when the overlapped I/O operation is started, not when it ends).
However, this may make using a std::vector<> or std::list<> of
buffers too inefficient, since a copy must be made of the entire
vector or list object. I will have to do some measurements
before making a decision, but it may be that supporting
reference-counted buffers is a compelling enough reason.
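The lifetime guarantee that reference counting buys can be sketched as follows (a toy async_write stand-in, not asio's interface): the completion handler shares ownership of the storage, so the buffer stays alive until the operation completes even if the initiator drops its reference.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <vector>

// Stand-in for a pending asynchronous operation: the handler is stored
// and invoked later, when the operation "completes".
struct pending_op
{
  std::function<void()> handler;
};

// The stored handler captures the shared_ptr, so the buffer's storage
// cannot be freed before completion -- without copying the buffer's
// contents, and without copying a whole vector of buffer descriptors.
void async_write_stub(pending_op& op,
                      std::shared_ptr<std::vector<char>> buffer,
                      std::function<void()> on_complete)
{
  op.handler = [buffer, on_complete]() { on_complete(); };
}
```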
> Btw, did you consider my proposal for multishot calls?
I have now :)
I think that what amounts to the same thing can be implemented
as an adapter on top of the existing classes. It would work in
conjunction with the new custom memory allocation interface to
reuse the same memory. In a way it would be like a simplified
interface to custom memory allocation, specifically for
recurring operations. I'll add it to my list of things to do.
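The shape of such a multishot adapter might be roughly this (a toy sketch with an immediately-completing source; all names are hypothetical): one user call installs a per-completion handler, and the adapter re-initiates the operation from inside its own completion handler, reusing the same buffer every time.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Stand-in for an async stream: completes each read immediately,
// reporting false when there is no more data.
struct toy_source
{
  std::vector<std::string> chunks;
  std::size_t next = 0;
  void async_read(std::string& buf, std::function<void(bool)> h)
  {
    bool ok = next < chunks.size();
    if (ok) buf = chunks[next++];
    h(ok);
  }
};

// "Multishot" adapter: started once, it delivers one callback per
// completed operation and re-arms itself, with no per-operation
// allocation -- buffer_ is reused across every iteration.
class multishot_read
{
public:
  multishot_read(toy_source& s, std::function<void(const std::string&)> h)
    : source_(s), handler_(h) {}

  void start() { issue(); }

private:
  void issue()
  {
    source_.async_read(buffer_, [this](bool ok) {
      if (!ok) return;   // stream exhausted: stop re-arming
      handler_(buffer_); // deliver this completion
      issue();           // re-arm, reusing the same buffer
    });
  }

  toy_source& source_;
  std::function<void(const std::string&)> handler_;
  std::string buffer_; // reused across all operations
};
```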
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk