
From: Giovanni P. Deretta (gpderetta_at_[hidden])
Date: 2005-12-30 23:04:47


It is quite late here, so I'll just write a quick reply
on some points...

>
> Basically the protocol class can become a template parameter of
> basic_stream_socket. Then, for example, the asio::ipv4::tcp
> class would be changed to include a socket typedef:
>
> class tcp
> {
> public:
>   ...
>   class endpoint;
>   typedef basic_stream_socket<tcp> socket;
> };
>
> Then in user code you would write:
>
> asio::ipv4::tcp::socket sock;
>
> Now any constructor that takes an io_service (the new name for
> demuxer) is an opening constructor. So in basic_stream_socket
> you would have:
>
> template <typename Protocol, ...>
> class basic_stream_socket
> {
>   ...
>   // Non-opening constructor.
>   basic_stream_socket();
>
>   // Opening constructor.
>   explicit basic_stream_socket(io_service_type& io,
>       const Protocol& protocol = Protocol());
>
>   // Explicit open.
>   void open(io_service_type& io,
>       const Protocol& protocol = Protocol());
>   ...
> };
>
> This basic_stream_socket template would in fact be an
> implementation of a Socket concept. Why is this important?
> Because it improves portability by not assuming that a
> Protocol::socket type will actually be implemented using the
> platform's sockets API. Take the Bluetooth RFCOMM protocol for
> example :
[...]

This change is fundamental, I think: basic_stream_socket should be a
template that can be used generally to implement all types of protocols,
simply by defining the protocol policy. It is more or less what I do in
my network lib.
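
For instance, a minimal sketch of what such a protocol policy could
look like (the rfcomm class and the numeric constants are hypothetical,
just to show the shape):

class rfcomm
{
public:
  class endpoint;
  typedef basic_stream_socket<rfcomm> socket;

  // Values forwarded to the socket implementation; nothing in the
  // interface itself assumes the BSD sockets API.
  int family() const { return 31; }   // AF_BLUETOOTH
  int type() const { return 1; }      // SOCK_STREAM
  int protocol() const { return 3; }  // BTPROTO_RFCOMM
};

// User code then reads just like the tcp example above:
//   rfcomm::socket sock(io, rfcomm());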

>
> The only wrinkle is in something like accepting a socket. At the
> moment the socket being accepted must not be open. That's fine,
> except that I think it is important to allow a different
> io_service (demuxer) to be specified for the new socket to the
> acceptor, to allow partitioning of work across io_service
> objects. I suspect the best way to do this is to have overloads
> of the accept function:
>
> // Use same io_service as acceptor.
> acceptor.accept(new_socket);
>
> // Use separate io_service for new socket.
> acceptor.accept(new_socket, other_io_service);
>
> An alternative is to require the socket to be opened before
> calling accept (this is what Windows does with AcceptEx) but I
> think that makes the common case less convenient.
>

Just an idea: in the Windows case, the socket_impl to be accepted might
not actually be stored in the socket object, but inside an internal
cache in the acceptor or in the io_service. When accept is called, an
already opened socket_impl is assigned to the socket object. When the
socket object is closed, its socket_impl is instead returned to the
cache. I don't really know much about Winsock, so I can't say how
feasible this is.
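
A rough sketch of the pooling idea (all names here are invented, and
the handle type is just a stand-in):

#include <vector>

typedef int socket_impl; // stands in for the platform's handle type

class socket_impl_cache
{
public:
  // Hand out an already-opened impl, reusing a pooled one if possible.
  socket_impl acquire()
  {
    if (pool_.empty())
      return open_new_impl();
    socket_impl impl = pool_.back();
    pool_.pop_back();
    return impl;
  }

  // Called when a socket object is closed: recycle instead of closing.
  void release(socket_impl impl)
  {
    pool_.push_back(impl);
  }

private:
  socket_impl open_new_impl()
  {
    // Real code would create an open handle here, e.g. via ::socket().
    return socket_impl();
  }

  std::vector<socket_impl> pool_;
};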

[...]
>
>>By the way, the implementation of socket functions (accept,
>>connect, read, write, etc) should not be members of the
>>demuxer service but free functions or, better, static members
>>of a policy class.
>
>
> I do think these functions at the lowest level should be member
> functions rather than free functions, particularly because in
> the case of async functions it makes it clearer that the result
> will be delivered through the associated io_service (demuxer).
>

Maybe I wasn't very clear. I'm not saying that the
write_some/read_some functions should be free functions (although that
wouldn't be so bad); I'm saying that these functions, instead of
forwarding to non-static member functions of the underlying service,
should forward to static member functions of the protocol class. This
would decouple the demuxer from the I/O functions.
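
Concretely, something along these lines (a sketch only; the static
write_some hook and the impl_type member are my assumptions, not asio's
actual internals):

#include <cstddef>

template <typename Protocol>
class basic_stream_socket
{
public:
  template <typename Buffers>
  std::size_t write_some(const Buffers& buffers)
  {
    // Forward to a static member of the protocol class rather than to
    // a non-static member of the demuxer service.
    return Protocol::write_some(impl_, buffers);
  }

private:
  typename Protocol::impl_type impl_;
};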

[...]
>
>>The current interface lets the user put a socket in
>>non-blocking mode, but there is not much that can be done
>>with that, because no reactor is exported.
>
>
> I think non-blocking mode can still be useful in asynchronous
> and synchronous designs, since it allows you to issue an
> operation opportunistically.
>

Of course non-blocking operations are useful, but as there is no public
readiness-notification interface (i.e. a reactor), their use is somewhat
complicated.

>>The various reactors/proactor implementations should be
>>removed from the detail namespace and be promoted to public
>>interfaces, albeit in their own namespace (i.e.
>>win32::iocp_proactor, posix::select_reactor,
>>linux::epoll_reactor etc...). This change will make the
>>library a lot more useful.
>
>
> Over time perhaps, but these are already undergoing changes as
> part of performance changes (without affecting the public
> interface). They are also secondary to the portable interface,
> so there are many costs associated with exposing them.
>

Sure. I expect this change to be made in future versions of the
library, when both the interface and the implementation stabilize; it
is not immediately needed.

>
>>The buffered stream is almost useless. Any operation requires
>>two copies, one from kernel to user space, and one from the
>>internal buffer to the user buffer.
>
>
> Well yes, but you're trading off the cost of extra copies for
> fewer system calls.
>
>
>>The internal buffer should be unlimited in length (using some
>>kind of deque)
>
>
> I don't think it is helpful to allow unlimited growth of
> buffers, especially with the possibility of denial of service
> attacks.
>

Well, unlimited growth should be possible in principle, but the
buffered stream might provide some way to set a maximum length.

>
>>and accessible to eliminate copies. An interface for i/o that
>>does not require copying would be generally useful and not
>>limited to buffered streams.
>
>
> In the past I exposed the internal buffer, but removed it as I
> was unhappy with the way it was presented in the interface. I'm
> willing to put it back if there is a clean, safe way of exposing
> it.
>

What about parametrizing the buffered_stream with a container type and
providing an accessor to this container? The container buffer can then
be swap()ed, splice()ed, reset(), fed to algorithms, and much more,
without any copying, while still preserving the stream interface.
Instead of a buffered stream you can think of it as a stream adaptor
for containers. I have happily used one in my library (along with a
deque that provides segmented iterators over its contiguous buffers),
and it really simplifies code.
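
As a sketch, the adaptor could be as simple as this (the names are
invented; the point is only the direct, copy-free access to the
container):

template <typename Stream, typename Container>
class container_stream
{
public:
  explicit container_stream(Stream& next) : next_(next) {}

  // Direct access to the underlying container: it can be swap()ed,
  // spliced, fed to algorithms, etc., without any copying.
  Container& container() { return buffer_; }

  // Hand the whole buffered content to the caller without copying.
  void swap_buffer(Container& other) { buffer_.swap(other); }

private:
  Stream& next_;      // the wrapped stream, e.g. a stream_socket
  Container buffer_;  // e.g. a deque with segmented iterators
};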

>
>>While I didn't use all of it (no timers nor SSL), as an
>>experiment I did write a simple continuation library using
>>asio::demuxer as a scheduler and using the asio callback pattern
>>to restart coroutines waiting for i/o. Asio's demuxer cleanness
>>and callback guarantees made the implementation very
>>straightforward. If someone is interested I may upload the
>>code somewhere. Currently it is posix only (it uses the
>>makecontext family of system calls), but it should be fairly
>>easy to implement win32 fiber support.
>
>
> This sounds very interesting, especially if it was integrated
> with socket functions somehow so that it automatically yielded
> to another coroutine when an asynchronous operation was started.
>

Actually, you don't want to integrate it with the socket functions.
Just because you have a stack does not mean you want to limit yourself
to synchronous functions. For example, here is the code of a forwarding
continuation; it reads from one stream and writes to another:

void forwarder(int counter, asio::stream_socket& sink,
               asio::stream_socket& source) {
  ....

  condition_node main_loop(scheduler, continuation::wait_any);

  const std::size_t token_size = /* some size */;
  char token[token_size];
  // An empty optional means the operation is still in flight; an
  // engaged one holds its completion status. Both start engaged
  // (success) so that the first iteration issues both operations.
  boost::optional<asio::error> read_error = asio::error();
  boost::optional<asio::error> write_error = asio::error();
  std::size_t write_size = 0;
  std::size_t read_size = 0;

  while (counter) {
    if (write_error) {
      if (*write_error)
        break;
      write_error.reset(); // mark the write as pending
      asio::async_write(sink,
                        asio::buffer(token, token_size),
                        scheduler.current_continuation
                          (main_loop.leaf(), write_error, write_size));
      counter--;
    }

    if (read_error) {
      if (*read_error)
        break;
      read_error.reset(); // mark the read as pending
      asio::async_read(source,
                       asio::buffer(token, token_size),
                       scheduler.current_continuation
                         (main_loop.leaf(), read_error, read_size));
    }

    main_loop.wait(); // yield until one of the two operations completes
  }
  sink.close();
  main_loop.join_all();
  continuation::exit();
}

I'm using a variant (here, a boost::optional) to hold the error code,
so I can see whether an async call has finished (by the way, asio works
very well with error variants, and this pattern looks promising). The
scheduler is a global object (it need not be so; it is just for
simplicity) that is simply a list of ready coroutines. I could use the
demuxer itself as the scheduler, but then I would need some extra
context switches; using two schedulers is better. current_continuation
returns a functor that, when called, will signal a condition object.
The condition object is created by a condition node and is linked to
it. When a condition object is signaled, it signals its parent
condition node. What the node does depends on its current mode: if it
is in wait_any mode, it will immediately queue the current continuation
at the end of the scheduler queue; if it is in wait_all mode, it will
queue the continuation only when all children are signaled. Multiple
nodes can be nested, and a complex tree can be built. Calling wait() on
any node (or even on a leaf) will remove the coroutine from the ready
list and run the next ready coroutine.
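
For example (reusing the names from the forwarder above; the
constructor for nesting a node under a parent is my assumption), a tree
that resumes once a read has completed and at least one of two writes
has completed might look like:

condition_node root(scheduler, continuation::wait_all);
condition_node either(root, continuation::wait_any);

asio::async_read(source, asio::buffer(token, token_size),
    scheduler.current_continuation(root.leaf(), read_error, read_size));
asio::async_write(sink1, asio::buffer(token, token_size),
    scheduler.current_continuation(either.leaf(), e1, s1));
asio::async_write(sink2, asio::buffer(token, token_size),
    scheduler.current_continuation(either.leaf(), e2, s2));

root.wait(); // resumes when the read and at least one write are done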

By themselves, condition objects do not hold any state; it is the
combination of a condition and a variant that makes them very powerful.
You might think of them as futures.

It might look somewhat complicated, but the implementation is
straightforward: about a thousand lines of code.

[here followed a rant of mine about lazy allocation of buffers...]
>
> With a slight tightening of the use of the Mutable_Buffers
> concept, I think this is already possible :)
>
> The Mutable_Buffers concept is an iterator range, where the
> value_type is required to be a "mutable_buffer or be convertible
> to an instance of mutable_buffer". The key word here is
> "convertible". As far as I can see, all you need to do is write
> an implementation of the Mutable_Buffers concept using a
> container of some hypothetical lazy_mutable_buffer class. It
> only needs to provide a real buffer at the time when the
> value_type (lazy_mutable_buffer) is converted to mutable_buffer.
>

Hmm, you are almost certainly right; I will experiment with it one of
these days. By the way, my issues with the mutable buffers arose
because I misunderstood them as wrappers around an iovec (I should have
read the implementation); I missed the word *concept*. Now I have
changed my mind, and I think it is very powerful. In my library I've
used iterators to delimit I/O ranges, but as two different parameters.
Wrapping them in a pair or a range makes them much more useful and
extensible... I'm already thinking of possible extensions...
shared_buffers, gift_buffers (I need a better name for the last one)
and more.
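
To make the quoted suggestion concrete, a lazy buffer could look
roughly like this (lazy_mutable_buffer is the hypothetical class named
in the quote; the vector-backed storage is my choice):

#include <asio.hpp>
#include <cstddef>
#include <vector>

class lazy_mutable_buffer
{
public:
  lazy_mutable_buffer(std::vector<char>& storage, std::size_t size)
    : storage_(&storage), size_(size) {}

  // The allocation happens only here, i.e. when asio converts the
  // value_type of the Mutable_Buffers range into a mutable_buffer.
  operator asio::mutable_buffer() const
  {
    storage_->resize(size_);
    return asio::mutable_buffer(&(*storage_)[0], size_);
  }

private:
  std::vector<char>* storage_;
  std::size_t size_;
};

// A std::vector<lazy_mutable_buffer> then models the Mutable_Buffers
// concept, and no memory is committed until the operation runs.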

Btw, did you consider my proposal for multishot calls?

> Thanks very much for your comments!

You are very welcome; I hope I've been helpful.

---
Giovanni P. Deretta
