
From: christopher baus (christopher_at_[hidden])
Date: 2005-12-15 22:22:32


> I happen to think that the fact many people have been trained to think
> about sockets synchronously is most definitely for the worse.

Totally agree.

>> I just want to point out that async reading and writing could
>> be a function of an async I/O demuxer and not the socket
>> itself. The socket could be passed to the demuxer and not
>> vice versa.
> The above design is something I can fundamentally disagree with
> ;) That sort of design, in my opinion, relegates asynchronous
> operations to "second class citizens".

I'm not religious about this, but it could be argued that it puts sync and
async on the same level. For instance, you could think of the POSIX interface
as os::sync_io::write(fd, buffer, bytes), with the os::sync_io part simply
left implicit. My interface would then be
os::async_demuxer::write(fd, buffer, bytes).
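
Roughly what I have in mind (a loose sketch; the namespaces and signatures
are hypothetical, not something that exists today):

#include <cstddef>
#include <unistd.h> // ::write, ssize_t

namespace os {

namespace sync_io {
// The familiar blocking write, just given an explicit home.
inline ssize_t write(int fd, const void* buffer, std::size_t bytes)
{
  return ::write(fd, buffer, bytes);
}
} // namespace sync_io

class async_demuxer
{
public:
  typedef void (*completion)(ssize_t bytes_written);

  // The descriptor is handed to the demuxer, not the demuxer to the socket.
  void write(int fd, const void* buffer, std::size_t bytes, completion done)
  {
    // A real demuxer would register the descriptor with select/epoll/IOCP
    // and complete later; completing immediately just shows the call shape.
    done(sync_io::write(fd, buffer, bytes));
  }
};

} // namespace os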

But honestly this isn't something that concerns me.

> There is a relatively straightforward mapping between one and
> the other. In the future I want to exploit this mapping further
> by investigating the use of expression templates (perhaps
> similar to Boost.Lambda) to permit the encoding of a sequence of
> operations in a synchronous programming style. The operations
> could then be executed either synchronously or asynchronously as needed.

Not to take away from asio (which I think is great, BTW), but before jumping
into a discussion of expression templates, I want to point out that accepting
handlers as template parameters and passing them by value isn't free in asio.
The end result is a dynamically allocated copy of the handler per operation,
in a system that may be handling thousands of operations per second.

This is going to make any pooling inefficient, because the library doesn't
know the size or number of handlers until run time (what happens if one large
handler is queued alongside a bunch of small ones?).
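
The pattern is roughly the following (a simplified sketch of the idea, not
asio's actual internals): the handler's concrete type has to be erased behind
a common base before it can sit in the demuxer's queue, and its size isn't
known until the call site, so every enqueue costs a heap allocation sized by
the handler.

#include <queue>

struct operation_base
{
  virtual ~operation_base() {}
  virtual void complete(int result) = 0;
};

template <typename Handler>
struct operation : operation_base
{
  explicit operation(const Handler& h) : handler_(h) {} // value copy of handler
  void complete(int result) { handler_(result); }
  Handler handler_; // size varies with whatever the handler captures
};

class op_queue
{
public:
  template <typename Handler>
  void enqueue(const Handler& h)
  {
    ops_.push(new operation<Handler>(h)); // one heap allocation per operation
  }

  void run(int result)
  {
    while (!ops_.empty())
    {
      operation_base* op = ops_.front();
      ops_.pop();
      op->complete(result);
      delete op;
    }
  }

  ~op_queue()
  {
    while (!ops_.empty()) { delete ops_.front(); ops_.pop(); }
  }

private:
  std::queue<operation_base*> ops_;
};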

The handlers are ultimately invoked through a function pointer, so there is no
performance advantage to passing handler functors by value, and the deferred
nature of async I/O makes compile-time binding impossible (OK, I shouldn't say
impossible with this group, but extremely unlikely).

I'm going to discuss this more in my review, but there is a significant
amount of overhead that is an artifact of the value semantics. I think a
parallel interface that accepts handlers by reference (or pointer) should be
considered. The user could then specify the same handler instance for multiple
operations, and asio could pool fixed-size queue nodes. I suspect that in
applications handling a large number of connections the result would be
improved reliability, better performance, and in some cases lower memory
usage.
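
A rough sketch of what I mean (the names are hypothetical, nothing asio
provides today): the handler is an object the caller keeps alive and passes by
pointer, so one instance can back any number of outstanding operations and
every queue entry is the same fixed size.

#include <cstddef>
#include <queue>

struct write_handler
{
  virtual ~write_handler() {}
  virtual void on_write_complete(int error, std::size_t bytes_transferred) = 0;
};

// Fixed-size node: the same layout for every operation, whatever the handler.
struct write_op
{
  int fd;
  const void* data;
  std::size_t length;
  write_handler* handler;
};

class demuxer_ref
{
public:
  // Queue a write; the caller guarantees the handler outlives the operation.
  void async_write(int fd, const void* data, std::size_t length,
      write_handler* h)
  {
    write_op op = { fd, data, length, h };
    ops_.push(op);
  }

  // Stand-in for the event loop: pretend every write completed in full.
  void run()
  {
    while (!ops_.empty())
    {
      write_op op = ops_.front();
      ops_.pop();
      op.handler->on_write_complete(0, op.length);
    }
  }

private:
  std::queue<write_op> ops_;
};

// One handler instance reused for every write on a connection.
struct connection : write_handler
{
  void on_write_complete(int /*error*/, std::size_t /*bytes_transferred*/)
  {
    // continue this connection's state machine
  }
};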

Anybody who's interested in the *nix implementation of this should look at
detail/reactor_op_queue.hpp, which is an interesting read. The allocation I'm
discussing happens in enqueue_operation(). Similar allocations happen in
win_iocp_socket_service.hpp and win_iocp_demuxer_service. I think the
allocation for value functors could be done in a cross-platform area of the
library, and a concrete handler object pointer could be passed to the
platform-specific implementations, which could allocate fixed-size structures
from pools.
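
For the platform-specific side, a free list along these lines is what I'm
picturing (again hypothetical, not code from asio's sources): once the handler
is just a pointer, every node is the same size, so released nodes can be
recycled without going back to the general-purpose allocator.

struct op_node
{
  int fd;
  void* handler; // concrete handler object, owned by the caller
  op_node* next; // free-list / queue linkage
};

class op_node_pool
{
public:
  op_node_pool() : free_list_(0) {}

  ~op_node_pool()
  {
    while (op_node* n = free_list_) { free_list_ = n->next; delete n; }
  }

  op_node* allocate()
  {
    if (op_node* n = free_list_) // reuse a previously released node
    {
      free_list_ = n->next;
      return n;
    }
    return new op_node(); // grow only when the free list is empty
  }

  void release(op_node* n) // O(1), never calls back into operator delete
  {
    n->next = free_list_;
    free_list_ = n;
  }

private:
  op_node* free_list_;
};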

Other than that concern with memory management, I become more impressed by
the library the more time I spend with it. It is really a great piece of
work.

Cheers,

Christopher

