
From: Christopher Kohlhoff (chris_at_[hidden])
Date: 2005-12-13 20:08:30


Hi Matt,

--- Matthew Vogt <mattvogt_at_[hidden]> wrote: <snip>
> I don't really follow the intent of the locking dispatcher.
It appears to me that it simply achieves the same effect as
> adding a scoped lock to each dispatched method (each using the
> same mutex).

It's not quite the same thing. A scoped lock in each dispatched
handler means that the method may be blocked while waiting to
acquire the mutex. A blocked handler means that other handlers
may also be prevented from executing even though they are ready
to go.

The locking_dispatcher ensures that the handler won't even be
dispatched until the lock is acquired. This means that the
execution of other handlers can continue unimpeded.

I'd also like to note here that I intend asio to allow and
encourage (although not enforce) the development of applications
without any explicit locking of mutexes.

<snip>
> The locking_dispatcher's 'wrap' method seems to be its primary
> purpose. Since the terminology is a little unnatural, perhaps
> it could be operator() ?

The wrap() function returns a new function object that wraps the
provided one. It doesn't actually execute code at that point, so
I'm not sure operator() would convey that meaning.

> I would like to see a user-supplied handler option for
> handling genuine errors, not per-call but at a higher level
> (maybe per-demuxer?) Typically, the response to an error is
> to close the socket, once open; it would be handy to supply a
> callback which received enough context to close down a socket,
> and in all other code to ignore error-handling completely.

I'm not convinced that this belongs as part of asio's interface,
since there are a multitude of ways to handle errors. For
example, there's the issue of what happens in async operations,
such as asio::async_read, that are composed of other async
operations. You wouldn't want the application-specific error
handling to be invoked until the composed operation had finished
what it was doing.

I think the way to go is to use function object composition at the
point where you start the asynchronous operation, e.g.:

  async_recv(s, bufs, add_my_error_handling(my_handler));

or perhaps:

  async_recv(s, bufs,
    combine_handlers(error_handler, ok_handler));

Also if the default behaviour you want is to close the socket
(although often there's an associated application object to be
cleaned up too) you can simply bind a shared_ptr to the object
as a parameter to your handlers:

  async_recv(s, bufs,
    boost::bind(handler, connection_ptr, ...));

Essentially the object is owned by the chain of operations. When
the chain of async operations and their handlers is terminated
due to an error (or any other condition) the object is cleaned
up automatically.

> I do not think EOF should constitute an error, and I would
> expect that try-again and would-block errors could be safely
> ignored by the user. Perhaps EOF should be a testable state
> of the socket, rather than a potential error resulting from
> 'receive'.

In older versions of asio EOF was indicated by having a read
return 0, the same as in the BSD sockets interface. However,
this increased complexity in composed synchronous and
asynchronous operations, such as the now-defunct async_read_n(),
which had to return both the total number of bytes transferred
*and* the bytes transferred by the last read to allow checking
for EOF.

Now with EOF as an error the problem can be addressed far more
clearly and elegantly. For example, consider:

  asio::read(socket, buffers, asio::transfer_all());

Its "contract" is to fill the buffers, using multiple underlying
read_some calls as necessary. If it is unable to fulfill the
contract, it must return with an error indicating why. If the
stream is closed early, then that error is EOF.

> Can the proactor be made to work with multiple event sources,
> with the current interface? For example, wrapping
> simultaneous network IO and aio-based file IO on POSIX?

Yes, although in the current implementation one of them must be
relegated to a background thread.

<snip>
> I believe there should be an automatically managed buffer-list
> class, to provide simple heap-based, garbage-collected buffer
> management, excluding the possibility of buffer overruns and
> memory leaks. Perhaps the one in the Giallo library could be
> used without significant modification, and supported by the
> asio buffer primitives. Also, this abstraction should be used
> in tutorials; although it is useful and practical to support
> automatically allocated buffer storage, it shouldn't be
> needlessly encouraged.

I would rather see such a buffer-list utility developed as a
separate boost library. As you say, it can integrate easily
with asio provided it supports the asio buffer primitives, or if
it implements the Mutable_Buffers and Const_Buffers concepts.

However, I don't agree that automatically allocated buffer
storage shouldn't be encouraged. No heap allocation means no
leaks, predictable memory usage, less fragmentation, and so on;
it's one of the strengths of C++. Furthermore, it may be more appropriate
in many applications to use a buffer that is a data member of
some connection class, where the connection class as a whole is
garbage collected, not the buffer.

> I also would like to request some commentary (in the library
> documentation) on future developments that are out of scope
> for the current proposal. Given the late juncture at which
> we're looking at bringing network programming into boost, I
> think it's important to consider desirable extensions, and how
> the current proposal will support them. Much of this will be
> discussed in the review, of course.

What sort of things did you have in mind?

> One thing I would like to see (even in Asio itself) is a
> simplest-possible layer over the bare sockets layer, which
> took care of resource management and the intricacies of
> sockets programming. Ideally, this would leave an interface
> comparable to asynch IO programming in python and the like,
> suitable for the smallest networking tasks.

I'm not sure what you mean here. Did you mean to say that such a
layer would leave out asynch I/O? Otherwise if it includes
asynch I/O then that's the role that asio is already trying to
fill - i.e. be a thin yet portable asynchronous I/O abstraction.

<snip>

Cheers,
Chris


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk