From: Christopher Kohlhoff (chris_at_[hidden])
Date: 2005-12-05 15:07:52


Hi Simon,

--- simon meiklejohn <simon_at_[hidden]> wrote:
<snip>
> As an example, let's say I have a library component that parses
> data from a socket into some higher-level application message.
> When it finds a complete message, it wants to notify its
> client code.

I think this example perfectly illustrates why there is a difference
between the needs of the caller and callee...
 
> - In some applications it's appropriate to call immediately in
> the same thread (e.g. if the app has only one thread, which is
> blocking in the network layer)

This first case is about the caller, since the caller best knows
whether it is appropriate to make the call immediately in the same
thread. For example, an immediate call might no longer be appropriate
if the library were changed to use asynchronous I/O so it could handle
multiple concurrent connections, since blocking on the call would
degrade the service provided to other client code.

> - In other cases it's appropriate to defer to a single
> different thread (e.g. one with particular thread affinity, or
> the single thread that services all calls into a particular
> group of application objects, thus providing protection against
> deadlocks)

In this case you are talking about the needs of the callee, i.e. the
threads that service calls *into* the application objects.

> - In a third case it's better to pass the message off to a pool
> of threads for performance/responsiveness reasons (e.g. the
> task involves database accesses that take time and can
> be done in parallel).

This is about the needs of the caller again, since it does not want a
long-running operation to block it.

> The message-parsing library can be built to support all these
> scenarios. Just supply it with a different defer object when
> constructing and connecting the application objects. Hide the
> decision behind a polymorphically implemented demuxer::post().

The decision can't be hidden entirely behind such a beast, although it
may be part of the solution.

Let's say the parser library has the following definition:

  class parser {
    ...
    void register_callback(function<void(int)> callback);
    ...
    function<void(int)> callback_;
  };

If the parser, as caller, decides it is appropriate to call the
function directly then it need only go:

  callback_(42);

However if the caller requires that the call be deferred then it can
use an implementation of the dispatcher concept, such as asio::demuxer:

  parser_dispatcher_.post(boost::bind(callback_, 42));

The client's deferral needs are decoupled from the parser's. If the
client does not care what thread calls it, then it can go:

  parser.register_callback(my_callback);

However if the client has specific needs (such as that the calls must
come in only on a specific thread) then it would use its own dispatcher
to provide those guarantees:

  parser.register_callback(client_dispatcher_.wrap(my_callback));
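
Conceptually (this is only an approximation, not asio's actual
implementation), the functor returned by wrap() forwards the invocation
through the client's dispatcher, along these lines:

  // Illustrative sketch of what client_dispatcher_.wrap(my_callback)
  // might produce: a functor that routes the call through the client's
  // dispatcher instead of running the callback directly.
  struct wrapped_callback {
    asio::demuxer& dispatcher_;
    function<void(int)> callback_;
    void operator()(int value)
    {
      dispatcher_.dispatch(boost::bind(callback_, value));
    }
  };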

This is where the optimised dispatch() call (versus the always-deferring
post() function) comes into play. The client code is not aware of the
parser's deferral decision and vice versa. However, they may in fact
share the same dispatcher (such as an application-wide asio::demuxer
object), in which case you want the callback to be optimised into a
single deferral.
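
To make that concrete, here is a rough sketch (assuming callback_ was
registered as client_dispatcher_.wrap(my_callback), and that
parser_dispatcher_ and client_dispatcher_ refer to the same demuxer):

  // The outer dispatch() defers the handler at most once. When the
  // wrapped callback then calls dispatch() on the same demuxer, it is
  // already running inside that demuxer, so my_callback is invoked
  // directly rather than being deferred a second time.
  parser_dispatcher_.dispatch(boost::bind(callback_, 42));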

The thing that asio does not provide is a polymorphic wrapper for the
dispatcher concept, so that the choice of deferral mechanism can be made
at runtime. However, assuming one exists (and it wouldn't be hard to
create one), the parser interface might be:

  class parser {
    ...
    parser(polymorphic_dispatcher& d) : parser_dispatcher_(d) {}
    ...
    void register_callback(function<void(int)> callback);
    ...
    polymorphic_dispatcher& parser_dispatcher_;
    function<void(int)> callback_;
    ...
  };

Specifying which dispatcher the parser should use is separate from
supplying a callback function. It is still the parser's decision
whether or not it requires a deferred call using post().
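
For what it's worth, such a wrapper could be sketched roughly as follows
(polymorphic_dispatcher and dispatcher_adapter are illustrative names,
not part of asio):

  // Hypothetical type-erasing wrapper over the dispatcher concept.
  // Anything providing post() and dispatch() for void() handlers
  // (e.g. asio::demuxer) can be adapted.
  class polymorphic_dispatcher {
  public:
    virtual ~polymorphic_dispatcher() {}
    virtual void post(function<void()> handler) = 0;
    virtual void dispatch(function<void()> handler) = 0;
  };

  template <typename Dispatcher>
  class dispatcher_adapter : public polymorphic_dispatcher {
  public:
    explicit dispatcher_adapter(Dispatcher& d) : dispatcher_(d) {}
    void post(function<void()> handler) { dispatcher_.post(handler); }
    void dispatch(function<void()> handler) { dispatcher_.dispatch(handler); }
  private:
    Dispatcher& dispatcher_;
  };

The parser could then be constructed with, say, a
dispatcher_adapter<asio::demuxer>, while the client remains free to wrap
its callback using a completely different dispatcher.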

Cheers,
Chris

