From: Matthew Vogt (mvogt_at_[hidden])
Date: 2004-02-20 20:45:13
Darryl Green <Darryl.Green <at> unitab.com.au> writes:
> Hi Matty,
Hi! Sorry, I didn't make the name-to-person connection before.
> I don't see a great deal of use for the thread-pool based active object
> variant briefly described in the book, where the active object has a
> whole thread-pool to itself and the object needs its own internal
> locking. This seems to me to be an implementation detail of the object
> itself - nothing to do with the active object model. Further I have real
> difficulty imagining a circumstance in which I'd want all those threads
> sitting around dedicated to one AO (like I can talk...). However, it
> sounds like you are describing something different.
Yes, I know what you're saying, since I'm also used to the client/server
model where the concurrency is built in, rather than bolted on. (Apology
preemptively tendered to anyone who uses active objects and construes this as
a criticism.)
That said, I'm open to the idea that some problems are more easily solved by
concealing the concurrency behind object boundaries. Further, if the
object can perform tasks concurrently, and the scheduler is sufficiently
sophisticated to use a thread group, then you do in fact have a server.
The difference is that the communication is through in-process function
calls rather than via IPC or network channels.
> I can (if I try hard enough) imagine that I'd like to specify the level
> of concurrency allowed between methods of an object and have a
> scheduler clever enough to order the execution of those methods
> efficiently based on this (and probably request priority as well) on a
> (shared by multiple such AOs) thread pool.
I don't think it's really a question of allowing a concurrency level between
methods, but of permitting concurrency of execution while protecting the
object's internal resources.
I doubt that prioritisation can be used in the active object pattern, due to
the access model of C++ object interactions; you can't invoke two methods
on an object and have them executed in the opposite order to that in which
you called them.
Perhaps prioritisation of one client over another is useful, but that's not
obvious to me.
> A dynamic thread pool could then scale the number of threads used based
> on the actual observed/required concurrency.
> However, I suspect that the above isn't going to fly, if only because it
> is likely to make the scheduler a bottleneck. Anyway, nobody seems to
> want to give me a box with enough processors and I/O to make this much
> fun - and I can't think of anything (useful) to run on it
> Is that anything like what you had in mind?
No, I don't think so. If you take the characterisation I used earlier of
a server with an in-process function call interface, then it doesn't matter
at what scale you apply the pattern, and the scheduling doesn't need to be
a bottleneck.
I think it's quite generally applicable, although to use it you need to
approach a problem with a particular viewpoint.
Without using the active object pattern, you may think something like, "I'm
going to need a service to handle requests of type 'X', and I'll make that a
server using named pipes...", whereas with an active object wrapper, you might
think, "I'll write a class that does 'X', and if it later needs to be used from
multiple threads then I'll transform it into an active object to ensure
thread safety...". And subsequently, you might think "The class that does 'X'
is a bottleneck; I had better add some mutexes to it and schedule it with a
thread pool rather than have clients blocking on it..."
> Does anyone actually re-use some of the more exotic variations of these
> patterns often enough that they consider a framework for implementing
> them to be anything more than a (fun?) exercise?
I don't know, but I think it's more a question of the approach taken, rather
than the applicability of the concept.
And, it is a fun exercise!
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk