
From: Mark Blewett (boost_at_[hidden])
Date: 2004-02-26 06:19:55


At 02:41 26/02/2004, you wrote:

>scott <scottw <at> qbik.com> writes:
>
> > > Essentially, I send a message from my object to another
> > > object, and I receive
> > > the result not as a return value, but as a new message from
> > > the other object.
> >
> > Yes! This is that recess that I couldn't quite scratch. What
> > you describe is the essence of my "reactive objects". Still a
> > bit more stretching required but that is "the guts of it".
> >
> > A crude application of this technique might have proxy methods
> > called things such as:
> >
> > <paste>
> > // Proxy declarations for all methods of the active object
> > proxy<void (int)> non_void_with_param;
> > proxy<void (void)> non_void_without_param;
> > ..
> >
> > proxy<void (int)> non_void_with_param_returned;
> > proxy<void (const char *)> non_void_without_param_returned;
> > ..
> > <paste/>
> >
> > An immediate reaction might go something like "but look at the
> > overheads!". The plain truth is that for successful interaction
> > between threads, something of this nature is a pre-requisite. It
> > may as well be the mechanism that has already been crafted for the
> > job.
> >
>
>Yep, there's gotta be mutexes somewhere.
>
>The mechanism you're referring to is the task_queue of fully-parameterised
>method invocations? So, how precisely does this work? Say I have a scheduler
>that has performed some work via a method in a servant S, and that method
>produced a result of type int. I want to return that result to the caller C,
>but the caller may not be threaded (may not be associated with any scheduler).
>Does that mean that instead of queueing the response to the object, I will
>perform some type of registered action in C, in the same thread context as the
>method invocation of S?
>
>If not, and I place the result into a task_queue of some sort in C, how does
>another scheduler object become aware that C has a result in its queue, and
>that something should be done with it?

Sorry, I haven't been following this thread too closely (not enough time in
the day, what with work as well).

I thought it might be useful to post some pseudo-code for a "pattern" that
I'm using more and more, in case it's of any use or gives someone a new idea:

[sorry about the roughness.. and simplified code!]

// (assumes #include <queue>, plus message, criticalsection and the
//  callbackbase/callback helpers; still simplified!)
class scheduler;

class servant
{
public:
        servant(scheduler* s)
                : m_scheduler(s), m_cs(), m_queue(), m_queue_is_busy(false) {}

        void post_message(message* m) {
                criticalsection::lock lock(m_cs);
                m_queue.push(m);
                if (!m_queue_is_busy) {
                        m_queue_is_busy = true;
                        m_scheduler->post_activation_request(
                                new callback<servant>(this, &servant::dispatch));
                }
        }

        void dispatch() {
                message* m = 0;
                {
                        criticalsection::lock lock(m_cs);
                        m = m_queue.front();
                        m_queue.pop();
                }

                // Do something with m
                delete m;

                {
                        criticalsection::lock lock(m_cs);
                        if (m_queue.empty()) {
                                m_queue_is_busy = false;
                        } else {
                                // more work waiting: ask the scheduler for another go
                                m_scheduler->post_activation_request(
                                        new callback<servant>(this, &servant::dispatch));
                        }
                }
        }

private:
        scheduler* m_scheduler;
        criticalsection m_cs;
        std::queue<message*> m_queue;
        bool m_queue_is_busy;
};

class scheduler
{
public:
        scheduler()
                : m_cs(), m_queue(), m_terminate(false) {
        }

        void post_activation_request(callbackbase* c)
        {
                criticalsection::lock lock(m_cs);
                m_queue.push(c);
                // ...and wake one of the worker threads
        }

        // this function is run by a pool of threads (or on win32 an iocp)
        void run()
        {
                while (!m_terminate) {

                        callbackbase* c = 0;

                        // wait for an entry on m_queue, then take it off

                        {
                                criticalsection::lock lock(m_cs);
                                c = m_queue.front();
                                m_queue.pop();
                        }

                        (*c)(); // calls the dispatch method of the relevant servant instance
                        delete c;
                }
        }

private:
        criticalsection m_cs;
        std::queue<callbackbase*> m_queue;
        bool m_terminate;
};
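
(The callbackbase / callback<> helpers aren't shown above; roughly, what I
have in mind is the usual type-erased "object plus member function" task,
something like the sketch below. The real ones in my project are a little
fancier.)

class callbackbase
{
public:
        virtual ~callbackbase() {}
        virtual void operator()() = 0;
};

template<class T>
class callback : public callbackbase
{
public:
        callback(T* object, void (T::*method)())
                : m_object(object), m_method(method) {}

        // invoked by scheduler::run, in one of the pool threads
        void operator()() { (m_object->*m_method)(); }

private:
        T* m_object;
        void (T::*m_method)();
};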

For me this has a number of advantages:

- separation of threads and objects, i.e. no one-thread-per-object

- a guarantee that only one scheduler thread is inside a servant at any one
time, so little effort is required for the servant to be thread-safe (unless
it is communicating with other servants or threads, of course)

- I normally implement scheduler::run using a Win32 I/O completion port, so
the context-switching overhead is minimal and it scales well (my current
project handles 2000 sockets with just a handful of threads)

- though I've shown messages being posted to a servant, they could be
callback objects which call a servant function with parameters (in fact, in
my current project I use both messages and callbacks; see the sketch below)
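
(As a sketch of that last point, a one-argument flavour of the callback above
could capture the parameter when it is queued and hand it to the servant's
member function when the scheduler runs it; untested, and simplified in the
same way as the code above:)

template<class T, class Arg>
class callback1 : public callbackbase
{
public:
        callback1(T* object, void (T::*method)(Arg), Arg arg)
                : m_object(object), m_method(method), m_arg(arg) {}

        // calls the member function with the argument captured at queue time
        void operator()() { (m_object->*m_method)(m_arg); }

private:
        T* m_object;
        void (T::*m_method)(Arg);
        Arg m_arg;
};

A servant used that way would hold a std::queue<callbackbase*> instead of (or
as well as) the message queue, and dispatch() would just do (*c)() on each
entry before deleting it.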

I'm currently digesting the rest of this post; I haven't needed to
implement any return values... yet!
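
(If I did need them, my first guess would be that a "return" is just another
queued activation, this time aimed at the caller's servant. A completely
untested sketch, assuming a servant variant with a hypothetical
post_callback(callbackbase*) member that queues callbacks the same way
post_message queues messages, plus the callback1 sketch above; constructors
and scheduler wiring omitted:)

class caller : public servant
{
public:
        // runs later, in the caller's servant context, one thread at a time
        void got_value(int value) { /* use the result */ }
};

class callee : public servant
{
public:
        // queued by the caller; 'reply_to' says where the answer should go
        void access_value(caller* reply_to)
        {
                int result = compute(); // runs in the callee's context

                // the "return" is just another activation, queued on the caller
                reply_to->post_callback(
                        new callback1<caller, int>(reply_to, &caller::got_value, result));
        }

private:
        int compute() { return 42; }
};

// The caller would kick the round trip off with something like:
//   other->post_callback(new callback1<callee, caller*>(other, &callee::access_value, this));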

Regards
Mark

> > If my scrappy example is taken "as is", the implied active objects
> > would effectively be "hard-coded" to interact with each other. This
> > is a non-viable design constraint that is relaxed by adding the
> > "callers address" to the queued "tasks". With this additional info
> > the thread "returns" (i.e. "a new message from the other object")
> > may be directed to any instance.
> >
>
>The proxy objects I was using before assumed that the return would be passed
>via a future reference. Perhaps they could be adjusted for your pattern
>so that they invoked a method (or proxy) on the caller, of type
>'boost::function<void (result_type)>', which is registered when the original
>proxy is invoked?
>
>E.g.
>
>struct Callee : public SomeActiveObjectBase
>{
>    ...
>    Proxy<int, void> accessValue;
>};
>
>struct Caller : public SomeActiveObjectBase
>{
>    void reportValue(int value)
>    {
>        std::cout << "Got " << value << " back from callee" << std::endl;
>    }
>
>    void someMethod(Callee* other)
>    {
>        other->accessValue(boost::bind(&Caller::reportValue, this, _1));
>    }
>};
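
[Thinking aloud while reading: in terms of my scheduler/servant sketch, a
Proxy of that sort might boil down to something like the following.
Completely untested; post_callback is the hypothetical member mentioned
earlier, function_callback is just an adapter so a bound boost::function can
sit in the callback queue, and it needs <boost/function.hpp> and
<boost/bind.hpp>.]

// adapter so any nullary function object can go into the callbackbase queue
class function_callback : public callbackbase
{
public:
        explicit function_callback(boost::function<void ()> f) : m_f(f) {}
        void operator()() { m_f(); }
private:
        boost::function<void ()> m_f;
};

// invoking the proxy queues the real call on the owning (callee) servant;
// when it runs, the registered completion is queued back on 'reply_to'
template<class Result>
class Proxy
{
public:
        Proxy(servant* owner, boost::function<Result ()> impl)
                : m_owner(owner), m_impl(impl) {}

        void operator()(servant* reply_to, boost::function<void (Result)> completion)
        {
                m_owner->post_callback(new function_callback(
                        boost::bind(&Proxy::run, this, reply_to, completion)));
        }

private:
        void run(servant* reply_to, boost::function<void (Result)> completion)
        {
                Result r = m_impl();                  // runs in the callee's context
                reply_to->post_callback(new function_callback(
                        boost::bind(completion, r))); // the "return" message
        }

        servant* m_owner;
        boost::function<Result ()> m_impl;
};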
>
> > Please note: I'm not proposing the above "pairing of call and return
> > proxies" as the path forward. It's only intended to further expose
> > the essential technique.
> >
> > >
> > > The limitation resulting from the unequal status of different
> > > event mechanisms
> > > is a fairly fundamental one. Is anyone working on this in a
> > > boost threads
> > > context?
> >
> > Well, hopefully for reactive objects we have reduced it to 1?
> > But to answer your question, no.
> >
>
>Sorry, I meant unequal in that you can ::select on FDs, but not on boost::
>mutexes, or any underlying implementation detail they might expose.
>
> > > > In another implementation this just involved storing some kind
> > > > of "return address" and adding that to the queued object (task).
> > > > On actual execution the thread (belonging to the active object)
> > > > need only turn around and write the results back into the caller's
> > > > queue.
> > > >
> > >
> > > This implementation requires that all objects have a queue,
> > > which is another
> > > property of the 'reactive object' system you're describing,
> > > but can't work
> > > with the approach I've taken.
> >
> > On first appearance this looks to be the case. A bit more sleight
> > of hand and there can be any number of reactive objects "serviced"
> > by a single thread. Previous explanations of this have failed
> > woefully so all I can do is direct you to the ActiveObject pattern,
> > entities "Scheduler" and "Servant".
> >
>
>So you use callbacks, rather than queues?
>
> > The natural perception that reactive objects are "heavy on
> > threads" is addressed beautifully by this section of the pattern.
> > It appears that Schmidt et al. resolved many of our concerns
> > long before we knew enough to start waving our arms (well, I can
> > speak for myself at least).
> >
>
>I don't think threads are necessarily that heavy. Certainly, for long-lived
>apps that create/destroy threads mostly at setup and tear-down, thread numbers
>are a major concern.
>
>In any case, lowered throughput or higher latency resulting from denying
>opportunities for concurrency will often be more noticeable than the overhead
>from the management of the concurrency.
>
> > The pattern doesn't address asynchronous return of results and
> > also doesn't quite give the entities a "final polish", i.e. the
> > Scheduler entity needs to inherit the Servant interface.
> >
>
>Yeah, this would be nice, if you could make it stick.
>
> > > Ok. Having established that this is a different pattern of
> > > multithreading,
> > > what are the costs/benefits of symmetric activation?
> >
> > Operationally the costs of SA tend towards zero, if you first
> > accept that some form of inter-thread communication was required
> > anyhow, i.e. SA doesn't itself _add_ this requirement.
> >
> > I think costs do exist in terms of the development cycle, e.g.
> > changing of culture. The type of programming actually required is
> > best defined IMHO as "signal processing" (e.g. SDL). It may take
> > some effort to convince developers of desktop apps that they
> > need to take a "signal processing" approach to the next version
> > of their 80Mb CAD package.
> >
>
>Perhaps I've been heading off on a tangent, but it actually sounds like what
>you want is a variant of boost::signal that could communicate between threads.
>In this design, the callback would have to seamlessly mutate into a decoupled
>method invocation like that in my Proxy objects. Actually, this sounds like
>an obvious thing for someone to have tried before...
>
> > The benefits? System responsiveness, maintainability of
> > code, low software defect counts. The results are more
> > likely to run correctly for long periods.
> >
>
>Can't complain about that.
>
> > There are difficulties when debugging. The traditional
> > flow of control is lost.
> >
>
>Similarly to signal-based designs.
>
> > > > It's been fun up to this point but perhaps your "code with legs" is
> > > > viable as "active object" and that is a valid first phase? I could
> > > > wheel my barrow off to the side for a while
> > > >
> > > > Cheers,
> > > > Scott
> > >
> > > Yes, I don't think my code can be modified to incorporate
> > > your requirements.
> > > I am interested in your pattern, though.
> > >
> > > Perhaps a change of subject is in order?
> >
> > Will go that way if you really didn't want to do anything more
> > with your active<object>. To me it seemed viable both as a
> > phase and as the basis for reactive objects. But I think that's
> > up to you?
> >
>
>I'm open. Code is just code, anywhere it goes is fine with me. I just
>thought maybe the post's 'subject line' should be changed :) I don't have
>a suggestion for a better description, though.
>
> > Cheers,
> > Scott
> >
>
>Matt
>


