Subject: Re: [boost] Synapse library review starts today December 2
From: Robert McInnis (r_mcinnis_at_[hidden])
Date: 2016-12-04 20:15:03

On Sun, Dec 4, 2016 at 6:50 PM, Emil Dotchevski <emildotchevski_at_[hidden]>
wrote:

> > I disagree with this design style, which is to say I also disagree
> > with the existing slot/signal style.
> >
> > As an example, what if we had a system with hundreds of thousands of
> > numerics... each with a particular name. Then imagine a rule set that
> > had to maintain integrity as any particular numeric were to be
> > changed. With slots or synapses, you're forced to search for the
> > object each time before applying the rules. This would bog down
> > quickly as the number of updates per second increased (imagine 100k
> > objects and 2-10k entwined rules being updated 2k times per second.
> > That's a real world scenario)
> >
> The search is the price one has to pay for the non-intrusive nature of
> Synapse. Obviously, when this isn't necessary (and the overhead of the
> search is significant, as in the real world use case you're referring
> to) then you can use a different approach.

Why pay the price at all if there is no need? Why use an inefficient
algorithm when a more direct approach is available?

Additionally, an observer pattern as I've described enables algorithmic
capabilities that are otherwise unobtainable. Take, for example, a
painter's algorithm updating screen components. Using slots, the manager
may be notified to re-paint the window, but the manager would have to
maintain some separate understanding of the paint order to render it
properly. Using a map< int, Observer* > where the int is the level,
observers can hook into the appropriate level and be called in order
simply by traversing the map in-order.

> That said, in my own use cases the overhead of Synapse has never been
> significant in comparison to the time it takes to execute the actual
> connected functions. Note that the search is limited only to connections
> of the same signal type, and that it can be implemented as a hash, O(1).

With a small test case, you would rarely see any performance issues. Only
at scale would the inefficiencies start to become apparent. In my example
(a stock feed), if one stock were to change (e.g. MSFT), you would be
forced to traverse all portfolios and then search each one to see whether
it holds any MSFT position to be updated. Even with fewer than 100
positions per portfolio, a binary search still costs about log2(100), or
roughly 6 comparisons, per portfolio. Assuming only 5k portfolios, of
which 1k actually hold a MSFT position, that's still about 30,000
comparisons to find the 1k affected portfolios. With the subject/observer
pattern, no search would be required... the tick goes directly to the
business rule on each subscribed position.

30,000 comparisons from a single tick. Now imagine 2,000 ticks/sec. That's
the issue.

Of course, after updating the portfolio, you'd have to remember to trigger
any associated agents the user has assigned to watch his investments,
which in turn would trigger other actions. This cascade of events falls
out naturally from a subject/observer pattern.

> > Objects should have events they can trigger or subjects they expose
> > which would trigger given a certain situation (a value changes, a
> > particular event arrives, a flag flips, whatever), and the observer(s),
> > being unknown to the original object, would observe the event via a
> > loose coupling mechanism.
> >
> > This allows for a system to have hundreds or thousands of objects all
> > triggering events at random but only notifying the particular observers
> > that are interested in that instance's event.
> >
> > This allows for a more flexible system while promoting dependency
> > driven updates... which would result in the best performance for an
> > event notification system.
> >
> > I've been using such a design since 1990 and posted a rendition of it
> > to this group 3 months ago:
> >
> >
> >
> You should request a formal review; if the Boost community finds your
> library useful, it'll be accepted.

I offered it up along with any nuggets I may have gleaned over the 25+
years of experience utilizing such a design across multiple types of
projects.

If the community has any questions, I'd imagine they'd ask.
