Subject: Re: [boost] Asynchronous library now in Boost Library Incubator
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2016-11-30 16:33:18


On Wednesday, 30 November 2016 20:31:58 CET Christophe Henry wrote:
> On 11/29/2016 09:40 PM, Thomas Heller wrote:
> > On Tuesday, 29 November 2016 20:41:58 CET Christophe Henry wrote:
> >>> boost.asynchronous looks to have many of the same features/functionality
> >>> as HPX.
> >>> Can I ask why you chose to reimplement futures/lightweight thread
> >>> pools/parallel STL/etc, rather than working on improving HPX to suit
> >>> your needs?
> >>
> >> I don't think the libraries have the same goals (or at least it seems to
> >> me so). Asynchronous is first of all an architecture tool focusing on
> >> organizing a whole application into thread worlds.
> >
> > Can you please elaborate a little on what is meant with "thread worlds"?
>
> You might want to read the concepts parts.
> I'll elaborate a little more. I will also add more to the doc.
> A "thread world" is a world defined by a (single threaded) scheduler and
> all the objects which have been created, are living and destroyed within
> this context.
> It is usually agreed on that objects and threads do not mix well. Class
> diagrams fail to display both as these are orthogonal concepts.
> Asynchronous solves this by organizing objects into worlds, each living
> within a thread. This way, life cycles issues and the question of thread
> access to objects is solved.
> It is similar to the Active Object pattern, but with n Objects living
> within a thread.

Thanks, this makes it a bit clearer. So essentially, a thread world defines
the scope of your objects? Which class models the concept of the thread
world, then? The scheduler? I think this information is missing from the
documentation. Is there also a default thread world?
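
Just to check that I read the concept correctly, I picture it roughly like
the sketch below (plain C++11 for illustration only, with made-up names,
not your actual interface): one worker thread owns a queue, and the objects
of the world are only ever touched from that thread.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

// Illustration of the "thread world" idea: a single-threaded scheduler
// owning a queue; everything living in the world runs on its one thread.
class thread_world {
public:
    thread_world() : done_(false), worker_([this] { run(); }) {}
    ~thread_world() {
        post([this] { done_ = true; });  // processed after pending tasks
        worker_.join();
    }
    // The only way to interact with objects of this world: post a callable
    // that will be executed on the world's single thread.
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(task));
        }
        cv_.notify_one();
    }
private:
    void run() {
        while (!done_) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !queue_.empty(); });
                task = std::move(queue_.front());
                queue_.pop();
            }
            task();  // objects of the world are only touched here
        }
    }
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool done_;
    std::thread worker_;
};

int main() {
    thread_world world;
    auto counter = std::make_shared<int>(0);  // an object "living" in the world
    world.post([counter] { ++*counter; });
    world.post([counter] { std::cout << "counter is " << *counter << "\n"; });
}   // the destructor lets pending tasks finish, then stops the thread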

>
> >> The libraries also do not have the same design / tradeoffs. Asynchronous
> >> can make use of futures but encourages callbacks and tries to make these
> >> as safe as possible. The goal is to be as asynchronous (meaning
> >> non-blocking) as possible.
> >> It is no coincidence that I'm also the author of Boost.MSM. The
> >> encouraged design is a whole lot of state machines, each in its own
> >> thread (possibly sharing a thread), sending tasks to threadpools, TCP
> >> connections, etc., and waiting for callbacks. Future-based designs do
> >> not mix well with state machines' run-to-completion.
> >>
> >> The reason for the non-blocking focus is that I'm using the library at
> >> work in an industrial 24/7 application. When working on the production
> >> line, blocking is a disaster. If an algorithm goes astray, or simply
> >> takes longer than usual, blocking the production line results in huge
> >> costs. So do crashes due to races. Asynchronous provides ways to avoid
> >> both.
> >
> > Does this mean that Boost.Asynchronous provides realtime worst case
> > execution time guarantees?
>
> I did not state this. This is not a real-time library.

No, you never stated that explicitly. It was my interpretation of the
sentences above. Reading through the documentation, one might indeed get
the impression that Boost.Asynchronous was designed for real-time
applications, which is probably very hard to justify given the various
memory allocations, exceptions, etc.
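
Coming back to the callbacks-over-futures point above: if I understand the
intent, a manager never blocks on a future; the pool posts the result back
into the manager's own world as a callback. A minimal sketch of that
pattern, again plain C++11 with made-up names rather than your API:

#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A trivial "world": a queue of callables drained only by its owning thread.
struct world_queue {
    std::mutex m;
    std::queue<std::function<void()>> q;
    void post(std::function<void()> f) {
        std::lock_guard<std::mutex> l(m);
        q.push(std::move(f));
    }
    bool run_one() {  // called only from the owning thread
        std::function<void()> f;
        {
            std::lock_guard<std::mutex> l(m);
            if (q.empty()) return false;
            f = std::move(q.front());
            q.pop();
        }
        f();
        return true;
    }
};

int main() {
    world_queue manager_world;

    // The pool thread does the heavy work and, when done, posts the result
    // back into the manager's world instead of fulfilling a future.
    std::thread pool_thread([&manager_world] {
        int result = 6 * 7;               // stand-in for a long computation
        manager_world.post([result] {     // callback runs in the manager's world
            std::cout << "got result " << result << " without blocking\n";
        });
    });

    // The manager's own thread keeps draining its queue; no wait on a future,
    // so its run-to-completion steps are never blocked.
    while (!manager_world.run_one()) {
        std::this_thread::yield();
    }
    pool_thread.join();
}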

> However, what the library offers may or may not be sufficient for your
> needs. Automotive usually has more stringent needs than other industries.
> A way to handle hard real-time (meaning an answer has to be given at the
> latest by a precise time) is to write manager objects such as state
> machines (whether real state machines or a custom implementation) living
> within a thread world and reacting to events. A timer event would be one
> of them. When the event is emitted, the manager can react. The library
> ensures communication between worlds using queues with different
> priorities. Giving the highest priority to the timer event will ensure it
> is handled next. In theory a run-to-completion executes in zero time. But
> as this is never the case, there is a delay due to user code and thread
> context switching, so it is not perfect hard real-time.
> To help with soft real-time (throughput), the library provides
> threadpools and parallelization mechanisms.

So we are speaking about soft real-time here, with a focus on high
throughput, correct? That makes more sense and is better aligned with the
research I am aware of on multi-core real-time systems. For a moment, I was
hoping you had found a way to solve that problem, which would have been
awesome.
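
Regarding the priority queues between worlds: I picture the mechanism
roughly like this (plain C++, made-up names, single-threaded to keep it
short): the scheduler always drains the most urgent queue first, so a timer
event posted with the top priority is the next thing to run after the
current run-to-completion.

#include <cstddef>
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

// One "world" with several queues; lower index means higher priority.
class prioritized_world {
public:
    explicit prioritized_world(std::size_t levels) : queues_(levels) {}

    void post(std::size_t priority, std::function<void()> task) {
        queues_.at(priority).push(std::move(task));
    }

    // Run the next task from the most urgent non-empty queue.
    bool run_one() {
        for (auto& q : queues_) {
            if (!q.empty()) {
                std::function<void()> task = std::move(q.front());
                q.pop();
                task();
                return true;
            }
        }
        return false;
    }

private:
    std::vector<std::queue<std::function<void()>>> queues_;
};

int main() {
    prioritized_world world(2);
    world.post(1, [] { std::cout << "normal work\n"; });
    world.post(0, [] { std::cout << "timer event handled first\n"; });
    while (world.run_one()) {}  // prints the timer event, then the normal work
}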

>
>
> Christophe
>

