Subject: Re: [boost] Proposal for a thread synchronisation primitive: future_queue
From: Martin Hierholzer (martin.christoph.hierholzer_at_[hidden])
Date: 2018-05-14 13:31:51
----- Original Message -----
> From: "boost" <boost_at_[hidden]>
> To: "boost" <boost_at_[hidden]>
> Cc: "Gavin Lambert" <gavinl_at_[hidden]>
> Sent: Wednesday, 9 May, 2018 08:43:34
> Subject: Re: [boost] Proposal for a thread synchronisation primitive: future_queue
> Even the Boost.LockFree queue may not always be lock-free
Even std::atomic is not guaranteed to be lock-free; see std::atomic::is_lock_free(). I think the important thing is to specify precisely when to expect lock-free behaviour and when locks might occur internally.
> But it's also trivial
> to implement around those existing types, which is probably why it
> didn't end up making it into the standard as a separate type.
Sure, it is possible to combine certain primitives into more complex ones, and users can always do this themselves. Still, my proposal goes beyond a mere combination of an eventcount/semaphore and a lock-free queue. Just like futures, it supports continuations, when_all/when_any and similar features. If you say it is too trivial to put into a fundamental library, one could say the same about a future: a future, too, is just a combination of a value and a semaphore. Yet it is a widely used primitive everyone is happy to have.
My actual use case for the future_queue is a class of applications which can perhaps best be described as a very complex, multi-threaded soft PLC (programmable logic controller). For now we have only soft realtime requirements, which are met even by the current spsc_queue<future> construction. We have many threads running (around 100 or 200), basically one thread per task or action. This simplifies the program structure a lot, since we have independent modules, each with its own thread. This may sound crazy, but the performance is actually much better than I originally anticipated (sleeping threads cost no performance; only context switches matter).
The future_queue would make it much easier to implement constructions like this. Tasks can be implemented as continuations. Since we have when_all() and when_any(), which can themselves be continued, one can even create tasks which use multiple future_queues as input.
Maybe this explains a bit better what I am aiming at here?
> I would love to see a portable eventcount
> (http://www.1024cores.net/home/lock-free-algorithms/eventcounts)
> implementation in Boost.Lockfree.
I would offer to work on a portable semaphore anyway, but I am afraid I don't have much experience with platform-independent implementations. I will probably soon extend my simple semaphore class from the future_queue library to support values other than 0 or 1 (since I need that for a better implementation of the future_queue), and I can also try to provide a good implementation for Mac OS X, if that helps. I no longer have access to development tools on Windows and have basically no experience on other platforms. Maybe someone else can add more implementations?
Cheers,
Martin
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk