Subject: Re: [Boost-users] Forthcoming Boost.Fiber review
From: Nat Goodspeed (nat_at_[hidden])
Date: 2013-12-19 08:40:41
On Wed, Dec 18, 2013 at 9:29 PM, Gavin Lambert <gavinl_at_[hidden]> wrote:
> This library sounds interesting. When it was initially mentioned I was
> wondering how it related to Coroutine, since they sound like similar
> problem domains. But now I see that it uses Coroutine as a basis.
>
It's an important point, one that I had to absorb myself. The Google Summer
of Code 2006 "Boost.Coroutine" project conflates the two ideas (coroutines
and fibers). Oliver teases them apart.
You construct coroutines when you want them freely and frequently passing
control back and forth. Perhaps you're constructing a pipeline of
potentially-stateful filters, where you want to use control flow as well as
local data to govern the subsequent behavior of a given filter. Control
transfer with a coroutine is immediate: when you ask an upstream coroutine
for a value, you suspend until it delivers that one value. The same applies
when you pass a value to a downstream coroutine. It's like an ordinary
function call: the calling function immediately suspends until the called
function returns.
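To make that concrete, here is a minimal sketch against the pull/push style
interface of recent Boost.Coroutine releases (take the exact spellings as my
assumption; check the current docs). Each time the caller asks for a value,
it suspends until the coroutine delivers exactly one:

    #include <boost/coroutine/all.hpp>
    #include <iostream>

    int main() {
        typedef boost::coroutines::coroutine<int> coro_t;

        // The body runs only while the caller is waiting for a value;
        // each sink(...) delivers one value and suspends the body again.
        coro_t::pull_type source(
            [](coro_t::push_type& sink) {
                for (int i = 0; i < 5; ++i)
                    sink(i * i);
            });

        while (source) {                       // true while more values can come
            std::cout << source.get() << '\n'; // the one value just delivered
            source();                          // suspend the caller, resume the body
        }
    }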
Fibers are much more analogous to threads, in that launching a fiber gives
it a more-or-less independent run. You *may* choose to synchronize two
fibers, but you need not. When you coordinate with another fiber using
(e.g.) future and promise, setting the value in the promise does not
immediately suspend the calling fiber. It only marks the waiting fiber
ready to run.
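Roughly, a sketch of that distinction, using the fiber / promise / future
names the library proposes (modeled on std::thread and std::future; treat the
exact spellings as assumptions about the interface under review):

    #include <boost/fiber/all.hpp>
    #include <iostream>

    int main() {
        boost::fibers::promise<int> p;
        boost::fibers::future<int> f = p.get_future();

        // set_value() does not suspend this fiber; it only marks the
        // waiting fiber as ready to run.
        boost::fibers::fiber producer([&p]() {
            p.set_value(42);
            std::cout << "producer keeps running after set_value\n";
        });

        // get() suspends only this fiber until the value arrives,
        // leaving the thread free to run other fibers.
        boost::fibers::fiber consumer([&f]() {
            std::cout << "consumer got " << f.get() << '\n';
        });

        producer.join();
        consumer.join();
    }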
Fibers are useful when, for instance, you must make a sequence of
asynchronous network requests. Of course you could also structure that code
as a sequence of callbacks -- but quick, what's the control flow among
those callbacks? Is there conditional behavior? How about looping? With a
fiber, you can write what *appear* to be blocking calls, and wrap them in
normal C++ control structures.
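For instance, something along these lines (request_async is a made-up
stand-in for whatever callback-based network API you already have, and fetch
is a hypothetical wrapper; the fiber names are the same assumptions as above):

    #include <boost/fiber/all.hpp>
    #include <functional>
    #include <iostream>
    #include <string>

    // Stand-in for a real callback-style network API (hypothetical stub).
    void request_async(std::string const& url,
                       std::function<void(std::string)> cb) {
        cb("response for " + url);   // stub: completes immediately
    }

    // Fiber-blocking wrapper: looks like a blocking call to the caller,
    // but suspends only the calling fiber, not the whole thread.
    std::string fetch(std::string const& url) {
        boost::fibers::promise<std::string> p;
        boost::fibers::future<std::string> f = p.get_future();
        request_async(url,
                      [&p](std::string body) { p.set_value(std::move(body)); });
        return f.get();
    }

    int main() {
        boost::fibers::fiber([]() {
            // An ordinary loop and conditional around "blocking" calls;
            // no callback chain to reverse-engineer.
            for (int page = 0; page != 3; ++page) {
                std::string body =
                    fetch("http://example.com/" + std::to_string(page));
                if (body.empty())
                    break;
                std::cout << body << '\n';
            }
        }).join();
    }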
Fibers are cheaper than threads. Better still (at least to me) is the
implicit promise of cooperative scheduling. You cannot safely run legacy
code (with potential references to global or static variables) on a new
thread. You must first visit every code path, defending every such variable
with appropriate thread synchronization. But on a given thread, at any
given moment, you are guaranteed that only one fiber is running. One fiber
will not interrupt another in an unpredictable state.
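A tiny sketch of that guarantee (same assumed fiber interface as above): two
fibers bump an unguarded global, and because control changes hands only at
explicit suspension points within one thread, no mutex is needed:

    #include <boost/fiber/all.hpp>
    #include <iostream>

    int global_counter = 0;   // deliberately unguarded

    void bump(int n) {
        for (int i = 0; i < n; ++i) {
            ++global_counter;             // safe: no other fiber is running now
            boost::this_fiber::yield();   // control changes hands only here
        }
    }

    int main() {
        boost::fibers::fiber a(bump, 1000);
        boost::fibers::fiber b(bump, 1000);
        a.join();
        b.join();
        std::cout << global_counter << '\n';   // always 2000 on a single thread
    }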
Hope that helps...