Subject: Re: [boost] Boost.Fiber mini-review September 4-13
From: Giovanni Piero Deretta (gpderetta_at_[hidden])
Date: 2015-09-04 14:10:24


On Fri, Sep 4, 2015 at 4:14 PM, Nat Goodspeed <nat_at_[hidden]> wrote:
> Hi all,
>
> The mini-review of Boost.Fiber by Oliver Kowalke begins today,

I did a quick skim of the docs and the implementation. I have to say
that both the docs and the code are quite readable and I can't see
anything controversial.

So, just to get the discussion started, here are a couple of comments:

On the library itself:

- Boost.Fiber is yet another library that comes with its own future
type. For the sake of interoperability, the author should really
contribute changes to boost.thread so that its futures can be re-used.

- In theory, Boost.Thread's any_condition should be usable out of the box.

This should probably lead to a Boost-wide discussion. There are a few
boost (or proposed) libraries that abstract hardware and OS
capabilities, for example boost.thread, boost.asio, boost.filesystem,
boost.iostream, boost.interprocess (which also comes with its own
mutexes and condition variables) and of course the proposed afio and
fiber. At the moment they mostly live in separate, isolated worlds.
It would be nice if the authors were to sit down and work out a shared
design, or, more practically, at least add some cross-library
interoperability facilities. This is C++; generalization should be
possible, easy and cheap.

On condition variables, should Boost.Fiber add the ability to wait on
any number of them? (You can use a future<> as an event with
multi-wait capability, of course, but still...)
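
To make the parenthetical concrete, here is a minimal sketch (mine,
not taken from the docs) of a future<void> used as a one-shot event,
built only on the boost::fibers::promise/future/fiber types:

  #include <boost/fiber/all.hpp>
  #include <iostream>

  int main() {
      // promise/future pair used as a one-shot event
      boost::fibers::promise<void> event;
      boost::fibers::future<void>  signal = event.get_future();

      boost::fibers::fiber waiter([&signal] {
          signal.wait();      // suspends this fiber until the event fires
          std::cout << "event received\n";
      });

      boost::fibers::fiber notifier([&event] {
          event.set_value();  // fire the event, waking the waiter
      });

      waiter.join();
      notifier.join();
  }

The question above is whether waiting on several such events (or
condition variables) at once should be a first-class operation rather
than something every user builds by hand.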

On performance:

- The wait list for boost::fiber::mutex is a deque<fiber_context*>.
Why not an intrusive linked list of stack-allocated nodes? That would
remove one or two indirections and a memory allocation, and would
make lock() nothrow (see the sketch after this list).

- The performance section lists a yield at about 4000 clock cycles.
That seems excessive, considering that the context switch itself
should be much less than 100 clock cycles. Where is the overhead
coming from? What's the overhead of an OS thread yield, for
comparison? (A rough way to measure both is sketched below.)
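
Roughly what I have in mind for the intrusive wait list (a sketch
only; fiber_context, current_fiber(), suspend_current_fiber(),
resume() and spinlock are placeholders, not Boost.Fiber's internals):

  struct wait_node {
      fiber_context* waiter; // fiber to wake on unlock()
      wait_node*     next;   // intrusive link; the node lives on the waiter's stack
  };

  class mutex {
      spinlock   splk_;             // protects locked_ and the wait list
      bool       locked_ = false;
      wait_node* head_   = nullptr; // a real version would keep a tail for FIFO order

  public:
      void lock() noexcept {
          splk_.lock();
          if (!locked_) { locked_ = true; splk_.unlock(); return; }
          // Contended path: the node sits on this fiber's own stack, so there
          // is no heap allocation and no std::bad_alloc - lock() can be noexcept.
          wait_node node{ current_fiber(), head_ };
          head_ = &node;
          suspend_current_fiber(splk_); // atomically release splk_ and suspend
      }

      void unlock() noexcept {
          splk_.lock();
          if (wait_node* n = head_) {
              head_ = n->next;
              splk_.unlock();
              resume(n->waiter);        // hand the mutex directly to the next waiter
          } else {
              locked_ = false;
              splk_.unlock();
          }
      }
  };

The deque version pays for the node allocation plus the pointer chase
through the deque's blocks; the intrusive version only touches memory
that is already hot on the waiter's stack.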

The yield cost is particularly important because I can see a lot of
spinlocks in the implementation. With a very fast yield
implementation, yielding to the next ready fiber instead of spinning
could lead to a more efficient use of resources.
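
For reference, a rough (and admittedly naive) way to measure both
numbers on a given machine; note that std::this_thread::yield() with
a single runnable thread only measures the bare yield call, not a
real thread-to-thread switch:

  #include <boost/fiber/all.hpp>
  #include <chrono>
  #include <cstdio>
  #include <thread>

  int main() {
      constexpr int N = 1000000;

      // Ping-pong between the main fiber and one extra fiber: ~2*N switches.
      auto t0 = std::chrono::steady_clock::now();
      boost::fibers::fiber f([] {
          for (int i = 0; i < N; ++i)
              boost::this_fiber::yield();
      });
      for (int i = 0; i < N; ++i)
          boost::this_fiber::yield();
      f.join();
      auto t1 = std::chrono::steady_clock::now();

      // OS-level yield for comparison (single runnable thread).
      for (int i = 0; i < N; ++i)
          std::this_thread::yield();
      auto t2 = std::chrono::steady_clock::now();

      using ns = std::chrono::nanoseconds;
      std::printf("fiber yield:  %lld ns/switch\n",
                  (long long)(std::chrono::duration_cast<ns>(t1 - t0).count() / (2 * N)));
      std::printf("thread yield: %lld ns/call\n",
                  (long long)(std::chrono::duration_cast<ns>(t2 - t1).count() / N));
  }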

that's all for now,

HTH,

-- gpd

