

From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2021-02-09 18:46:39


On 09/02/2021 18:13, Damien via Boost-users wrote:

>> I agree that the learning curve for a new API will be quite steep
>> initially. But GCD's API is quite nicely designed, it's intuitive, and
>> it "just works".
>>
>> I can't say that the Win32 thread API is as nicely designed. It *is*
>> very flexible and performant, but a lot of it is "non-obvious" relative
>> to GCD's API design.
>>
>> On the other hand, if you implement a GCD based implementation, you'll
>> #ifdef in a Win32 thread API implementation quite easily.
>
> Is this something that can be done with libunifex, that Eric Niebler and
> colleagues are working on?
>
> https://github.com/facebookexperimental/libunifex

To the best of my knowledge, LLFIO's dynamic_thread_pool_group is the
first attempt to create a portable, standards-aspiring API abstraction
wrapping all the major proprietary dynamic thread pool implementations.
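
To give a concrete flavour of the sort of proprietary API being wrapped,
here is roughly what submitting a batch of work to GCD's kernel-managed
global queue looks like on Darwin (using the plain C function-pointer
interface, which compiles fine as C++; the work function and data are of
course just illustrative):

    #include <dispatch/dispatch.h>
    #include <cstdio>

    // Work function invoked by GCD's worker threads.
    static void do_work(void *context)
    {
      int *item = static_cast<int *>(context);
      std::printf("processing item %d\n", *item);
    }

    int main()
    {
      // The kernel-managed global concurrent queue; GCD sizes the
      // underlying thread pool dynamically according to system load.
      dispatch_queue_t queue =
          dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
      dispatch_group_t group = dispatch_group_create();

      int items[4] = {0, 1, 2, 3};
      for (int i = 0; i < 4; i++)
        dispatch_group_async_f(group, queue, &items[i], do_work);

      // Block until every submitted work item has completed.
      dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
      dispatch_release(group);
      return 0;
    }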

That said, the libunifex developers watch LLFIO closely; indeed, they
borrowed quite heavily from LLFIO's non-public Windows async i/o
abstraction, so it would not surprise me if something has recently been
added to libunifex in this area (I haven't been able to keep up with
libunifex since covid began, to be honest).

Equally, Facebook really only cares (in production deployment terms)
about libunifex on Linux; other platforms aren't deployed in production.
Since Linux lacks a kernel-supported GCD implementation, and what they
need is very high-concurrency socket i/o on Linux, there isn't a strong
need there for a GCD implementation.

I'll put this another way: you can make do without a GCD-like
implementation if you're socket i/o bound, whereas a GCD-like
implementation is ideal if you're compute bound. If you're file i/o
bound, traditionally one avoids a GCD-like implementation like the
plague because of i/o congestion blowout, but the proposed
llfio::dynamic_thread_pool_group is intended to prove to WG21 SG1 that
GCD is in fact great for file i/o, especially lots of memory-mapped file
i/o. You just need an i/o-load-aware work item pacer.
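
Purely as an illustrative sketch of what such a pacer could do (this is
not LLFIO's actual pacer; the class, members and policy are invented for
the example), the idea is a feedback loop: shrink the number of work
items allowed in flight when observed i/o latency climbs above a target,
and grow it again when there is headroom:

    #include <atomic>
    #include <chrono>
    #include <cstddef>

    // Hypothetical i/o-load-aware pacer, names illustrative only.
    class io_load_aware_pacer
    {
      std::chrono::microseconds target_latency_;
      std::atomic<std::size_t> max_in_flight_;

    public:
      io_load_aware_pacer(std::chrono::microseconds target,
                          std::size_t initial_concurrency)
          : target_latency_(target), max_in_flight_(initial_concurrency) {}

      // Called by the thread pool before releasing another work item.
      bool may_release(std::size_t currently_in_flight) const
      {
        return currently_in_flight <
               max_in_flight_.load(std::memory_order_relaxed);
      }

      // Called after each work item completes, with the i/o latency it saw.
      void record_latency(std::chrono::microseconds observed)
      {
        std::size_t cur = max_in_flight_.load(std::memory_order_relaxed);
        if (observed > target_latency_ && cur > 1)
          max_in_flight_.store(cur - 1, std::memory_order_relaxed);  // congested: back off
        else if (observed < target_latency_ / 2)
          max_in_flight_.store(cur + 1, std::memory_order_relaxed);  // headroom: ramp up
      }
    };

A real pacer would presumably also look at queue depth and bandwidth,
but the feedback loop is the essential part.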

(A forthcoming WG21 paper will have lots of pretty graphs demonstrating
this on all the major platforms.)

Niall

