Subject: Re: [boost] [Fibers] Performance
From: Giovanni Piero Deretta (gpderetta_at_[hidden])
Date: 2014-01-16 09:51:58


On Thu, Jan 16, 2014 at 2:32 PM, Hartmut Kaiser <hartmut.kaiser_at_[hidden]> wrote:

>
> > > 2014/1/16 Giovanni Piero Deretta <gpderetta_at_[hidden]>
> > >
> > > > I think that Hartmut's point is that you can very well use threads
> > > > for the same thing. In this particular case you would just perform a
> > > > synchronous read. Yes, to maintain the same level of concurrency you
> > > > need to spawn tens of thousands of threads, but that's feasible on a
> > > > modern OS/hardware pair.
> > > > The point of using fibers (i.e. M:N threading) is almost purely
> > > > performance.
> > > >
> > >
> > > In the context of the C10K problem and the one-thread-per-client
> > > pattern, I doubt that this would scale (even on modern hardware). Do
> > > you have any data showing how a modern operating system and hardware
> > > perform as the thread count increases?
> > >
> > >
> > I do not have hard numbers (do you?), but consider that the C10K page is
> > quite antiquated today.
> >
> > In a previous life I worked on relatively low-latency applications that
> > handled many thousands of requests per second per machine. We never
> > bothered with anything but the one-thread-per-connection model. This
> > was on Windows, on, IIRC, octa-core 64-bit machines (today you can
> > "easily" get 24 cores or more on a standard Intel server-class machine).
> >
> > Now, if we were talking about hundreds of thousands of threads or millions
> > of threads, it would be interesting to see numbers for both threads and
> > fibers...
>
> FWIW, the use cases I'm seeing (and trust me those are very commonplace at
> least in scientific computing) involve not just hundreds or thousands of
> threads, but hundreds of millions of threads (billions of threads a couple
> of years from now).
>
>
On a single machine? That would be impressive!
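
(For concreteness, here is a minimal sketch of the one-thread-per-connection
model discussed in the quoted thread: each client is served by a dedicated OS
thread doing plain blocking work. The connection count and the simulated
"read" are illustrative assumptions, not measurements.)

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Stand-in for a loop of synchronous reads on a client socket.
    void serve_connection(int id) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::printf("connection %d served\n", id);
    }

    int main() {
        const int kConnections = 10000;   // assumption: "tens of thousands" scale
        std::vector<std::thread> workers;
        workers.reserve(kConnections);
        for (int i = 0; i < kConnections; ++i)
            workers.emplace_back(serve_connection, i);  // one OS thread per client
        for (auto& t : workers)
            t.join();
    }

In the M:N model the structure is the same, but each std::thread would become
a user-level fiber (e.g. boost::fibers::fiber) multiplexed over a few OS
threads; whether that actually buys anything is exactly the performance
question being argued here.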

-- gpd
