
Subject: Re: [boost] [Fibers] Performance
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2014-01-16 12:19:36


On 16.01.2014 17:57, "Giovanni Piero Deretta" <gpderetta_at_[hidden]> wrote:
>
> On Thu, Jan 16, 2014 at 4:44 PM, Hartmut Kaiser <hartmut.kaiser_at_[hidden]> wrote:
>
> > > > > Now, if we were talking about hundreds of thousands of threads or
> > > > > millions of threads, it would be interesting to see numbers for both
> > > > > threads and fibers...
> > > >
> > > > FWIW, the use cases I'm seeing (and trust me those are very
> > > > commonplace at least in scientific computing) involve not just
> > > > hundreds or thousands of threads, but hundreds of millions of threads
> > > > (billions of threads a couple of years from now).
> > > >
> > > >
> > > On a single machine? That would be impressive!
> >
> > Well, it depends on the size of the machine, doesn't it? The no. 1 machine
> > on the top 500 list [1] (Tianhe-2 [2]) has 3,120,000 cores (in 16,000
> > compute nodes).
> >
> >
> Oh, right!
>
> Do they usually present a single OS image to the application? I.e. do all
> the cores share a single memory address space, or do nodes communicate via
> message passing (MPI, I presume)? std::thread-like scaling is relevant for
> the first case, less so for the latter.

If you decide to program with MPI, that's certainly true. However, HPX [1]
provides the ability to spawn threads remotely, completely embedded in a
standard-conforming API. For those remote procedure calls, a small per-call
overhead is crucial in order to efficiently utilize the whole machine. We
have demonstrated the capability to do exactly that [2].

[1]: http://stellar.cct.lsu.edu
[2]: http://stellar.cct.lsu.edu/pubs/scala13.pdf
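To make the "spawn threads remotely through a standard-conforming API" point concrete, here is a rough sketch of what that looks like with HPX's async/future interface. The header paths and the HPX_PLAIN_ACTION macro follow the public HPX documentation and may differ slightly between versions; `square` and `square_action` are made-up names for illustration only.

    // Sketch: spawn a lightweight HPX thread on a (possibly remote) locality
    // using the same async/future style the C++ standard uses locally.
    #include <hpx/hpx_main.hpp>
    #include <hpx/include/actions.hpp>
    #include <hpx/include/async.hpp>
    #include <hpx/include/runtime.hpp>

    #include <iostream>
    #include <vector>

    // An ordinary function, wrapped as an "action" so it can be invoked
    // on another node.
    int square(int x) { return x * x; }
    HPX_PLAIN_ACTION(square, square_action);

    int main()
    {
        // All localities (nodes) the runtime was started on; on a
        // single-node run this list only contains the local one.
        std::vector<hpx::id_type> localities = hpx::find_all_localities();
        hpx::id_type target = localities.back();

        // Spawns an HPX thread on 'target' -- possibly another machine --
        // and returns a future, just like std::async would locally.
        hpx::future<int> result = hpx::async(square_action(), target, 7);

        std::cout << result.get() << std::endl;   // prints 49
        return 0;
    }

The point of the interface is that the remote case looks identical to the local one; only the locality argument changes, so the per-call overhead discussed above is what determines how finely you can decompose work across the machine.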

>
> -- gpd

