Subject: Re: [boost] [OT?] SIMD and Auto-Vectorization (was Re: How to structurate libraries ?)
From: Mathias Gaunard (mathias.gaunard_at_[hidden])
Date: 2009-01-18 10:24:57


Dean Michael Berris wrote:

> When you talk about threads, those are high level in the context of
> single-machine parallelism

What is a machine?
What's the difference between two machines with one CPU each and a NUMA
machine with two CPUs? Only the protocol (and its throughput and
latency, of course) used to access memory attached to another CPU.

NUMA is really a form of cluster computing. And you can implement NUMA
in software over a cluster network.

Threads are just tasks that share memory. There is no need for all those
tasks to be on the same machine: memory can be distributed.
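
For instance, a minimal sketch of that model (mine, purely for
illustration; Boost.Thread since we are on this list):

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>

// One object visible to every task: this is all "sharing memory" means.
long sum = 0;
boost::mutex sum_mutex;

void add(int i)
{
    boost::mutex::scoped_lock lock(sum_mutex);
    sum += i; // all tasks update the same storage
}

int main()
{
    boost::thread_group tasks;
    for (int i = 0; i < 4; ++i)
        tasks.create_thread(boost::bind(&add, i));
    tasks.join_all();
    std::cout << sum << '\n'; // 0+1+2+3 = 6
}

Those four tasks could just as well run on four machines, provided
something, in hardware or software, made sum addressable from all of
them.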

> -- unless your OS is a distributed OS that
> spawned processes across multiple machines and magically showed just
> one big environment to programs running in that parallel machine.
> Threads can be considered parallelism primitives in a single machine
> context, but not in a general parallel computing context where you
> might be spanning multiple distributed/independent machines.

That's a single-system image, and it actually works very well if you
have the required low-latency network.

> BTW, you can use asynchronous communication primitives in MPI and
> still use threads on each process. I don't even think MPI and Threads
> work at the same level anyway so it depends on which level you're
> looking from which determines whether you think MPI is low-level or
> Threads are high-level. ;-)
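
True, the two compose. A rough sketch of such a hybrid, assuming an MPI
implementation that actually provides MPI_THREAD_MULTIPLE (the calls are
standard MPI; the structure is only illustrative):

#include <mpi.h>
#include <boost/thread/thread.hpp>
#include <iostream>

void local_work()
{
    // computation in a thread, overlapping the communication below
}

int main(int argc, char* argv[])
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    boost::thread worker(&local_work); // a thread inside each process

    // asynchronous ring exchange: post both operations, then wait
    int out = rank, in = -1;
    MPI_Request reqs[2];
    MPI_Isend(&out, 1, MPI_INT, (rank + 1) % size, 0,
              MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&in, 1, MPI_INT, (rank + size - 1) % size, 0,
              MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    worker.join();
    std::cout << "rank " << rank << " received " << in << std::endl;
    MPI_Finalize();
}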

I would personally prefer to describe parallel tasks once, in a single
way, rather than have to express them differently depending on what
level of hardware parallelism is available to run them.
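
Something along these lines, where every name is invented purely for
illustration: the task is written once, and only the executor knows
whether threads, processes across a network, or SIMD lanes run it. A
thread backend is sketched; a distributed one would take the same
callable:

#include <boost/function.hpp>
#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <cstddef>
#include <iostream>

// The task itself: written once, with no commitment to threads,
// processes, or vector units.
void saxpy_chunk(float a, const float* x, float* y, std::size_t n)
{
    for (std::size_t i = 0; i != n; ++i)
        y[i] = a * x[i] + y[i];
}

// One possible backend: shared-memory threads. A distributed backend
// would accept the same callable and ship it to a remote process.
struct thread_executor
{
    void run(const boost::function<void()>& task)
    {
        boost::thread t(task);
        t.join();
    }
};

int main()
{
    float x[4] = { 1, 2, 3, 4 };
    float y[4] = { 0, 0, 0, 0 };

    thread_executor exec;
    exec.run(boost::bind(&saxpy_chunk, 2.0f, &x[0], &y[0], 4));
    std::cout << y[3] << std::endl; // 2*4 + 0 = 8
}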

