Subject: Re: [boost] [OT?] SIMD and Auto-Vectorization (was Re: How to structurate libraries ?)
From: Mathias Gaunard (mathias.gaunard_at_[hidden])
Date: 2009-01-18 07:46:05
Dean Michael Berris wrote:
> I personally have dealt with two types of parallelization:
> parallelization at a high level (dealing with High Performance
> Computing using something like MPI for distributed message-passing
> parallel computation across machines) and parallelization at a low
> level (talking about SSE and auto-vectorization).
MPI is by no means high-level; it's low-level in that you have to
explicitly say which tasks execute where and with whom they communicate.
Threads, for example, are much more high-level than that: they get
scheduled dynamically, trying to make the best use of the hardware as it
is currently being used, or to optimize some other factor, depending on
the scheduler.
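To make that concrete, here is a minimal sketch using the standard MPI C
API (ranks, tags and values chosen purely for illustration): every send
and receive names an explicit peer, which is exactly the low-level
bookkeeping described above.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = 0;
        if (rank == 0)
        {
            // Rank 0 explicitly decides what to send and to whom.
            value = 42;
            MPI_Send(&value, 1, MPI_INT, /*dest=*/1, /*tag=*/0,
                     MPI_COMM_WORLD);
        }
        else if (rank == 1)
        {
            // Rank 1 explicitly names the source it expects data from.
            MPI_Recv(&value, 1, MPI_INT, /*source=*/0, /*tag=*/0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }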
The difference between MPI and SIMD is not low-level vs. high-level,
however: it's task-parallel vs. data-parallel.
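To illustrate the data-parallel side, here is a small SSE sketch (using
the xmmintrin.h intrinsics; the function name and the assumption of
4-float chunks are just for illustration) where a single instruction
operates on four floats at once, with no notion of separate tasks or
messages:

    #include <xmmintrin.h>

    // Adds four pairs of floats with one SSE instruction per operation.
    void add4(const float* a, const float* b, float* out)
    {
        __m128 va = _mm_loadu_ps(a);    // load 4 floats from a
        __m128 vb = _mm_loadu_ps(b);    // load 4 floats from b
        __m128 vc = _mm_add_ps(va, vb); // one data-parallel addition
        _mm_storeu_ps(out, vc);         // store the 4 results
    }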