
Subject: Re: [boost] [OT?] SIMD and Auto-Vectorization (was Re: How to structurate libraries ?)
From: Dean Michael Berris (mikhailberis_at_[hidden])
Date: 2009-01-18 08:10:46


On Sun, Jan 18, 2009 at 8:46 PM, Mathias Gaunard
<mathias.gaunard_at_[hidden]> wrote:
> Dean Michael Berris wrote:
>
>> I personally have dealt with two types of parallelization:
>> parallelization at a high level (dealing with High Performance
>> Computing using something like MPI for distributed message-passing
>> parallel computation across machines) and parallelization at a low
>> level (talking about SSE and auto-vectorization).
>
> MPI is in no way high-level; it's low-level in that you have to explicitly
> say which tasks execute where and who they communicate with.
> Threads, for example, are much more high-level than that: they get scheduled
> dynamically, trying to make the best use of the hardware as it is being used,
> or to optimize some other factor, depending on the scheduler.
>
> The difference between MPI and SIMD is not low-level vs high-level however:
> it's task-parallel vs data-parallel.
>

Yes, but the comparison I was making regarding high-level and
low-level was in terms of code.

In MPI you code in C/C++, expressing communication between/among tasks
without having to know the actual (i.e. physical) topology of your
architecture. That is considerably higher level than, say, writing
hand-optimized SSE-aware code in your C++ to express parallelism.
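
To make the contrast concrete, here's a rough sketch (the buffer
sizes, ranks, and tags are arbitrary, and the SSE loop is just an
illustrative vector add, not anything from real code):

#include <mpi.h>
#include <xmmintrin.h>   // SSE intrinsics
#include <vector>

// MPI level: you say *who* talks to *whom*, not where they physically run.
void mpi_exchange(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<float> buf(1024, 1.0f);
    if (rank == 0)
        MPI_Send(buf.data(), 1024, MPI_FLOAT, /*dest=*/1, /*tag=*/0,
                 MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf.data(), 1024, MPI_FLOAT, /*source=*/0, /*tag=*/0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
}

// SSE level: you spell out which registers do what, four floats at a time.
void sse_add(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);              // load 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));   // 4 adds in one instruction
    }
    for (int i = n - (n % 4); i < n; ++i)             // scalar remainder
        out[i] = a[i] + b[i];
}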

I understand that MPI is a specification for communication primitives
(much like assembly for distributed computing across logical/actual
machines), and that wasn't the comparison I was going for when I said
it was "high-level". Though if you think about it, it is high level
because you don't deal directly with networking/shared-memory
primitives, etc., but I don't want to belabor the point too much. ;-)

When you talk about threads, those are high level in the context of
single-machine parallelism -- unless your OS is a distributed OS that
spawns processes across multiple machines and magically presents just
one big environment to the programs running on that parallel machine.
Threads can be considered parallelism primitives in a single-machine
context, but not in a general parallel computing context where you
might be spanning multiple distributed/independent machines.

BTW, you can use asynchronous communication primitives in MPI and
still use threads within each process. I don't even think MPI and
threads work at the same level anyway, so whether MPI looks low-level
or threads look high-level depends on which level you're viewing them
from. ;-)
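
A minimal sketch of what I mean (it assumes an MPI implementation
providing at least MPI_THREAD_FUNNELED, uses std::thread purely for
illustration, and the buffer sizes and "local work" are made up):

#include <mpi.h>
#include <thread>
#include <vector>

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 16;
    std::vector<double> outgoing(n, rank), incoming(n);
    int peer = (rank + 1) % size;

    // Asynchronous (non-blocking) communication primitives.
    MPI_Request reqs[2];
    MPI_Isend(outgoing.data(), n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(incoming.data(), n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    // Meanwhile a thread in the same process chews on purely local data
    // (it makes no MPI calls, so MPI_THREAD_FUNNELED is enough).
    std::vector<double> local(n, 1.0);
    std::thread worker([&] {
        for (double& x : local) x *= 2.0;   // stand-in for real local work
    });

    worker.join();
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);   // complete the communication
    MPI_Finalize();
    return 0;
}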

-- 
Dean Michael C. Berris
Software Engineer, Friendster, Inc.
