

Subject: Re: [Boost-users] hybrid parallelism
From: Matthias Troyer (troyer_at_[hidden])
Date: 2010-11-04 11:34:19


On Nov 4, 2010, at 3:14, Brian Budge <brian.budge_at_[hidden]> wrote:

> Do these tasks share a lot of data? If they are really lightweight
> memory-wise, heavy computationally, and don't require fine-grained
> communication with each other, I'd go with David's suggestion, as it
> will be easier to write, and the performance won't be much different.
>
> If you use a lot of memory, need fine-grained chatter between tasks,
> or the tasks are pretty cheap, threads may be (much) better.
>
> Brian
>

I second this opinion, for several reasons:

First, mixing MPI with multithreading can be hard, since many MPI implementations are not thread-safe. Be sure to let only the master thread make MPI calls.
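
Below is a minimal sketch (not from the original post, using the plain MPI C API) of how one might request "funneled" thread support at startup and verify that the library actually provides it, so that only the master thread ever touches MPI; the level of thread support you get back depends on your MPI implementation.

    // Hedged sketch: ask for MPI_THREAD_FUNNELED so only the thread that
    // initialized MPI (the master) makes MPI calls; workers compute only.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char* argv[])
    {
        int provided = 0;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        if (provided < MPI_THREAD_FUNNELED) {
            std::fprintf(stderr,
                "MPI library lacks funneled thread support; "
                "run single-threaded or choose another strategy.\n");
        }

        // ... spawn worker threads here; they must never call MPI directly,
        // but hand their results to the master thread, which communicates.

        MPI_Finalize();
        return 0;
    }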

Second, hybrid parallelism adds another level of complexity. Starting M*N single-threaded MPI processes (rather than N processes with M threads each) is much easier, unless that wastes too much memory.
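
For comparison, here is a hedged Boost.MPI sketch of that flat approach: launch one single-threaded process per core (e.g. mpirun -np M*N) and let each rank take its own share of the work. task_count and the loop body are placeholders, not anything from the original discussion.

    // Hypothetical flat-MPI sketch: every rank is single-threaded, works on
    // a cyclic slice of the tasks, and the partial results are reduced.
    #include <boost/mpi.hpp>
    #include <functional>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
        mpi::environment env(argc, argv);
        mpi::communicator world;

        const int task_count = 1000;             // placeholder problem size

        double local_sum = 0.0;
        for (int task = world.rank(); task < task_count; task += world.size())
            local_sum += 0.5 * task;             // stand-in for real work

        // Combine the partial results on rank 0.
        double total = 0.0;
        mpi::reduce(world, local_sum, total, std::plus<double>(), 0);

        if (world.rank() == 0)
            std::cout << "total = " << total << std::endl;
        return 0;
    }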

Third, we have just recently benchmarked several multithreaded LAPACK routines in the Intel, AMD, and other LAPACK libraries and compared them to the MPI-based routines in ScaLAPACK. Surprisingly, the MPI implementations outperformed the multithreaded ones by a large margin. For me that shows that, at least in these applications, it is easier to write efficient parallel code with MPI than with multithreading, and that advantage easily outweighs any loss of efficiency due to the distributed-memory model. Keep in mind that between processes on the same computer, MPI uses a shared-memory mechanism to send data and does not go over the network.
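
If you want to convince yourself of that last point, one rough way is to time a simple ping-pong between two ranks started on the same machine. This is only an illustrative sketch (message size and repetition count are arbitrary), not part of the benchmark mentioned above.

    // Hypothetical ping-pong between ranks 0 and 1; on a single node most
    // MPI implementations carry these messages through shared memory.
    #include <boost/mpi.hpp>
    #include <boost/mpi/timer.hpp>
    #include <vector>
    #include <iostream>

    namespace mpi = boost::mpi;

    int main(int argc, char* argv[])
    {
        mpi::environment env(argc, argv);
        mpi::communicator world;

        const int reps = 1000;                   // arbitrary repetition count
        const int n = 65536;                     // arbitrary message length
        std::vector<double> buf(n);

        mpi::timer t;
        for (int i = 0; i < reps; ++i) {
            if (world.rank() == 0) {
                world.send(1, 0, &buf[0], n);
                world.recv(1, 1, &buf[0], n);
            } else if (world.rank() == 1) {
                world.recv(0, 0, &buf[0], n);
                world.send(0, 1, &buf[0], n);
            }
        }
        if (world.rank() == 0)
            std::cout << reps << " round trips in " << t.elapsed()
                      << " seconds" << std::endl;
        return 0;
    }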

Matthias

