Hello,

Following up on a previous thread asking whether to combine MPI and OpenMP to parallelize a large problem, I was advised to use MPI only: it is simpler, and MPI implementations use shared memory for communication between processes on the same box, so the cost is modest (though still higher than a single multithreaded process, where objects are naturally shared). Writing this, a question actually comes up:
1. In the "shared memory" used by several MPI processes on the same box, is an object (say, a list of numbers) actually shared between the two processes' address spaces? I guess not, unless one explicitly makes it so with a shared memory API (Unix-specific?).


So, I currently have a serial application with a GUI that runs some calculations.
My next step is to use Open MPI, with the help of the Boost.MPI wrapper library in C++, to parallelize those calculations.
There is a set of static data objects created once at startup or loaded from files.

2. What are the pros and cons of loading the static data objects independently in each MPI process, versus having only the master read/set up the static data and then broadcast it via MPI?

3. Is it possible to choose the binary archive instead of the text archive when serializing my user-defined types?
And where should I deal with the endianness issue, given that I may have a mix of Intel/SPARC/PowerPC CPUs?

Regards,