From: Manuel Jung (gzahl_at_[hidden])
Date: 2007-09-01 06:00:36
Ah okay, now I have a starting point, thanks a lot.
Greetings
Manuel Jung
> Hi Manuel,
>
> On Aug 31, 2007, at 9:34 AM, Manuel Jung wrote:
>> I would like to know how I can use Boost.MPI for IPC between
>> different PCs.
>> It is described in the introduction:
>> "Unlike communication in threaded environments or using a shared-
>> memory
>> library, Boost.MPI processes can be spread across many different
>> machines,
>> possibly with different operating systems and underlying
>> architectures."
>>
>> Is there an example of how it's done? I can't find one. Or isn't it
>> implemented yet?
>
> It's implemented, but how it's done depends on the underlying MPI
> implementation and your specific configuration. Boost.MPI builds on
> top of the C MPI interface, for which there are many different
> implementations. Here are some open-source possibilities:
>
> - Open MPI: http://www.open-mpi.org/
> - MPICH2: http://www-unix.mcs.anl.gov/mpi/mpich2/
>
> If you already have a cluster that you plan to use, there is probably
> an MPI implementation available on it. You would have to consult the
> documentation for that cluster or ask a system administrator how to
> use it.
>
> The simplest way to launch MPI jobs on multiple computers is to
> create a machine file containing the names of all of the computers,
> one per line. You can then provide that machine file to either
> "mpiexec" or "mpirun" (again, depending on the MPI implementation you
> use!) to launch your program, e.g.,
>
> mpiexec -machinefile myhosts -n 4 ./hello_world
>
> The "mpiexec" program launches jobs on multiple machines:
> "-machinefile myhosts" tells mpiexec to retrieve the list of machines
> from the file "myhosts", "-n 4" tells it to start 4 separate
> processes, and "./hello_world" is the program to execute in each of
> those processes.
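For reference, the "hello_world" program launched above could look like the
standard Boost.MPI starter example. This is a minimal sketch, assuming
Boost.MPI and an MPI implementation are installed, and that the program is
compiled with the MPI compiler wrapper (e.g. mpic++) and linked against
boost_mpi and boost_serialization:

```cpp
#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <iostream>

int main(int argc, char* argv[]) {
    // Initializes the MPI runtime; finalized automatically on destruction.
    boost::mpi::environment env(argc, argv);

    // The default communicator, corresponding to MPI_COMM_WORLD:
    // it spans all processes started by mpiexec/mpirun.
    boost::mpi::communicator world;

    // Each of the 4 processes prints its own rank (0..3) and the total count.
    std::cout << "Hello from process " << world.rank()
              << " of " << world.size() << "." << std::endl;
    return 0;
}
```

Compiled as, for example, "mpic++ hello_world.cpp -o hello_world
-lboost_mpi -lboost_serialization" and launched with the mpiexec command
shown above, each of the 4 processes (possibly on different machines from
the machine file) prints one line.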
>
> - Doug