
Subject: Re: [Boost-users] mpi isend to group
From: Philipp Kraus (philipp.kraus_at_[hidden])
Date: 2012-12-29 18:02:29


On 29.12.2012, at 23:32, Andreas Schäfer wrote:

> Hi,
>
> since you mention both, threads and MPI, I might add that the threading
> support of most MPI implementations contains some caveats, to say the
> least. It is certainly possible to emulate asynchronous collectives
> either by isend/irecv (although that's a really tough job given the
> level of optimizations at hand in any major MPI implementation) or to
> just use (possibly empty) synchronous broadcasts (as Riccardo
> suggested). But to get a clearer picture I'd like to ask for some
> details:
>
> - Which MPI are you using?

At the moment I use Open MPI (but it should also work under MS Windows).

> - How many MPI processes do your jobs contain?

The system has 64 cores (each core can run 2 threads).

> - Which threading level do you request via MPI_Init_thread()?

In my tests I use MPI_THREAD_SERIALIZED.
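Concretely, I set it up roughly like this (a sketch only, assuming the Boost.MPI
environment constructor that takes a threading level; the check of the granted level
is just illustrative):

#include <boost/mpi.hpp>
#include <cstdlib>
#include <iostream>

int main(int argc, char* argv[])
{
    // request MPI_THREAD_SERIALIZED through Boost.MPI
    boost::mpi::environment env(argc, argv, boost::mpi::threading::serialized);

    // check which level the MPI library actually granted
    if (boost::mpi::environment::thread_level() < boost::mpi::threading::serialized)
    {
        std::cerr << "MPI does not provide MPI_THREAD_SERIALIZED" << std::endl;
        return EXIT_FAILURE;
    }

    boost::mpi::communicator world;
    // ... start the worker threads and the thread loop here ...
    return 0;
}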

> - How do you ensure asynchronous progress? (i.e.: mostly MPI will only
> send/receive data when some MPI function is being called. Unless
> e.g. MPI_Test is being polled or your MPI supports progress threads,
> the bulk of communication won't be carried out until you call MPI_Wait())
>

Each MPI process runs some database calls inside its thread loop, so after each
database block I check whether there is a message from MPI rank 0, and if one
exists, all ranks should hit a barrier. Rank 0 checks after its database calls
whether there is any data and, if so, sends this data to the other ranks. So I cannot
use an MPI_Wait call, because that creates blocking communication.
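To make the intended pattern concrete, here is a rough sketch of the polling loop
(hypothetical names only: mpicom, myTag, runDatabaseBlock() and datasetExists() are
placeholders, not my actual code):

#include <boost/mpi.hpp>
#include <boost/optional.hpp>
#include <boost/thread.hpp>
#include <cstddef>
#include <vector>

namespace mpi = boost::mpi;

// placeholders for the application-specific parts
void runDatabaseBlock();
bool datasetExists(std::size_t& id);

void pollLoop(mpi::communicator& mpicom, int myTag, volatile bool& thread_is_running)
{
    while (thread_is_running)
    {
        runDatabaseBlock(); // the database calls of this block

        std::size_t id = 0;
        bool haveWork = false;

        if (mpicom.rank() == 0)
        {
            if (datasetExists(id)) // rank 0 checks for new data
            {
                // there is no "any_destination", so rank 0 posts one
                // non-blocking send per rank and completes them afterwards
                std::vector<mpi::request> reqs;
                for (int dst = 1; dst < mpicom.size(); ++dst)
                    reqs.push_back(mpicom.isend(dst, myTag, id));
                mpi::wait_all(reqs.begin(), reqs.end());
                haveWork = true;
            }
        }
        else if (boost::optional<mpi::status> s = mpicom.iprobe(0, myTag))
        {
            // a notification is pending, so this recv returns immediately
            mpicom.recv(s->source(), s->tag(), id);
            haveWork = true;
        }

        if (haveWork)
        {
            mpicom.barrier(); // synchronize all ranks before the working part
            // ... do something with id ...
        }

        boost::this_thread::yield();
    }
}

Of course the barrier (and rank 0's wait_all) still blocks until every rank has come
around its loop once, so the slowest rank determines when the working part starts.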

Thanks

Phil

>
>
> On 15:15 Thu 27 Dec, Philipp Kraus wrote:
>> Hello,
>>
>> I have a problem creating an mpi::isend call. I have a thread loop like this:
>>
>> while (thread_is_running)
>> {
>>     std::size_t id = 0;
>>     if (!mpicom.rank())
>>     {
>>         try {
>>             id = getID();
>>             mpicom.isend(id)
>>         } catch (...) { }
>>     } else {
>>         mpicom.ireceive(id);
>>     }
>>
>>     if (id > 0)
>>     {
>>         mpi::barrier();
>>         do something with id
>>     }
>>
>>     boost::thread::yield();
>> }
>>
>> I would like to create non-blocking communication over all nodes: my node with rank 0
>> checks if a dataset exists, and if it does, the id should be sent to all other hosts. After all hosts
>> have received this id, they should start the calculation. The nodes must be synchronized before the
>> "working part" is started, but if no data is sent to a host, it should do nothing.
>>
>> I don't know how to use isend to send a message to all hosts. I only know the mpi::any_source flag,
>> but I cannot find an any_destination. IMHO the ireceive call must be written with iprobe, like this:
>>
>> if (boost::optional<mpi::status> l_status = mpicom.iprobe( 0, myTag ))
>> {
>>     std::size_t id = 0;
>>     mpicom.recv( l_status->source(), l_status->tag(), id )
>> }
>>
>> Is there a way to create non-blocking group communication? How can I send a message to all nodes
>> when data exists, and receive it on each node, also with non-blocking communication?
>>
>> Thanks for your help
>>
>> Phil
>>
>
> --
> ==========================================================
> Andreas Schäfer
> HPC and Grid Computing
> Chair of Computer Science 3
> Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
> +49 9131 85-27910
> PGP/GPG key via keyserver
> http://www.libgeodecomp.org
> ==========================================================
>
> (\___/)
> (+'.'+)
> (")_(")
> This is Bunny. Copy and paste Bunny into your
> signature to help him gain world domination!


