
Hmm, I think it should work. Thanks though. The example I posted was modified from:

http://ci-tutor.ncsa.illinois.edu/content.php?cid=1137

Namely,

/* deadlock avoided */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank;
    MPI_Request request;
    MPI_Status status;
    double a[100], b[100];

    MPI_Init(&argc, &argv);                  /* Initialize MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* Get rank */

    if (myrank == 0) {
        /* Post a receive, send a message, then wait */
        MPI_Irecv(b, 100, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &request);
        MPI_Send(a, 100, MPI_DOUBLE, 1, 17, MPI_COMM_WORLD);
        MPI_Wait(&request, &status);
    } else if (myrank == 1) {
        /* Post a receive, send a message, then wait */
        MPI_Irecv(b, 100, MPI_DOUBLE, 0, 17, MPI_COMM_WORLD, &request);
        MPI_Send(a, 100, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD);
        MPI_Wait(&request, &status);
    }

    MPI_Finalize();                          /* Terminate MPI */
    return 0;
}

I tried again with one that more closely matches the example:

class Item {
private:
    friend class boost::serialization::access;

    template<class Archive>
    void serialize(Archive& ar, const unsigned int version) {
        ar & val;
    }

public:
    int val;

    Item() : val(1) {}
};

struct Receipt {
    boost::mpi::request request;
    std::vector<Item> items;
};

int main(int argc, char **argv)
{
    mpi::environment env(argc, argv);
    mpi::communicator world;
    Receipt receipt;
    vector<Item> msg(100000);
    int myRank = world.rank();

    if (myRank == 0) {
        receipt.request = world.irecv(1, 19, receipt.items);
        world.send(1, 17, msg);
        receipt.request.wait();
    } else if (myRank == 1) {
        receipt.request = world.irecv(0, 17, receipt.items);
        world.send(0, 19, msg);
        receipt.request.wait();
    }

    cout << "Done" << endl;
    return 0;
}

And I still get deadlock. Moreover, there is no deadlock if I irecv / send a very large array of doubles. I'd really appreciate it if someone else could try the example and see if it works.

Nick

On Jun 13, 2009, at 1:01 AM, Ryo IGARASHI wrote:
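One pattern that removes any dependence on send() returning early is to make both operations non-blocking and then complete the two requests together. Below is a sketch, not a tested fix for this report: it assumes Boost.MPI's isend() and wait_all() (both part of the Boost.MPI API), and it reuses the Item type and tags from the example above. It guarantees neither rank is stuck in a blocking send regardless of buffering, though whether it also avoids any serialized-type issue in Boost.MPI itself is a separate question.

#include <boost/mpi.hpp>
#include <boost/mpi/nonblocking.hpp>
#include <boost/serialization/vector.hpp>
#include <iostream>
#include <vector>

namespace mpi = boost::mpi;

class Item {
    friend class boost::serialization::access;

    template<class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & val;
    }

public:
    int val;
    Item() : val(1) {}
};

int main(int argc, char **argv)
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    std::vector<Item> msg(100000), items;
    int peer    = (world.rank() == 0) ? 1 : 0;
    int sendTag = (world.rank() == 0) ? 17 : 19;  /* tags as in the example */
    int recvTag = (world.rank() == 0) ? 19 : 17;

    /* Post both operations without blocking, then complete them together. */
    mpi::request reqs[2];
    reqs[0] = world.irecv(peer, recvTag, items);
    reqs[1] = world.isend(peer, sendTag, msg);
    mpi::wait_all(reqs, reqs + 2);

    std::cout << "Done" << std::endl;
    return 0;
}

Run with mpirun -np 2, as before.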
Hi, Nick,
# I am not an MPI expert.
On Sat, Jun 13, 2009 at 4:16 AM, Nick Collier<nick.collier@gmail.com> wrote:
> I'm running into an issue where an irecv followed by a send results in deadlock. A simple test case,
> Run with mpirun -np 2, this never completes. It does complete with vector<Item> msg(10), however.
According to the MPI standard, MPI_Irecv() only completes when MPI_Wait() (or MPI_Test()) succeeds, and MPI_Send() is allowed to block until the message is buffered or the matching receive is posted. So the deadlock is no surprise.
I think you shouldn't rely on the behavior seen with small objects: it is only the buffering inside the MPI implementation that avoids the deadlock in the small-object case.
See http://www.mpi-forum.org/docs/ for the specification.
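This buffering behavior can be illustrated without MPI at all. The toy sketch below uses plain C++11 threads (an analogy, not MPI code): a bounded channel plays the role of the implementation's eager buffer. With capacity 1, the send-then-receive ordering on both "ranks" completes, like the small-message case; with capacity 0 (pure rendezvous) both sends would block forever, which is exactly the large-message deadlock.

#include <cassert>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

/* Bounded channel standing in for an MPI implementation's eager buffer:
 * send() blocks while the buffer is full, the way MPI_Send falls back to
 * rendezvous once a message no longer fits the eager limit. */
class Channel {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> q;
    std::size_t capacity;

public:
    explicit Channel(std::size_t cap) : capacity(cap) {}

    void send(int v) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return q.size() < capacity; });
        q.push(v);
        cv.notify_all();
    }

    int recv() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        int v = q.front();
        q.pop();
        cv.notify_all();
        return v;
    }
};

int r0 = -1, r1 = -1;  /* what each "rank" received */

int main() {
    /* Capacity 1 == enough eager buffer: both sends return immediately,
     * so send-then-recv on both sides completes. With capacity 0, both
     * sends would wait forever for a receiver -- the deadlock case. */
    Channel ch01(1);  /* "rank 0" -> "rank 1" */
    Channel ch10(1);  /* "rank 1" -> "rank 0" */

    std::thread rank0([&] { ch01.send(17); r0 = ch10.recv(); });
    std::thread rank1([&] { ch10.send(19); r1 = ch01.recv(); });
    rank0.join();
    rank1.join();

    assert(r0 == 19 && r1 == 17);
    std::cout << r0 << " " << r1 << std::endl;  /* prints "19 17" */
    return 0;
}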
Best regards,
--
Ryo IGARASHI, Ph.D.
rigarash@gmail.com

_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users