Subject: Re: [Boost-mpi] mixing boost serialization with C MPI functions
From: Jiaxin Han (hanjiaxin_at_[hidden])
Date: 2015-11-19 13:52:02


To add some information: the error turns out to be triggered when the data is
serialized into the packed_oarchive, which calls MPI_Alloc_mem internally:

oa << sendbuf;

Setting

export MPI_RANKMEMSIZE=100000000

appears to help with our Platform MPI installation.

Jiaxin

2015-11-19 16:56 GMT+00:00 Jiaxin Han <hanjiaxin_at_[hidden]>:

> Hi Lorenz,
>
> Thank you very much! Your code works! But it seems I have to manually
> allocate the memory before unpacking the data from the archive? Then, if my
> data is a vector of vectors, does that mean I have to resize every vector as
> well?
>
> Back to the MPI error: sorry, I was wrong to say that it is related to
> Boost.MPI. With the new code passing MPI_PACKED data, I am receiving the
> same out-of-memory error. So it seems my MPI installation has a problem with
> the message size for packed data. Thanks to both Lorenz and Alain!
>
> Jiaxin
>
>
> 2015-11-19 15:56 GMT+00:00 Lorenz Hübschle-Schneider <huebschle_at_[hidden]>:
>
>> Hi,
>>
>> On 19/11/15 16:07, Jiaxin Han wrote:
>>
>>> Hi,
>>>
>>> Could anyone point me to an example of mixing boost serialization with C
>>> MPI functions?
>>>
>>
>> you need to send the archive size first if it's not precisely known. Try
>> something like this:
>>
>> mpi::packed_oarchive oa(comm);
>> oa << sendbuf;
>> auto sendptr = const_cast<void*>(oa.address());
>> // cast to int because MPI uses ints for sizes like it's still 1990
>> int sendsize = static_cast<int>(oa.size());
>> MPI_Send(&sendsize, 1, MPI_INT, 1, 0, comm);
>> MPI_Send(sendptr, sendsize, MPI_PACKED, 1, 0, comm);
>>
>> The receiving side would look similar:
>>
>> mpi::packed_iarchive ia(comm);
>> int recvsize;
>> MPI_Recv(&recvsize, 1, MPI_INT, 0, 0, comm, MPI_STATUS_IGNORE);
>> ia.resize(recvsize);
>> auto recvptr = ia.address();
>> MPI_Recv(recvptr, recvsize, MPI_PACKED, 0, 0, comm, MPI_STATUS_IGNORE);
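>>
>> Not shown above: the last step would presumably be to deserialize from the
>> archive into the target object, mirroring oa << sendbuf on the sender. A
>> minimal sketch, assuming recvbuf has the same (hypothetical) type as sendbuf:
>>
>> // Boost.Serialization resizes std::vector (including nested vectors) on
>> // load, so only the raw archive buffer needed the explicit resize() above.
>> std::vector<std::vector<double>> recvbuf; // hypothetical payload type
>> ia >> recvbuf;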
>>
>>
>>> Do I have to make sure the buffer is big enough, or will the oarchive
>>> handle the memory automatically? If the former, what is the correct memory
>>> size to hold a vector? I guess it should hold not just vec.data(), but also
>>> vec.size().
>>>
>>
>> It does this automatically on the sending side. On the receiver, you have
>> to know how much data to expect and reserve enough memory accordingly.
>> This is what the first transmission in my code above is for.
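>>
>> An alternative sketch (not used in the example above, and assuming the
>> packed buffer arrives as a single message): probe for the incoming size
>> instead of sending it in a separate message.
>>
>> MPI_Status status;
>> MPI_Probe(0, 0, comm, &status); // block until a matching message arrives
>> int recvsize;
>> MPI_Get_count(&status, MPI_PACKED, &recvsize); // size in bytes (MPI_PACKED)
>> mpi::packed_iarchive ia(comm);
>> ia.resize(recvsize);
>> MPI_Recv(ia.address(), recvsize, MPI_PACKED, 0, 0, comm, MPI_STATUS_IGNORE);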
>>
>>> And lastly, oa does not appear to be the correct variable to pass to
>>> MPI_Send. Then what should I pass to MPI_Send after creating the archive?
>>>
>>
>> You pass in the pointer and its length -- oa.address() and oa.size() in
>> the example above.
>>
>>> I am asking because the Boost.MPI installation on our server appears to
>>> have a limitation on the message size. For example, the attached code
>>> reports an error as:
>>>
>>> terminate called after throwing an instance of
>>>
>>> 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::mpi::exception>
>>> >'
>>> what(): MPI_Alloc_mem: MPI_Alloc_mem: Out of "special" (shared)
>>> memory
>>> MPI Application rank 0 killed before MPI_Finalize() with signal 6
>>>
>>> This appears to be a problem only with the Boost installation on the
>>> server. The code runs correctly on my local machine.
>>>
>>
>> I'm no expert but to me this sounds like an issue with the MPI
>> installation on your servers. Note that it's not Boost.MPI that's giving
>> you an error - the error is from the C function that it calls.
>>
>> Cheers,
>> Lorenz
>>
>
>


