From: Matthias Troyer (troyer_at_[hidden])
Date: 2007-11-30 08:45:22
If the value type of your vector is an MPI data type then skeleton/
content are not needed at all. All you need to ensure is that the
receiving vectors on the slaves have the correct size before calling
gather or scatter.
On Nov 30, 2007, at 11:40 AM, David Osipyan wrote:
> Gather/scatter works fine!
> Is there any performance gain when skeleton/content
> is used instead of gather/scatter?
> -----Original Message-----
> From: Matthias Troyer [mailto:troyer_at_[hidden]]
> Sent: Friday, November 30, 2007 11:25 AM
> To: boost_at_[hidden]; David Osipyan
> Subject: Re: [boost] [mpi]
> On 29 Nov 2007, at 16:46, David Osipyan wrote:
>> I am trying to use boost.mpi in my particle-in-cell simulation code.
>> I have a vector of particles. How can I send parts of this vector
>> from the root process to all of the slaves, and send them back to the root?
>> Is it possible via mpi::scatter() and mpi::skeleton/content?
>> I need to send particles from range 0..n to the first slave, n+1 ..
>> 2*n+1 to the second slave, etc. After processing, each slave shall
>> return the new values of its part to the root.
> Yes, you can use scatter and gather to do that. If your data type is
> an "MPI data type", and you have specialized is_mpi_datatype<T> to be
> mpl::true_ to indicate this, then no skeleton/content is required;
> you can just call gather/scatter on a std::vector.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk