From: Matthias Troyer (troyer_at_[hidden])
Date: 2006-09-16 04:43:10
On Sep 16, 2006, at 9:21 AM, Geoffrey Irving wrote:
> As far as I can tell, there are three choices when sending a message:
> 1. Send it as MPI_PACKED, in which case structure and content can be
> encoded together arbitrarily.
This is the default if no MPI datatype exists and the user does not
employ the skeleton&content trick.
> 2. Send it as a datatype with the entire structure known
> This allows easy use of nonblocking sends and receives.
This is the default if there exists an MPI datatype for the structure.
> 3. Send it as a datatype with the structure determined up to the
> total size of the message. This requires getting the size with
> MPI_Probe, then building a datatype, then receiving it.
> The third option allows you to send a variable size vector with no
> extra explicit buffer. The same applies to a vector plus a constant
> amount of data (such as pair<int,vector<int> >). That would be quite
> useful, but probably difficult to work out automatically.
This could be implemented as an optimization for, e.g., std::vector and
std::valarray. Our general mechanism, which works even for more complex
data structures such as a pair of two vectors of arbitrary length, is the
skeleton&content mechanism: we first send serialized information about
the sizes, and can then send the content (multiple times) using an MPI
data type.
> Unfortunately, this trick interacts very poorly with multiple
> nonblocking messages, since the only ways to wait for one of several
> messages sent this way are to either busy wait or use the same tag
> for all messages. This restriction probably makes it impossible to
> hide this inside the implementation of a general nonblocking send
For std::vector and std::valarray of MPI datatypes, I can see your
trick being generally useful; it could be implemented as a special
optimization (Doug, what do you think?). For more general types, I
believe the skeleton&content mechanism is the appropriate
generalization of the third option.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk