From: Ian McCulloch (ianmcc_at_[hidden])
Date: 2006-09-18 12:35:45
Matthias Troyer wrote:
> On Sep 17, 2006, at 7:56 PM, Robert Ramey wrote:
>> Note that you've used packed_archive - I would use mpi_archive
>> instead. I think this is a better description of what it is.
> I still prefer mpi::packed_archive, since there can also be other MPI
> archives. One possible addition to speed up things on homogeneous
> machines might be just an mpi::binary_archive, using a binary buffer.
Yes, this is a realistic idea; almost all MPI programs run on
homogeneous clusters anyway. Even in a heterogeneous environment, there
remains the question of whether one can do better than MPI_Pack/Unpack
by using some kind of 'portable' archive (although 'transportable' might be
a better word). In principle the answer is definitely yes, since the
conversion functions can be inlined.
>> Really it's only a name change - and "packed archive" is already inside
>> an mpi namespace so it's not a huge issue. BUT I'm wondering if the idea
>> of rendering C++ data structures as MPI primitives should be more
>> orthogonal to the MPI protocol itself. That is, might it not sometimes
>> be convenient to save such serializations to disk? Wouldn't this provide
>> a portable binary format for free? (Lots of people have asked for this
>> but no one has been sufficiently interested to actually invest the
>> required effort.)
> As Doug Gregor pointed out this is not possible since the format is
> implementation-defined, and can change from one execution to another.
This is only true for MPI-1.1. MPI-2 supports multiple data representations
and adds the functions MPI_Pack_external and MPI_Unpack_external to convert
to/from the "external32" format defined in section 9.5.2 of the MPI-2
standard. The intent of this is to be able to transfer data between MPI
implementations. Also, as part of the file I/O interface, MPI-2 also
allows user-defined representations so in principle it would be possible to
make some kind of adaptor to read a different archive format via the MPI-2
file I/O. Not that MPI file I/O seems to be used much anyway...
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk