From: Matthias Troyer (troyer_at_[hidden])
Date: 2006-09-16 07:02:21
On Sep 16, 2006, at 12:09 PM, Markus Blatt wrote:
> This skeleton&content approach sounds pretty powerful to me. Is there
> a way for the user to tune this behaviour?
There is no such mechanism provided explicitly by Boost.MPI, but you
can tune the behavior almost arbitrarily with serialization wrappers
and archive adaptors. For example, you could filter out certain
members depending on whether you are on the sending or receiving side.
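(A minimal sketch of what such a wrapper can look like, using the
split save/load idiom from Boost.Serialization; the struct and its
members here are made up for illustration:)

  #include <boost/serialization/access.hpp>
  #include <boost/serialization/split_member.hpp>

  // Hypothetical type: the cached member is skipped when sending
  // and recomputed on the receiving side.
  struct particle {
      double x, y, z;
      double cached_energy; // derived data, not worth transmitting

      template <class Archive>
      void save(Archive& ar, const unsigned int /*version*/) const {
          ar & x & y & z; // sending side: omit the cache
      }

      template <class Archive>
      void load(Archive& ar, const unsigned int /*version*/) {
          ar & x & y & z;
          cached_energy = x*x + y*y + z*z; // receiving side: rebuild
      }

      BOOST_SERIALIZATION_SPLIT_MEMBER()
      friend class boost::serialization::access;
  };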
> Until now I always had the impression that you only allow the sending
> and receiving of complete objects and/or their structure.
> But often there are other scenarios, too:
>
> One maybe wants to send just parts of an object. And it might be the
> case that the local structure of your object is different from your
> global structure, e.g. it might need redistribution during
> communication.
>
> Let me give you an example:
>
> Consider a distributed array or vector. Each entry of the global
> vector has a consecutive, zero-based local index l and a
> corresponding global index g, together with a tag specifying whether
> the process is the owner of the value (meaning that it can compute a
> new value from consistent data without communication) or not.
>
> Process 0:
> local  global  tag
>   0      0     owner
>   1      3     owner
>   2      4     notowner
>
> Process 1:
> local  global  tag
>   0      1     owner
>   1      2     owner
>   2      3     notowner
>
> This would represent a global array of 5 entries.
>
> In our case we have the following communication patterns:
>
> owner to notowner
> Each process sends, in one message, all entries of the array that it
> tags as owner and that the other process tags as notowner. The
> matching is done via the global index.
>
> In this case process 0 sends to process 1 the value at local index 1
> (global index 3), and process 1 stores it at local index 2. If that
> is not clear, take a look at
> http://hal.iwr.uni-heidelberg.de/~mblatt/communication.pdf (early
> draft) for a more precise notion of index sets.
>
> Would something like this be possible or can one just send and receive
> objects with the same layout?
The easiest solution in your case is to just send the object at local
index 1 and receive into the object at local index 2 by calling
send/receive with these sub-objects.
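For instance, with the layout above (the vector name and the tag value
are just placeholders):

  #include <boost/mpi.hpp>
  #include <vector>

  namespace mpi = boost::mpi;

  int main(int argc, char* argv[]) {
      mpi::environment env(argc, argv);
      mpi::communicator world;
      std::vector<double> v(3); // the local part of the array
      const int tag = 0;

      if (world.rank() == 0)
          world.send(1, tag, v[1]); // owner entry, global index 3
      else if (world.rank() == 1)
          world.recv(0, tag, v[2]); // stored at local index 2
  }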
For a more general case, I assume that you want to use different
MPI_Datatypes on the sending and receiving sides, and create them in
some automatic way? It depends on the details, but I think it should
be possible to automate your mechanism. I would propose introducing a
new class for these local/global arrays and providing a special
serialize function for them.
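Roughly along these lines (the class layout and names are only a
sketch; the point is that save ships the owner entries as
(global index, value) pairs and load matches them by global index):

  #include <boost/serialization/access.hpp>
  #include <boost/serialization/split_member.hpp>
  #include <boost/serialization/utility.hpp> // std::pair
  #include <boost/serialization/vector.hpp>  // std::vector
  #include <cstddef>
  #include <map>
  #include <utility>
  #include <vector>

  class local_global_array {
      std::vector<double> values;       // indexed by local index
      std::vector<int>    global_index; // local -> global map
      std::vector<bool>   is_owner;     // tag per local index

      template <class Archive>
      void save(Archive& ar, const unsigned int /*version*/) const {
          // Sending side: pack only the owner entries.
          std::vector<std::pair<int, double> > out;
          for (std::size_t l = 0; l < values.size(); ++l)
              if (is_owner[l])
                  out.push_back(std::make_pair(global_index[l],
                                               values[l]));
          ar & out;
      }

      template <class Archive>
      void load(Archive& ar, const unsigned int /*version*/) {
          // Receiving side: assumes the local index set is already
          // set up; match incoming entries by global index and store
          // them at the corresponding notowner positions.
          std::vector<std::pair<int, double> > in;
          ar & in;
          std::map<int, std::size_t> where;
          for (std::size_t l = 0; l < global_index.size(); ++l)
              where[global_index[l]] = l;
          for (std::size_t i = 0; i < in.size(); ++i) {
              std::map<int, std::size_t>::const_iterator p =
                  where.find(in[i].first);
              if (p != where.end() && !is_owner[p->second])
                  values[p->second] = in[i].second;
          }
      }

      BOOST_SERIALIZATION_SPLIT_MEMBER()
      friend class boost::serialization::access;
  };

Boost.MPI would then pick these functions up automatically when you
call send and recv on such an object.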
Matthias