Question
According to the MPI 2.2 standard, Section 4.1: to create a new datatype we have to define a typemap, which is a sequence of (type, displacement) pairs. The displacements are not required to be positive, increasing, or distinct.
- Suppose I define a typemap with the following sequence: {(double, 0), (char, 0)}. This does not make sense, yet it is possible. How can a standard provide so much flexibility?
Answer 1:
If that's the only thing you find confusing about typemaps, you're smarter than I am. But as to this particular example -- C unions are exactly this; why shouldn't typemaps allow it?
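For instance (a quick illustration I'm adding, not part of the original answer), a C union puts every member at offset zero, which is exactly the layout the questioner's typemap describes:

union overlay {   /* both members live at offset 0, mirroring {(double, 0), (char, 0)} */
    double d;
    char   c;
};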
Answer 2:
I started writing a comment but ran out of space, so here goes. First off, this code compiles and runs on the HP-MPI implementation that I have access to:
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    /* Build the {(double, 0), (char, 0)} typemap from the question.
       (MPI_Type_struct is the older MPI-1 name; MPI_Type_create_struct
       is its MPI-2 replacement.) */
    int count = 2;
    int lengths[] = { 1, 1 };
    MPI_Aint disp[] = { 0, 0 };
    MPI_Datatype types[] = { MPI_DOUBLE, MPI_CHAR };
    MPI_Datatype weird_type;
    MPI_Type_struct(count, lengths, disp, types, &weird_type);
    MPI_Type_commit(&weird_type);

    MPI_Finalize();
    return 0;
}
However, the {(double, 0), (char, 0)} typemap won't behave like a union: if you send data with this typemap, the same memory address will first be interpreted as a double, then as a char, and both values will be sent (assuming the implementation doesn't implode).
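As a rough way to see this (a sketch I'm adding, not part of the original answer; it assumes the weird_type committed in the program above, plus #include <stdio.h>), you can ask MPI how many bytes the type actually marshals:

int type_size;
MPI_Aint lb, extent;
MPI_Type_size(weird_type, &type_size);          /* bytes actually sent: typically 9 (8 + 1) */
MPI_Type_get_extent(weird_type, &lb, &extent);  /* extent spans only the overlap, typically 8 */
printf("size = %d, lb = %ld, extent = %ld\n", type_size, (long) lb, (long) extent);

The size being larger than the extent is the giveaway that both overlapping entries get packed into the message.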
I can only think of one plausible use case for this sort of behavior: consider the typemap {(MPI_CHAR, 0), (MPI_BYTE, 0)}. Sending a char variable using this type will perform representation conversion in the first case, but not in the second: this way, you can check whether the character encoding is the same on the sender and receiver machines. Of course there are other ways to do this, but still, the option's there. Though the more likely scenario is that the standard simply doesn't concern itself with exotic special cases.
Also, regarding negative displacements: I actually used these before when I had to pass data from a linked data structure (e.g. a graph). This is not for the faint of heart, but here's the pseudocode of my algorithm:
std::vector<MPI_Aint> displacements;
for (each node n in the graph)
{
    if (n needs to be sent)
    {
        displacements.push_back(<MPI address of n>);
    }
}
MPI_Aint base = displacements[0];
for (std::size_t i = 0; i < displacements.size(); i++)
{
    // compute element #i's offset from the first one (may be negative)
    displacements[i] -= base;
}
// create an HIndexed datatype where each block consists of one node
// and begins at the corresponding address in 'displacements'
// send the nodes as one element of the previously defined type,
// beginning at the address of the first node
Hopefully you can see the importance of negative displacements from there: there's no telling where the various nodes are in memory, so it's entirely possible that some nodes will be at earlier locations than the one at which we begin traversing the graph.
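To make that concrete, here is a small self-contained sketch of the same idea (my reconstruction rather than the answerer's actual code; the Node layout, the array of selected node pointers, and shipping nodes as raw MPI_BYTE blocks on a homogeneous system are all assumptions):

#include <mpi.h>
#include <stdlib.h>

typedef struct Node { double weight; int id; } Node;   /* assumed node layout */

/* Build an HIndexed type that picks out 'count' nodes, with displacements
   taken relative to the first selected node (so they may be negative). */
static MPI_Datatype selection_type(Node *nodes[], int count)
{
    int      *lens  = malloc(count * sizeof *lens);
    MPI_Aint *disps = malloc(count * sizeof *disps);
    MPI_Aint  base, addr;
    MPI_Datatype t;

    MPI_Get_address(nodes[0], &base);
    for (int i = 0; i < count; i++) {
        MPI_Get_address(nodes[i], &addr);
        lens[i]  = (int) sizeof(Node);   /* one raw node per block */
        disps[i] = addr - base;          /* negative if nodes[i] sits below nodes[0] */
    }

    MPI_Type_create_hindexed(count, lens, disps, MPI_BYTE, &t);
    MPI_Type_commit(&t);

    free(lens);
    free(disps);
    return t;
}

/* Usage:
   MPI_Datatype t = selection_type(nodes, count);
   MPI_Send(nodes[0], 1, t, dest, tag, MPI_COMM_WORLD);
   MPI_Type_free(&t);                                    */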
Source: https://stackoverflow.com/questions/4330074/typemap-rule-is-confusing