How does MPI_Reduce with MPI_MIN work?


Question


If I have this code:

#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank;
    int result = 0;
    int num[6] = {1, 2, 4, 3, 7, 1};

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank != 0) {
        MPI_Reduce(num, &result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
    } else {
        MPI_Reduce(num, &result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
        printf("result = %d\n", result);
    }

    MPI_Finalize();
}

the printed result is 1.

But if num[0] = 9, then the result is 9.

I read that to solve this problem I must define the receive variable result as an array. I can't understand how the function MPI_Reduce works with MPI_MIN. Why, if num[0] is not the smallest number, must I define result as an array?


Answer 1:


MPI_Reduce performs a reduction over the members of the communicator, not over the elements of the local array. sendbuf and recvbuf must both be of the same size.

I think the standard says it best:

Thus, all processes provide input buffers and output buffers of the same length, with elements of the same type. Each process can provide one element, or a sequence of elements, in which case the combine operation is executed element-wise on each entry of the sequence.

MPI does not take the minimum of all the elements in the array; you have to do that manually.




Answer 2:


You can use MPI_MIN to obtain the minimum value among those passed via the reduction. Let's examine the function declaration:

int MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype
                datatype, MPI_Op op, int root, MPI_Comm comm)

Each process sends its value (or array of values) using the buffer sendbuf. The process identified by the root id receives the buffers and combines them into the buffer recvbuf. The number of elements sent by each process is specified in count, so recvbuf must be allocated with size sizeof(datatype) * count. If each process has only one integer to send (count = 1), then recvbuf is also a single integer; if each process has two integers, then recvbuf is an array of two integers. See this nice post for further explanations and nice pictures.

Now it should be clear that your code is wrong: sendbuf and recvbuf must be of the same size, and there is no need for the rank check, since every process calls MPI_Reduce with the same arguments. Simply put, recvbuf is meaningful only for the root process, and sendbuf for all of them. In your example you can assign one or more elements of the array to each process and then compute the minimum value (if there are as many processes as values in the array) or the array of minimum values (if there are more values than processes).

Here is a working example that illustrates the usage of MPI_MIN, MPI_MAX and MPI_SUM (slightly modified from this), in the case of simple values (not arrays). Each process does some work, depending on its rank, and sends the root process the time spent doing it. The root process collects the times and outputs their min, max and average values.

#include <stdio.h>
#include <mpi.h>

int myrank, numprocs;

/* just a function to waste some time */
float work()
{
    float x, y = 0;  /* y must be initialized before the += below */
    if (myrank%2) {
        for (int i = 0; i < 100000000; ++i) {
            x = i/0.001;
            y += x;
        }
    } else {
        for (int i = 0; i < 100000; ++i) {
            x = i/0.001;
            y += x;
        }
    }    
    return y;
}

int main(int argc, char **argv)
{
    int node;

    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);

    printf("Hello World from Node %d\n",node);

   /*variables used for gathering timing statistics*/
    double mytime,   
           maxtime,
           mintime,
           avgtime;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Barrier(MPI_COMM_WORLD);  /*synchronize all processes*/

    mytime = MPI_Wtime();  /*get time just before work section */
    work();
    mytime = MPI_Wtime() - mytime;  /*get time just after work section*/

    /*compute max, min, and average timing statistics*/
    MPI_Reduce(&mytime, &maxtime, 1, MPI_DOUBLE,MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &mintime, 1, MPI_DOUBLE, MPI_MIN, 0,MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &avgtime, 1, MPI_DOUBLE, MPI_SUM, 0,MPI_COMM_WORLD);

    /* plot the output */
    if (myrank == 0) {
        avgtime /= numprocs;
        printf("Min: %lf  Max: %lf  Avg:  %lf\n", mintime, maxtime,avgtime);
    }

    MPI_Finalize();

    return 0;
}

If I run this on my OSX laptop, this is what I get:

urcaurca$ mpirun -n 4 ./a.out
Hello World from Node 3
Hello World from Node 0
Hello World from Node 2
Hello World from Node 1
Min: 0.000974  Max: 0.985291  Avg:  0.493081


Source: https://stackoverflow.com/questions/37930116/how-does-mpi-reduce-with-mpi-min-work
