MPI communication complexity

Submitted by 霸气de小男生 on 2021-02-07 03:59:37

Question


I'm studying the communication complexity of a parallel implementation of Quicksort in MPI and I've found something like this in a book:

"A single process gathers p regular samples from each of the other p-1 processes. Since relatively few values are being passed, message latency is likely to be the dominant term of this step. Hence the communication complexity of the gather is O(log p)" (O is actually a theta and p is the number of processors).

The same claim is made for the broadcast step.

Why do these collective communications have O(log p) complexity? Is it because the communication is done using some kind of tree-based hierarchy?

What if latency is not the dominant term and a lot of data is being sent? Would the complexity then be O(n log p), where n is the size of the data being sent divided by the available bandwidth?

And, what about the communication complexity of an MPI_Send() and an MPI_Recv()?

Thanks in advance!


Answer 1:


Yes. Gather and scatter are implemented using, for instance (depending on the particular MPI implementation), binomial trees, hypercubes, linear arrays, or 2D square meshes. An all-gather operation may be implemented using a hypercube, and so on.
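For intuition, here is a minimal sketch of a binomial-tree broadcast from rank 0, built only from point-to-point calls (real MPI_Bcast implementations are more sophisticated, but the structure is the same). Each round doubles the number of ranks that hold the data, so ⌈log₂ p⌉ rounds suffice, which is exactly where the log p term comes from:

```c
#include <mpi.h>

/* Binomial-tree broadcast of n ints from rank 0 (sketch only). */
void tree_bcast(int *buf, int n, MPI_Comm comm)
{
    int rank, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &p);

    /* In round k (mask = 1, 2, 4, ...), every rank that already
       holds the data sends it to the rank 'mask' positions away. */
    for (int mask = 1; mask < p; mask <<= 1) {
        if (rank < mask) {
            if (rank + mask < p)
                MPI_Send(buf, n, MPI_INT, rank + mask, 0, comm);
        } else if (rank < 2 * mask) {
            MPI_Recv(buf, n, MPI_INT, rank - mask, 0, comm,
                     MPI_STATUS_IGNORE);
        }
    }
}
```

After round k, 2^k ranks hold the data, so all p ranks are reached in ⌈log₂ p⌉ rounds; each round moves all n items once, which matches the broadcast formula below.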

For a gather or scatter, let lambda (λ) be the latency and beta (β) the bandwidth. Then ⌈log p⌉ steps are required. Suppose n integers in total are being moved, each represented using 4 bytes. The time to send them is

λ⌈log p⌉ + (4n/β) · (p − 1)/p

This is O(log p) when n = O(1) and O(log p + n) otherwise. For a broadcast, the time required is

(λ + 4n/β) ⌈log p⌉

which is O(log p) when n = O(1) and O(n log p) otherwise.
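As a concrete (made-up) illustration: take λ = 50 µs, β = 10⁸ bytes/s, p = 64 and n = 10⁶ integers. The broadcast then costs roughly (50 µs + 4·10⁶ / 10⁸ s) · log₂ 64 ≈ (50 µs + 40 ms) · 6 ≈ 240 ms, dominated by the n log p bandwidth term, while with n = 1 the same expression is about 300 µs of pure latency.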

Finally, for point-to-point communications like MPI_Send() and MPI_Recv(), sending n integers takes λ + 4n/β under the same model, so the communication complexity is O(n). When n = O(1), the complexity is obviously O(1).
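For completeness, a minimal self-contained sketch of such a point-to-point exchange (the message size n is arbitrary; run with at least two ranks, e.g. mpirun -np 2):

```c
#include <mpi.h>
#include <stdlib.h>

/* Rank 0 sends n integers to rank 1; under the model above the
   transfer costs lambda + 4n/beta, i.e. O(n). */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1000;                  /* message size, in ints */
    int *buf = malloc(n * sizeof(int));

    if (rank == 0) {
        for (int i = 0; i < n; i++) buf[i] = i;
        MPI_Send(buf, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, n, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```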



Source: https://stackoverflow.com/questions/10625643/mpi-communication-complexity
