openmpi

How do I optimize the parallelization of Monte Carlo data generation with MPI?

Submitted by 混江龙づ霸主 on 2020-08-10 20:16:36
Question: I am currently building a Monte Carlo application in C++ and I have a question regarding parallelization with MPI. The process I want to parallelize is the MC generation of data. To get good precision in my final results, I specify a goal number of data points. Each data point is generated independently, but individual points may take vastly different amounts of time. How do I organize the parallelization and workload distribution of the data generation most efficiently? What I have done so far: So far …

mpi4py installation error: Building wheel for mpi4py (setup.py) … error message while installing stable_baselines

Submitted by 天涯浪子 on 2020-01-30 07:27:05
Question: Running the command pip install mpi4py==2.0.0 directly fails with "Building wheel for mpi4py (setup.py) … error" while installing stable_baselines.

Cause: no MPI implementation is installed. Commonly used MPI implementations include Open MPI and MPICH.

Fix:
1. Install Open MPI
   a. Download the source archive: wget https://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.2.tar.gz
   b. Move the archive to the home directory, then right-click it and choose "Extract Here"
   c. Install the build dependency: sudo apt-get install libibnetdisc-dev
   d. Configure the build: ./configure
   e. Build and install Open MPI: make && sudo make install
   f. Add the shared-library path to /etc/profile: sudo gedit /etc/profile and append the line: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
   g. Reload the profile so the change takes effect: source /etc/profile
2. Install mpi4py: pip install mpi4py==2.0

MPI not running in parallel in a FORTRAN code

Submitted by 这一生的挚爱 on 2020-01-21 09:52:13
Question: I am trying to install Open MPI on my Ubuntu (14.04) machine, and I thought I had succeeded, because I can run codes with mpirun, but I recently noticed that it is not truly running in parallel. I installed Open MPI with the following options:

./configure CXX=g++ CC=gcc F77=gfortran \
            F90=gfortran \
            FC=gfortran \
            --enable-mpi-f77 \
            --enable-mpi-f90 \
            --prefix=/opt/openmpi-1.6.5
make all
sudo make install

As I said, I ran a code (not written by myself) and it seemed to work …

Segmentation fault OpenMPI

Submitted by 丶灬走出姿态 on 2020-01-17 08:30:51
Question: I include a header file utils.h with a function linspace. My main.cpp file is as follows:

#include <iostream>
#include <utils.h>
#include <mpi.h>

using namespace std;

int main(int argc, const char * argv[]) {
    float start = 0., end = 1.;
    unsigned long int num = 100;
    double *linspaced;
    float delta = (end - start) / num;
    int size, rank;
    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Status status;
    // These have to be converted into …

MPI's Scatterv operation

Submitted by 最后都变了- on 2020-01-17 03:08:27
Question: I'm not sure that I am correctly understanding what MPI_Scatterv is supposed to do. I have 79 items to scatter among a variable number of nodes. However, when I use the MPI_Scatterv command I get ridiculous numbers (as if the array elements of my receiving buffer were uninitialized). Here is the relevant code snippet:

MPI_Init(&argc, &argv);
int id, procs;
MPI_Comm_rank(MPI_COMM_WORLD, &id);
MPI_Comm_size(MPI_COMM_WORLD, &procs);
//Assign each file a number and figure out how many files …

MPI_Scatter - not working as expected

Submitted by 自闭症网瘾萝莉.ら on 2020-01-16 06:58:12
Question: I am writing my first program using MPI and I am having a hard time trying to properly send data to other processes using MPI_Scatter, modify it, and receive the values using MPI_Gather. The code is as follows:

int** matrix;
int m = 2, n = 2;
int status;
// could have been int matrix[2][2];
matrix = malloc(m*sizeof(int*));
for(i = 0; i < m; i++) {
    matrix[i] = malloc(n*sizeof(int));
}
matrix[0][0] = 1;
matrix[0][1] = 2;
matrix[1][0] = 2;
matrix[1][1] = 3;
MPI_Init( &argc, &argv );
MPI_Comm_rank …

MPI_Reduce select first k results

Submitted by 房东的猫 on 2020-01-15 11:14:14
Question: I want to find the first k results over all nodes using MPI. For that I wanted to use MPI_Reduce with an operation function of my own. However, my code does not work, because the len parameter passed to the function is not the same as the count parameter given to MPI_Reduce. I found here that implementations may do this to pipeline the computation. My code is similar to this one:

inline void MPI_user_select_top_k(int *invec, acctbal_pair *inoutvec, int *len, MPI_Datatype *dtpr) {
    std::vector<acctbal_pair> temp;
    for …

Gathering results of MPI_SCAN

Submitted by ♀尐吖头ヾ on 2020-01-14 04:16:07
Question: I have the array [1 2 3 4 5 6 7 8 9] and I am performing a scan (prefix-sum) operation on it. I have 3 MPI tasks, each task gets 3 elements, then each task computes its local scan and returns the result to the master task:

task 0 - [1 2 3] => [1 3 6]
task 1 - [4 5 6] => [4 9 15]
task 2 - [7 8 9] => [7 15 24]

Now task 0 has all the results: [1 3 6] [4 9 15] [7 15 24]. How can I combine these results to produce the final scan output? The final scan output of the array would be [1 3 6 10 15 21 28 36 45]. Can anyone help me, please?

Unable to implement MPI_Intercomm_create

Submitted by ⅰ亾dé卋堺 on 2020-01-07 07:44:06
Question: I am trying to use MPI_Intercomm_create in Fortran to build an inter-communicator between two intra-communicators, one containing the first 2 processes and the other containing the rest. I need to perform send and recv operations between the newly created communicators. The code:

program hello
include 'mpif.h'
integer tag,ierr,rank,numtasks,color,new_comm,inter1,inter2
tag = 22
call MPI_Init(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,numtasks,ierr)
if (rank < 2) then
    color = 0
else
    color = 1
end …

Controlling node mapping of MPI_COMM_SPAWN

Submitted by 穿精又带淫゛_ on 2020-01-07 05:39:10
Question: The context: this whole issue can be summarized as trying to replicate the behaviour of a call to system (or fork), but in an MPI environment. (It turns out that you can't call system in parallel.) That is, I have a program running on many nodes, one process on each node, and I want each process to call an external program (so for n nodes I'd have n copies of the external program running), wait for all those copies to finish, then keep running the original program. To achieve this in a …