mpi4py

mpi4py: Replace built-in serialization

Submitted by 一曲冷凌霜 on 2019-12-24 02:13:46
Question: I'd like to replace mpi4py's built-in pickle serialization with dill. According to the docs, the class _p_Pickle should have two attributes called dumps and loads. However, Python says there are no such attributes when I try the following:

    from mpi4py import MPI
    MPI._p_Pickle.dumps
    -> AttributeError: type object 'mpi4py.MPI._p_Pickle' has no attribute 'dumps'

Where have dumps and loads gone?

Answer 1: In v2.0 you can change it via

    MPI.pickle.dumps = dill.dumps
    MPI.pickle.loads = dill.loads

It seems
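A minimal sketch of the override quoted in the answer, extended with a small send/recv round trip so it can be run with mpiexec -n 2; the commented-out __init__ call is the form documented for later mpi4py releases and is included as an assumption to verify against your installed version.

    # Swap mpi4py's serializer for dill so objects the stdlib pickle rejects
    # (e.g. lambdas) can be sent between ranks.
    import dill
    from mpi4py import MPI

    # mpi4py 2.0 style, as in the answer above:
    MPI.pickle.dumps = dill.dumps
    MPI.pickle.loads = dill.loads

    # mpi4py 3.x style (assumption, check your version's docs):
    # MPI.pickle.__init__(dill.dumps, dill.loads)

    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        func = lambda x: x + 1          # not picklable by the stdlib pickle
        comm.send(func, dest=1, tag=0)
    elif comm.Get_rank() == 1:
        f = comm.recv(source=0, tag=0)
        print(f(41))                    # -> 42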

Unable to call PETSc/MPI-based external code in parallel OpenMDAO

Submitted by 时光怂恿深爱的人放手 on 2019-12-23 02:19:39
Question: I am writing an OpenMDAO problem that calls a group of external codes in a parallel group. One of these external codes is a PETSc-based Fortran FEM code. I realize this is potentially problematic since OpenMDAO also utilizes PETSc. At the moment, I'm calling the external code in a component using Python's subprocess. If I run my OpenMDAO problem in serial (i.e. python2.7 omdao_problem.py), everything, including the external code, works just fine. When I try to run it in parallel, however (i.e
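For context, a minimal sketch of the pattern the question describes, reduced to plain Python: the component launches the external PETSc-based solver as a separate process with subprocess. The executable and file names are hypothetical placeholders, not the poster's actual code.

    import subprocess

    def run_external_fem(input_file='fem_input.dat', output_file='fem_output.dat'):
        # Launch the external Fortran/PETSc FEM solver and wait for it to finish;
        # check_call raises CalledProcessError on a non-zero exit status.
        subprocess.check_call(['./fem_solver', input_file, output_file])
        return output_file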

How can I use more CPU to run my python script

Submitted by *爱你&永不变心* on 2019-12-19 12:24:12
Question: I want to use more processors to run my code, purely to minimize the running time. I have tried to do this but failed to get the desired result. My code is very big, so I am giving a very small and simple example here (even though it does not need a parallel job) just to learn how to do a parallel job in Python. Any comments/suggestions will be highly appreciated.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.integrate import odeint

    def solveit(n,y0):
        def
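One common way to answer this kind of question is multiprocessing.Pool, which farms independent calls of a function out to several worker processes. The sketch below is an illustrative stand-in (toy ODE, made-up parameter grid), not the poster's actual code.

    import numpy as np
    from multiprocessing import Pool
    from scipy.integrate import odeint

    def solveit(args):
        n, y0 = args
        # toy ODE dy/dt = -n*y solved from the initial condition y0
        def rhs(y, t):
            return -n * y
        t = np.linspace(0.0, 10.0, 101)
        return odeint(rhs, y0, t)[-1, 0]        # keep only the final value

    if __name__ == '__main__':
        params = [(n, 1.0) for n in range(1, 9)]    # independent work items
        with Pool(processes=4) as pool:             # 4 worker processes
            results = pool.map(solveit, params)     # runs solveit in parallel
        print(results)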

How to scatter a numpy array in Python using comm.Scatterv

Submitted by 断了今生、忘了曾经 on 2019-12-18 09:38:48
Question: I am trying to write an MPI-based code to do some calculations using Python and MPI4py. However, following the example, I cannot scatter a numpy vector across cores. Here is the code and the errors; can anyone help me? Thanks.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    n = 6
    if rank == 0:
        d1 = np.arange(1, n+1)
        split = np.array_split(d1, size)
        split_size = [len(split[i]) for i in range(len(split))]
        split_disp = np.insert(np
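For reference, one way the pattern started above is usually completed: rank 0 computes per-rank counts and displacements, broadcasts them, and Scatterv hands each rank its slice. The receive-buffer sizing and the bcast of the bookkeeping arrays are my assumptions about how the example was meant to continue.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    n = 6
    if rank == 0:
        d1 = np.arange(1, n + 1, dtype='d')
        split = np.array_split(d1, size)                           # list of sub-arrays
        split_size = np.array([len(s) for s in split])             # elements per rank
        split_disp = np.insert(np.cumsum(split_size), 0, 0)[:-1]   # start offsets
    else:
        d1 = None
        split_size = None
        split_disp = None

    # every rank needs to know how many elements it will receive
    split_size = comm.bcast(split_size, root=0)
    split_disp = comm.bcast(split_disp, root=0)

    recvbuf = np.empty(split_size[rank], dtype='d')
    comm.Scatterv([d1, split_size, split_disp, MPI.DOUBLE], recvbuf, root=0)
    print('rank', rank, 'got', recvbuf)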

python MPI sendrecv() to pass a python object

Submitted by 扶醉桌前 on 2019-12-13 18:10:09
Question: I am trying to use mpi4py's sendrecv() to pass a dictionary object.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rnk = comm.Get_rank()
    size = comm.Get_size()

    idxdict = {1: 2}
    buffer = None
    comm.sendrecv(idxdict, dest=(rnk+1)%size, sendtag=rnk, recvobj=buffer, source=(rnk-1+size)%size, recvtag=(rnk-1+size)%size)
    idxdict = buffer

If I print idxdict at the last step, I get a bunch of "None"s, so the dictionary idxdict is not passed between cores. If I use a dictionary as the buffer, buffer={}, then there is
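The usual observation here is that the lowercase sendrecv() returns the received object rather than filling a pre-bound "buffer" name in place, so the return value has to be captured; a minimal sketch:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rnk = comm.Get_rank()
    size = comm.Get_size()

    idxdict = {1: 2}
    # capture the return value instead of passing a None 'buffer'
    idxdict = comm.sendrecv(idxdict,
                            dest=(rnk + 1) % size, sendtag=rnk,
                            source=(rnk - 1 + size) % size,
                            recvtag=(rnk - 1 + size) % size)
    print('rank', rnk, 'received', idxdict)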

MPI Bcast or Scatter to specific ranks

Submitted by ぃ、小莉子 on 2019-12-13 03:36:06
Question: I have an array of data. What I am trying to do is this: use rank 0 to bcast the data to 50 nodes. Each node has one MPI process on it, with 16 cores available to that process. Each MPI process will then call Python multiprocessing. Some calculations are done, then the MPI process saves the data that was calculated with multiprocessing. The MPI process then changes some variable and runs multiprocessing again, etc. So the nodes do not need to communicate with each other besides the initial
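A minimal sketch of the hybrid layout described above, assuming one MPI process per node that broadcasts the shared array and then fans local work out with multiprocessing; the worker function, array size, and pool size are illustrative assumptions.

    import numpy as np
    from mpi4py import MPI
    from multiprocessing import Pool

    def local_work(x):
        # stand-in for the real per-item calculation
        return x * x

    if __name__ == '__main__':
        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        data = np.arange(100, dtype='d') if rank == 0 else np.empty(100, dtype='d')
        comm.Bcast(data, root=0)              # every MPI process now holds the array

        with Pool(processes=16) as pool:      # 16 cores per node, as in the question
            results = pool.map(local_work, data)

        np.save('results_rank%d.npy' % rank, results)   # each process saves its own output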

Using matplotlib on non-0 MPI rank causes “QXcbConnection: Could not connect to display”

Submitted by 本小妞迷上赌 on 2019-12-12 06:39:44
Question: I have written a program that uses mpi4py to do some job (making an array) on the node of rank 0 in the following code. It then makes another array on the node of rank 1, and I plot both arrays. The array on node 0 is broadcast to node 1. However, the code gives a bizarre error. I used the following command:

    mpiexec -n 2 -f mfile python mpi_test_4.py

The program goes as:

    from mpi4py import MPI
    import matplotlib.pyplot as plt
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.rank
    x =
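The usual workaround for this error is to select a non-interactive backend before pyplot is imported, so ranks that have no access to an X display render straight to files; a minimal sketch:

    import matplotlib
    matplotlib.use('Agg')            # must run before 'import matplotlib.pyplot'
    import matplotlib.pyplot as plt
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.rank

    x = np.linspace(0, 2 * np.pi, 100)
    plt.plot(x, np.sin(x + rank))
    plt.savefig('plot_rank%d.png' % rank)   # write to a file instead of opening a window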

ipython with MPI clustering using machinefile

Submitted by 邮差的信 on 2019-12-12 03:33:00
Question: I have successfully configured MPI with mpi4py support across three nodes, as per testing of the helloworld.py script in the mpi4py demo directory:

    gms@host:~/development/mpi$ mpiexec -f machinefile -n 10 python ~/development/mpi4py/demo/helloworld.py
    Hello, World! I am process 3 of 10 on host.
    Hello, World! I am process 1 of 10 on worker1.
    Hello, World! I am process 6 of 10 on host.
    Hello, World! I am process 2 of 10 on worker2.
    Hello, World! I am process 4 of 10 on worker1.
    Hello, World! I
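To drive the same machinefile from IPython, the usual route is ipyparallel's MPI engine launcher; the profile name and config keys below are assumptions taken from ipyparallel's documentation and should be checked against the installed version.

    # Sketch, assuming ipyparallel. After creating a profile with
    #   ipython profile create --parallel --profile=mpi
    # edit ipcluster_config.py so the engines start under mpiexec
    # (key names are assumptions, verify against your ipyparallel version):
    #   c.IPClusterEngines.engine_launcher_class = 'MPI'
    #   c.MPILauncher.mpi_args = ['-f', 'machinefile']
    # then start the cluster:
    #   ipcluster start -n 10 --profile=mpi
    import ipyparallel as ipp

    rc = ipp.Client(profile='mpi')      # connect to the running cluster
    view = rc[:]                        # a view over all engines

    def where_am_i():
        import socket
        from mpi4py import MPI
        return socket.gethostname(), MPI.COMM_WORLD.Get_rank()

    print(view.apply_sync(where_am_i))  # one (host, rank) pair per engine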

python script running with mpirun not stopping if assert on processor 0 fails

Submitted by 回眸只為那壹抹淺笑 on 2019-12-12 03:17:35
Question: I have a Python script with a set of operations done in parallel using the library mpi4py. At the end of the operations, the processor with rank 0 executes an assert test. If the assert fails, the process should stop and the program should terminate. However, the program doesn't exit, and I guess this is because the other processors are still waiting. How can I make the program end execution if the assert fails? I run things with a command like:

    mpirun -np 10 python myscript.py

and then I have a line in
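The remedy usually suggested is to catch the failed assert on rank 0 and call comm.Abort(), which tears down every rank so mpirun can exit; a minimal sketch with a placeholder computation:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # placeholder for the script's real parallel computation
    local_result = rank * 2
    total = comm.reduce(local_result, op=MPI.SUM, root=0)

    if rank == 0:
        try:
            assert total >= 0, 'sanity check on the reduced result failed'
        except AssertionError as err:
            print('rank 0:', err, '- aborting all ranks')
            comm.Abort(1)     # terminates every process in the communicator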

Along what axis does mpi4py's Scatterv function split a numpy array?

Submitted by 橙三吉。 on 2019-12-12 02:54:12
Question: I have the following MWE using comm.Scatterv and comm.Gatherv to distribute a 4D array across a given number of cores (size):

    import numpy as np
    from mpi4py import MPI
    import matplotlib.pyplot as plt

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    if rank == 0:
        test = np.random.rand(411,48,52,40)   # Create array of random numbers
        outputData = np.zeros(np.shape(test))
        split = np.array_split(test, size, axis=0)   # Split input array by the number of available cores
        split_sizes =
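The point usually made in answers to this question is that Scatterv has no notion of axes: it slices the flat, C-ordered buffer, so an axis-0 split of a C-contiguous array works only because those blocks are contiguous in memory, and the counts and displacements must be expressed in elements rather than rows. A sketch under that reading (the bookkeeping below is my completion, not the poster's MWE):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    shape = (411, 48, 52, 40)
    row_elems = int(np.prod(shape[1:]))          # elements in one axis-0 slice

    if rank == 0:
        test = np.random.rand(*shape)
        split = np.array_split(test, size, axis=0)
        counts = np.array([s.shape[0] * row_elems for s in split])   # in elements
        displs = np.insert(np.cumsum(counts), 0, 0)[:-1]
    else:
        test = None
        counts = None
        displs = None

    counts = comm.bcast(counts, root=0)
    displs = comm.bcast(displs, root=0)

    local_rows = counts[rank] // row_elems
    recvbuf = np.empty((local_rows,) + shape[1:], dtype='d')
    comm.Scatterv([test, counts, displs, MPI.DOUBLE], recvbuf, root=0)
    print('rank', rank, 'holds a chunk of shape', recvbuf.shape)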