Optimization: correct way to pass large array to map using pathos multiprocessing
Question: I need to apply a function to the elements of a large array, and I want to optimize the code with multiprocessing so that I can use all the cores on a supercomputer. This is a follow-up to the question I asked here. I used the code:

    import numpy as np
    from scipy import misc, ndimage
    import itertools
    from pathos.multiprocessing import ProcessPool
    import time

    start = time.time()

    # define the original array a as
    a = np.load('100by100by100array.npy')

    n = a.ndim  # number of dimensions
    imx, imy, imz
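The pattern being set up above, mapping a per-element function over every index of a 3D array, can be sketched as follows. This is a minimal illustration, not the asker's actual code: it uses a small stand-in array instead of the 100×100×100 input, a placeholder `work` function, and a standard-library thread pool (which exposes the same blocking `.map()` interface as pathos's `ProcessPool`) so the snippet runs anywhere without process-spawn setup.

```python
import itertools
import numpy as np
from multiprocessing.pool import ThreadPool

# Small stand-in for the real 100x100x100 array loaded from disk.
a = np.arange(27, dtype=float).reshape(3, 3, 3)

def work(idx):
    # Placeholder per-element function; the real computation would go here.
    return a[idx] ** 2

# One (i, j, k) index tuple per element of the array.
indices = list(itertools.product(*(range(s) for s in a.shape)))

# pathos's ProcessPool offers the same map() call:
#   pool = ProcessPool(nodes=4); results = pool.map(work, indices)
with ThreadPool(4) as pool:
    results = pool.map(work, indices)

# Reassemble the flat result list into the original array shape.
out = np.array(results).reshape(a.shape)
```

Note that mapping over index tuples, rather than over array elements directly, avoids serializing slices of the array into the task queue; with `ProcessPool`, however, each worker process still needs access to `a` (e.g. via fork-inherited memory), which is the crux of the question.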