Here's an MWE of a much larger code I'm using. Basically, it performs a Monte Carlo integration over a KDE (kernel density estimate) for all values located below a given threshold (iso in the code below).
The claim in the comments section of this article (link below) is that "SciPy's gaussian_kde doesn't use FFT, while there is a statsmodels implementation that does", which is a possible cause of the poor performance observed here. The comment reports orders-of-magnitude improvements from using FFT; see @jseabold's reply.
http://slendrmeans.wordpress.com/2012/05/01/will-it-python-machine-learning-for-hackers-chapter-2-part-1-summary-stats-and-density-estimators/
Disclaimer: I have no experience with statsmodels or scipy.
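That said, for reference, here is a minimal (untested) sketch of the FFT-based statsmodels estimator mentioned there. Note that the FFT path applies to the univariate estimator, and the data array below is a hypothetical stand-in:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical 1-D data; the FFT code path in statsmodels is
    # only used by the univariate estimator.
    data = np.random.normal(size=10000)

    kde = sm.nonparametric.KDEUnivariate(data)
    kde.fit(kernel='gau', fft=True)  # Gaussian kernel, FFT-based estimate
    density = kde.evaluate(np.linspace(-4, 4, 100))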
Probably the easiest way to speed this up is to parallelize kernel(sample):
Taking this code fragment:
    tik = time.time()
    # Evaluate the KDE at every sample point and keep those below the threshold.
    insample = kernel(sample) < iso
    print('filter/sample: ', time.time() - tik)
    # filter/sample: 1.94065904617
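(For context, a hypothetical stand-in for the MWE's kernel, sample and iso, so the fragments here can be run on their own:)

    import time
    import numpy as np
    from scipy import stats

    # Hypothetical stand-ins for the objects in the MWE:
    data = np.random.normal(size=(2, 5000))  # 2-D training points
    kernel = stats.gaussian_kde(data)        # the KDE being evaluated
    sample = kernel.resample(size=100000)    # Monte Carlo sample, shape (2, 100000)
    iso = kernel(data[:, :1])[0]             # an arbitrary threshold density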
Change this to use multiprocessing:
    from multiprocessing import Pool

    # The worker must be a module-level function so the child processes
    # can find it; kernel is inherited on fork-based platforms (Linux).
    def calc_kernel(samp):
        return kernel(samp)

    tik = time.time()
    # Choose the number of cores and split the input array into that many chunks.
    cores = 4
    torun = np.array_split(sample, cores, axis=1)
    # Evaluate the KDE on each chunk in a separate process.
    pool = Pool(processes=cores)
    results = pool.map(calc_kernel, torun)
    pool.close()
    pool.join()
    # Reassemble the chunks and apply the threshold.
    insample_mp = np.concatenate(results) < iso
    print('multiprocessing filter/sample: ', time.time() - tik)
    # multiprocessing filter/sample: 0.496874094009
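Note that the split is along axis=1 because gaussian_kde expects its input points with shape (# of dims, # of points), so each chunk keeps all coordinates of its points.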
Double check they are returning the same answer:
    print(np.all(insample == insample_mp))
    # True
That's a 3.9x speedup on 4 cores. I'm not sure what you are running this on, but beyond about 6 processes the input array is not large enough to get considerable further gains; for example, using 20 processes it's only about 5.8x faster.
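If you want to see where the scaling flattens out on your machine, a quick (untested) sketch reusing calc_kernel from above:

    # Time the parallel evaluation for several worker counts.
    for cores in (1, 2, 4, 8, 16, 20):
        chunks = np.array_split(sample, cores, axis=1)
        tik = time.time()
        with Pool(processes=cores) as pool:
            results = pool.map(calc_kernel, chunks)
        print(cores, 'processes:', time.time() - tik)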