Speed up sampling of kernel estimate

轮回少年 · 2020-12-11 03:25

Here's a MWE of a much larger code I'm using. Basically, it performs a Monte Carlo integration over a KDE (kernel density estimate) for all values located below a given threshold.
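A setup along the following lines, using scipy.stats.gaussian_kde with made-up data and a made-up threshold rule, produces kernel, sample and iso objects like the ones used in the answers below:

    import time
    import numpy as np
    from scipy import stats
    
    # Made-up 2-D dataset standing in for the real data.
    np.random.seed(0)
    data = np.random.randn(2, 1000)
    
    # Kernel density estimate of the data.
    kernel = stats.gaussian_kde(data)
    
    # Monte Carlo sample drawn from the KDE itself, shape (2, n_points).
    sample = kernel.resample(size=50000)
    
    # Density threshold; how iso is chosen is an assumption here --
    # this sketch simply uses the KDE's value at an arbitrary point.
    iso = kernel((0.5, 0.5))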

2 Answers
  •  不知归路 · 2020-12-11 04:16

    Probably the easiest way to speed this up is to parallelize the kernel(sample) call.

    Taking this code fragment:

    # kernel, sample and iso are defined in the question's MWE.
    tik = time.time()
    insample = kernel(sample) < iso
    print('filter/sample: ', time.time() - tik)
    #filter/sample:  1.94065904617
    

    Change this to use multiprocessing:

    from multiprocessing import Pool
    tik = time.time()
    
    # Worker: evaluate the KDE on one chunk of the sample.
    def calc_kernel(samp):
        return kernel(samp)
    
    # Choose the number of cores and split the sample into that many
    # chunks along the points axis.
    cores = 4
    torun = np.array_split(sample, cores, axis=1)
    
    # Evaluate the chunks in parallel.
    pool = Pool(processes=cores)
    results = pool.map(calc_kernel, torun)
    pool.close()
    pool.join()
    
    # Reassemble the per-chunk densities and apply the threshold.
    insample_mp = np.concatenate(results) < iso
    
    print('multiprocessing filter/sample: ', time.time() - tik)
    #multiprocessing filter/sample:  0.496874094009
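    Note that on platforms where multiprocessing starts workers by spawning rather than forking (Windows, and macOS on newer Python versions), the Pool has to be created under a main guard, with calc_kernel defined at module level, roughly:
    
    if __name__ == '__main__':
        cores = 4
        torun = np.array_split(sample, cores, axis=1)
        with Pool(processes=cores) as pool:
            results = pool.map(calc_kernel, torun)
        insample_mp = np.concatenate(results) < iso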
    

    Double-check that they return the same answer:

    print(np.all(insample == insample_mp))
    #True
    

    A 3.9x improvement on 4 cores. Not sure what you are running this on, but beyond about 6 processes the input array is not large enough for the extra workers to pay off; with 20 processes, for example, it is only about 5.8x faster.
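    To see where the scaling flattens out on a given machine, something along these lines (the helper name measure_speedup is made up) times the split/map step for several core counts against the serial baseline:
    
    def measure_speedup(max_cores=8):
        # Serial baseline.
        tik = time.time()
        kernel(sample)
        base = time.time() - tik
        # Parallel runs with an increasing number of workers.
        for cores in range(2, max_cores + 1):
            chunks = np.array_split(sample, cores, axis=1)
            tik = time.time()
            with Pool(processes=cores) as pool:
                pool.map(calc_kernel, chunks)
            print('%2d cores: %.1fx' % (cores, base / (time.time() - tik)))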
