Improving Numpy Performance


The code in scipy for doing 2d convolutions is a bit messy and unoptimized. See http://svn.scipy.org/svn/scipy/trunk/scipy/signal/firfilter.c if you want a glimpse into the low-level functioning of scipy.

If all you want is to process with a small, constant kernel like the one you showed, a function like this might work:

def specialconvolve(a):
    # sorry, you must pad the input yourself
    # sum each 3-row neighbourhood (convolve with (1 1 1) down the columns)
    rowconvol = a[1:-1,:] + a[:-2,:] + a[2:,:]
    # sum each 3-column neighbourhood, then subtract 9x the centre pixel,
    # which gives the 1 1 1 / 1 -8 1 / 1 1 1 kernel
    colconvol = rowconvol[:,1:-1] + rowconvol[:,:-2] + rowconvol[:,2:] - 9*a[1:-1,1:-1]
    return colconvol

This function takes advantage of the separability of the kernel, as DarenW suggested above, as well as of the more optimized NumPy arithmetic routines. By my measurements it's over 1000 times faster than the convolve2d function.
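
As a quick sanity check, with specialconvolve defined as above, the result should match scipy.signal.convolve2d with the full 3x3 kernel in 'valid' mode (the array and names below are just for illustration):

    import numpy as np
    from scipy import signal

    a = np.random.rand(200, 200)           # any 2-D float array will do
    kernel = np.array([[1,  1, 1],
                       [1, -8, 1],
                       [1,  1, 1]], dtype=float)

    # specialconvolve drops the 1-pixel border, which corresponds to mode='valid'
    reference = signal.convolve2d(a, kernel, mode='valid')
    print(np.allclose(specialconvolve(a), reference))   # expect True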

For the particular example 3x3 kernel, I'd observe that

1  1  1     1  1  1     0  0  0
1 -8  1  =  1  1  1  +  0 -9  0
1  1  1     1  1  1     0  0  0

and that the first of these is factorable: it can be computed by convolving (1 1 1) along each row and then along each column. Then subtract nine times the original data. This may or may not be faster, depending on whether the scipy programmers made it smart enough to do this automatically. (I haven't checked in a while.)
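
Roughly, that factoring looks like the sketch below, here using scipy.ndimage.convolve1d for the two 1-D passes (one possible way to write it, not necessarily the fastest):

    import numpy as np
    from scipy import ndimage, signal

    a = np.random.rand(200, 200)
    row = np.array([1.0, 1.0, 1.0])

    # convolve (1 1 1) down the columns, then along the rows (zero padding at the edges),
    # then subtract nine times the original data
    ones_part = ndimage.convolve1d(a, row, axis=0, mode='constant')
    ones_part = ndimage.convolve1d(ones_part, row, axis=1, mode='constant')
    factored = ones_part - 9.0 * a

    kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)
    direct = signal.convolve2d(a, kernel, mode='same')   # also zero-padded by default
    print(np.allclose(factored, direct))                 # expect True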

You probably want to do more interesting convolutions, where factoring may or may not be possible.

Before going to, say, C with ctypes, I'd suggest running a standalone convolve in C to see where the limit is.
Similarly for CUDA, Cython, scipy.weave ...

Added 7 Feb: convolve33 (a 3x3 convolve) on 8-bit data with clipping takes ~ 20 clock cycles per point, 2 clock cycles per memory access, on my Mac G4 PPC with gcc 4.2. Your mileage will vary.

A couple of subtleties:

  • Do you care about correct clipping to 0..255? np.clip() is slow, and I don't know about Cython etc. (see the sketch after this list).
  • NumPy/SciPy may need memory for temporaries the size of A (so keep 2*sizeof(A) < cache size).
    If your C code, though, does a running update in place, that's half the memory but a different algorithm.
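
For the clipping point in the first bullet, the obvious NumPy route looks like this (array name made up); this is the baseline a C or Cython version would be trying to beat:

    import numpy as np

    result = np.random.rand(1000, 1000) * 400 - 50   # stand-in for a convolution output

    # clip to 0..255, then convert to 8-bit; passing out= avoids one temporary,
    # but np.clip still touches the whole array
    clipped = np.clip(result, 0, 255, out=result).astype(np.uint8)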

By the way, googling "theano convolve" turns up "A convolution op that should mimic scipy.signal.convolve2d, but faster! In development".

A typical optimization for convolution is to use the FFT of your signal. The reason is that convolution in real space is a pointwise product in FFT space. It is often faster to compute the FFT, then the product, and then the inverse FFT of the result than to convolve the usual way.
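
A minimal sketch of that equivalence, zero-padding both arrays to the full output size (in practice scipy.signal.fftconvolve does this bookkeeping for you):

    import numpy as np
    from scipy import signal

    img = np.random.rand(200, 200)
    kernel = np.ones((3, 3)) / 9.0

    # a full linear convolution of (M, N) with (m, n) has shape (M+m-1, N+n-1)
    shape = (img.shape[0] + kernel.shape[0] - 1,
             img.shape[1] + kernel.shape[1] - 1)

    # multiply in FFT space, then transform back
    fft_result = np.fft.irfft2(np.fft.rfft2(img, shape) * np.fft.rfft2(kernel, shape), shape)

    direct = signal.convolve2d(img, kernel, mode='full')
    print(np.allclose(fft_result, direct))   # expect True, up to floating-point error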

As of 2018, it seems like the SciPy/NumPy combo has been sped up a lot. This is what I saw on my laptop (Dell Inspiron 13, i5). OpenCV did the best, but it doesn't give you any control over the modes.

>>> import time
>>> import numpy as np
>>> import cv2
>>> from scipy import signal
>>> img = np.random.rand(1000,1000)
>>> kernel = np.ones((3,3), dtype=np.float64)/9.0
>>> t1= time.time();dst1 = cv2.filter2D(img,-1,kernel);print(time.time()-t1)
0.0235188007355
>>> t1= time.time();dst2 = signal.correlate(img,kernel,mode='valid',method='fft');print(time.time()-t1)
0.140458106995
>>> t1= time.time();dst3 = signal.convolve2d(img,kernel,mode='valid');print(time.time()-t1)
0.0548939704895
>>> t1= time.time();dst4 = signal.correlate2d(img,kernel,mode='valid');print(time.time()-t1)
0.0518119335175
>>> t1= time.time();dst5 = signal.fftconvolve(img,kernel,mode='valid');print(time.time()-t1)
0.13204407692