I'd like to improve the performance of convolution using Python, and was hoping for some insight on how best to go about it.
I am currently using scipy.signal.convolve2d.
The code in scipy for doing 2d convolutions is a bit messy and unoptimized. See http://svn.scipy.org/svn/scipy/trunk/scipy/signal/firfilter.c if you want a glimpse into the low-level functioning of scipy.
If all you want is to process with a small, constant kernel like the one you showed, a function like this might work:
def specialconvolve(a):
    # sorry, you must pad the input yourself
    # first pass: sum each row with the rows directly above and below it
    rowconvol = a[1:-1,:] + a[:-2,:] + a[2:,:]
    # second pass: sum the left/centre/right columns of that result, then subtract
    # 9x the centre pixel (equivalent to a 3x3 kernel of ones with -8 in the middle)
    colconvol = rowconvol[:,1:-1] + rowconvol[:,:-2] + rowconvol[:,2:] - 9*a[1:-1,1:-1]
    return colconvol
This function takes advantage of the separability of the kernel, as DarenW suggested above, as well as of numpy's more optimized arithmetic routines. It's over 1000 times faster than the convolve2d function by my measurements.
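If you want to sanity-check the result and the speedup yourself, something like the following sketch should do it. The 512x512 test array and the explicit 3x3 kernel (all ones with -8 in the centre, which is what specialconvolve implicitly applies) are my own assumptions for illustration, not from the original post:

import numpy as np
from scipy.signal import convolve2d
import timeit

a = np.random.rand(512, 512)
# the 3x3 kernel that specialconvolve implicitly applies
kernel = np.array([[1., 1., 1.],
                   [1., -8., 1.],
                   [1., 1., 1.]])

# results should agree on the interior ('valid') region
print(np.allclose(convolve2d(a, kernel, mode='valid'), specialconvolve(a)))

# rough timing comparison
print(timeit.timeit(lambda: convolve2d(a, kernel, mode='valid'), number=10))
print(timeit.timeit(lambda: specialconvolve(a), number=10))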