Numpy: Replace every value in the array with the mean of its adjacent elements

Anonymous (unverified), submitted 2019-12-03 08:46:08

Question:

I have an ndarray, and I want to replace every value in the array with the mean of its adjacent elements. The code below does the job, but it is extremely slow when I have 700 arrays, each of shape (7000, 7000), so I wonder if there is a better way to do it. Thanks!

import numpy as np

a = np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9],
              [4, 5, 6, 7, 8, 9, 10, 11, 12],
              [3, 4, 5, 6, 7, 8, 9, 10, 11]])
row, col = a.shape
new_arr = np.ndarray(a.shape)
for x in range(row):
    for y in range(col):
        min_x = max(0, x - 1)
        min_y = max(0, y - 1)
        new_arr[x][y] = a[min_x:(x + 2), min_y:(y + 2)].mean()
print(new_arr)

Answer 1:

Well, that's a smoothing operation from image processing, which can be achieved with 2D convolution. Your loop treats the near-boundary elements a bit differently (it averages only the cells that actually exist), so if an exact match at the boundary is not required, you can use scipy's convolve2d like so -

from scipy.signal import convolve2d as conv2

out = conv2(a, np.ones((3, 3)), 'same') / 9.0

This specific operation is built into the OpenCV module as cv2.blur, and it is very efficient. The name describes its job: blurring input arrays that represent images. I believe the efficiency comes from it being implemented entirely in C, with a thin Python wrapper handling the NumPy arrays.

So, the output could be alternatively calculated with it, like so -

import cv2  # Import OpenCV module

out = cv2.blur(a.astype(float), (3, 3))

Here's a quick showdown of timings on a decently big image/array -

In [93]: a = np.random.randint(0, 255, (5000, 5000))  # Input array

In [94]: %timeit conv2(a, np.ones((3, 3)), 'same') / 9.0
1 loops, best of 3: 2.74 s per loop

In [95]: %timeit cv2.blur(a.astype(float), (3, 3))
1 loops, best of 3: 627 ms per loop


Answer 2:

Following the discussion with @Divakar, below is a comparison of the different convolution methods available in scipy:

import numpy as np
from scipy import signal, ndimage

def conv2(A, size):
    return signal.convolve2d(A, np.ones((size, size)), mode='same') / float(size**2)

def fftconv(A, size):
    return signal.fftconvolve(A, np.ones((size, size)), mode='same') / float(size**2)

def uniform(A, size):
    return ndimage.uniform_filter(A, size, mode='constant')

All 3 methods return exactly the same values. Note, however, that uniform_filter takes a mode='constant' parameter, which sets the filter's boundary condition; a constant of 0 is the same zero-padding boundary condition that the other two methods enforce. For different use cases you can change the boundary condition.
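To make the equivalence concrete, here is a small check (a sketch; the 64x64 size is arbitrary) confirming that the three approaches above agree to within floating-point error:

```python
import numpy as np
from scipy import signal, ndimage

A = np.random.randn(64, 64)
size = 3
kernel = np.ones((size, size)) / size**2

direct = signal.convolve2d(A, kernel, mode='same')      # zero-padded by default
fft = signal.fftconvolve(A, kernel, mode='same')        # FFT-based, zero-padded
uni = ndimage.uniform_filter(A, size, mode='constant')  # constant (0) boundary

print(np.allclose(direct, fft), np.allclose(direct, uni))  # True True
```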

Now some test matrices:

A = np.random.randn(1000, 1000) 

And some timings:

%timeit conv2(A, 3)      # 33.8 ms per loop
%timeit fftconv(A, 3)    # 84.1 ms per loop
%timeit uniform(A, 3)    # 17.1 ms per loop

%timeit conv2(A, 5)      # 68.7 ms per loop
%timeit fftconv(A, 5)    # 92.8 ms per loop
%timeit uniform(A, 5)    # 17.1 ms per loop

%timeit conv2(A, 10)     # 210 ms per loop
%timeit fftconv(A, 10)   # 86 ms per loop
%timeit uniform(A, 10)   # 16.4 ms per loop

%timeit conv2(A, 30)     # 1.75 s per loop
%timeit fftconv(A, 30)   # 102 ms per loop
%timeit uniform(A, 30)   # 16.5 ms per loop

So in short, uniform_filter is the fastest, because the convolution is separable into two 1D convolutions (similar to gaussian_filter, which is also separable).
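The separability claim can be sketched directly: a 2D uniform filter is the composition of a 1D box filter along each axis (using scipy's uniform_filter1d; the array size and kernel width here are arbitrary):

```python
import numpy as np
from scipy import ndimage

A = np.random.randn(128, 128)
size = 5

# One 1D box filter down the columns, then one across the rows...
sep = ndimage.uniform_filter1d(A, size, axis=0, mode='constant')
sep = ndimage.uniform_filter1d(sep, size, axis=1, mode='constant')

# ...equals the full 2D uniform filter, at O(2k) work per pixel instead of O(k^2).
full = ndimage.uniform_filter(A, size, mode='constant')
print(np.allclose(sep, full))  # True
```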

Other, non-separable filters with different kernels are more likely to be faster using the signal module (the one in @Divakar's solution).

The speed of both fftconvolve and uniform_filter remains roughly constant across kernel sizes, while convolve2d gets considerably slower as the kernel grows.



Answer 3:

I had a similar problem recently and had to find a different solution since I can't use scipy.

import numpy as np

a = np.random.randint(100, size=(7000, 7000))  # Array of 7000 x 7000
row, col = a.shape

column_totals = a.sum(axis=0)  # Sum of every column, collected in a single 1D array

new_array = np.zeros([row, col])  # Create a receiving array
for i in range(row):
    # Resulting row = the sum of all rows minus the original row,
    # divided by the number of rows minus one.
    new_array[i] = (column_totals - a[i]) / (row - 1)

print(new_array)
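Note that this snippet averages each column over all the other rows, which is not the same quantity as the original 3x3 neighborhood mean. If scipy really is unavailable, the original loop can also be reproduced in pure NumPy with a padded shift-and-sum (a sketch; `neighbor_mean` is a name introduced here for illustration):

```python
import numpy as np

def neighbor_mean(a):
    """3x3 neighborhood mean, dividing each window's sum by the number
    of real (in-bounds) cells, exactly like the slow double loop."""
    a = np.asarray(a, dtype=float)
    r, c = a.shape
    padded = np.pad(a, 1, mode='constant')        # zero-pad one cell all round
    ones = np.pad(np.ones((r, c)), 1, mode='constant')
    # Sum the 9 shifted views to get each window's total and its cell count.
    total = sum(padded[i:i + r, j:j + c] for i in range(3) for j in range(3))
    count = sum(ones[i:i + r, j:j + c] for i in range(3) for j in range(3))
    return total / count

a = np.arange(1, 10).reshape(3, 3)
print(neighbor_mean(a)[1, 1])  # 5.0 -- the mean of all nine elements
```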

