convolution

How does Richardson–Lucy algorithm work? Code example?

元气小坏坏 submitted on 2019-11-30 15:06:12
I am trying to figure out how deconvolution works. I understand the idea behind it, but I want to understand some of the actual algorithms that implement it: algorithms that take as input a blurred image together with its point spread function (blur kernel) and produce as output the latent image. So far I have found the Richardson–Lucy algorithm, where the math does not seem to be that difficult; however, I can't figure out how the actual algorithm works. Wikipedia says: "This leads to an equation for the latent image u_j which can be solved iteratively according...", but it does not show the actual loop. Can anyone point me to a…
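
A minimal sketch of that iterative loop, assuming a known 2-D PSF and a flat initial estimate (both are choices made here for illustration, not part of the question). Each iteration re-blurs the current estimate, compares it to the observed image, and multiplies the estimate by the back-projected ratio:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iterations=30):
        estimate = np.full(observed.shape, 0.5)        # flat initial guess
        psf_mirror = psf[::-1, ::-1]                   # PSF rotated by 180 degrees
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode='same')
            ratio = observed / (reblurred + 1e-12)     # guard against division by zero
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
        return estimate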

How to create 64 Gabor features at each scale and orientation in the spatial and frequency domain

穿精又带淫゛_ submitted on 2019-11-30 14:36:51
Normally, a Gabor filter, as its name suggests, is used to filter an image and extract everything that is oriented in the same direction as the filter. In this question, you can see more efficient code than the code written in this link. Assume 16 scales of filters at 4 orientations, so we get 64 Gabor filters. scales=[7:2:37], 7x7 to 37x37 in steps of two pixels, so we have 7x7, 9x9, 11x11, 13x13, 15x15, 17x17, 19x19, 21x21, 23x23, 25x25, 27x27, 29x29, 31x31, 33x33, 35x35 and 37x37. directions=[0, pi/4, pi/2, 3pi/4]. The equation of the Gabor filter in the spatial domain (the original equation image is missing from the excerpt; this is the standard complex form) is

    g(x, y) = exp(-(x'^2 + gamma^2 * y'^2) / (2 * sigma^2)) * exp(i * (2*pi*x'/lambda + psi)),
    with x' = x*cos(theta) + y*sin(theta) and y' = -x*sin(theta) + y*cos(theta).

The equation of the Gabor filter in the frequency domain…
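
A minimal sketch of building the 64-filter bank directly from that spatial-domain equation. The per-scale parameter choices below (sigma = k/4, lambda = k/2, gamma = 0.5) are assumptions for illustration, since the question does not fix them:

    import numpy as np

    scales = range(7, 38, 2)                          # 7x7 up to 37x37: 16 sizes
    thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

    def gabor_kernel(ksize, theta, lambd, sigma, gamma=0.5, psi=0.0):
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xp = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
        yp = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xp ** 2 + (gamma * yp) ** 2) / (2 * sigma ** 2))
        carrier = np.exp(1j * (2 * np.pi * xp / lambd + psi))
        return envelope * carrier

    bank = [gabor_kernel(k, t, lambd=k / 2, sigma=k / 4)
            for k in scales for t in thetas]
    print(len(bank))    # 64 complex spatial-domain kernels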

Wiener Filter for image deblur

喜欢而已 submitted on 2019-11-30 14:15:54
Question: I am trying to implement the Wiener filter to perform deconvolution on a blurred image. My implementation is like this:

    import numpy as np
    from numpy.fft import fft2, ifft2

    def wiener_filter(img, kernel, K=10):
        dummy = np.copy(img)
        kernel = np.pad(kernel, [(0, dummy.shape[0] - kernel.shape[0]),
                                 (0, dummy.shape[1] - kernel.shape[1])], 'constant')
        # Fourier transform
        dummy = fft2(dummy)
        kernel = fft2(kernel)
        kernel = np.conj(kernel) / (np.abs(kernel) ** 2 + K)
        dummy = dummy * kernel
        dummy = np.abs(ifft2(dummy))   # the excerpt truncates here at "dummy = np";
        return dummy                   # this completion follows naturally from the ifft2 import
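
A hedged usage sketch for the function above. The Gaussian-kernel helper and the small K are illustrative choices, and the blur is applied with the same cyclic FFT convention the filter assumes, so the top-left kernel padding lines up:

    import numpy as np
    from numpy.fft import fft2, ifft2

    def gaussian_kernel(size=5, sigma=1.0):          # hypothetical helper
        ax = np.arange(size) - size // 2
        g = np.exp(-ax ** 2 / (2 * sigma ** 2))
        k = np.outer(g, g)
        return k / k.sum()

    img = np.random.rand(128, 128)                   # stand-in for a real image
    kernel = gaussian_kernel()
    padded = np.zeros_like(img)
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    blurred = np.real(ifft2(fft2(img) * fft2(padded)))
    restored = wiener_filter(blurred, kernel, K=0.01)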

Convolution computations in Numpy/Scipy

Deadly submitted on 2019-11-30 11:21:00
Profiling some computational work I'm doing showed me that one bottleneck in my program was a function that basically did this (np is numpy, sp is scipy):

    def mix1(signal1, signal2):
        spec1 = np.fft.fft(signal1, axis=1)
        spec2 = np.fft.fft(signal2, axis=1)
        return np.fft.ifft(spec1 * spec2, axis=1)

Both signals have shape (C, N), where C is the number of sets of data (usually less than 20) and N is the number of samples in each set (around 5000). The computation for each set (row) is completely independent of any other set. I figured that this was just a simple convolution, so I tried to replace…
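
One detail worth noting for any replacement: multiplying the raw FFTs of same-length rows computes a circular convolution, so a drop-in linear-convolution equivalent needs zero-padding, which scipy.signal.fftconvolve handles internally. A sketch under those assumptions (the shapes are the question's, the data is random filler):

    import numpy as np
    from scipy.signal import fftconvolve

    C, N = 4, 5000
    signal1 = np.random.rand(C, N)
    signal2 = np.random.rand(C, N)

    # Row-wise linear convolution; axes=1 keeps the C rows independent.
    result = fftconvolve(signal1, signal2, mode='full', axes=1)   # shape (C, 2*N - 1)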

Inverse convolution of image

做~自己de王妃 submitted on 2019-11-29 23:17:41
I have a source and a result image. I know that some convolution matrix has been used on the source to get the result. Can this convolution matrix be computed? Or at least not an exact one, but a very similar one. Answer 1: In principle, yes. Just convert both images to frequency space using an FFT and divide the FFT of the result image by that of the source image. Then apply the inverse FFT to get an approximation of the convolution kernel. To see why this works, note that convolution in the spatial domain corresponds to multiplication in the frequency domain, and so deconvolution similarly corresponds to division in the frequency domain…
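
A minimal sketch of the answer's recipe; the small eps is an addition of this sketch, there to keep the division stable where the source spectrum is near zero:

    import numpy as np
    from numpy.fft import fft2, ifft2, fftshift

    def estimate_kernel(source, result, eps=1e-8):
        S = fft2(source)
        R = fft2(result)
        K = R / (S + eps)            # deconvolution = division in frequency space
        kernel = np.real(ifft2(K))
        return fftshift(kernel)      # shift so the kernel is centered for inspection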

Convolution of NumPy arrays of arbitrary dimension for Cauchy product of multivariate power series

∥☆過路亽.° submitted on 2019-11-29 18:08:24
I'm trying to implement the idea I have suggested here, for the Cauchy product of multivariate finite power series (i.e. polynomials) represented as NumPy ndarrays. numpy.convolve does the job for 1D arrays, but to the best of my knowledge there is no implementation of convolution for arbitrary-dimensional arrays. In the above link, I have suggested an equation for the convolution of two n-dimensional arrays Phi of shape P=[p1,...,pn] and Psi of shape Q=[q1,...,qn], where the omega_s are the elements of the n-dimensional array Omega of shape O = P + Q - 1 (in the standard Cauchy-product form, omega_s is the sum of phi_k * psi_l over all multi-index pairs with k + l = s), and <A,B>_F is the generalization of the Frobenius inner product…
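
For what the question is asking, scipy.signal.fftconvolve already performs N-dimensional linear convolution, which is exactly this Cauchy product of the coefficient arrays. A small 2-D sketch (the example polynomials are illustrative):

    import numpy as np
    from scipy.signal import fftconvolve

    # phi[i, j] is the coefficient of x^i * y^j:
    phi = np.array([[1, 1],
                    [1, 1]])      # (1 + x)(1 + y)
    psi = np.array([[1, 0],
                    [-1, 0]])     # 1 - x

    omega = fftconvolve(phi, psi)     # shape P + Q - 1 = (3, 2)
    print(np.round(omega))            # coefficients of (1 - x^2)(1 + y)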

cPickle very large amount of data

ε祈祈猫儿з submitted on 2019-11-29 15:36:06
Question: I have about 0.8 million images of 256x256 in RGB, which amount to over 7GB. I want to use them as training data in a Convolutional Neural Network and want to put them in a cPickle file along with their labels. Now, this is taking a lot of memory, to the extent that it needs to swap with my hard drive memory and almost consumes it all. Is this a bad idea? What would be the smarter/more practical way to load them into a CNN, or to pickle them, without causing too many memory issues? This is what the…
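
A hedged sketch of one common alternative: write the images into a single memory-mapped array on disk and stream mini-batches from it, instead of pickling everything into RAM at once. The file name, batch size, and load_image helper are assumptions for illustration:

    import numpy as np

    n, h, w, c = 800000, 256, 256, 3
    data = np.memmap('train_images.dat', dtype=np.uint8, mode='w+', shape=(n, h, w, c))

    # Fill it incrementally, one image at a time (decoding omitted):
    # for i in range(n):
    #     data[i] = load_image(i)        # load_image is a hypothetical loader

    # Later, read mini-batches lazily; only the touched pages come off disk.
    batch = data[0:128].astype(np.float32) / 255.0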