convolution

Working on separable Gabor filters in MATLAB

ε祈祈猫儿з submitted on 2019-12-12 10:23:42
Question: A filter g is called separable if it can be expressed as the product of two vectors, grow and gcol. Employing one-dimensional filters decreases the two-dimensional filter's computational complexity from O(M^2 N^2) to O(2M N^2), where M and N are the width (and height) of the filter mask and the image, respectively. In this Stack Overflow link, I wrote the equation of a Gabor filter in the spatial domain, and then I wrote MATLAB code that creates 64 Gabor features. According to the
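A minimal NumPy sketch of why separability pays off (hypothetical random data, not the Gabor filter from the question): a mask built as the outer product of a column and a row vector gives the same result whether applied as one 2-D pass (~M^2 multiplies per pixel) or as two 1-D passes (~2M multiplies per pixel).

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 32
g_row = rng.standard_normal(M)        # 1-D row filter
g_col = rng.standard_normal(M)        # 1-D column filter
g = np.outer(g_col, g_row)            # separable M x M mask
image = rng.standard_normal((N, N))

def conv2d_full(img, k):
    """Direct 'full' 2-D convolution with explicit loops (reference)."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(H):
        for j in range(W):
            out[i:i + kh, j:j + kw] += img[i, j] * k
    return out

# One 2-D pass: ~M^2 multiplies per output pixel
full = conv2d_full(image, g)

# Two 1-D passes: ~2M multiplies per output pixel
rows = np.array([np.convolve(r, g_row, mode='full') for r in image])
sep = np.array([np.convolve(c, g_col, mode='full') for c in rows.T]).T

assert np.allclose(full, sep)         # identical output, far fewer multiplies
```

The same factorization applies to any separable mask; whether a given Gabor kernel is separable depends on its orientation.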

Applying low pass filter

五迷三道 submitted on 2019-12-12 10:03:30
Question: I want to simulate an interpolator in MATLAB using upsampling followed by a low-pass filter. First I up-sampled my signal by inserting 0's. Now I want to apply a low-pass filter in order to interpolate. I have designed the following filter: The cutoff is exactly 1/8 of the normalized frequency because I need to downsample afterward. (It's a specific exercise to upsample, interpolate, and downsample in this particular order.) However, when I apply this filter to my data using the function
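A Python sketch of the zero-stuff-then-filter interpolator (hypothetical test signal and filter length, not the asker's MATLAB design), assuming an upsampling factor of 8 to match the 1/8 cutoff. A windowed sinc with its zero crossings at multiples of the upsampling factor passes the original samples through unchanged and fills in the inserted zeros:

```python
import numpy as np

L = 8                                  # upsampling factor
n = np.arange(64)
x = np.sin(2 * np.pi * 0.02 * n)       # slow test sinusoid

# 1) Upsample: insert L-1 zeros between the original samples
up = np.zeros(len(x) * L)
up[::L] = x

# 2) Low-pass FIR with cutoff pi/L: a Hamming-windowed sinc, scaled so
#    its passband gain is L (compensating for the inserted zeros)
taps = 129
m = np.arange(taps) - (taps - 1) // 2
h = np.sinc(m / L) * np.hamming(taps)

y = np.convolve(up, h, mode='same')    # interpolated signal

# Since sinc(k) = 0 at nonzero integers, the original samples survive
# exactly; the samples in between track the underlying sinusoid.
dense = np.sin(2 * np.pi * 0.02 * np.arange(len(y)) / L)
err = np.max(np.abs(y[taps:-taps] - dense[taps:-taps]))
```

After this step the signal can be filtered again and downsampled, as the exercise requires.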

Deconvolution with R (decon and deamer package)

旧街凉风 submitted on 2019-12-12 09:10:02
Question: I have a model of the form y = x + noise. I know the distributions of y and of the noise and would like to obtain the distribution of x. So I tried to deconvolve the distributions with R. I found two packages (decon and deamer), and I thought both methods should do more or less the same thing, but I don't understand why deconvolving with DeconPdf gives me something like a normal distribution while deconvolving with deamerKE gives me a uniform distribution. Here is an example code: library
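Setting the two R packages aside, the principle both rely on can be illustrated in Python (hypothetical Gaussian x and noise, not the asker's data): for independent x and noise, the characteristic function of y factors as phi_y = phi_x * phi_e, so dividing the observed cf by the known noise cf "deconvolves" the distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
sigma_x, sigma_e = 1.0, 0.5
x = rng.normal(0.0, sigma_x, n)   # hidden variable we want
e = rng.normal(0.0, sigma_e, n)   # noise with a KNOWN distribution
y = x + e                         # what we actually observe

# Independence means phi_y(t) = phi_x(t) * phi_e(t), so dividing the
# empirical cf of y by the noise cf recovers an estimate of phi_x.
t = np.linspace(-2, 2, 41)
phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)  # empirical cf of y
phi_e = np.exp(-(sigma_e * t) ** 2 / 2)           # known Gaussian noise cf
phi_x_hat = phi_y / phi_e

phi_x_true = np.exp(-(sigma_x * t) ** 2 / 2)      # cf of the hidden x
```

Kernel deconvolution estimators (which is what decon and deamer implement) essentially smooth and invert this ratio; differences in their smoothing choices can explain quite different-looking density estimates.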

Dimensions in a convolutional neural network

心不动则不痛 submitted on 2019-12-12 08:48:55
Question: I am trying to understand how the dimensions in a convolutional neural network behave. In the figure below the input is a 28-by-28 matrix with 1 channel. Then there are 32 5-by-5 filters (with stride 2 in height and width). So I understand that the result is 14-by-14-by-32. But then in the next convolutional layer we have 64 5-by-5 filters (again with stride 2). So why is the result 7-by-7-by-64 and not 7-by-7-by-32*64? Aren't we applying each one of the 64 filters to each one of the 32 channels
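A toy NumPy layer (hypothetical random weights, 'same' padding assumed) makes the channel bookkeeping concrete: each of the 64 filters does see all 32 input channels, but its 32 per-channel responses are summed into a single output map, so the layer emits 64 channels, not 32*64.

```python
import numpy as np

def conv_layer(x, w, stride=2):
    """x: (H, W, C_in), w: (k, k, C_in, C_out), 'same' zero padding.
    Each filter spans all C_in channels; its per-channel products are
    SUMMED into one map, so the output has C_out channels."""
    H, W, c_in = x.shape
    k, _, _, c_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out_h = (H + 2 * pad - k) // stride + 1
    out_w = (W + 2 * pad - k) // stride + 1
    out = np.zeros((out_h, out_w, c_out))
    for i in range(out_h):
        for j in range(out_w):
            patch = xp[i * stride:i * stride + k, j * stride:j * stride + k, :]
            # contract over k * k * C_in for every output channel at once
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((28, 28, 1))
h1 = conv_layer(x, rng.standard_normal((5, 5, 1, 32)))    # -> (14, 14, 32)
h2 = conv_layer(h1, rng.standard_normal((5, 5, 32, 64)))  # -> (7, 7, 64)
```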

How do I perform a convolution in python with a variable-width Gaussian?

萝らか妹 submitted on 2019-12-12 08:34:10
Question: I need to perform a convolution using a Gaussian, but the width of the Gaussian needs to change. I'm not doing traditional signal processing; instead, I need to take my perfect probability density function (PDF) and "smear" it, based on the resolution of my equipment. For instance, suppose my PDF starts out as a spike/delta function. I'll model this as a very narrow Gaussian. After being run through my equipment, it will be smeared out according to some Gaussian resolution. I can
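Because the kernel width varies, this is no longer a true (shift-invariant) convolution, so FFT tricks don't apply directly; one common approach is to spread each source point by its own Gaussian. A sketch with a hypothetical delta-like PDF and a made-up resolution function sigma(x):

```python
import numpy as np

x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
pdf = np.zeros_like(x)
pdf[100] = 1.0 / dx                  # delta-like spike PDF at x = 0

# Equipment resolution (Gaussian sigma) that varies with position
sigma = 0.3 + 0.05 * np.abs(x)

def smear(x, f, sigma):
    """Spread the mass at each point x[j] with its OWN Gaussian kernel
    N(x[j], sigma[j]); a plain convolution would use one fixed sigma."""
    dx = x[1] - x[0]
    out = np.zeros_like(f)
    for j, fj in enumerate(f):
        if fj == 0.0:
            continue
        k = np.exp(-0.5 * ((x - x[j]) / sigma[j]) ** 2)
        k /= k.sum() * dx            # each kernel integrates to 1
        out += fj * dx * k           # spread this point's probability mass
    return out

out = smear(x, pdf, sigma)           # the spike becomes a sigma(0) Gaussian
```

Since each kernel is normalized, total probability is preserved; the loop is O(n^2) but fine for modest grids.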

Convolve2d just by using Numpy

こ雲淡風輕ζ submitted on 2019-12-12 07:11:13
Question: I am studying image processing using NumPy and facing a problem with filtering by convolution. I would like to convolve a grayscale image (convolve a 2D array with a smaller 2D array). Does anyone have an idea how to refine my method? I know that SciPy supports convolve2d, but I want to build a convolve2d using only NumPy. What I have done: First, I made a 2D array of the submatrices. a = np.arange(25).reshape(5,5) # original matrix submatrices = np.array([ [a[:-2,:-2], a[:-2,1:-1], a[:-2,2:]],
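The shifted-submatrices idea can be finished with einsum (hypothetical 3x3 Laplacian-style kernel for illustration): each entry submatrices[i, j] is the image shifted so that filter tap (i, j) lines up with every 'valid' output position, and einsum weights and sums them in one shot.

```python
import numpy as np

a = np.arange(25, dtype=float).reshape(5, 5)     # grayscale "image"
f = np.array([[0.,  1., 0.],
              [1., -4., 1.],
              [0.,  1., 0.]])                    # 3x3 kernel

# sub[i, j] is the 'valid' region of the image shifted by (i, j)
sub = np.array([[a[i:i + 3, j:j + 3] for j in range(3)] for i in range(3)])

# weight each shifted view by its filter tap and sum them
out = np.einsum('ij,ijkl->kl', f, sub)

# sanity check against a direct sliding-window loop
ref = np.zeros((3, 3))
for k in range(3):
    for l in range(3):
        ref[k, l] = np.sum(f * a[k:k + 3, l:l + 3])
assert np.allclose(out, ref)
```

Note this computes cross-correlation; for true convolution (what scipy.signal.convolve2d does), flip the kernel first with f[::-1, ::-1].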

R, integrate at each point of an array

断了今生、忘了曾经 submitted on 2019-12-12 04:37:01
Question: I'm stuck computing an integral at each point of an array. The idea is first to create a function ("Integrand"), then create a second function ("MyConvolve") that computes the necessary integral. Here's what I have done so far: Integrand = function(s,x) { 1/4*(abs(x-s)<=1)*(abs(s)<=1) } MyConvolve = function(func,data) { return( integrate(func, lower=-Inf, upper=Inf, data) ) } Now, running the code with some array, I get an error message: SomeMatrix = replicate(10, rnorm(10)) MyConvolve
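The likely issue is that integrate computes one integral for one scalar x, so it has to be applied elementwise over the array rather than handed the whole matrix. A Python sketch of the same computation (grid quadrature is enough here, since the indicator restricts the support to [-1, 1]):

```python
import numpy as np

def integrand(s, x):
    # 1/4 * 1{|x - s| <= 1} * 1{|s| <= 1}, vectorized over s
    return 0.25 * (np.abs(x - s) <= 1) * (np.abs(s) <= 1)

def my_convolve(x):
    """Integral over s for a SINGLE point x; the indicator limits the
    support to [-1, 1], so a fine grid plus the trapezoid rule suffices."""
    s = np.linspace(-1.0, 1.0, 20001)
    v = integrand(s, x)
    return np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(s))

# One integral per array entry -- the elementwise step the original
# call skipped (in R this would be sapply / apply over the matrix):
some_matrix = np.random.default_rng(0).normal(size=(10, 10))
result = np.vectorize(my_convolve)(some_matrix)
```

Analytically the result is the triangular overlap 0.25 * (2 - |x|) for |x| < 2 and 0 outside, which makes a handy check.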

How does backpropagation work in a Convolutional Neural Network (CNN)?

筅森魡賤 submitted on 2019-12-12 04:07:53
Question: I have a few questions regarding CNNs. In the figure below, a 5*5 kernel has been used between layer S2 and layer C3. Q1. How many kernels have been used there? Is each of these kernels connected to each of the feature maps in layer S2? Q2. When using max-pooling, while backpropagating the error, how does a max-pooling feature/neuron know/determine from which feature map/neuron in its immediately preceding layer it got the max value? Q3. If we want to train the kernel, then we initialize it with random values; is
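Regarding Q2, the standard answer is that the forward pass records the argmax position of each pooling window, and the backward pass routes the incoming gradient only to that position. A small NumPy sketch (hypothetical 4x4 input, 2x2 pooling):

```python
import numpy as np

def maxpool_forward(x, size=2):
    """2x2 max pooling that also records WHERE each max came from;
    that recorded mask is exactly what backprop needs."""
    h, w = x.shape
    out = np.zeros((h // size, w // size))
    mask = np.zeros_like(x, dtype=bool)
    for i in range(0, h, size):
        for j in range(0, w, size):
            win = x[i:i + size, j:j + size]
            out[i // size, j // size] = win.max()
            r, c = np.unravel_index(win.argmax(), win.shape)
            mask[i + r, j + c] = True    # remember the winner's position
    return out, mask

def maxpool_backward(dout, mask, size=2):
    # broadcast each upstream gradient over its window, then keep only
    # the argmax position; all other inputs get zero gradient
    return np.repeat(np.repeat(dout, size, axis=0), size, axis=1) * mask

x = np.array([[1., 2., 5., 3.],
              [4., 0., 1., 2.],
              [7., 8., 2., 1.],
              [0., 3., 4., 9.]])
out, mask = maxpool_forward(x)
dx = maxpool_backward(np.array([[1., 2.], [3., 4.]]), mask)
```

So a max-pooling unit does not need to "know" anything extra at backward time; the routing information is cached during the forward pass.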

Do I have to create a whole new array to store results from convolution?

╄→尐↘猪︶ㄣ submitted on 2019-12-11 19:27:08
Question: I was playing with my convolution algorithm in Python, and I noticed that when I slide the filter along the original array and update its entries in place, the result comes out quite murky, whereas if I write into a totally new array, the result has levels similar to the original. My silly question: is the latter the right way to write this algorithm (I'm guessing it is)? What is lost in the former? Or rather, is there a way I can write this algorithm so that I don't have to initialize
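A fresh output array is indeed the right approach: updating in place means later outputs read already-filtered neighbors, so the filter gets applied over and over to the left context. A minimal 1-D demonstration (hypothetical box-blur kernel):

```python
import numpy as np

x = np.array([0., 0., 1., 1., 1., 0., 0.])
k = np.array([1., 1., 1.]) / 3.0          # simple box blur

def conv_new(x, k):
    """Write into a FRESH array: every output uses only original inputs."""
    out = np.zeros_like(x)
    r = len(k) // 2
    for i in range(r, len(x) - r):
        out[i] = np.dot(x[i - r:i + r + 1], k)
    return out

def conv_inplace(x, k):
    """Overwrite x while sliding: each output reads neighbors that were
    ALREADY blurred, compounding the filter and washing out the signal."""
    x = x.copy()
    r = len(k) // 2
    for i in range(r, len(x) - r):
        x[i] = np.dot(x[i - r:i + r + 1], k)
    return x

good = conv_new(x, k)        # plateau center stays at 1.0
bad = conv_inplace(x, k)     # plateau center is dragged down
```

The two can only agree if the kernel never overlaps a position it has already written, which a sliding window always violates; hence the "murky" output.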

Is it possible to extend “im2col” and “col2im” to N-D images?

孤街浪徒 submitted on 2019-12-11 18:48:28
Question: "Im2col" has already been implemented efficiently for 2-D images in Python (see Implement MATLAB's im2col 'sliding' in Python). I was wondering whether it is possible to extend this to arbitrary N-D images. Many applications involve high-dimensional data (e.g. convolutions, filtering, max pooling, etc.). Answer 1: The purpose of this question was really just to post my solution to this problem publicly. I could not find such a solution on Google, so I decided to take a stab at it myself.
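One way to sketch an N-D im2col (not necessarily the answerer's solution) is NumPy's sliding_window_view, which works for any number of dimensions; note the blocks are flattened in C (row-major) order, unlike MATLAB's column-major im2col.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def im2col_nd(arr, block_shape):
    """N-D generalization of im2col 'sliding': every sliding block of
    shape block_shape becomes one column of the result."""
    windows = sliding_window_view(arr, block_shape)
    n_blocks = int(np.prod(windows.shape[:arr.ndim]))
    # one flattened block per column, blocks in row-major scan order
    return windows.reshape(n_blocks, int(np.prod(block_shape))).T

# 2-D sanity check against the familiar im2col behaviour
a = np.arange(16).reshape(4, 4)
cols = im2col_nd(a, (2, 2))          # 4 rows (2*2), 9 columns ((4-2+1)^2)

# and the same function handles a 3-D volume
v = np.arange(27).reshape(3, 3, 3)
vol_cols = im2col_nd(v, (2, 2, 2))   # 8 rows (2^3), 8 columns (2^3)
```

Because sliding_window_view returns views, only the final reshape copies data.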