convolution

Multiple convolutions in Matlab

拥有回忆 submitted on 2019-12-06 14:07:20
I want to numerically calculate several convolutions, where the x, y, z, w functions are given in the code below:

t = linspace(-100,100,10000);
x = t.*exp(-t.^2);
y = exp(-4*t.^2).*cos(t);
z = (t-2)./((t-2).^2+3^2); % element-wise ./ (plain / here would be matrix division)
w = exp(-3*t.^2).*exp(2i*t);
u = conv(conv(conv(x,y),z),w);
plot(t,u) % ???

If we want to convolve N functions, what range should t span? Is this the most efficient way to calculate and plot multiple convolutions? Is it generally better to numerically integrate the functions for each convolution? Edit: This is the plot of the real part of my convolution, u vs t: whereas
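A NumPy analogue of the question's chain (my own sketch, not the asker's MATLAB code, and on a coarser grid than the question's 10000 points for speed) shows the answer to the range question: a 'full' convolution of N length-L arrays has length N*L - (N-1), so the time axis must span roughly N times the original interval, and each discrete conv must be scaled by the sample spacing dt to approximate the continuous integral.

```python
import numpy as np

t = np.linspace(-100, 100, 4000)
dt = t[1] - t[0]
x = t * np.exp(-t**2)
y = np.exp(-4 * t**2) * np.cos(t)
z = (t - 2) / ((t - 2)**2 + 3**2)
w = np.exp(-3 * t**2) * np.exp(2j * t)

# Each discrete convolution approximates an integral, hence one factor
# of dt per conv (three convs -> dt**3).
u = np.convolve(np.convolve(np.convolve(x, y), z), w) * dt**3

# 'full' convolution of 4 length-L arrays has length 4L - 3, so the
# result spans roughly [4*t_min, 4*t_max]: build a matching axis.
tu = np.linspace(4 * t[0], 4 * t[-1], len(u))
```

This also explains why `plot(t,u)` fails in the MATLAB version: `u` is far longer than `t`, so a new axis like `tu` is needed.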

Applying low pass filter

断了今生、忘了曾经 submitted on 2019-12-06 13:21:54
I want to simulate an interpolator in MATLAB using upsampling followed by a low-pass filter. First I up-sampled my signal by inserting zeros. Now I want to apply a low-pass filter in order to interpolate. I have designed the following filter: The cutoff is exactly 1/8 of the normalized frequency because I need to downsample afterward (it's a specific exercise to upsample, interpolate, and downsample in this particular order). However, when I apply this filter to my data using the function filter(myfilter, data), the following signal is generated: I really don't know what is happening to my
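A NumPy sketch of the same pipeline (my own illustration, assuming an upsampling factor of 8 from the 1/8 cutoff; the exact filter in the question is unknown). The usual gotcha it demonstrates: after zero-stuffing, the low-pass filter needs a passband gain equal to the upsampling factor L, or the interpolated signal comes out 1/L too small.

```python
import numpy as np

L = 8  # upsampling factor (assumed from the 1/8 cutoff in the question)

def upsample_interp(x, L, ntaps=129):
    # 1) zero-stuffing: insert L-1 zeros between samples
    up = np.zeros(len(x) * L)
    up[::L] = x
    # 2) windowed-sinc low-pass with cutoff 1/(2L) cycles/sample; the
    #    ideal taps (1/L)*sinc(n/L) are scaled by gain L to compensate
    #    for the energy lost to the inserted zeros
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = np.sinc(n / L) * np.hamming(ntaps)
    return np.convolve(up, h, mode="same")

t = np.arange(200)
x = np.sin(2 * np.pi * 0.01 * t)   # slow sine, well below the cutoff
y = upsample_interp(x, L)          # smooth unit-amplitude sine
```

Using a zero-phase symmetric FIR and `mode="same"` keeps the output aligned with the input, which makes amplitude problems easy to spot.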

Homomorphic Filter output

别等时光非礼了梦想. submitted on 2019-12-06 12:27:41
I have written the following code to develop a Homomorphic Filter. I think (though I am not sure) that color images are being filtered well. In the case of grayscale images, why is the kernel always green? Also, the filter was supposed to sharpen the image, but it is not doing so. What could have possibly gone wrong? Source code (the full project is in the Github repository):

public class HomomorphicFilter {
    public HomoMorphicKernel Kernel = null;
    public bool IsPadded { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
    public double RH { get; set; }
    public double RL { get; set; }
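For reference, the classic homomorphic pipeline the RH/RL parameters suggest can be sketched in a few lines of NumPy (my own illustration of the standard algorithm, not the asker's C# code; the Gaussian cutoff value is an arbitrary choice): log, forward FFT, high-frequency emphasis between RL and RH, inverse FFT, exp. Sharpening requires RH > 1 > RL; if both are near 1 the filter does almost nothing, which is one plausible reason the output looks unsharpened.

```python
import numpy as np

def homomorphic_filter(img, rl=0.5, rh=2.0, cutoff=30.0):
    # log domain: turns multiplicative illumination into an additive term
    logged = np.log1p(img.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(logged))
    # radial distance from the spectrum centre
    h, w = img.shape
    u = np.arange(h) - h / 2
    v = np.arange(w) - w / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # emphasis curve: rl at DC, rising to rh at high frequencies
    H = rl + (rh - rl) * (1 - np.exp(-D2 / (2 * cutoff ** 2)))
    out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.expm1(out)   # back out of the log domain

img = np.random.rand(64, 64) * 255
out = homomorphic_filter(img)
```

For grayscale input this is applied to the single channel; a kernel that renders green on a grayscale image usually means the visualization writes only one of the three color channels.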

Replicating TensorFlow's Conv2D operation using Eigen Tensors

戏子无情 submitted on 2019-12-06 12:22:13
I'm trying to implement a lightweight (minimal library dependencies) version of a TensorFlow graph in C++, and I'm trying to use Eigen Tensor objects to perform the graph's operations. Right now I'm stuck trying to use the Eigen Tensor.convolve() method to replicate the behaviour of TensorFlow's Conv2D operation. To keep things simple, my initial Conv2D operation has no padding and strides of one. The input to the convolutional layer is a 51x51x1 tensor which is being convolved with a filter bank of size 3x3x1x16. In TensorFlow this generates an output tensor of size 49x49x16. Setting up this
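A small NumPy reference (my own sketch, assuming TF's HWIO filter layout [3, 3, 1, 16]) pins down what the C++ code has to reproduce: for VALID padding and stride 1, a 51x51x1 input and a 3x3x1x16 filter bank must yield 49x49x16, and TF's Conv2D is cross-correlation (no kernel flip), which matters when comparing against a "convolve" routine.

```python
import numpy as np

def conv2d_valid(x, w):
    # x: (H, W, Cin), w: (kh, kw, Cin, Cout); VALID padding, stride 1.
    H, W, Cin = x.shape
    kh, kw, _, Cout = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1, Cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]  # (kh, kw, Cin)
            # contract patch against the filter bank over (kh, kw, Cin);
            # note: no kernel flip -- cross-correlation, as in TF
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

x = np.random.rand(51, 51, 1)
w = np.random.rand(3, 3, 1, 16)
y = conv2d_valid(x, w)
```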

Parallelizing a for loop (1D Naive Convolution) in CUDA

☆樱花仙子☆ submitted on 2019-12-06 11:04:21
Question: Can someone please help me convert a nested for loop into a CUDA kernel? Here is the function I am trying to convert into a CUDA kernel:

// Convolution on Host
void conv(int* A, int* B, int* out) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            out[i + j] += A[i] * B[j];
}

I have tried very hard to parallelize this code. Here is my attempt:

__global__ void conv_Kernel(int* A, int* B, int* out) {
    int i = blockIdx.x;
    int j = threadIdx.x;
    __shared__ int temp[N];
    __syncthreads();
    temp[i
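The core difficulty is that many (i, j) pairs write to the same out[i + j], which races when each pair is a thread. The standard race-free decomposition assigns one thread per OUTPUT index k and lets that thread own the whole sum. A NumPy transliteration (my own sketch of the index algebra, not CUDA) shows both forms compute the same thing:

```python
import numpy as np

N = 8
A = np.random.randint(-5, 5, N)
B = np.random.randint(-5, 5, N)

# Host loop from the question, transliterated directly:
out = np.zeros(2 * N - 1, dtype=int)
for i in range(N):
    for j in range(N):
        out[i + j] += A[i] * B[j]

# Per-output decomposition: for output k, i runs over the indices with
# both i and j = k - i in [0, N-1]. Each k is independent -- this is
# the loop body a conflict-free CUDA thread would execute.
out2 = np.array([sum(A[i] * B[k - i]
                     for i in range(max(0, k - N + 1), min(k, N - 1) + 1))
                 for k in range(2 * N - 1)])
```

The alternative is keeping the (i, j) parallelization and making the updates atomic, which is simpler but serializes the colliding writes.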

Understanding and evaluating template matching methods

半世苍凉 submitted on 2019-12-06 08:23:55
Question: OpenCV has the matchTemplate() function, which operates by sliding the template across the input image and generating an array output corresponding to the match. Where can I learn more about how to interpret the six TemplateMatchModes? I've read through and implemented code based on the tutorial, but other than understanding that one looks for minimum results for TM_SQDIFF for a match and maximums for the rest, I don't know how to interpret the different approaches, and the situations where
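Two of the six modes computed by hand make the min-vs-max distinction concrete (a sketch of the textbook definitions, not OpenCV itself): at each offset, TM_SQDIFF is the sum of squared differences between the template and the image patch (best match is the minimum, 0 for an exact match), while TM_CCORR is the raw inner product (best match is the maximum, but uniformly bright regions can also score high, which is why the normalized variants exist).

```python
import numpy as np

def match(image, tmpl):
    th, tw = tmpl.shape
    H = image.shape[0] - th + 1
    W = image.shape[1] - tw + 1
    sqdiff = np.zeros((H, W))
    ccorr = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch = image[y:y + th, x:x + tw]
            sqdiff[y, x] = np.sum((tmpl - patch) ** 2)  # TM_SQDIFF: minimize
            ccorr[y, x] = np.sum(tmpl * patch)          # TM_CCORR:  maximize
    return sqdiff, ccorr

img = np.random.rand(20, 20)
tmpl = img[5:9, 7:11].copy()   # template cut from the image at (5, 7)
sqdiff, ccorr = match(img, tmpl)
```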

PHP sharpness convolution matrix

血红的双手。 submitted on 2019-12-06 08:06:26
Question: I'm using a convolution matrix for sharpness in PHP GD and I want to change the sharpness "level". Where would I make changes to this if I want to make it more or less sharp?

$image = imagecreatefromjpeg('pic.jpg');
$matrix = array(
    array(0, -1, 0),
    array(-1, 5, -1),
    array(0, -1, 0)
);
imageconvolution($image, $matrix, 1, 0.001);
header("Content-type: image/jpeg");
imagejpeg($image);

Answer 1: Try looking at http://www.gamedev.net/reference/programming/features/imageproc/page2.asp There are lots
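One way to see where the "level" lives (my own decomposition, expressed in NumPy rather than PHP): the fixed kernel is identity + 1.0 × Laplacian, so scaling the Laplacian term scales the sharpening strength while the kernel sum stays at 1 (preserving overall brightness).

```python
import numpy as np

def sharpen_kernel(amount):
    # identity passes the pixel through; the Laplacian term adds the
    # edge response scaled by `amount` (1.0 reproduces the question's
    # matrix; larger is sharper, 0 is a no-op)
    identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
    laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    return identity + amount * laplacian

k = sharpen_kernel(1.0)   # == [[0,-1,0],[-1,5,-1],[0,-1,0]]
```

In the PHP code this means replacing the hard-coded matrix with one built the same way from a sharpness parameter.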

Why isn't this Conv2d_Transpose / deconv2d returning the original input in TensorFlow?

烈酒焚心 submitted on 2019-12-06 06:11:39
Question:

weights = tf.placeholder("float", [5,5,1,1])
imagein = tf.placeholder("float", [1,32,32,1])
conv = tf.nn.conv2d(imagein, weights, strides=[1,1,1,1], padding="SAME")
deconv = tf.nn.conv2d_transpose(conv, weights, [1,32,32,1], [1,1,1,1], padding="SAME")
dw = np.random.rand(5,5,1,1)
noise = np.random.rand(1,32,32,1)
sess = tf.InteractiveSession()
convolved = conv.eval(feed_dict={imagein: noise, weights: dw})
deconvolved = deconv.eval(feed_dict={imagein: noise, weights: dw})

I've been trying to figure
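A 1-D illustration of the underlying reason (my own sketch in NumPy, not TensorFlow): viewed as a matrix C, convolution maps x to Cx, and transposed convolution applies C.T. So deconv(conv(x)) = C.T C x, which equals x only if C is orthogonal, and a convolution matrix generally is not. "Transpose" here means the adjoint used for backpropagating gradients, not the inverse.

```python
import numpy as np

n, k = 8, 3
w = np.random.rand(k)

# Build the "SAME"-padded 1-D convolution as an explicit matrix C:
# row i computes sum_t w[t] * x[i + t - k//2], zero-padded at the edges.
C = np.zeros((n, n))
for i in range(n):
    for t in range(k):
        j = i + t - k // 2
        if 0 <= j < n:
            C[i, j] = w[t]

x = np.random.rand(n)
y = C @ x            # "conv"
x_back = C.T @ y     # "transpose conv" = adjoint, NOT the inverse
```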

Why does FFT accelerate the calculation involved in convolution?

被刻印的时光 ゝ submitted on 2019-12-06 06:07:25
Question: I am seeing a lot of literature saying that by using the FFT one can compute a convolution faster. I know that one needs to take the FFT and then the inverse FFT of the result, but I really do not understand why using the FFT makes the convolution faster? Answer 1: FFT speeds up convolution for large enough filters, because direct convolution requires N multiplications (and N-1 additions) for each output sample, i.e. on the order of N^2 operations for a block of N samples. Taking into account that one has to
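The mechanism can be shown in a few lines (my own sketch): convolution in the time domain equals pointwise multiplication in the frequency domain, so FFT, multiply, inverse FFT replaces the O(N^2) direct sum with O(N log N) work, while producing numerically identical output (zero-padding to the full linear-convolution length avoids circular wrap-around).

```python
import numpy as np

def fft_convolve(a, b):
    # pad to the full linear-convolution length so the circular
    # convolution computed by the FFT matches the linear one
    n = len(a) + len(b) - 1
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    return np.fft.irfft(A * B, n)   # pointwise product -> inverse FFT

a = np.random.rand(256)
b = np.random.rand(64)
direct = np.convolve(a, b)   # O(N^2) direct sum
fast = fft_convolve(a, b)    # O(N log N) via the FFT
```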

ValueError: Error when checking target: expected dense_2 to have 4 dimensions, but got array with shape (7942, 1)

帅比萌擦擦* submitted on 2019-12-06 05:42:26
I have been using the following functional API for an image classification task using a CNN:

def create_model(X_train, X_test):
    visible = Input(shape=(X_train.shape[0], X_train.shape[1], 1))
    conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
    hidden1 = Dense(10, activation='relu')(pool2)
    output = Dense(1, activation='sigmoid')(hidden1)
    model = Model(inputs=visible, outputs=output)
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

X_tr = np.reshape(X_train, (1, X_train.shape[0], X_train.shape[1], 1))
X_te = np.reshape(X_test, (1, X_test
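The shape arithmetic behind the error can be sketched without Keras (my own NumPy illustration; the feature-map sizes are made up): Dense acts only on the LAST axis, so applied directly to a Conv2D output it keeps the spatial axes and the model ends in a 4-D (batch, h, w, 1) tensor, while the labels have shape (7942, 1). Flattening the spatial axes before the Dense layers collapses the output to the expected 2-D shape.

```python
import numpy as np

batch, h, w, ch = 1, 45, 45, 32          # illustrative sizes
feat = np.random.rand(batch, h, w, ch)   # stand-in for a Conv2D output

W1 = np.random.rand(ch, 10)
hidden = feat @ W1                       # Dense(10) on last axis: (1, 45, 45, 10)
W2 = np.random.rand(10, 1)
out = hidden @ W2                        # Dense(1): (1, 45, 45, 1) -- still 4-D!

# Fix: what a Flatten() layer inserted before the Dense layers would
# produce -- 2-D (batch, features), so the final Dense gives (batch, 1).
flat = feat.reshape(batch, -1)
```

(The excerpt's code also references an undefined pool2 and reshapes the whole dataset into a single sample, which are separate problems.)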