convolution

Fast 2D convolution for DSP

爷,独闯天下 submitted 2019-12-02 21:14:47
I want to implement some image-processing algorithms intended to run on a BeagleBoard. These algorithms use convolutions extensively. I'm trying to find a good C implementation of 2D convolution (probably using the Fast Fourier Transform). I also want the algorithm to be able to run on the BeagleBoard's DSP, because I've heard the DSP is optimized for these kinds of operations (with its multiply-accumulate instruction). I have no background in the field, so I think it won't be a good idea to implement the convolution myself (I probably won't do it as well as someone who
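Not part of the original question: before targeting the DSP, a reference model is useful for validating a C port. The sketch below (function names are my own) implements "valid" 2D convolution both as the direct multiply-accumulate loop a DSP's MAC instruction accelerates and via the FFT, so the two can be checked against each other.

```python
import numpy as np

def conv2d_direct(img, kernel):
    """Direct 'valid' 2D convolution: the multiply-accumulate loop a DSP runs."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * flipped)
    return out

def conv2d_fft(img, kernel):
    """Same 'valid' convolution via the FFT (faster for large kernels)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] + kh - 1, img.shape[1] + kw - 1
    full = np.fft.irfft2(np.fft.rfft2(img, (h, w)) * np.fft.rfft2(kernel, (h, w)), (h, w))
    return full[kh-1:img.shape[0], kw-1:img.shape[1]]  # crop full conv to 'valid'
```

The FFT route wins roughly when the kernel is large; for small kernels the direct loop (which maps naturally onto MAC hardware) is usually faster.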

How to use pre-multiplied alpha during image convolution to solve the alpha bleed problem?

最后都变了- submitted 2019-12-02 20:52:52
I'm trying to apply a box blur to a transparent image, and I'm getting a "dark halo" around the edges. Jerry Huxtable has a short mention of the problem, and a very good demonstration showing the problem happening. But I, for the life of me, cannot understand how "pre-multiplied alpha" can fix the problem. Now for a very simple example: I have a 3x3 image, containing one red and one green pixel. In reality the remaining pixels are transparent. Now we will apply a 3x3 box blur to the image. For simplicity's sake, we'll only calculate the new value of the center pixel. The way a box blur works
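To make the arithmetic concrete (my own NumPy sketch, not Huxtable's code): averaging straight RGBA drags the color toward the transparent pixels' black RGB, while premultiplying by alpha first, averaging, and then dividing the alpha back out preserves the visible color.

```python
import numpy as np

def box_blur_center(rgba):
    """New center value of a 3x3 RGBA patch, averaging straight alpha (the naive way)."""
    return rgba.reshape(-1, 4).mean(axis=0)

def box_blur_center_premultiplied(rgba):
    """Premultiply color by alpha, average, then un-premultiply."""
    a = rgba[..., 3:4]
    pre = np.concatenate([rgba[..., :3] * a, a], axis=-1)
    avg = pre.reshape(-1, 4).mean(axis=0)
    if avg[3] > 0:
        avg[:3] /= avg[3]  # back to straight (non-premultiplied) alpha
    return avg

# The question's 3x3 patch: one red pixel, one green, the rest fully transparent.
patch = np.zeros((3, 3, 4))
patch[0, 1] = [1, 0, 0, 1]   # red
patch[1, 0] = [0, 1, 0, 1]   # green

naive = box_blur_center(patch)                   # RGB ~0.11: the "dark halo"
correct = box_blur_center_premultiplied(patch)   # RGB 0.5: correct yellowish mix
```

The naive average yields RGB (1/9, 1/9, 0), i.e. a nearly black pixel, because seven transparent black pixels vote on the color. The premultiplied version yields (0.5, 0.5, 0): only the two visible pixels contribute, weighted by coverage.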

Why is a 1x1 convolution used in deep neural networks?

為{幸葍}努か submitted 2019-12-02 16:21:44
I'm looking at the InceptionV3 (GoogLeNet) architecture and cannot understand why we need conv1x1 layers. I know how convolution works, but I only see a benefit when the patch size is > 1. nessuno: You can think of a 1x1xD convolution as a dimensionality-reduction technique when it's placed somewhere in a network. If you have an input volume of 100x100x512 and you convolve it with a set of D filters, each of size 1x1x512, you reduce the number of features from 512 to D. The output volume is, therefore, 100x100xD. As you can see, this (1x1x512)xD convolution is mathematically equivalent to a fully
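The answer's 100x100x512 -> 100x100xD claim can be checked numerically (a sketch with toy sizes of my own, not from the answer): a bank of D 1x1xC filters is exactly one CxD matrix applied independently at every pixel.

```python
import numpy as np

H, W, C, D = 4, 4, 512, 64          # toy spatial size; C=512 input channels, D filters
x = np.random.rand(H, W, C)
w = np.random.rand(C, D)            # D filters of shape 1x1xC, stacked as a CxD matrix

# 1x1 convolution = the same CxD matrix multiplied into every pixel's channel vector
out = (x.reshape(-1, C) @ w).reshape(H, W, D)
```

The spatial dimensions are untouched; only the channel count changes from C to D, which is why Inception uses 1x1 layers to cheapen the 3x3 and 5x5 branches.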

Simple GLSL convolution shader is atrociously slow

孤人 submitted 2019-12-02 15:56:15
I'm trying to implement a 2D outline shader in OpenGL ES 2.0 for iOS. It is insanely slow. As in 5 fps slow. I've tracked it down to the texture2D() calls. However, without those, any convolution shader is impossible. I've tried using lowp instead of mediump, but with that everything is just black; although it does give another 5 fps, it's still unusable. Here is my fragment shader:

varying mediump vec4 colorVarying;
varying mediump vec2 texCoord;

uniform bool enableTexture;
uniform sampler2D texture;
uniform mediump float k;

void main() {
    const mediump float step_w = 3.0/128.0;
    const mediump
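One standard way to cut the number of texture2D() fetches (not from the question itself): a KxK box kernel is separable, so two 1D passes produce the same image with 2K taps per pixel instead of K*K. A NumPy sketch of the equivalence, using edge-clamped borders like GL_CLAMP_TO_EDGE; the function names are my own:

```python
import numpy as np

def box_blur_2d(img, k):
    """K*K taps per pixel: what a single-pass KxK shader does."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy+img.shape[0], dx:dx+img.shape[1]]
    return out / (k * k)

def box_blur_separable(img, k):
    """Two 1D passes, 2*K taps per pixel, identical result."""
    pad = k // 2
    p = np.pad(img, ((0, 0), (pad, pad)), mode='edge')       # horizontal pass
    tmp = sum(p[:, d:d+img.shape[1]] for d in range(k)) / k
    p = np.pad(tmp, ((pad, pad), (0, 0)), mode='edge')       # vertical pass
    return sum(p[d:d+img.shape[0], :] for d in range(k)) / k
```

On a GPU the two passes are two draw calls with a small intermediate texture; the fetch count drops from 9 to 6 for a 3x3 kernel and the saving grows with kernel size.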

Keras conv1d layer parameters: filters and kernel_size

99封情书 submitted 2019-12-02 14:42:12
I am very confused by these two parameters of the conv1d layer in Keras: https://keras.io/layers/convolutional/#conv1d The documentation says: filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution). kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window. But that does not seem to relate to the standard terminology I see in many tutorials, such as https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/ and https:/
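A minimal NumPy re-implementation (my own; shapes follow Keras Conv1D with padding='valid') ties the two parameters to concrete dimensions: kernel_size is the window length along the time axis, and filters is the number of output channels.

```python
import numpy as np

def conv1d(x, kernels):
    """x: (steps, in_channels); kernels: (kernel_size, in_channels, filters).
    Returns (steps - kernel_size + 1, filters), like Keras Conv1D, padding='valid'."""
    k, cin, filters = kernels.shape
    steps = x.shape[0] - k + 1
    out = np.zeros((steps, filters))
    for t in range(steps):
        # each filter sees a window of kernel_size steps across ALL input channels
        out[t] = np.tensordot(x[t:t+k], kernels, axes=([0, 1], [0, 1]))
    return out
```

So for a length-10 input with 3 channels, kernel_size=4 and filters=8 give a (7, 8) output: 7 window positions, one column per filter.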

MATLAB convolution using the GPU

倖福魔咒の submitted 2019-12-02 10:16:18
I tried MATLAB's convolution functions conv2 and convn with gpuArray. For example, convn(gpuArray.rand(100,100,10,'single'), gpuArray.rand(5,'single')) and compared it to the CPU version convn(rand(100,100,10), rand(5)). Unfortunately, the GPU version is much slower than the CPU version, especially noticeable when I put the function into a loop (which will be relevant for me). Does anyone know an alternative for fast convolution using MATLAB and the GPU for relatively small filtering kernels, from 5x5 to 14x14? The GPU performance is limited by the data array size [100x100x10] and [5x5] in
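Not from the question: one way to amortize per-call GPU overhead for small kernels is to batch the whole stack into a single FFT-based convolution rather than looping. A NumPy sketch of the idea (the function name is hypothetical); the same pattern maps onto MATLAB's fft2/ifft2, which also accept gpuArray inputs.

```python
import numpy as np

def convn_same_2d_stack(stack, kernel):
    """Convolve every 2D slice of an (H, W, N) stack with one 2D kernel in a
    single batched FFT call, returning the 'same'-sized result."""
    H, W, N = stack.shape
    kh, kw = kernel.shape
    fh, fw = H + kh - 1, W + kw - 1
    F = np.fft.rfft2(stack, (fh, fw), axes=(0, 1))
    K = np.fft.rfft2(kernel, (fh, fw))[:, :, None]   # broadcast over the N slices
    full = np.fft.irfft2(F * K, (fh, fw), axes=(0, 1))
    lo_h, lo_w = (kh - 1) // 2, (kw - 1) // 2
    return full[lo_h:lo_h + H, lo_w:lo_w + W]        # crop to 'same'
```

One big transform keeps the device busy instead of issuing many tiny kernels, which is typically where the loop-over-small-arrays pattern loses on a GPU.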

How to use Bidirectional RNN and Conv1D in keras when shapes are not matching?

放肆的年华 submitted 2019-12-02 08:22:05
I am brand new to deep learning, so I'm reading through Deep Learning with Keras by Antonio Gulli and learning a lot. I want to start using some of the concepts. I want to try to implement a neural network with a 1-dimensional convolutional layer that feeds into a bidirectional recurrent layer (like the paper below). All the tutorials or code snippets I've encountered do not implement anything remotely similar to this (e.g. image recognition) or use an older version of Keras with different functions and usage. What I'm trying to do is a variation of this paper: (1) convert DNA sequences to one
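The shape bookkeeping behind this kind of mismatch can be traced with plain NumPy (toy sizes are my own, not the paper's): Conv1D with padding='valid' shortens the time axis but keeps a 3-D (batch, timesteps, features) tensor, which is exactly what a bidirectional recurrent layer expects, so no Flatten or Reshape should sit between the two.

```python
import numpy as np

# Hypothetical sizes: one-hot DNA of length 100 with 4 channels (A, C, G, T).
batch, steps, channels = 2, 100, 4
filters, kernel_size = 32, 5

x = np.random.rand(batch, steps, channels)
w = np.random.rand(kernel_size, channels, filters)

# Conv1D, padding='valid': the time axis shrinks to steps - kernel_size + 1 = 96,
# but the output stays 3-D: (batch, timesteps, features).
conv_steps = steps - kernel_size + 1
conv = np.stack([
    np.tensordot(x[:, t:t+kernel_size, :], w, axes=([1, 2], [0, 1]))
    for t in range(conv_steps)
], axis=1)                                # -> (2, 96, 32)
```

A Bidirectional(LSTM(...)) layer consumes (batch, 96, 32) directly; the usual error comes from flattening the conv output prematurely, which destroys the time axis the RNN needs.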

A formula to find the size of a matrix after convolution

梦想的初衷 submitted 2019-12-02 04:39:50
If my input size is 5x5, the stride is 1x1, and the filter size is 3x3, then I can compute on paper that the final size of the convolved matrix will be 3x3. But when the input size changes to 28x28 or 50x50, how can I compute the size of the convolved matrix on paper? Is there any formula or trick for that? Yes, there's a formula (see the details in the cs231n class):

W2 = (W1 - F + 2*P) / S + 1
H2 = (H1 - F + 2*P) / S + 1

where W1xH1 is the original image size, F is the filter size, S is the stride, and P is one more parameter, the padding size. Also note that the resulting channel size
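A direct transcription of the answer's formula as a small helper (the integer division assumes the stride divides the padded extent evenly, as in valid configurations):

```python
def conv_output_size(w1, h1, f, s=1, p=0):
    """Spatial size after a convolution: W2 = (W1 - F + 2P)/S + 1, same for H."""
    return (w1 - f + 2 * p) // s + 1, (h1 - f + 2 * p) // s + 1
```

For the question's cases: a 5x5 input with a 3x3 filter and stride 1 gives 3x3, a 28x28 input gives 26x26, and adding padding P=1 keeps the 28x28 size.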

Would Richardson–Lucy deconvolution work for recovering the latent kernel?

感情迁移 submitted 2019-12-02 04:27:34
I am aware that Richardson–Lucy deconvolution is for recovering the latent image, but suppose we have a noisy image and the original image. Can we find the kernel that caused the transformation? Below is MATLAB code for Richardson–Lucy deconvolution, and I am wondering if it is easy to modify it to recover the kernel instead of the latent image. My thought is that we change the convolution option to 'valid' so the output would represent the kernel; what do you think?

function latent_est = RL_deconvolution(observed, psf, iterations)
% to utilise the conv2 function we must make sure the