convolution

scipy.convolve gives “ValueError: object too deep for desired array” with 3D array and 3D kernel

Posted by 血红的双手。 on 2019-12-20 07:47:39
Question: I am using Python 3 with Anaconda Spyder on CentOS 7. The call scipy.convolve(nda, box) gives the error message "ValueError: object too deep for desired array". nda and box have the same type and dimensionality: np.shape(nda) is (70, 70, 70), np.shape(box) is (3, 3, 3), and both are numpy.ndarray. It is my understanding that scipy.convolve can handle multidimensional objects, so I cannot understand this error message. Answer 1: The name…
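The answer above is truncated. A hedged sketch of the usual workaround: in older SciPy versions the top-level scipy.convolve is simply a re-export of the 1-D numpy.convolve, which rejects 3-D inputs, whereas scipy.signal.convolve and scipy.ndimage.convolve handle N-dimensional arrays. The arrays below are random placeholders with the shapes from the question:

```python
import numpy as np
from scipy import signal, ndimage

# Placeholder data with the shapes from the question.
nda = np.random.rand(70, 70, 70)
box = np.ones((3, 3, 3)) / 27.0   # simple 3x3x3 box kernel

# N-D convolution; mode='same' keeps the output shape equal to nda's shape.
out_sig = signal.convolve(nda, box, mode='same')

# Alternative: scipy.ndimage.convolve (boundary handling via the `mode` argument).
out_ndi = ndimage.convolve(nda, box, mode='reflect')

print(out_sig.shape, out_ndi.shape)   # (70, 70, 70) (70, 70, 70)
```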

tensorflow conv2d different start index between even and odd stride

Posted by 倖福魔咒の on 2019-12-20 06:17:36
Question: To my understanding from the tf.nn.conv2d documentation, for SAME convolution (no matter the stride) the first dot product should be centered around (0,0). Yet, as you can see below, when the stride is odd the first dot product seems to be centered around (1,1). In this toy example the input shape is [5,5,1] and the filter shape is [3,3,1,1]: res = tf.nn.conv2d(X, F, strides=[1,x,x,1], padding='SAME') Stride 1 result: array([[ 1.49573362, 2.65084887, 2.96818447, 3.04787111, 1.89275599], [ 3.1941781 , 4.47312069, 4…
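For reference, TensorFlow documents SAME padding as out = ceil(in / stride) and pad_total = max((out - 1) * stride + filter - in, 0), with the smaller half of the padding applied on the top/left. The sketch below is not TensorFlow code, just that documented arithmetic; it shows how the effective centre of the first window shifts with the stride for a 5-wide input and a 3-wide filter:

```python
import math

def same_padding_1d(in_size, filter_size, stride):
    """Reproduce TensorFlow's documented SAME padding arithmetic for one dimension."""
    out_size = math.ceil(in_size / stride)
    pad_total = max((out_size - 1) * stride + filter_size - in_size, 0)
    pad_before = pad_total // 2          # the extra pixel (if any) goes after
    pad_after = pad_total - pad_before
    return out_size, pad_before, pad_after

for stride in (1, 2, 3):
    print(stride, same_padding_1d(5, 3, stride))
# stride 1 -> output 5, pad (1, 1): first window covers inputs -1..1, centered on input 0
# stride 2 -> output 3, pad (1, 1): first window is also centered on input 0
# stride 3 -> output 2, pad (0, 1): no padding before, so the first window covers
#             inputs 0..2 and is centered on input 1 -- the shift the question describes
```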

Matlab convolution using GPU

Posted by 亡梦爱人 on 2019-12-20 06:14:13
Question: I tried MATLAB's convolution functions conv2 and convn with gpuArray. For example, I compared convn(gpuArray.rand(100,100,10,'single'), gpuArray.rand(5,'single')) to the CPU version convn(rand(100,100,10), rand(5)). Unfortunately, the GPU version is much slower than the CPU version, which is especially noticeable when I put the call inside a loop (which is the relevant case for me). Does anyone know an alternative for fast convolution using MATLAB and the GPU for relatively small filtering…

A formula to find the size of a matrix after convolution

Posted by 怎甘沉沦 on 2019-12-20 04:54:33
Question: If my input size is 5x5, the stride is 1x1, and the filter size is 3x3, then I can compute on paper that the final size of the convolved matrix will be 3x3. But when the input size changes to 28x28 or 50x50, how can I compute the size of the convolved matrix on paper? Is there any formula or trick to do that? Answer 1: Yes, there's a formula (see the details in the cs231n class notes): W2 = (W1 - F + 2*P) / S + 1 and H2 = (H1 - F + 2*P) / S + 1, where W1xH1 is the original image size, F is the filter…
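To make the formula concrete, here is a small worked sketch (assuming, as in the cs231n notes, that P is the amount of zero padding and S is the stride); it reproduces the 5x5 example from the question and covers the 28x28 and 50x50 cases with no padding:

```python
def conv_output_size(w1, h1, f, p=0, s=1):
    """Output size of a convolution: W2 = (W1 - F + 2P)/S + 1, H2 = (H1 - F + 2P)/S + 1."""
    w2 = (w1 - f + 2 * p) // s + 1
    h2 = (h1 - f + 2 * p) // s + 1
    return w2, h2

print(conv_output_size(5, 5, 3))    # (3, 3)  -- matches the example in the question
print(conv_output_size(28, 28, 3))  # (26, 26)
print(conv_output_size(50, 50, 3))  # (48, 48)
```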

C# Convolution filter for any size matrix (1x1, 3x3, 5x5, …) not fully applied

Posted by 大兔子大兔子 on 2019-12-20 03:10:02
Question: I'm making a convolution filter for my project and I managed to make it work for any matrix size, but as the matrix gets bigger I noticed that not all pixels are changed. Pictures showing the problem: the original image; a Blur 9x9 filter; an EdgeDetection 9x9 filter. As you can see, there is a little stripe that is never changed, and as the matrix gets bigger the stripe also gets bigger (with 3x3 it wasn't visible). My convolution matrix class: public class ConvMatrix { public int Factor = 1;…
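The answer is not included in this excerpt. As a hedged illustration (in Python rather than the poster's C#) of the most common cause of such a stripe: if the filtering loops only run where the kernel fits entirely inside the image, a border of width (k - 1) / 2 is never written, and that border grows with the kernel size, exactly as described:

```python
import numpy as np

def naive_filter(image, kernel):
    """Naive filtering loop (correlation; flipping the kernel would make it a true
    convolution) that skips border pixels the kernel cannot fully cover -- those
    pixels keep their original values, leaving an unprocessed stripe of width k // 2."""
    k = kernel.shape[0]
    r = k // 2
    out = image.copy()                      # untouched border stays as the original
    h, w = image.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = np.sum(image[y - r:y + r + 1, x - r:x + r + 1] * kernel)
    return out

img = np.random.rand(32, 32)
blur3 = np.ones((3, 3)) / 9.0
blur9 = np.ones((9, 9)) / 81.0
print(np.sum(naive_filter(img, blur3) == img))  # 3x3 kernel: 1-pixel border untouched (124 pixels)
print(np.sum(naive_filter(img, blur9) == img))  # 9x9 kernel: 4-pixel border untouched (448 pixels)
```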

How can I get a 1D convolution in theano

Posted by 北城余情 on 2019-12-19 18:18:07
Question: The only function I can find is for 2D convolutions, described here... Is there an optimised 1D function? Answer 1: While I believe there's no conv1d in theano, Lasagne (a neural network library on top of theano) has several implementations of a Conv1D layer. Some are based on theano's conv2d function with one of the dimensions set to 1; some use single or multiple dot products. I would try all of them; maybe the dot-product based ones will perform better than conv2d with width=1. https://github…
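A minimal sketch of the conv2d-with-height-1 trick the answer mentions, assuming theano.tensor.nnet.conv2d with its usual (batch, channels, rows, cols) layout (this is an illustration, not taken from the Lasagne implementations):

```python
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

# Treat a 1-D signal as a 4-D tensor with height 1: (batch, channels, 1, length).
signal = T.tensor4('signal')
filters = T.tensor4('filters')   # (n_filters, channels, 1, filter_length)

conv = conv2d(signal, filters, border_mode='valid')
f = theano.function([signal, filters], conv)

x = np.random.rand(1, 1, 1, 100).astype(theano.config.floatX)   # one signal of length 100
w = np.random.rand(8, 1, 1, 5).astype(theano.config.floatX)     # eight length-5 filters
print(f(x, w).shape)   # (1, 8, 1, 96)
```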

Can't access TensorFlow Adam optimizer namespace

Posted by 拜拜、爱过 on 2019-12-19 04:17:25
Question: I'm trying to learn about GANs and I'm working through the example here. The code below, using the Adam optimizer, gives me the error "ValueError: Variable d_w1/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?" I'm using TF 1.1.0. d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.fill([batch_size, 1], 0.9))) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf…
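No answer is included in this excerpt. As a hedged sketch of a pattern that commonly avoids this error in TF 1.x GAN code (not necessarily the accepted answer for this question): build the Adam optimizers in a scope that is not marked for reuse, since Adam needs to create new slot variables such as d_w1/Adam, and restrict each optimizer to its own variable list. The d_/g_ name prefixes and the g_loss argument are assumptions based on the typical tutorial layout; d_loss_real and d_loss_fake come from the question.

```python
import tensorflow as tf  # TF 1.x API, matching the question

def build_trainers(d_loss, g_loss, learning_rate=1e-4):
    """Create separate Adam trainers for discriminator and generator variables."""
    tvars = tf.trainable_variables()
    d_vars = [v for v in tvars if v.name.startswith('d_')]
    g_vars = [v for v in tvars if v.name.startswith('g_')]

    # Build the optimizers in a scope that is NOT marked for reuse, so Adam is free
    # to create its per-variable slot variables (e.g. d_w1/Adam, d_w1/Adam_1).
    with tf.variable_scope(tf.get_variable_scope(), reuse=False):
        d_trainer = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
        g_trainer = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
    return d_trainer, g_trainer

# Usage with the losses from the question (g_loss assumed defined elsewhere):
# d_trainer, g_trainer = build_trainers(d_loss_real + d_loss_fake, g_loss)
```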

Python Keras: how to change the size of the input after a convolution layer into an LSTM layer

Posted by 廉价感情. on 2019-12-19 03:41:51
Question: I have a problem with the connection between a convolution layer and an LSTM layer. The data has shape (75, 5): 75 timesteps with 5 data points for each timestep. What I want to do is run a convolution over the (75x5) input, get new convolved (75x5) data, and feed that data into the LSTM layer. However, it does not work, because the output of the convolution layer has a number-of-filters dimension which I do not need. The shape of the convolution layer output is therefore (1, 75, 5) and the input needed for the LSTM…
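No answer is included in this excerpt. One common way to keep the (75, 5) shape through the convolution is a Conv1D layer with padding='same' and filters equal to the number of input features; the following is a hedged Keras sketch, not the poster's code:

```python
from keras.models import Sequential
from keras.layers import Conv1D, LSTM, Dense

model = Sequential()
# padding='same' keeps 75 timesteps; filters=5 keeps 5 features per timestep,
# so the output shape (75, 5) matches what the LSTM expects.
model.add(Conv1D(filters=5, kernel_size=3, padding='same',
                 activation='relu', input_shape=(75, 5)))
model.add(LSTM(32))            # consumes the (75, 5) sequence
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()
```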

How does the Richardson–Lucy algorithm work? Code example?

Posted by 五迷三道 on 2019-12-18 16:56:45
Question: I am trying to figure out how deconvolution works. I understand the idea behind it, but I want to understand some of the actual algorithms which implement it - algorithms which take as input a blurred image with its point spread function (blur kernel) and produce as output the latent image. So far I have found the Richardson–Lucy algorithm, where the math does not seem to be that difficult, but I can't figure out how the actual algorithm works. Wikipedia says: This leads to an equation for which…
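The Wikipedia quotation is cut off above. As a hedged, minimal NumPy/SciPy sketch of the standard Richardson–Lucy iteration: the current estimate is repeatedly multiplied by the mirrored-PSF convolution of the ratio observed / reblurred. This is an illustration, not production deconvolution code; scikit-image also ships a ready-made version as skimage.restoration.richardson_lucy.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Plain Richardson-Lucy iteration:
    estimate <- estimate * ( (observed / (estimate (*) psf)) (*) psf_flipped )
    where (*) is convolution and psf_flipped is the PSF mirrored in both axes."""
    estimate = np.full(observed.shape, 0.5)          # flat initial guess
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (reblurred + eps)          # eps avoids division by zero
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate

# Toy example: blur a random "image" with a known PSF, then try to deblur it.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = fftconvolve(image, psf, mode='same')
restored = richardson_lucy(blurred, psf, iterations=50)
print(np.abs(restored - image).mean())   # compare with np.abs(blurred - image).mean()
```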

How to correctly get layer weights from Conv2D in keras?

Posted by 筅森魡賤 on 2019-12-18 11:57:08
Question: I have a Conv2D layer defined as: Conv2D(96, kernel_size=(5, 5), activation='relu', input_shape=(image_rows, image_cols, 1), kernel_initializer=initializers.glorot_normal(seed), bias_initializer=initializers.glorot_uniform(seed), padding='same', name='conv_1') This is the first layer in my network. The input dimensions are 64 by 160, and the image has 1 channel. I am trying to visualize the weights from this convolutional layer but I am not sure how to get them. Here is how I am doing it now: 1. Call layer.get…
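The numbered steps in the question are truncated above. As a hedged sketch of the usual route: layer.get_weights() returns [kernel, bias], where the kernel has shape (kernel_rows, kernel_cols, in_channels, filters) = (5, 5, 1, 96), so each filter can be sliced out and displayed as a 5x5 image. The tiny model below only recreates the layer from the question for illustration; the seed value and image dimensions are placeholders.

```python
from keras.models import Sequential
from keras.layers import Conv2D
from keras import initializers
import matplotlib.pyplot as plt

seed = 42
image_rows, image_cols = 64, 160

model = Sequential([
    Conv2D(96, kernel_size=(5, 5), activation='relu',
           input_shape=(image_rows, image_cols, 1),
           kernel_initializer=initializers.glorot_normal(seed),
           bias_initializer=initializers.glorot_uniform(seed),
           padding='same', name='conv_1')
])

kernel, bias = model.get_layer('conv_1').get_weights()
print(kernel.shape, bias.shape)        # (5, 5, 1, 96) (96,)

# Show the first 16 filters as 5x5 grayscale images.
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(kernel[:, :, 0, i], cmap='gray')
    ax.axis('off')
plt.show()
```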