deconvolution

How to stack multiple layers of conv2d_transpose() in TensorFlow

拥有回忆 submitted on 2021-02-06 09:21:42
Question: I'm trying to stack two layers of tf.nn.conv2d_transpose() to up-sample a tensor. It works fine during the forward pass, but I get an error during backpropagation: ValueError: Incompatible shapes for broadcasting: (8, 256, 256, 24) and (8, 100, 100, 24). Basically, I've just set the output of the first conv2d_transpose as the input of the second one: convt_1 = tf.nn.conv2d_transpose(...); convt_2 = tf.nn.conv2d_transpose(convt_1, ...). Using just one conv2d_transpose, everything works fine. […]
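The excerpt cuts off before the asker's actual shapes and arguments, so the following is only a minimal sketch (with made-up shapes and filter sizes) of how two tf.nn.conv2d_transpose calls are usually chained: each layer's output_shape has to match the spatial size its input actually has, otherwise the gradient computation complains about mismatched shapes much like the error above.

```python
import tensorflow as tf

# Hypothetical shapes: batch of 8, 50x50 feature maps, upsampled 2x per layer.
batch, h, w = 8, 50, 50
x = tf.random.normal([batch, h, w, 32])

# Filter layout for conv2d_transpose is [height, width, out_channels, in_channels].
w1 = tf.Variable(tf.random.normal([3, 3, 24, 32]))
w2 = tf.Variable(tf.random.normal([3, 3, 24, 24]))

with tf.GradientTape() as tape:
    # First up-sampling: 50x50 -> 100x100.
    convt_1 = tf.nn.conv2d_transpose(
        x, w1, output_shape=[batch, 2 * h, 2 * w, 24],
        strides=[1, 2, 2, 1], padding='SAME')
    # The second layer starts from convt_1's spatial size (100x100), so its
    # output_shape must be 200x200, not an unrelated hard-coded value.
    convt_2 = tf.nn.conv2d_transpose(
        convt_1, w2, output_shape=[batch, 4 * h, 4 * w, 24],
        strides=[1, 2, 2, 1], padding='SAME')
    loss = tf.reduce_mean(convt_2)

grads = tape.gradient(loss, [w1, w2])  # backprop now sees consistent shapes
```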

How to create a layer to invert a softmax (TensorFlow, Python)?

笑着哭i submitted on 2021-02-05 12:09:37
Question: I am building a deconvolution network and would like to add a layer that reverses a softmax. I wrote a basic Python function that returns the inverse of a softmax for a given matrix, wrapped it in a TensorFlow Lambda layer, and added it to my model. I get no error, but when I run a prediction the output is all zeros. Without this layer the network produces non-zero output, so the zeros must be due to my inv_softmax function, which […]
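For context, a softmax can only be inverted up to an additive constant, and the usual way to express that inverse is a clipped log. The sketch below is an assumption about what such a Lambda layer might look like, not the asker's code; the inv_softmax name and the epsilon value are illustrative.

```python
import tensorflow as tf

# If y = softmax(z), then log(y) = z - logsumexp(z): the logits are recovered
# up to a constant shift. The epsilon guards against log(0) wiping out the output.
def inv_softmax(y, eps=1e-12):
    return tf.math.log(y + eps)

model = tf.keras.Sequential([
    tf.keras.layers.Softmax(),
    tf.keras.layers.Lambda(inv_softmax),  # undoes the softmax up to a constant
])

x = tf.random.normal([4, 10])
# Output is approximately x - logsumexp(x, axis=-1, keepdims=True), not all zeros.
print(model(x))
```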

Deconvolution layer FCN initialization - loss drops too fast

自闭症网瘾萝莉.ら submitted on 2020-01-07 03:56:26
Question: I'm training a small FCN (10M weights on 12K images; see e.g. Long et al., 2015). The architecture is the following (it starts from the FCN-8s fc7 layer): fc7 -> relu1 -> dropout -> conv2048 -> conv1024 -> conv512 -> deconv1 -> deconv2 -> deconv3 -> deconv4 -> deconv5 -> crop -> softmax_with_loss. When I initialized all deconv layers with Gaussian weights, I got some (though not always) reasonable results. Then I decided to do it the right way and used the scripts provided by Shelhamer (e.g. https://github.com/zeakey […]
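The "right way" referred to here is the bilinear-interpolation initialization commonly used for FCN deconvolution layers (what Shelhamer's surgery scripts build). A rough NumPy sketch of that kind of initializer, under the usual assumption that each class channel is upsampled independently:

```python
import numpy as np

def bilinear_kernel(kernel_size, channels):
    """Bilinear-interpolation filter often used to initialize FCN upsampling
    (deconvolution) layers: one identical 2-D kernel per channel."""
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    filt = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
    # Shape [kernel_size, kernel_size, channels, channels]; off-diagonal filters
    # stay zero so each channel is upsampled on its own.
    weights = np.zeros((kernel_size, kernel_size, channels, channels), dtype=np.float32)
    for c in range(channels):
        weights[:, :, c, c] = filt
    return weights

print(bilinear_kernel(4, 1)[:, :, 0, 0])  # the classic 2x bilinear upsampling filter
```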

How to update the weights of a Deconvolutional Layer?

时光总嘲笑我的痴心妄想 submitted on 2019-12-23 07:57:58
Question: I'm trying to develop a deconvolutional layer (or, to be precise, a transposed convolutional layer). In the forward pass I do a full convolution (convolution with zero padding); in the backward pass I do a valid convolution (convolution without padding) to pass the errors to the previous layer. The gradients of the biases are easy to compute: it is simply a matter of averaging over the superfluous dimensions. The problem is that I don't know how to update the weights of the convolutional filters. What are […]
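The excerpt stops right at the key question, so here is only a toy 1-D sketch (NumPy, illustrative names) of the relationship being asked about: when the forward pass is a full convolution, the filter gradient is the valid cross-correlation of the upstream error with the layer input, which the finite-difference check below confirms.

```python
import numpy as np

# 1-D toy gradient check for a layer whose forward pass is a full convolution.
rng = np.random.default_rng(0)
x = rng.normal(size=5)      # layer input
w = rng.normal(size=3)      # filter
dy = rng.normal(size=7)     # upstream error, same length as the full-conv output

y = np.convolve(x, w, mode='full')           # forward: len(x) + len(w) - 1 = 7

# Weight gradient: 'valid' cross-correlation of the upstream error with the input.
dw = np.correlate(dy, x, mode='valid')
# Input gradient: 'valid' cross-correlation of the upstream error with the filter
# (i.e. a valid convolution with the flipped kernel, as in the question).
dx = np.correlate(dy, w, mode='valid')

# Finite-difference check of dw against L = sum(y * dy).
eps = 1e-6
dw_num = np.array([
    (np.convolve(x, w + eps * np.eye(3)[j], 'full') @ dy -
     np.convolve(x, w - eps * np.eye(3)[j], 'full') @ dy) / (2 * eps)
    for j in range(3)
])
print(np.allclose(dw, dw_num))  # True
```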

Deblurring an image

老子叫甜甜 submitted on 2019-12-22 04:34:15
Question: I am trying to deblur an image in Python but have run into some problems. Here is what I've tried, but keep in mind that I am not an expert on this topic. According to my understanding, if you know the point spread function, you should be able to deblur the image quite simply by performing a deconvolution. However, this doesn't seem to work, and I don't know if I'm doing something stupid or if I just don't understand things correctly. In Mark Newman's Computational Physics book (using Python), […]
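The book example itself isn't shown here, so the snippet below is only a generic sketch of Fourier-domain deconvolution with a known, synthetic Gaussian PSF; the small floor on the PSF spectrum is the usual guard against the division-by-near-zero that tends to ruin naive deconvolution on real, noisy images.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    # Sum-normalized Gaussian point spread function centered in the array.
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def deblur(blurred, psf, eps=1e-6):
    # Divide out the PSF in the frequency domain, flooring tiny spectral values
    # so we never divide by (almost) zero.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    H = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(B / H))

# Toy round trip on synthetic data: blur, then deconvolve with the same PSF.
img = np.random.rand(64, 64)
psf = gaussian_psf(img.shape, sigma=1.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
print(np.abs(deblur(blurred, psf) - img).max())  # should be tiny in this noise-free case
```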

What are the constraints on the divisor argument of scipy.signal.deconvolve to ensure numerical stability?

别来无恙 submitted on 2019-12-21 20:36:45
Question: Here is my problem: I am going to process data coming from a system for which I will have a good idea of the impulse response. Having used Python for some basic scripting before, I am getting to know the scipy.signal.convolve and scipy.signal.deconvolve functions. In order to get some confidence in my final solution, I would like to understand their requirements and limitations. I used the following test: 1. I built a basic signal made of two Gaussians. 2. I built a Gaussian impulse response. […]
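As background for that test: scipy.signal.deconvolve effectively performs inverse filtering (polynomial long division), so the divisor needs a non-negligible leading coefficient and, viewed as an FIR filter, its zeros should sit inside the unit circle for the division to be numerically stable. A symmetric Gaussian divisor typically violates the second condition (its zeros come in reciprocal pairs), which is the usual source of the blow-ups people report. The sketch below uses a deliberately well-conditioned, made-up divisor just to show the clean round trip:

```python
import numpy as np
from scipy import signal

x = np.linspace(0, 10, 200)
sig = np.exp(-(x - 3) ** 2 / 0.5) + 0.7 * np.exp(-(x - 7) ** 2 / 0.8)  # two Gaussians

divisor = np.array([1.0, 0.5, 0.1])   # minimum-phase: roots well inside |z| < 1
print(np.abs(np.roots(divisor)))      # quick stability check on the divisor

recorded = signal.convolve(sig, divisor, mode='full')
recovered, remainder = signal.deconvolve(recorded, divisor)

print(np.allclose(recovered, sig))    # True: clean round trip
print(np.abs(remainder).max())        # ~0 up to floating-point error
```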

CNN: input stride vs. output stride

不羁岁月 submitted on 2019-12-21 19:52:20
Question: In the paper 'Fully Convolutional Networks for Semantic Segmentation', the author distinguishes between input stride and output stride in the context of deconvolution. How do these terms differ from each other? Answer 1: Input stride is the stride of the filter: how much you shift the filter in the output. Output stride is actually a nominal value: we get a feature map in a CNN after doing several convolution and max-pooling operations. Let's say our input image is 224 x 224 and our final […]
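The answer is cut off mid-example; the arithmetic it appears to be setting up can be sketched as follows (the 7 x 7 final feature map is an assumed, VGG-style value, not taken from the original text):

```python
# "Input stride" is just the stride argument of a (de)convolution, while
# "output stride" is the overall downsampling factor between the network input
# and a given feature map. Values below are illustrative assumptions.
input_size = 224
feature_map_size = 7                      # assumed VGG-style final feature map
output_stride = input_size // feature_map_size
print(output_stride)                      # 32: each feature-map cell covers a 32x32 input patch

# A deconvolution with input stride s up-samples by a factor of s:
deconv_input_stride = 32
print(feature_map_size * deconv_input_stride)  # back to 224
```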