deconvolution

Why do we have to specify output shape during deconvolution in tensorflow?

时光毁灭记忆、已成空白 Submitted on 2019-12-01 22:20:57
The TF documentation has an output_shape parameter in tf.nn.conv2d_transpose. Why is this needed? Don't the strides, filter size, and padding parameters of the layer determine the output shape, just as they do during convolution?

This question was already asked on the TF GitHub and received an answer: output_shape is needed because the shape of the output can't always be computed from the shape of the input; specifically, if the output is smaller than the filter and VALID padding is used, the input is an empty image. However, this degenerate case is unimportant most of the time.
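A pure-Python sketch (not TensorFlow code; the helper name is my own) of the more common source of ambiguity: with SAME padding and stride 2, a forward convolution maps several different input widths onto the same output width, so a transposed convolution starting from that output width cannot infer which original width to restore without being told via output_shape:

```python
import math

def conv_out_size(in_size, stride):
    # Forward convolution output size with SAME padding: ceil(in / stride).
    return math.ceil(in_size / stride)

# Inputs of width 7 and width 8 both collapse to width 4 under stride 2,
# so the transposed op can't recover the input width from 4 alone.
print(conv_out_size(7, 2))  # 4
print(conv_out_size(8, 2))  # 4
```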

Deconvolution2D layer in keras

早过忘川 Submitted on 2019-11-30 09:20:21
This layer is not documented very well and I'm having a bit of trouble figuring out exactly how to use it. I'm trying something like: input_img = Input(shape=(1, h, w)) x = Convolution2D(16, 7, 7, activation='relu', border_mode='valid')(input_img) d = Deconvolution2D(1, 7, 7, (None, 1, 2*h, 2*w)) x = d(x) but when I check d.output_shape, I get the original shape of the image instead of twice that size (which is what I was expecting). Any help will be greatly appreciated!

Short answer: you need to add subsample=(2,2) to Deconvolution2D if you wish the output to truly be twice as large.
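The shape arithmetic behind that answer can be sketched in plain Python (the helper name is my own; this assumes the usual 'valid' formula out = (in - 1) * stride + kernel for a transposed convolution). With subsample=(1,1) a 7x7 deconvolution exactly undoes a 7x7 valid convolution, which is why d.output_shape comes back at the original size; a stride of 2 is what actually (roughly) doubles it:

```python
def deconv_out_size(in_size, kernel, stride):
    # Transposed convolution output size with 'valid' border mode:
    # out = (in - 1) * stride + kernel
    return (in_size - 1) * stride + kernel

h = 64
conv_out = h - 7 + 1                     # 7x7 valid convolution: 58
print(deconv_out_size(conv_out, 7, 1))   # 64  -> back to the original size
print(deconv_out_size(conv_out, 7, 2))   # 121 -> roughly twice the size
```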

looking for source code of from gen_nn_ops in tensorflow

笑着哭i Submitted on 2019-11-28 20:19:00
I am new to TensorFlow for deep learning and interested in the deconvolution (convolution transpose) operation in TensorFlow. I need to take a look at the source code for the deconvolution operation. The function is, I believe, conv2d_transpose() in nn_ops.py. However, that function calls another function, gen_nn_ops.conv2d_backprop_input(). I need to take a look at what is inside this function, but I am unable to find it in the repository. Any help would be appreciated.

You can't find this source because it is automatically generated by bazel. If you build from source, you'll see the generated file.

Tensorflow Convolution Neural Network with different sized images

自古美人都是妖i Submitted on 2019-11-28 12:08:36
I am attempting to create a deep CNN that can classify each individual pixel in an image. I am replicating the architecture from the image below, taken from this paper. In the paper it is mentioned that deconvolutions are used so that any size of input is possible. This can be seen in the image below. Github Repository

Currently, I have hard-coded my model to accept images of size 32x32x7, but I would like to accept any size of input. What changes would I need to make to my code to accept variable sized input? x = tf.placeholder(tf.float32, shape=[None, 32*32*7]) y_ = tf.placeholder(tf.float32,
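The reason deconvolutions enable arbitrary input sizes is that every layer in a fully convolutional network computes its output size from its input size, with no fixed-size dense layer in between. A pure-Python sketch of that shape arithmetic (helper names and the example architecture are my own, assuming SAME padding for conv/pool and an output size of in * stride for the transposed convolution):

```python
import math

def fcn_output_size(in_size, layers):
    # Each layer is (kind, stride). conv/pool with SAME padding divide
    # the size (rounding up); a deconv multiplies it by its stride.
    size = in_size
    for kind, stride in layers:
        if kind in ("conv", "pool"):
            size = math.ceil(size / stride)
        elif kind == "deconv":
            size = size * stride
    return size

# Two 2x downsampling stages followed by a 4x transposed convolution:
arch = [("conv", 1), ("pool", 2), ("conv", 1), ("pool", 2), ("deconv", 4)]

# The output size tracks the input size for any input, so per-pixel
# labels line up without hard-coding 32x32:
for n in (32, 48, 64):
    print(n, fcn_output_size(n, arch))
```

In TF1 terms, the corresponding code change is to give the placeholder unspecified spatial dimensions (e.g. shape=[None, None, None, 7]) instead of flattening to 32*32*7, since flattening bakes the size into the graph.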

Understanding scipy deconvolve

北城以北 Submitted on 2019-11-27 08:05:18
I'm trying to understand scipy.signal.deconvolve. From a mathematical point of view, convolution is just multiplication in Fourier space, so I would expect that for two functions f and g: Deconvolve(Convolve(f,g), g) == f. In numpy/scipy this is either not the case or I'm missing an important point. Although there are some questions related to deconvolve on SO already (like here and here), they do not address this point; others remain unclear (this) or unanswered (here). There are also two questions on Signal Processing SE (this and this), the answers to which are not helpful in this regard.
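One useful piece of context: scipy.signal.deconvolve works by polynomial long division, not by a Fourier-domain inverse. A minimal pure-Python sketch of both operations (helper names are my own, not the scipy API) shows that for this division-based definition the round trip does recover f exactly, provided the divisor's leading coefficient is nonzero:

```python
def convolve(f, g):
    # Full discrete convolution, same result as np.convolve(f, g).
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def deconvolve(h, g):
    # Polynomial long division, the same idea scipy.signal.deconvolve
    # uses: find q with convolve(q, g) + remainder == h.
    n = len(h) - len(g) + 1
    quotient, remainder = [], list(h)
    for i in range(n):
        coeff = remainder[i] / g[0]   # requires g[0] != 0
        quotient.append(coeff)
        for j, b in enumerate(g):
            remainder[i + j] -= coeff * b
    return quotient

f = [1.0, 2.0, 3.0]
g = [1.0, 1.0]
h = convolve(f, g)        # [1.0, 3.0, 5.0, 3.0]
print(deconvolve(h, g))   # [1.0, 2.0, 3.0] -- f is recovered
```

When the round trip fails in practice it is typically because the signals were convolved with a boundary mode other than 'full', or because the division is numerically unstable for the given g; the identity itself holds for exact full convolution.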
