convolution

TensorFlow: Trainable Variable Masking

半腔热情 submitted on 2019-12-01 07:37:43

Question: I am working on a convolutional neural net that requires some parts of the kernel weights to be untrainable. tf.nn.conv2d(x, W) takes a trainable variable W as its weights. How can I make some of the elements of W untrainable?

Answer 1: Maybe you could have your trainable weights W1, a mask M indicating where the trainable variables are, and a constant/untrainable weight matrix W2, and use

    W = tf.multiply(W1, tf.cast(M, dtype=W1.dtype)) + tf.multiply(W2, tf.cast(tf.logical_not(M), dtype=W2.dtype))
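The masking idea in the answer can be sketched with plain NumPy (not TensorFlow, so it runs stand-alone); the values and the mask here are made up for illustration:

```python
import numpy as np

# W1 plays the role of the trainable weights, W2 the frozen weights,
# and M marks the positions that should stay trainable.
W1 = np.array([[1.0, 2.0], [3.0, 4.0]])        # "trainable" values
W2 = np.array([[9.0, 9.0], [9.0, 9.0]])        # "frozen" values
M = np.array([[True, False], [False, True]])   # True = trainable

# Effective kernel: trainable entries come from W1, frozen ones from W2.
# In TensorFlow only W1 would receive gradients; W2 stays constant.
W = W1 * M + W2 * ~M
```

Because the mask zeroes out W1 wherever M is False, gradient updates to W1 at those positions never reach the effective kernel W, which is what makes those elements behave as untrainable.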

What is the meaning of 2D stride in convolution?

依然范特西╮ submitted on 2019-12-01 06:16:53

Question: I know what stride means when it is just an integer (the step by which you apply the filter to the image). But what about (1, 1), or an even higher-dimensional stride?

Answer: The stride defines how the filter is moved along the input image (tensor). Nothing stops you from striding along different axes differently; e.g., stride=[1, 2] means move 1 px at a time along axis 0 and 2 px at a time along axis 1. This particular combination isn't common, but it is possible. The TensorFlow API goes even further and allows custom striding for all axes of the 4D input tensor (see tf.nn.conv2d). Using this API it's not …

FFT Convolution - 3x3 kernel

孤街浪徒 submitted on 2019-12-01 05:44:30

I have written some routines to sharpen a grayscale image using a 3x3 kernel:

    -1 -1 -1
    -1  9 -1
    -1 -1 -1

The following code works well for non-FFT (spatial-domain) convolution, but not for FFT-based (frequency-domain) convolution: the output image comes out blurred. I have several problems: (1) This routine is not able to generate the desired result, and it also freezes the application.

    public static Bitmap ApplyWithPadding(Bitmap image, Bitmap mask) {
        if (image.PixelFormat == PixelFormat.Format8bppIndexed) {
            Bitmap imageClone = (Bitmap)image.Clone();
            Bitmap maskClone = …

Fast convolution algorithm

谁都会走 submitted on 2019-12-01 05:38:39

Question: I need to convolve two one-dimensional signals; one has on average 500 points (a Hanning window function), the other 125,000. Per run, I need to apply the convolution operation three times. I already have an implementation running, based on the SciPy documentation. You can see the code here if you want (Delphi code ahead):

    function Convolve(const signal_1, signal_2 : ExtArray) : ExtArray;
    var
      capital_k : Integer;
      capital_m : Integer;
      smallest : Integer;
      y : ExtArray;
      n : …

Memory Issues Using Keras Convolutional Network

六月ゝ 毕业季﹏ submitted on 2019-12-01 01:24:37

I am very new to ML on big data. I have played with Keras's generic convolutional examples for dog/cat classification before, but when applying a similar approach to my own set of images I run into memory issues. My dataset consists of very long images, 10048 x 1687 pixels in size. To work around the memory issues, I am using a batch size of 1, feeding one image at a time to the model. The model has two convolutional layers, each followed by max-pooling, which together make the flattened layer roughly 290,000 inputs right before the fully-connected layer. Immediately after …

Can't access TensorFlow Adam optimizer namespace

亡梦爱人 submitted on 2019-12-01 00:29:53

I'm trying to learn about GANs and I'm working through the example here. The code below, using the Adam optimizer, gives me the error "ValueError: Variable d_w1/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?" I'm using TF 1.1.0.

    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=Dx, labels=tf.fill([batch_size, 1], 0.9)))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=Dg, labels=tf.zeros_like(Dg)))
    d_loss = d_loss_real + d_loss_fake
    tvars = tf.trainable_variables()
    d_vars = [var for …

Python Keras: how to change the size of input after convolution layer into LSTM layer

谁说我不能喝 submitted on 2019-11-30 22:51:52

I have a problem with the connection between a convolution layer and an LSTM layer. The data has shape (75, 5): 75 timesteps with 5 data points per timestep. What I want to do is convolve the (75 x 5) input, get new convolved (75 x 5) data, and feed that into the LSTM layer. However, it does not work, because the output of the convolution layer carries a filter dimension that I do not need: the convolution layer's output shape is (1, 75, 5), while the LSTM layer expects (75, 5). How do I take just the first filter?

    model = Sequential()
    model.add(Convolution2D(1, 5, 5, …

Keras 1D CNN: How to specify dimension correctly?

和自甴很熟 submitted on 2019-11-30 21:10:20

So, what I'm trying to do is classify exoplanets versus non-exoplanets using the Kepler data obtained here. The data is a time series with dimensions (num_of_samples, 3197). I figured out that this can be done using a 1D convolutional layer in Keras, but I keep getting the dimensions wrong and hit the following error:

    Error when checking model input: expected conv1d_1_input to have shape (None, 3197, 1) but got array with shape (1, 570, 3197)

So, the questions are: 1. Does the data (training_set and test_set) need to be converted into a 3D tensor? If yes, what is the correct …

Wiener Filter for image deblur

為{幸葍}努か submitted on 2019-11-30 20:32:41

I am trying to implement the Wiener filter to perform deconvolution on a blurred image. My implementation is like this:

    import numpy as np
    from numpy.fft import fft2, ifft2

    def wiener_filter(img, kernel, K=10):
        dummy = np.copy(img)
        kernel = np.pad(kernel, [(0, dummy.shape[0] - kernel.shape[0]),
                                 (0, dummy.shape[1] - kernel.shape[1])], 'constant')
        # Fourier transform
        dummy = fft2(dummy)
        kernel = fft2(kernel)
        kernel = np.conj(kernel) / (np.abs(kernel) ** 2 + K)
        dummy = dummy * kernel
        dummy = np.abs(ifft2(dummy))
        return np.uint8(dummy)

This implementation is based on the Wikipedia page. The TIFF image …