deep-learning

Invalid argument error (incompatible shapes) with TensorFlow

那年仲夏 submitted on 2020-07-08 22:36:50
Question: I'm trying to train a simple network with TensorFlow on the MNIST dataset. At the moment, though, it is not working. It is basically a modified version of the example given on the TensorFlow website; I just changed a couple of lines and removed a layer to see what happened. Here is my code:

```python
#!/usr/bin/python
import input_data
import tensorflow as tf  # MNIST dataset

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
```
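The excerpt is cut off before the error itself, but "incompatible shapes" in this kind of network usually means the logits and the one-hot labels no longer line up after a layer was changed or removed. A minimal NumPy sketch (shapes only, with hypothetical sizes, not the poster's actual code) of the dimension bookkeeping for the MNIST softmax layer:

```python
import numpy as np

# For flattened 28x28 MNIST images, the last weight matrix must map to 10
# classes so that the logits match the one-hot labels (batch, 10).
batch = 50
x = np.zeros((batch, 784))        # flattened images
W = np.zeros((784, 10))           # e.g. weight_variable([784, 10])
b = np.zeros(10)                  # e.g. bias_variable([10])
logits = x @ W + b                # broadcasting adds b to every row
y_onehot = np.zeros((batch, 10))  # labels from input_data with one_hot=True

assert logits.shape == y_onehot.shape, \
    f"incompatible shapes: {logits.shape} vs {y_onehot.shape}"
print(logits.shape)  # (50, 10)
```

Removing a layer without adjusting the next layer's input dimension breaks exactly this chain, which is the typical source of the error.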

Keras mean squared error loss layer

帅比萌擦擦* submitted on 2020-07-08 11:31:35
Question: I am currently implementing a custom loss layer, and in the process I stumbled upon the implementation of mean squared error in the objectives.py file [1]. I know I'm missing something in my understanding of this loss calculation, because I always thought the average was taken separately across the samples for each output in each mini-batch (axis 0 of the tensor), but it appears that the average is actually taken across the last axis, which in a single vector would mean it's being
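What the excerpt observes is indeed how Keras defines the loss: objectives.py computes `K.mean(K.square(y_pred - y_true), axis=-1)`, i.e. the average over the last (output) axis, producing one loss value per sample; the average over the batch happens later, when per-sample losses are aggregated. A NumPy re-creation of that behaviour:

```python
import numpy as np

def keras_style_mse(y_true, y_pred):
    # Averages over the LAST axis only, returning one loss per sample;
    # the mean over the batch (axis 0) is applied afterwards by Keras.
    return np.mean(np.square(y_pred - y_true), axis=-1)

y_true = np.array([[0.0, 0.0], [1.0, 1.0]])  # batch of 2 samples, 2 outputs
y_pred = np.array([[1.0, 1.0], [1.0, 3.0]])

per_sample = keras_style_mse(y_true, y_pred)
print(per_sample)         # [1. 2.]  -> one value per sample
print(per_sample.mean())  # 1.5      -> batch average, taken afterwards
```

Averaging over the last axis first and the batch afterwards gives the same final scalar as averaging over everything at once, which is why the two mental models are easy to conflate.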

What's the input of each LSTM layer in a stacked LSTM network?

萝らか妹 submitted on 2020-07-08 03:12:26
Question: I'm having some difficulty understanding the input-output flow of layers in stacked LSTM networks. Let's say I have created a stacked LSTM network like the one below:

```python
# parameters
time_steps = 10
features = 2
input_shape = [time_steps, features]
batch_size = 32

# model
model = Sequential()
model.add(LSTM(64, input_shape=input_shape, return_sequences=True))
model.add(LSTM(32, input_shape=input_shape))
```

where our stacked LSTM network consists of 2 LSTM layers with 64 and 32 hidden units
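The shape flow through the stack above can be sketched with plain shape arithmetic (no TensorFlow needed). With `return_sequences=True` the first layer emits its hidden state at every time step, which is exactly the 3-D sequence input the second LSTM consumes:

```python
# Shapes only -- a sketch of the data flow, not an LSTM implementation.
def lstm_output_shape(batch, time_steps, units, return_sequences):
    # return_sequences=True: hidden state at every step -> 3-D output
    # return_sequences=False: only the last hidden state -> 2-D output
    return (batch, time_steps, units) if return_sequences else (batch, units)

batch, time_steps, features = 32, 10, 2
print((batch, time_steps, features))  # (32, 10, 2)  input to first LSTM
print(lstm_output_shape(batch, time_steps, 64, return_sequences=True))
# (32, 10, 64) -> fed as the sequence input of the second LSTM
print(lstm_output_shape(batch, time_steps, 32, return_sequences=False))
# (32, 32)     -> final output, last hidden state only
```

Note that the `input_shape` argument on the second LSTM in the question is ignored: Keras infers a non-first layer's input shape from the previous layer's output.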

“Layer is not connected” issue while accessing intermediate layer from within the custom callback if model is built by sub-classing

空扰寡人 submitted on 2020-07-07 14:27:05
Question: I have a simple model and need access to intermediate layers within a custom callback to get intermediate predictions.

```python
import tensorflow as tf
import numpy as np

X = np.ones((8, 16))
y = np.sum(X, axis=1)

class CustomCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        get_output = tf.keras.backend.function(
            inputs=self.model.layers[0].input,
            outputs=self.model.layers[1].output
        )
        print("\nLayer output: ", get_output(X))
```

If I build the model by sub-classing like
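The excerpt ends before the error, but the usual cause of "layer is not connected" is that a subclassed model has no static graph: `layer.input` and `layer.output` only exist for layers connected via a symbolic (functional/Sequential) call. The layers themselves are still plain callables, though, so intermediate activations can be obtained by calling them one at a time. A toy NumPy sketch of that workaround (hypothetical stand-in layers, not the Keras API):

```python
import numpy as np

class DenseLike:
    """Stand-in for a dense layer: a callable holding weights and a bias."""
    def __init__(self, w, b):
        self.w, self.b = w, b
    def __call__(self, x):
        return x @ self.w + self.b

# Two hypothetical layers matching the (8, 16) input in the question.
layers = [DenseLike(np.ones((16, 4)), 0.0),   # "layer 0"
          DenseLike(np.ones((4, 1)), 0.0)]    # "layer 1"

X = np.ones((8, 16))
intermediate = layers[0](X)        # output of "layer 0": shape (8, 4)
final = layers[1](intermediate)    # shape (8, 1)
print(intermediate.shape, final.shape)  # (8, 4) (8, 1)
```

In a real subclassed `tf.keras.Model` the same idea applies: instead of asking for graph tensors via `layer.input`/`layer.output`, call the stored layer objects directly on a concrete input inside the callback.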

Keras ImageDataGenerator for segmentation with images and masks in separate directories

ε祈祈猫儿з submitted on 2020-07-07 11:51:03
Question: I am trying to build a semantic segmentation model using tensorflow.keras. The dataset I am using has the images and masks stored in separate directories, and each filename has an id for mapping an image file to its respective mask. Following is the structure of my dataset directory:

```
new
  - rendered_imges
    - render
      - image_1.tif
      - image_2.tif
      - image_3.tif
  - ground_truths
    - masks
      - mask_1.tif
      - mask_2.tif
      - mask_3.tif
```

In the above directory structure, image_{i}.tif corresponds to mask
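The id-based correspondence described above can be made explicit with a small pairing helper. A sketch (the function is hypothetical, not part of tensorflow.keras): filenames like image_1.tif and mask_1.tif share the numeric id, so both listings can be sorted by that id and zipped into (image, mask) pairs:

```python
import re

def pair_by_id(image_names, mask_names):
    # Extract the numeric id from each filename and sort both lists by it,
    # so position i in one list matches position i in the other.
    key = lambda name: int(re.search(r"(\d+)", name).group(1))
    return list(zip(sorted(image_names, key=key), sorted(mask_names, key=key)))

images = ["image_2.tif", "image_1.tif", "image_3.tif"]
masks = ["mask_3.tif", "mask_1.tif", "mask_2.tif"]
print(pair_by_id(images, masks))
# [('image_1.tif', 'mask_1.tif'), ('image_2.tif', 'mask_2.tif'),
#  ('image_3.tif', 'mask_3.tif')]
```

With consistently ordered listings like this, a commonly used pattern is to build two `ImageDataGenerator` flows (one over the image directory, one over the mask directory) with the same `seed` and shuffle settings, and zip them so each yielded batch stays paired.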

Clarification about keras.utils.Sequence

一笑奈何 submitted on 2020-07-06 11:26:49
Question: Keras has very little info about keras.utils.Sequence. Actually, the only reason I want to derive my batch generator from keras.utils.Sequence is that I don't want to write a thread pool with a queue myself, but I'm not sure if it's the best choice for my task. Here are my questions: What should __len__ return if I have a random generator and don't have any predefined 'list' of samples? How should keras.utils.Sequence be used with fit_generator? I'm interested in max_queue_size, workers, use
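A duck-typed sketch of what the Sequence interface expects may help (hypothetical sizes; this class does not subclass Keras so it stays self-contained). `__len__` must return the number of batches per epoch, so for a purely random generator it is simply how many batches you *choose* to call one epoch:

```python
import math
import numpy as np

class RandomBatches:
    """Sequence-shaped batch source for randomly generated samples."""
    def __init__(self, n_samples=1000, batch_size=32):
        self.n_samples, self.batch_size = n_samples, batch_size

    def __len__(self):
        # Batches per epoch; round up so no samples are dropped.
        # For a truly infinite random source, pick any epoch length you like.
        return math.ceil(self.n_samples / self.batch_size)

    def __getitem__(self, idx):
        # idx identifies the batch; with random data it can be ignored.
        x = np.random.rand(self.batch_size, 2)
        y = np.random.rand(self.batch_size)
        return x, y

seq = RandomBatches()
print(len(seq))          # 32 batches per epoch (ceil(1000 / 32))
x, y = seq[0]
print(x.shape, y.shape)  # (32, 2) (32,)
```

When such an object is passed to fit_generator, `workers` controls how many threads or processes call `__getitem__` in parallel and `max_queue_size` bounds how many ready batches are buffered ahead of the training loop; with random data, index-independent `__getitem__` output keeps parallel workers safe.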

How to get output of hidden layer given an input, weights and biases of the hidden layer in keras?

瘦欲@ submitted on 2020-07-05 03:06:00
Question: Suppose I have trained the model below for an epoch:

```python
model = Sequential([
    Dense(32, input_dim=784),  # first number is output_dim
    Activation('relu'),
    Dense(10),  # output_dim; input_dim is inferred from the layer above
    Activation('softmax'),
])
```

And I got the weights dense1_w and biases dense1_b of the first hidden layer (named dense1) and a single data sample sample. How do I use these to get the output of dense1 on sample in Keras? Thanks!
Answer 1: The easiest way is to use the Keras backend.
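Since the weights, biases, and activation of dense1 are all known, the layer's output can also be reproduced directly in NumPy, outside the framework. A sketch with made-up values (the truncated answer's backend-function approach is the in-framework equivalent):

```python
import numpy as np

# Hypothetical extracted parameters matching Dense(32, input_dim=784).
dense1_w = np.random.rand(784, 32)   # kernel: maps 784 inputs -> 32 units
dense1_b = np.random.rand(32)        # one bias per unit
sample = np.random.rand(1, 784)      # a single flattened input sample

# Dense layer followed by the ReLU activation from the model.
hidden = np.maximum(0.0, sample @ dense1_w + dense1_b)
print(hidden.shape)  # (1, 32) -- the output of dense1 on this sample
```

This is mainly useful as a sanity check against whatever the backend function returns; for larger models, recomputing layers by hand quickly becomes error-prone.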