neural-network

How to compute gradient of output wrt input in Tensorflow 2.0

Submitted by 我们两清 on 2020-06-25 12:18:11
Question: I have a trained TensorFlow 2.0 model (built with tf.keras.Sequential()) that takes an input layer with 26 columns (X) and produces an output layer with 1 column (Y). In TF 1.x I was able to calculate the gradient of the output with respect to the input with the following:

    import tensorflow as tf
    from tensorflow.keras import backend as K
    from tensorflow.keras.models import load_model

    model = load_model('mymodel.h5')
    sess = K.get_session()
    grad_func = tf.gradients(model.output, model.input)
    gradients = sess.run(grad_func, feed_dict={model.input: X})[0]

In TF2, when I try to run tf.gradients(), I get the error:
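In TF2 the usual replacement for tf.gradients() and Session.run() is tf.GradientTape. A minimal sketch of that approach, assuming the same saved model and an input array X of shape (n, 26):

    import tensorflow as tf
    from tensorflow.keras.models import load_model

    model = load_model('mymodel.h5')               # model from the question
    x = tf.convert_to_tensor(X, dtype=tf.float32)  # X assumed to be (n, 26)

    with tf.GradientTape() as tape:
        tape.watch(x)        # x is not a Variable, so watch it explicitly
        y = model(x)         # forward pass recorded on the tape

    gradients = tape.gradient(y, x)  # dY/dX, same shape as x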

Create image of Neural Network structure

Submitted by ぃ、小莉子 on 2020-06-25 08:57:29
Question: Many papers use very nice images of neural networks, and I would like to create such an image for a report I'm writing. An example: "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation" by V. Badrinarayanan et al., page 4, https://arxiv.org/pdf/1511.00561v3.pdf My question: which tool might be used to create such images? Especially the convolution rectangles look very nice. Thank you very much. Answer 1: I wrote a small class which helps to draw such images. Probably
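The answer's drawing class is truncated above, but a minimal sketch of the general idea — rendering each layer as a scaled rectangle with matplotlib (the layer sizes here are made up purely for illustration):

    import matplotlib.pyplot as plt
    from matplotlib.patches import Rectangle

    # Hypothetical (width, height) per layer, shrinking like an encoder
    layers = [(0.6, 3.0), (0.6, 2.2), (0.6, 1.5), (0.6, 1.0)]

    fig, ax = plt.subplots()
    x = 0.0
    for w, h in layers:
        ax.add_patch(Rectangle((x, -h / 2), w, h, fill=False))  # one layer
        x += w + 0.8  # horizontal gap between layers

    ax.set_xlim(-0.5, x)
    ax.set_ylim(-2.0, 2.0)
    ax.axis('off')
    plt.show()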

R neuralNet: “non-conformable arguments”

Submitted by 允我心安 on 2020-06-25 03:36:06
Question: Argh! I keep getting the following error when attempting to compute with my neural network:

    > net.compute <- compute(net, matrix.train2)
    Error in neurons[[i]] %*% weights[[i]] : non-conformable arguments

I can't figure out what the problem is. Below I'll provide example data and the formatting of my matrices, and then the code I'm attempting to run. matrix.train1 is used for training the network:

    > matrix.train1
    (Intercept) survived pclass sexmale age sibsp parch fare
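"Non-conformable arguments" is R's way of saying the inner dimensions of a matrix product don't match — here it usually means matrix.train2 has a different set (or order) of columns than the matrix the network was trained on. A numpy analog of the failure, with made-up shapes:

    import numpy as np

    neurons = np.ones((100, 8))  # 100 rows entering a layer, 8 columns
    weights = np.ones((9, 4))    # layer weights expecting 9 inputs

    try:
        neurons @ weights        # inner dimensions 8 vs 9 do not match
    except ValueError as err:
        print(err)               # numpy's analog of "non-conformable arguments"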

Implementing im2col in TensorFlow

Submitted by 百般思念 on 2020-06-24 11:49:07
Question: I wish to implement an operation similar to 2D convolution in TensorFlow. As I understand it, the most common approach to implementing convolution is to first apply an im2col operation to the image (see here, subsection "Implementation as Matrix Multiplication"): an operation that transforms an image into a 2D matrix whose columns are the flattened "chunks" of the image to which the kernel is applied. In other words, this excerpt from the above linked resource explains what
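TensorFlow ships an op that performs essentially this transformation, tf.image.extract_patches. A minimal sketch for 3x3 patches at stride 1 (the input shape is chosen only for illustration):

    import tensorflow as tf

    image = tf.random.normal([1, 8, 8, 3])  # [batch, height, width, channels]

    patches = tf.image.extract_patches(
        images=image,
        sizes=[1, 3, 3, 1],    # one 3x3 patch per output position
        strides=[1, 1, 1, 1],  # stride 1 in both spatial dimensions
        rates=[1, 1, 1, 1],    # no dilation
        padding='VALID')

    # patches: [1, 6, 6, 27] -- each position holds its flattened 3*3*3 patch,
    # so reshaping yields the im2col matrix.
    cols = tf.reshape(patches, [-1, 3 * 3 * 3])  # [num_patches, patch_size]

Multiplying cols by a kernel reshaped to [27, out_channels] then reproduces the convolution as a plain matrix multiplication.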

How to understand loss acc val_loss val_acc in Keras model fitting

Submitted by 夙愿已清 on 2020-06-24 05:02:08
Question: I'm new to Keras and have some questions about how to understand my model results. Here are my results (for convenience, I only paste the loss, acc, val_loss, and val_acc after each epoch):

    Train on 4160 samples, validate on 1040 samples
    Epoch 1/20 4160/4160 - loss: 3.3455 - acc: 0.1560 - val_loss: 1.6047 - val_acc: 0.4721
    Epoch 2/20 4160/4160 - loss: 1.7639 - acc: 0.4274 - val_loss: 0.7060 - val_acc: 0.8019
    Epoch 3/20 4160/4160 - loss: 1.0887 - acc: 0.5978 - val_loss: 0.3707 - val
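loss and acc are measured on the training batches, while val_loss and val_acc come from the held-out validation set; a common way to read them is to plot the History object that model.fit() returns. A minimal sketch, assuming a compiled model plus data and labels already exist:

    import matplotlib.pyplot as plt

    # model, data, labels assumed to be defined as in the question
    history = model.fit(data, labels, epochs=20, validation_split=0.2)

    plt.plot(history.history['loss'], label='loss')          # training loss
    plt.plot(history.history['val_loss'], label='val_loss')  # validation loss
    plt.xlabel('epoch')
    plt.legend()
    plt.show()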

Adam optimizer goes haywire after 200k batches, training loss grows

Submitted by 余生颓废 on 2020-06-23 22:24:20
Question: I've been seeing very strange behavior when training a network: after a couple of hundred thousand iterations (8 to 10 hours) of learning fine, everything breaks and the training loss grows. The training data itself is randomized and spread across many .tfrecord files containing 1000 examples each, then shuffled again in the input stage and batched to 200 examples. The background: I am designing a network that performs four different regression tasks at the same time, e.g. determining the
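A frequently cited culprit for this pattern is Adam's very small default epsilon, which can let tiny second-moment estimates blow up the effective step size late in training; raising epsilon or clipping gradient norms are common mitigations. A sketch of both knobs in tf.keras (the values are illustrative, not tuned):

    import tensorflow as tf

    # A larger epsilon keeps Adam's update denominator away from zero;
    # clipnorm additionally caps the norm of each gradient.
    optimizer = tf.keras.optimizers.Adam(
        learning_rate=1e-4,  # illustrative
        epsilon=1e-4,        # tf.keras default is 1e-7
        clipnorm=1.0)        # illustrative gradient-norm cap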

Keras neural network takes only few samples to train

Submitted by 不问归期 on 2020-06-23 16:20:17
Question:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.utils import to_categorical

    data = np.random.random((10000, 150))
    labels = np.random.randint(10, size=(10000, 1))
    labels = to_categorical(labels, num_classes=10)

    model = Sequential()
    model.add(Dense(units=32, activation='relu', input_shape=(150,)))
    model.add(Dense(units=10, activation='softmax'))
    model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(data, labels, epochs=30, validation_split=0.2)

I created 10000 random samples to train my net, but it uses only a few of them (250
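The 250 in the progress bar counts batches, not samples: with validation_split=0.2, 8000 of the 10000 samples are used for training, and at Keras's default batch_size of 32 that gives 8000 / 32 = 250 steps per epoch. The arithmetic:

    import math

    samples = 10000
    train_samples = int(samples * (1 - 0.2))         # validation_split=0.2 -> 8000
    steps_per_epoch = math.ceil(train_samples / 32)  # default batch_size is 32
    print(train_samples, steps_per_epoch)            # 8000 250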

Easy way to clamp Neural Network outputs between 0 and 1?

Submitted by 喜你入骨 on 2020-06-17 00:04:28
Question: I'm working on writing a GAN, and I want to set my network's output to 0 if it is less than 0, to 1 if it is greater than 1, and leave it unchanged otherwise. I'm pretty new to TensorFlow and I don't know of a TensorFlow function or activation that does this without unwanted side effects. So I wrote my loss function to calculate the loss as if the output were clamped, with this code:

    def discriminator_loss(real_output, fake_output):
        real_output_clipped = min(max(real_output
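TensorFlow has this clamp built in as tf.clip_by_value, which works element-wise on tensors (unlike Python's min()/max()) and is differentiable, with gradient 1 inside the range and 0 outside. A minimal sketch:

    import tensorflow as tf

    x = tf.constant([-0.5, 0.2, 1.7])
    print(tf.clip_by_value(x, 0.0, 1.0))  # [0.0, 0.2, 1.0]

    # In the question's loss, the Python min/max could become:
    # real_output_clipped = tf.clip_by_value(real_output, 0.0, 1.0)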