neural-network

How to compute the loss gradient w.r.t. model inputs in a Keras model?

Submitted by 。_饼干妹妹 on 2020-08-26 02:11:36
Question: What I want to achieve is to compute the gradient of the cross entropy with respect to the input values x. In TensorFlow I had no trouble with that:

    ce_grad = tf.gradients(cross_entropy, x)

But as my networks grew bigger and bigger, I switched to Keras to build them faster. However, now I don't really know how to achieve the same thing. Is there a way to extract the cross-entropy and input tensors from the model variable that stores my whole model? Just for clarity, my cross_entropy is:

    cross_entropy = tf.reduce…
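
A minimal sketch of one way to do this in Keras under TensorFlow 2.x, using tf.GradientTape. The model architecture, loss choice, and variable names below are illustrative assumptions, not taken from the question:

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-in for the asker's model; theirs is not shown.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(3, activation='softmax'),
    ])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    x = tf.convert_to_tensor(np.random.rand(4, 10).astype('float32'))
    y = tf.constant([0, 1, 2, 0])

    with tf.GradientTape() as tape:
        tape.watch(x)  # x is a plain tensor, not a Variable, so watch it explicitly
        cross_entropy = loss_fn(y, model(x))
    ce_grad = tape.gradient(cross_entropy, x)  # d(loss)/d(inputs), same shape as x

In graph-style TF 1.x Keras, K.gradients(model.total_loss, model.inputs) plays the analogous role to the tf.gradients call in the question.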

Predicting next numbers in a sequence with Keras - Python

Submitted by 喜欢而已 on 2020-08-24 03:37:06
Question: I'm new to Python and neural networks. I have a simple network written in Keras that can predict the next number in a linear sequence:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.layers import LSTM

    data = [[i for i in range(6)]]
    data = np.array(data, dtype=int)
    target = [[i for i in range(10, 16)]]
    target = np.array(target, dtype=int)

    model = Sequential()
    model.add(Dense(1, input_dim=1))
    model.add(Dense(1))
    model.compile(loss='mean…
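
For reference, here is a runnable completion of the snippet with my own assumptions filled in: the original cuts off mid-compile, so the loss string, the optimizer, the training call, and the reshaping of the arrays to one sample per row are guesses, not the asker's code:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # Inputs 0..5 should map to targets 10..15, i.e. target = input + 10.
    # Reshaped to (6, 1): one sample per row, matching input_dim=1.
    data = np.array(range(6), dtype=float).reshape(-1, 1)
    target = np.array(range(10, 16), dtype=float).reshape(-1, 1)

    model = Sequential()
    model.add(Dense(1, input_dim=1))
    model.add(Dense(1))
    # 'mean_squared_error' is a guess at the truncated loss string.
    model.compile(loss='mean_squared_error', optimizer='adam')

    model.fit(data, target, epochs=2000, verbose=0)
    print(model.predict(np.array([[6.0]])))  # should approach 16.0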

Torchtext AttributeError: 'Example' object has no attribute 'text_content'

Submitted by Deadly on 2020-08-23 05:04:03
Question: I'm working with an RNN using PyTorch & Torchtext, and I've got a problem building the vocab for it. My code is as follows:

    TEXT = Field(tokenize=tokenizer, lower=True)
    LABEL = LabelField(dtype=torch.float)

    trainds = TabularDataset(
        path='drive/{}'.format(TRAIN_PATH), format='tsv',
        fields=[
            ('label_start', LABEL),
            ('label_end', None),
            ('title', None),
            ('symbol', None),
            ('text_content', TEXT),
        ])
    testds = TabularDataset(
        path='drive/{}'.format(TEST_PATH), format='tsv',
        fields=[
            ('text_content…
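
The snippet is cut off, but this AttributeError usually means the fields list handed to TabularDataset does not match the columns of the file, so no text_content attribute is ever set on the resulting Example objects. A sketch of the usual fix, under the assumption that the test .tsv has the same five columns as the training file:

    # Every column in the file must appear in `fields`, in file order;
    # use None for columns you want to skip.
    testds = TabularDataset(
        path='drive/{}'.format(TEST_PATH), format='tsv',
        fields=[
            ('label_start', LABEL),
            ('label_end', None),
            ('title', None),
            ('symbol', None),
            ('text_content', TEXT),  # each Example now gets a .text_content
        ])

    # Build the vocab from the training set only.
    TEXT.build_vocab(trainds)
    LABEL.build_vocab(trainds)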

Tensorflow: loss decreasing, but accuracy stable

Submitted by  ̄綄美尐妖づ on 2020-08-22 03:25:00

Question: My team is training a CNN in TensorFlow for binary classification of damaged/acceptable parts. We created our code by modifying the cifar10 example code. In my prior experience with neural networks, I always trained until the loss was very close to 0 (well below 1). However, we are now evaluating our model with a validation set during training (on a separate GPU), and it seems like the precision stopped increasing after about 6.7k steps, while the loss is still dropping steadily after over…
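
The entry is truncated, but the symptom it describes has a common explanation: cross-entropy keeps falling even when accuracy is flat, because the loss also rewards growing confidence on examples that are already classified correctly. A small numeric illustration (the probabilities are invented):

    import numpy as np

    def cross_entropy(p_true):
        # Mean negative log-probability assigned to the true class.
        return -np.mean(np.log(p_true))

    def accuracy(p_true):
        # Binary case: a prediction is correct when p(true class) > 0.5.
        return np.mean(p_true > 0.5)

    # Earlier in training: three correct predictions, one wrong.
    early = np.array([0.6, 0.7, 0.8, 0.4])
    # Later: the correct ones became more confident; the wrong one did not move.
    later = np.array([0.9, 0.95, 0.99, 0.4])

    print(accuracy(early), cross_entropy(early))  # 0.75, ~0.50
    print(accuracy(later), cross_entropy(later))  # 0.75, ~0.27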
