recurrent-neural-network

TensorFlow dynamic_rnn regressor: ValueError dimension mismatch

喜夏-厌秋 submitted on 2019-12-04 22:56:59

Problem: I would like to build a toy LSTM model for regression. This nice tutorial is already too complicated for a beginner. Given a sequence of length time_steps, predict the next value. Consider time_steps=3 and the sequences:

    array([[[ 1.], [ 2.], [ 3.]],
           [[ 2.], [ 3.], [ 4.]],
           ...

the target values should be:

    array([ 4., 5., ...

I define the following model:

    # Network Parameters
    time_steps = 3
    num_neurons = 64  # (arbitrary)
    n_features = 1

    # tf Graph input
    x = tf.placeholder("float", [None, time
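For reference, a minimal sketch of how such a toy regressor can be wired up around tf.nn.dynamic_rnn in TF 1.x; the projection layer and loss below are assumptions, since the question is truncated before that point:

    import tensorflow as tf

    time_steps, n_features, num_neurons = 3, 1, 64

    x = tf.placeholder(tf.float32, [None, time_steps, n_features])
    y = tf.placeholder(tf.float32, [None, 1])

    cell = tf.nn.rnn_cell.LSTMCell(num_neurons)
    # outputs has shape (batch, time_steps, num_neurons);
    # regress from the last timestep only
    outputs, _ = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
    last = outputs[:, -1, :]
    prediction = tf.layers.dense(last, 1)  # scalar regression head

    loss = tf.reduce_mean(tf.square(prediction - y))

Keeping y as shape [None, 1] and projecting the final hidden state down to a single unit is what avoids the usual dimension-mismatch ValueError between targets and predictions.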

Keras LSTM - feed sequence data with Tensorflow dataset API from the generator

三世轮回 submitted on 2019-12-04 21:57:17

I am trying to work out how I can feed data to my LSTM model for training. (I will simplify the problem in my example below.) I have the following data format in CSV files in my dataset:

    Timestep  Feature1  Feature2  Feature3  Feature4  Output
    1         1         2         3         4         a
    2         5         6         7         8         b
    3         9         10        11        12        c
    4         13        14        15        16        d
    5         17        18        19        20        e
    6         21        22        23        24        f
    7         25        26        27        28        g
    8         29        30        31        32        h
    9         33        34        35        36        i
    10        37        38        39        40        j

The task is to estimate the Output of any future timestep based on the data from the last 3 timesteps. Some input-output examples are as follows:

Example 1:
Input:

    Timestep  Feature1  Feature2  Feature3  Feature4
    1         1         2
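A sketch of one common pattern for this, using tf.data.Dataset.from_generator with a sliding window; the toy arrays stand in for the parsed CSV, and the window length of 3 matches the question:

    import numpy as np
    import tensorflow as tf

    def window_generator(features, outputs, window=3):
        # yield (last `window` rows of features, output of the following step)
        for i in range(len(features) - window):
            yield features[i:i + window], outputs[i + window]

    features = np.arange(1, 41, dtype=np.float32).reshape(10, 4)  # stand-in for the CSV
    outputs = np.arange(10, dtype=np.float32)                     # stand-in for Output

    dataset = tf.data.Dataset.from_generator(
        lambda: window_generator(features, outputs),
        output_types=(tf.float32, tf.float32),
        output_shapes=((3, 4), ()),
    ).batch(2)

Each element is then a (3, 4) window paired with the label one step ahead, which matches the (timesteps, features) input an LSTM layer expects.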

In what order are weights saved in an LSTM kernel in Tensorflow

。_饼干妹妹 submitted on 2019-12-04 21:13:21

I looked into the saved weights for an LSTMCell in Tensorflow. It has one big kernel and bias weights. The dimensions of the kernel are (input_size + hidden_size) x (hidden_size * 4). From what I understand, this encapsulates four input-to-hidden affine transforms as well as four hidden-to-hidden transforms. So there should be four matrices of size input_size x hidden_size and four of size hidden_size x hidden_size. Can someone tell me, or point me to the code where TF saves these, so I can break the kernel matrix into smaller matrices?

c2huc2hu: The weights are combined as mentioned in the other
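For what it's worth, in the TF 1.x LSTMCell the four gate blocks are concatenated along the last axis in the order i, j, f, o (where j is the candidate/new-input transform), so the kernel can be sliced back apart as in this sketch (variable names are illustrative):

    import numpy as np

    input_size, hidden_size = 5, 8
    kernel = np.random.randn(input_size + hidden_size, 4 * hidden_size)

    # Gate blocks are laid out as [i, j, f, o] along the last axis
    w_i, w_j, w_f, w_o = np.split(kernel, 4, axis=1)

    # Each block stacks the input-to-hidden rows on top of the
    # hidden-to-hidden rows:
    w_i_x, w_i_h = w_i[:input_size, :], w_i[input_size:, :]
    assert w_i_x.shape == (input_size, hidden_size)
    assert w_i_h.shape == (hidden_size, hidden_size)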

How can we define one-to-one, one-to-many, many-to-one, and many-to-many LSTM neural networks in Keras? [duplicate]

六眼飞鱼酱① submitted on 2019-12-04 19:33:16

This question already has answers here: Many to one and many to many LSTM examples in Keras (2 answers). Closed last year. I am reading this article (The Unreasonable Effectiveness of Recurrent Neural Networks) and want to understand how to express one-to-one, one-to-many, many-to-one, and many-to-many LSTM neural networks in Keras. I have read a lot about RNNs and understand how LSTM NNs work, in particular the vanishing gradient problem, LSTM cells, their outputs and states, sequence output, etc. However, I have trouble expressing all these concepts in Keras. To start with I have created the
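A rough sketch of how three of the four shapes are usually expressed in Keras (layer sizes, sequence lengths, and feature counts are arbitrary assumptions; one-to-one needs no recurrence at all and is just a Dense layer):

    from keras.models import Sequential
    from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

    # many-to-one: consume a whole sequence, emit a single vector
    m2o = Sequential([LSTM(32, input_shape=(10, 4)), Dense(1)])

    # many-to-many (equal lengths): emit the hidden state at every step
    m2m = Sequential([
        LSTM(32, input_shape=(10, 4), return_sequences=True),
        TimeDistributed(Dense(1)),
    ])

    # one-to-many: repeat one input vector across time, then decode
    o2m = Sequential([
        RepeatVector(10, input_shape=(4,)),
        LSTM(32, return_sequences=True),
        TimeDistributed(Dense(1)),
    ])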

Keras - Input a 3 channel image into LSTM

谁说我不能喝 submitted on 2019-12-04 19:00:42

Problem: I have read a sequence of images into a numpy array with shape (7338, 225, 1024, 3), where 7338 is the sample size, 225 is the number of time steps, and 1024 (32x32) is the number of flattened image pixels, in 3 channels (RGB). I have a sequential model with an LSTM layer:

    model = Sequential()
    model.add(LSTM(128, input_shape=(225, 1024, 3)))

But this results in the error:

    Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=4

The documentation mentions that the input tensor for LSTM layer should be
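Since LSTM expects 3-D input (batch, timesteps, features), one workaround is to fold the channel axis into the feature axis; a sketch (the sample count of 10 is only to keep the example light, and ConvLSTM2D would be the alternative if the spatial structure should be preserved):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM

    # Fold the 3 channels into the 1024 pixels: (batch, 225, 3072)
    data = np.random.rand(10, 225, 1024, 3)
    data = data.reshape(10, 225, 1024 * 3)

    model = Sequential()
    model.add(LSTM(128, input_shape=(225, 1024 * 3)))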

How to get the output shape of a layer in Keras?

我的梦境 submitted on 2019-12-04 18:40:14

Problem: I have the following code in Keras (basically I am modifying this code for my use) and I get this error: ValueError: Error when checking target: expected conv3d_3 to have 5 dimensions, but got array with shape (10, 4096). Code:

    from keras.models import Sequential
    from keras.layers.convolutional import Conv3D
    from keras.layers.convolutional_recurrent import ConvLSTM2D
    from keras.layers.normalization import BatchNormalization
    import numpy as np
    import pylab as plt
    from keras import layers
    # We
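To answer the title question directly, each built Keras layer exposes an output_shape attribute, and model.summary() prints all of them, which is the quickest way to find where the target dimensions diverge; a minimal sketch on a toy model:

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(16, input_shape=(8,)))
    model.add(Dense(4))

    # Inspect the computed output shape of every layer
    for layer in model.layers:
        print(layer.name, layer.output_shape)

    model.summary()  # prints the same shapes as a table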

TensorFlow: Performing this loss computation

穿精又带淫゛_ submitted on 2019-12-04 18:12:57

Problem: My question and problem are stated below the two blocks of code.

Loss Function

    def loss(labels, logits, sequence_lengths, label_lengths, logit_lengths):
        scores = []
        for i in xrange(runner.batch_size):
            sequence_length = sequence_lengths[i]
            for j in xrange(length):
                label_length = label_lengths[i, j]
                logit_length = logit_lengths[i, j]
                # get top k indices <==> argmax_k(labels[i, j, 0, :], label_length)
                top_labels = np.argpartition(labels[i, j, 0, :], -label_length)[-label_length:]
                top_logits = np
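For context, the np.argpartition call in the snippet is a standard top-k trick; a self-contained illustration:

    import numpy as np

    scores = np.array([0.1, 0.7, 0.3, 0.9, 0.2])
    k = 2
    # argpartition moves the indices of the k largest entries into the
    # last k slots (in no particular order)
    top_k = np.argpartition(scores, -k)[-k:]
    print(top_k)          # indices of 0.7 and 0.9, e.g. [1 3]
    print(scores[top_k])  # [0.7 0.9] in some order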

Recurrent Neural Network Binary Classification

江枫思渺然 submitted on 2019-12-04 17:24:57

I have access to a dataframe of 100 persons and how they performed on a certain motion test. This frame contains about 25,000 rows per person, since each person's performance is tracked approximately every centisecond (10^-2 s). We want to use this data to predict a binary y-label, that is to say, whether someone has a motor problem or not. The columns and some values of the dataset are as follows: 'Person_ID', 'time_in_game', 'python_time', 'permutation_game', 'round', 'level', 'times_level_played_before', 'speed', 'costheta', 'y_label', 'gender', 'age_precise', 'ax_f', 'ay_f', 'az_f', 'acc',
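One common way to frame such per-person time series for an RNN is to cut each person's recording into fixed-length windows that all inherit the person's label; a sketch (window length, feature subset, and the exact DataFrame layout are assumptions):

    import numpy as np

    def make_sequences(df, feature_cols, window=200):
        """Cut each person's time series in the pandas DataFrame `df`
        into fixed-length windows, labelling every window with that
        person's y_label."""
        X, y = [], []
        for _, person in df.groupby("Person_ID"):
            values = person[feature_cols].to_numpy()
            label = person["y_label"].iloc[0]
            for start in range(0, len(values) - window + 1, window):
                X.append(values[start:start + window])
                y.append(label)
        # X: (n_windows, window, n_features), y: (n_windows,)
        return np.asarray(X), np.asarray(y)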

Tensorflow dynamic_rnn deprecation

。_饼干妹妹 submitted on 2019-12-04 17:06:27

It seems that tf.nn.dynamic_rnn has been deprecated:

    Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
    Instructions for updating: Please use keras.layers.RNN(cell), which is equivalent to this API

I have checked out keras.layers.RNN(cell), and it says that it can use masking, which I assume can act as a replacement for dynamic_rnn's sequence_length parameter?

    This layer supports masking for input data with a variable number of timesteps.
    To introduce masks to your data, use an Embedding layer with the mask_zero parameter set to True.

But there is no further
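For non-embedding inputs, the usual route is a Masking layer in front of the RNN, which plays the role dynamic_rnn's sequence_length used to; a sketch, assuming sequences are zero-padded:

    import numpy as np
    import tensorflow as tf

    # Two sequences zero-padded to a common length of 5
    batch = np.array([
        [[1.], [2.], [3.], [0.], [0.]],  # true length 3
        [[4.], [5.], [0.], [0.], [0.]],  # true length 2
    ], dtype=np.float32)

    model = tf.keras.Sequential([
        # Masking flags all-zero timesteps so the RNN skips them
        tf.keras.layers.Masking(mask_value=0.0, input_shape=(5, 1)),
        tf.keras.layers.RNN(tf.keras.layers.LSTMCell(8)),
    ])
    print(model.predict(batch).shape)  # (2, 8)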

Neural Machine Translation model predictions are off-by-one

霸气de小男生 submitted on 2019-12-04 14:36:20

Problem Summary

In the following example, my NMT model has high loss because it correctly predicts target_input instead of target_output.

    Targetin : 1 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 9 10 10 10 3 3 10 10 3 10 3 3 10 10 3 9 9 4 4 4 4 4 3 10 3 3 9 9 3 6 6 6 6 6 6 10 9 9 10 10 4 4 4 4 4 4 4 4 4 4 4 4 9 9 9 9 3 3 3 6 6 6 6 6 9 9 10 3 4 4 4 4 4 4 4 4 4 4 4 4 9 9 10 3 10 9 9 3 4 4 4 4 4 4 4 4 4 10 10 4 4 4 4 4 4 4 4 4 4 9 9 10 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 9 3 3 10 6 6 6 6 6 3 9 9 3 3 3 3 3 3 3 10 10 3 9 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 9 3 6 6 6 6 6 6 3 5 3 3 3 3 10 10
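The classic cause of this off-by-one symptom is training the decoder against its own input rather than against the input shifted by one position; in standard seq2seq training the two differ only by the start/end tokens, e.g. (hypothetical token ids):

    # Decoder input and output are the same sequence offset by one step
    tokens = [5, 7, 7, 4, 9]
    sos, eos = 1, 2                   # hypothetical start/end token ids
    target_input = [sos] + tokens     # what the decoder is fed
    target_output = tokens + [eos]    # what the loss compares against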