lstm

keras (lstm) - necessary shape when using return_sequences=True

烂漫一生 submitted on 2019-12-13 03:02:32
Question: I am trying to fit an LSTM network to a sine function. Currently, as far as I understand Keras, my code only predicts the next value. According to this link: Many to one and many to many LSTM examples in Keras, it is a many-to-one model. However, my goal is to implement a many-to-many model. Basically, I want to be able to predict, say, 10 values up to a given time. When I try to use return_sequences=True (see the model.add(..) line), which is supposed to be the solution, the following …
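A minimal many-to-many sketch (not the asker's code; the window length, layer size, and sine data below are made up for illustration). With return_sequences=True the LSTM emits one output per time step, so the target array must also be rank 3, and a TimeDistributed(Dense(...)) head is a common way to map each step's output to a prediction:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

# Hypothetical data: 1000 windows of 10 time steps, one feature (the sine value).
timesteps, features = 10, 1
X = np.sin(np.linspace(0, 100, 1000 * timesteps)).reshape(1000, timesteps, features)
y = np.roll(X, -1, axis=1)   # target: the next value at every time step

model = Sequential()
# return_sequences=True -> output shape (samples, timesteps, units),
# so y must be 3-D: (samples, timesteps, output_dim).
model.add(LSTM(32, input_shape=(timesteps, features), return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=5, batch_size=32)
```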

LSTM Producing Same Predictions for any Input

混江龙づ霸主 submitted on 2019-12-13 02:49:28
Question: I am currently working on a machine learning problem involving car speeds and steering angles, and I'm trying to improve on some of my work. I recently finished an XGBRegressor that yielded between 88–95% accuracy on my cross-validated data. To improve on it I've been looking into LSTMs, because my data is time-series dependent. Essentially, every line includes a steering angle, the previous time step's steering angle (x-1), the time before …
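A frequent cause of an LSTM collapsing to one prediction for every input (only a guess here, since the question is truncated) is feeding it 2-D tabular rows instead of windowed sequences, or leaving features unscaled. A hypothetical sliding-window sketch for turning tabular time-series data into the (samples, timesteps, features) shape Keras LSTMs expect:

```python
import numpy as np

def make_windows(series, window):
    # Slice a 2-D array of shape (time, features) into overlapping windows.
    # Returns X with shape (samples, window, features) and y holding the value
    # of column 0 (e.g. the steering angle) immediately after each window.
    X, y = [], []
    for start in range(len(series) - window):
        X.append(series[start:start + window])
        y.append(series[start + window, 0])
    return np.array(X), np.array(y)

# Hypothetical data: 5000 time steps, 3 features (angle, speed, previous angle).
data = np.random.rand(5000, 3).astype("float32")
X, y = make_windows(data, window=20)
print(X.shape, y.shape)   # (4980, 20, 3) (4980,)
```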

Keras LSTM Accuracy too high

大兔子大兔子 submitted on 2019-12-13 00:34:59
Question: I'm trying to get an LSTM working in Keras, but even after the first epoch the accuracy seems too high (90%) and I'm worried it is not training properly. I took some ideas from this post: https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/ Here's my code: import numpy from keras.utils import np_utils from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Dropout from keras …
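For reference, a stripped-down version of the character-level setup the linked tutorial uses (all sizes below are hypothetical stand-ins, not the asker's data). A suspiciously high accuracy after one epoch often just reflects a skewed target distribution, so it is worth checking what a majority-class baseline would score:

```python
import numpy as np
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout

# Hypothetical data: 10000 windows of 100 integer-encoded characters,
# with the character that follows each window as the target.
n_samples, seq_len, vocab = 10000, 100, 60
X = np.random.randint(0, vocab, size=(n_samples, seq_len, 1)) / float(vocab)
y = np_utils.to_categorical(np.random.randint(0, vocab, size=(n_samples,)), num_classes=vocab)

model = Sequential()
model.add(LSTM(256, input_shape=(seq_len, 1)))
model.add(Dropout(0.2))
model.add(Dense(vocab, activation="softmax"))
# Watch the loss as well as the accuracy: with repetitive targets the model can
# score high accuracy while having learned only the most common class.
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=128)
```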

Defining a seed value in BrainScript for CNTK sequential machine learning models

陌路散爱 submitted on 2019-12-12 23:34:32
Question: This is with respect to CNTK BrainScript. I went through [1] to figure out whether there is an option to specify the random seed value, but I couldn't find one. (Yes, there is an option to set the 'random seed' parameter through the ParameterTensor() function, but if I followed that approach I might have to explicitly initialize all the LSTM weights separately (defining separate weights for the input gate, forget gate, etc.) instead of using the model sequence as below.) Is there any …

How to get weight matrix of one layer at every epoch in LSTM model based on Keras?

六眼飞鱼酱① submitted on 2019-12-12 21:56:11
Question: I have a simple LSTM model based on Keras. X_train, X_test, Y_train, Y_test = train_test_split(input, labels, test_size=0.2, random_state=i*10) X_train = X_train.reshape(80,112,12) X_test = X_test.reshape(20,112,12) y_train = np.zeros((80,112),dtype='int') y_test = np.zeros((20,112),dtype='int') y_train = np.repeat(Y_train,112, axis=1) y_test = np.repeat(Y_test,112, axis=1) np.random.seed(1) # create the model model = Sequential() batch_size = 20 model.add(BatchNormalization(input_shape=(112 …
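One way to capture a layer's weights at every epoch is a custom Keras callback; a minimal sketch (the layer index and epoch count are placeholders, not taken from the question):

```python
from keras.callbacks import Callback

class WeightLogger(Callback):
    """Store one layer's weights at the end of every epoch."""

    def __init__(self, layer_index):
        super(WeightLogger, self).__init__()
        self.layer_index = layer_index
        self.weights_per_epoch = []

    def on_epoch_end(self, epoch, logs=None):
        # get_weights() returns a list of numpy arrays; for an LSTM layer that is
        # the input kernel, the recurrent kernel, and the bias.
        layer = self.model.layers[self.layer_index]
        self.weights_per_epoch.append(layer.get_weights())

# Hypothetical usage with the asker's model and data:
# logger = WeightLogger(layer_index=1)
# model.fit(X_train, y_train, epochs=10, batch_size=20, callbacks=[logger])
# logger.weights_per_epoch[3] then holds the weight arrays after epoch 4.
```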

LSTM autoencoder always returns the average of the input sequence

大城市里の小女人 submitted on 2019-12-12 17:05:50
Question: I'm trying to build a very simple LSTM autoencoder with PyTorch. I always train it on the same data: x = torch.Tensor([[0.0], [0.1], [0.2], [0.3], [0.4]]) I have built my model following this link (the snippet there is Keras): inputs = Input(shape=(timesteps, input_dim)) encoded = LSTM(latent_dim)(inputs) decoded = RepeatVector(timesteps)(encoded) decoded = LSTM(input_dim, return_sequences=True)(decoded) sequence_autoencoder = Model(inputs, decoded) encoder = Model(inputs, encoded) My code runs with no errors, but …
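A rough PyTorch translation of that encoder / RepeatVector / decoder pattern, as a sketch only (the 16-dimensional latent size, learning rate, and step count are assumptions, not the asker's values). Collapsing to the sequence mean is often just under-training on such a tiny example, so the loop below runs for a few hundred steps:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    # Mirrors the Keras pattern: encode to a fixed vector, repeat it across
    # the time axis, then decode back to one value per time step.
    def __init__(self, input_dim=1, latent_dim=16):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, input_dim, batch_first=True)

    def forward(self, x):                        # x: (batch, timesteps, input_dim)
        _, (h, _) = self.encoder(x)              # h: (1, batch, latent_dim)
        repeated = h.transpose(0, 1).repeat(1, x.size(1), 1)   # RepeatVector equivalent
        out, _ = self.decoder(repeated)          # (batch, timesteps, input_dim)
        return out

x = torch.Tensor([[0.0], [0.1], [0.2], [0.3], [0.4]]).unsqueeze(0)   # (1, 5, 1)
model = LSTMAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.MSELoss()
for _ in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), x)
    loss.backward()
    optimizer.step()
print(model(x).detach().squeeze())
```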

expected ndim=3, found ndim=2

ぐ巨炮叔叔 submitted on 2019-12-12 15:10:21
Question: I'm new to Keras and I'm trying to implement a sequence-to-sequence LSTM. In particular, I have a dataset with 9 features and I want to predict 5 continuous values. I split the training and test sets and their shapes are respectively: X TRAIN (59010, 9) X TEST (25291, 9) Y TRAIN (59010, 5) Y TEST (25291, 5) The LSTM is extremely simple at the moment: model = Sequential() model.add(LSTM(100, input_shape=(9,), return_sequences=True)) model.compile(loss="mean_absolute_error", optimizer="adam" …
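The error comes from the fact that an LSTM layer expects rank-3 input, (samples, timesteps, features), while the data here is rank 2. One possible fix, sketched under the assumption that each row is treated as a one-step sequence (whether a longer window is wanted depends on the data):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Random stand-ins with the asker's shapes.
X_train = np.random.rand(59010, 9).astype("float32")
Y_train = np.random.rand(59010, 5).astype("float32")

# LSTM input must be (samples, timesteps, features); add a length-1 time axis.
X_train = X_train.reshape(-1, 1, 9)

model = Sequential()
model.add(LSTM(100, input_shape=(1, 9)))   # no return_sequences: the target is 2-D
model.add(Dense(5))
model.compile(loss="mean_absolute_error", optimizer="adam")
model.fit(X_train, Y_train, epochs=2, batch_size=64)
```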

How to lay out training data with stateful LSTMs and batch_size > 1

社会主义新天地 submitted on 2019-12-12 12:31:56
Question: Background: I would like to do mini-batch training of "stateful" LSTMs in Keras. My input training data is a large matrix "X" whose dimensions are m x n, where m = number-of-subsequences and n = number-of-time-steps-per-sequence. Each row of X contains a subsequence which picks up where the subsequence on the preceding row leaves off. So given a long sequence of data, Data = ( t01, t02, t03, ... ), where "tK" means the token at position K in the original data, the sequence is laid out in X like …
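With a stateful Keras LSTM and batch_size > 1, row i of batch k+1 must be the continuation of row i of batch k, so the long sequence has to be split into batch_size contiguous streams and interleaved. A sketch of that layout, assuming one long 1-D token stream and shuffle=False during fit (names and sizes are illustrative):

```python
import numpy as np

def layout_for_stateful(seq, batch_size, timesteps):
    # Split one long sequence into `batch_size` contiguous streams, then emit
    # batches that take the next `timesteps` tokens from every stream, so row i
    # of each batch continues row i of the previous batch.
    steps_per_stream = len(seq) // (batch_size * timesteps) * timesteps
    streams = np.asarray(seq[:batch_size * steps_per_stream]).reshape(batch_size, steps_per_stream)
    batches = [streams[:, s:s + timesteps] for s in range(0, steps_per_stream, timesteps)]
    return np.concatenate(batches, axis=0)          # shape (m, timesteps)

X = layout_for_stateful(np.arange(32), batch_size=2, timesteps=4)
print(X)
# Rows 0, 2, 4, 6 hold tokens 0-3, 4-7, 8-11, 12-15 (stream one);
# rows 1, 3, 5, 7 hold tokens 16-19, 20-23, 24-27, 28-31 (stream two).
```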

TensorFlow: ValueError: Shape must be rank 2 but is rank 3

邮差的信 submitted on 2019-12-12 10:53:41
Question: I'm new to TensorFlow and I'm trying to update some code for a bidirectional LSTM from an old version of TensorFlow to the newest (1.0), but I get this error: Shape must be rank 2 but is rank 3 for 'MatMul_3' (op: 'MatMul') with input shapes: [100,?,400], [400,2]. The error happens on pred_mod. _weights = { # Hidden layer weights => 2*n_hidden because of forward + backward cells 'w_emb' : tf.Variable(0.2 * tf.random_uniform([max_features,FLAGS.embedding_dim], minval=-1.0, maxval=1.0, dtype=tf …
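The shapes in the message say that a rank-3 RNN output, (batch, time, 2*n_hidden), is being fed straight into tf.matmul against a rank-2 weight matrix. A common fix (sketched with placeholder names and shapes that match the error, not the asker's full code) is to reduce the time dimension first, for example by taking the last time step:

```python
import tensorflow as tf

# Stand-ins matching the error message [100,?,400] x [400,2]:
outputs = tf.placeholder(tf.float32, [100, None, 400])   # bidirectional LSTM output
out_w = tf.Variable(tf.random_uniform([400, 2], minval=-1.0, maxval=1.0))
out_b = tf.Variable(tf.zeros([2]))

# tf.matmul needs rank-2 operands here, so select one time step (the last)
# before the output projection.
last_step = outputs[:, -1, :]                             # (batch, 2*n_hidden)
pred_mod = tf.matmul(last_step, out_w) + out_b            # (batch, n_classes)
```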

Saving and Restoring a trained LSTM in TensorFlow

三世轮回 submitted on 2019-12-12 10:40:02
Question: I trained an LSTM classifier using a BasicLSTMCell. How can I save my model and restore it for use in later classifications? Answer 1: I was wondering this myself. As others pointed out, the usual way to save a model in TensorFlow is to use tf.train.Saver(); however, I believe this saves the values of tf.Variables. I'm not exactly sure whether there are tf.Variables inside the BasicLSTMCell implementation that are saved automatically when you do this, or whether there is perhaps another step that needs to …
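For what it's worth, the BasicLSTMCell's weight and bias variables are created as ordinary tf.Variables inside the RNN's variable scope, so a plain tf.train.Saver does capture them, as long as the same graph is rebuilt before restoring. A minimal TF 1.x sketch (shapes, file path, and the toy output layer are placeholders, not the asker's classifier):

```python
import numpy as np
import tensorflow as tf

# Stand-in graph; the real classifier's shapes and layers will differ.
inputs = tf.placeholder(tf.float32, [None, 10, 8])        # (batch, time, features)
cell = tf.contrib.rnn.BasicLSTMCell(num_units=32)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

out_w = tf.Variable(tf.random_uniform([32, 2], -0.1, 0.1))
out_b = tf.Variable(tf.zeros([2]))
logits = tf.matmul(outputs[:, -1, :], out_w) + out_b

# Saver picks up every tf.Variable in the graph, including the LSTM cell's
# weights and biases created by dynamic_rnn.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would go here ...
    saver.save(sess, "./lstm_model.ckpt")

# Later (possibly in another script): rebuild the identical graph, then restore.
with tf.Session() as sess:
    saver.restore(sess, "./lstm_model.ckpt")
    print(sess.run(logits, feed_dict={inputs: np.zeros((1, 10, 8), np.float32)}))
```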