lstm

Adding static data (not changing over time) to sequence data in LSTM

倖福魔咒の submitted on 2019-12-12 10:02:30
Question: I am trying to build a model like the following figure. Please see the following image: I want to pass sequence data through an LSTM layer and static data (blood group, gender) through another feed-forward neural network layer. Later I want to merge them. However, I am confused about the dimensions here. If my understanding is right (which I depict in the image), how can the 5-dimensional sequence data be merged with the 4-dimensional static data? Also, how does this differ from an attention mechanism
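
A minimal sketch of the usual approach with the Keras functional API: one branch summarises the sequence with an LSTM, another branch processes the static features, and the two vectors are concatenated before the output layer. All layer sizes, names, and input shapes below are illustrative assumptions, not values from the question.

```python
from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

seq_in = Input(shape=(30, 5), name="sequence")        # 30 timesteps, 5 features (assumed)
static_in = Input(shape=(4,), name="static")          # e.g. encoded blood group, gender

seq_vec = LSTM(32)(seq_in)                            # sequence summarised into one vector
static_vec = Dense(16, activation="relu")(static_in)  # small feed-forward branch

merged = Concatenate()([seq_vec, static_vec])         # concatenation happens on the last axis
out = Dense(1, activation="sigmoid")(merged)

model = Model(inputs=[seq_in, static_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Because the LSTM reduces the sequence to a fixed-size vector, the apparent dimension mismatch disappears: both branches end in plain 2D tensors of shape (batch, features) and can be concatenated directly.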

Predict using data with fewer time steps (different dimension) using a Keras RNN model

你说的曾经没有我的故事 submitted on 2019-12-12 08:55:02
Question: According to the nature of RNNs, we can get an output of predicted probabilities at every time step (unfolded in time). Suppose I train an RNN with 5 time steps, each having 6 features. Thus I have to specify the first layer like this (suppose we use an LSTM layer with 20 nodes as the first layer): model.add(LSTM(20, return_sequences=True, input_shape=(5, 6))) And the model works well if I input data of the same dimensions. However, now I want to use the first 3 time steps of the data to get the
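
One hedged way around the fixed length: declare the timestep dimension as None so the same trained weights can be applied to shorter sequences at prediction time. The data shapes and layer sizes below are illustrative.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(20, return_sequences=True, input_shape=(None, 6)))  # None = any number of timesteps
model.add(Dense(1))                                                # applied per timestep
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(32, 5, 6)   # train on 5-step sequences
y_train = np.random.rand(32, 5, 1)
model.fit(x_train, y_train, epochs=1, verbose=0)

x_short = np.random.rand(1, 3, 6)    # predict using only 3 timesteps
print(model.predict(x_short).shape)  # (1, 3, 1)
```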

Correct way to split data into batches for Keras stateful RNNs

做~自己de王妃 submitted on 2019-12-12 08:53:45
Question: As the documentation states, "the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch". Does this mean that to split data into batches I need to do it the following way? E.g. let's assume that I am training a stateful RNN to predict the next integer in range(0, 5) given the previous one: # batch_size = 3 # 0, 1, 2 etc. in x are samples (timesteps and features omitted for brevity of the example) x = [0, 1, 2, 3, 4] y = [1, 2, 3,
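
A small illustration of the layout that sentence from the documentation implies: with batch_size = 3 you need 3 independent streams running in parallel, and sample i of batch k+1 must continue sample i of batch k. The values below are made up purely to show the ordering.

```python
import numpy as np

stream_a = [0, 1, 2, 3, 4, 5]       # three long, independent sequences,
stream_b = [10, 11, 12, 13, 14, 15]  # each chopped into consecutive chunks
stream_c = [20, 21, 22, 23, 24, 25]

chunk = 3  # timesteps per batch slice
batches = [
    np.array([stream_a[i:i + chunk],
              stream_b[i:i + chunk],
              stream_c[i:i + chunk]])   # shape (batch_size, timesteps)
    for i in range(0, len(stream_a), chunk)
]

# batches[0][0] is [0, 1, 2] and batches[1][0] is [3, 4, 5]:
# row 0 of the second batch continues row 0 of the first batch,
# which is exactly what the stateful-RNN documentation requires.
for b in batches:
    print(b)
```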

Why is this TensorFlow tutorial code not working?

眉间皱痕 submitted on 2019-12-12 06:57:24
Question: I am trying an LSTM tutorial from someone's book, but it does not work. What is the problem? : import tensorflow as tf import numpy as np from tensorflow.contrib import rnn import pprint pp = pprint.PrettyPrinter(indent=4) sess = tf.InteractiveSession() a = [1, 0, 0, 0] b = [0, 1, 0, 0] c = [0, 0, 1, 0] d = [0, 0, 0, 1] init=tf.global_variables_initializer() with tf.variable_scope('one_cell') as scope: hidden_size = 2 cell = tf.contrib.rnn.BasicRNNCell(num_units=hidden_size) print(cell.output
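
The excerpt is cut off, so the exact failure is unclear; one likely culprit is that tf.global_variables_initializer() is created before the RNN's variables exist, so they are never initialized. A hedged TF 1.x sketch of the same idea, building the graph first and initializing afterwards (it uses tf.nn.rnn_cell rather than the older tf.contrib.rnn alias):

```python
import numpy as np
import tensorflow as tf

a, b, c, d = np.eye(4, dtype=np.float32)   # the four one-hot vectors from the question
x_data = np.array([[a, b, c, d]])          # shape (1, 4, 4): batch, time, features

hidden_size = 2
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=hidden_size)
x = tf.placeholder(tf.float32, [None, 4, 4])
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # created after the RNN variables exist
    print(sess.run(outputs, feed_dict={x: x_data}))
```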

LSTM: Figuring out the library?

限于喜欢 submitted on 2019-12-12 05:59:24
Question: I'm using the library https://github.com/cazala/synaptic I am trying to predict the next value (value X) in the following series: 0 0 0 1 0 0 0 1 0 0 0 X which should be a 1. Here is the code: const options = { peepholes: Layer.connectionType.ALL_TO_ALL, hiddenToHidden: false, outputToHidden: false, outputToGates: false, inputToOutput: true, }; // 1 input, 3 hidden layers (4 nodes per layer), 1 output const lstm = new Architect.LSTM(1, 4, 4, 4, 1, options); const trainingArray = [ { input:
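
This is not the synaptic.js API from the question; purely as a hedged illustration of the task itself, the same next-value prediction can be set up in Keras by feeding sliding windows of the periodic series:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

series = [0, 0, 0, 1] * 50
window = 4
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = np.array([series[i + window] for i in range(len(series) - window)])
X = X.reshape((-1, window, 1))               # (samples, timesteps, features)

model = Sequential()
model.add(LSTM(8, input_shape=(window, 1)))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=50, verbose=0)

# after "... 1 0 0 0" the next value should be close to 1
print(model.predict(np.array([1, 0, 0, 0]).reshape(1, window, 1)))
```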

TensorFlow: Feeding every LSTM timestep into the same logit layer (generally, feeding a dynamic number of tensors into one layer)

南笙酒味 submitted on 2019-12-12 04:48:20
Question: I stumbled upon this issue while trying to build an LSTM classifier. Using tf.nn.dynamic_rnn to auto-unfold over time, I get an output lstm_output of size [batch_size, time_steps, number_cells] from the LSTM cell (ignoring the state, which is also an output). Now, for every timestep, this output should be fed into the same fully connected layer (I planned to use tf.contrib.layers.fully_connected(lstm_output_oneTimestep, numClasses)) to reduce the size from number_cells to number_classes (for using
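
One common trick, sketched here in TF 1.x with illustrative shapes: fold the time axis into the batch axis, apply a single dense layer (so the weights are shared across timesteps), and restore the time axis afterwards. A dense layer applied directly to the 3D tensor achieves the same thing, since it acts on the last dimension.

```python
import tensorflow as tf

time_steps, number_cells, num_classes = 10, 64, 5   # assumed values
lstm_output = tf.placeholder(tf.float32, [None, time_steps, number_cells])

flat = tf.reshape(lstm_output, [-1, number_cells])               # (batch*time, cells)
logits_flat = tf.layers.dense(flat, num_classes)                 # same weights for every timestep
logits = tf.reshape(logits_flat, [-1, time_steps, num_classes])  # (batch, time, classes)
```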

TensorFlow tf.nn.rnn function … how to use the results of your training to do a single forward-pass through the RNN

荒凉一梦 submitted on 2019-12-12 01:42:40
Question: I'm having a tough time using the initial_state argument in the tf.nn.rnn function. val, _ = tf.nn.rnn(cell1, newBatch, initial_state=stateP, dtype=tf.float32) newBatch.shape => (1, 1, 11) stateP.shape => (2, 2, 1, 11) In general, I've gone through training my LSTM neural net and now I want to use its values. How do I do this? I know that the tf.nn.rnn() function will return state... but I don't know how to plug it in. FYI stateP.shape => (2, 2, 1, 11) ..... maybe because I
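
A hedged TF 1.x sketch of one way to plug a returned state back in (it uses tf.nn.dynamic_rnn rather than the older tf.nn.rnn, but the state plumbing is the same idea). The (2, 2, 1, 11) shape is read here as 2 stacked LSTM cells, each with a (c, h) pair of shape (1, 11); that reading is an assumption.

```python
import numpy as np
import tensorflow as tf

cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.BasicLSTMCell(11) for _ in range(2)])
inputs = tf.placeholder(tf.float32, [1, 1, 11])          # (batch, time, features)
init_state = cell.zero_state(batch_size=1, dtype=tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # one forward pass to obtain a state as numpy arrays
    state = sess.run(final_state, {inputs: np.zeros((1, 1, 11))})
    # feed that state back in as the initial state of the next forward pass
    out, state = sess.run([outputs, final_state],
                          {inputs: np.ones((1, 1, 11)), init_state: state})
```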

LSTM after embedding of an N-dimensional sequence

偶尔善良 submitted on 2019-12-12 01:27:42
Question: I have an input sequence with 2 dimensions, train_seq, with shape (100000, 200, 2), i.e. 100000 training examples, a sequence length of 200, and 2 features. The sequences are text, so each element is one word from a vocabulary of 5000 words. Hence, I want to use an embedding layer prior to my LSTM. MAX_SEQUENCE_LENGTH = 200 EMBEDDING_SIZE = 64 MAX_FEATURES = 5000 NUM_CATEGORIES = 5 model_input = Input(shape=(MAX_SEQUENCE_LENGTH,2)) x = Embedding(output_dim=EMBEDDING_SIZE, input_dim=MAX_FEATURES,
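
A hedged sketch of one way to wire this up: the Embedding layer embeds every integer, turning (200, 2) into (200, 2, 64), which can then be reshaped to (200, 128) so the LSTM sees one vector per timestep. The constants mirror those in the question; the LSTM width and the classification head are assumptions.

```python
from tensorflow.keras.layers import Input, Embedding, Reshape, LSTM, Dense
from tensorflow.keras.models import Model

MAX_SEQUENCE_LENGTH, EMBEDDING_SIZE, MAX_FEATURES, NUM_CATEGORIES = 200, 64, 5000, 5

model_input = Input(shape=(MAX_SEQUENCE_LENGTH, 2), dtype="int32")
x = Embedding(output_dim=EMBEDDING_SIZE, input_dim=MAX_FEATURES)(model_input)  # (200, 2, 64)
x = Reshape((MAX_SEQUENCE_LENGTH, 2 * EMBEDDING_SIZE))(x)                      # (200, 128)
x = LSTM(64)(x)
model_output = Dense(NUM_CATEGORIES, activation="softmax")(x)

model = Model(model_input, model_output)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```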

Input 0 is incompatible with layer lstm_93: expected ndim=3, found ndim=2

只愿长相守 submitted on 2019-12-11 23:50:24
Question: My X_train shape is (171, 10, 1) and my y_train shape is (171,) (containing values from 1 to 19). The output should be the probability of each of the 19 classes. I am trying to use an RNN for classification into 19 classes. from sklearn.preprocessing import LabelEncoder,OneHotEncoder label_encoder_X=LabelEncoder() label_encoder_y=LabelEncoder() y_train=label_encoder_y.fit_transform(y_train) y_train=np.array(y_train) X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1)) from keras.models
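
The "expected ndim=3, found ndim=2" error usually means a layer received 2D data where it expected (samples, timesteps, features); the model in the question is cut off, so the exact cause is unclear. A hedged sketch of a minimal working setup with the stated shapes (the LSTM width is an assumption, and random data stands in for the real arrays):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.utils import to_categorical

X_train = np.random.rand(171, 10, 1)              # (samples, timesteps, features)
y_train = np.random.randint(1, 20, size=(171,))   # classes 1..19
y_onehot = to_categorical(y_train - 1, num_classes=19)

model = Sequential()
model.add(LSTM(32, input_shape=(10, 1)))          # expects ndim=3 input
model.add(Dense(19, activation="softmax"))        # probability of each of the 19 classes
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_onehot, epochs=1, verbose=0)
```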

How to connect a Convolutional layer with an LSTM in TensorFlow Keras

混江龙づ霸主 submitted on 2019-12-11 19:45:25
Question: I'm experimenting with the architecture of a neural network and am trying to connect a 2D convolution to an LSTM cell in TensorFlow Keras. Here is my original model: model = Sequential() model.add(CuDNNLSTM(256, input_shape=(train_x.shape[1:]), return_sequences=True)) model.add(Dropout(0.2)) model.add(BatchNormalization()) model.add(Dense(64, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(4, activation='softmax')) It works like magic. train_x is 1209 sequences, each set has 23 numbers and
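
A hedged sketch of one common pattern for feeding 2D convolutions into an LSTM: wrap the convolutional stack in TimeDistributed so it runs on every frame of the sequence, then flatten each frame to a vector for the LSTM. The frame size (28x28x1) and layer widths below are illustrative assumptions; only the 23 timesteps and 4 output classes loosely mirror the question.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                     Flatten, LSTM, Dense)

model = Sequential()
# input: (timesteps, height, width, channels)
model.add(TimeDistributed(Conv2D(16, (3, 3), activation="relu"),
                          input_shape=(23, 28, 28, 1)))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))                # one feature vector per timestep
model.add(LSTM(256))
model.add(Dense(4, activation="softmax"))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```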