lstm

Error when checking target: expected dense_1 to have 3 dimensions, but got array with shape (118, 1)

戏子无情 · submitted on 2019-12-22 03:45:10
Question: I'm training a model to predict a stock price; the input data is the close price. I use 45 days of data to predict the 46th day's close price, with an economic indicator as a second feature. Here is the model:

```python
model = Sequential()
model.add(LSTM(512, input_shape=(45, 2), return_sequences=True))
model.add(LSTM(512, return_sequences=True))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
history = model.fit(X_train, y_train, batch_size=batchSize,
                    epochs=epochs, shuffle=False)
```
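The mismatch comes from `return_sequences=True` on the last LSTM: the model then emits one prediction per time step, shape `(batch, 45, 1)`, while the targets have shape `(118, 1)`. A minimal sketch of the usual fix, assuming one target value per 45-day window:

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

# With return_sequences left at its default (False) on the last LSTM, the
# model emits one vector per sequence, so Dense(1) yields shape (batch, 1),
# matching a y_train of shape (118, 1).
model = Sequential()
model.add(LSTM(512, input_shape=(45, 2), return_sequences=True))
model.add(LSTM(512))                 # last recurrent layer: one output per sample
model.add(Dense(1))                  # output shape: (batch, 1)
model.compile(loss='mse', optimizer='adam')
```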

In what order are weights saved in an LSTM kernel in TensorFlow?

魔方 西西 · submitted on 2019-12-22 01:36:58
Question: I looked into the saved weights for an LSTMCell in TensorFlow. It has one big kernel and one bias vector. The kernel has dimensions (input_size + hidden_size) × (hidden_size * 4). From what I understand, this encapsulates the 4 input-to-hidden affine transforms as well as the 4 hidden-to-hidden transforms, so there should be 4 matrices of size input_size × hidden_size and 4 of size hidden_size × hidden_size. Can someone tell me, or point me to the code where TF saves these, so I can break…
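For TF 1.x's `BasicLSTMCell`/`LSTMCell`, the kernel stacks the input-to-hidden block on top of the hidden-to-hidden block, and the columns hold the four gates in the order i, j (cell candidate), f, o. A hedged sketch of slicing a saved kernel accordingly (the helper name is mine):

```python
import numpy as np

def split_lstm_kernel(kernel, input_size, hidden_size):
    """Slice a (input_size + hidden_size, 4 * hidden_size) LSTM kernel.

    Rows: input-to-hidden block first, then hidden-to-hidden.
    Columns: gates in TF's order i (input), j (cell candidate),
    f (forget), o (output).
    """
    w_x, w_h = kernel[:input_size], kernel[input_size:]
    blocks = {}
    for k, name in enumerate(['i', 'j', 'f', 'o']):
        cols = slice(k * hidden_size, (k + 1) * hidden_size)
        blocks[name] = (w_x[:, cols], w_h[:, cols])  # (input->h, h->h)
    return blocks
```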

Keras - NaN in summary histogram LSTM

橙三吉。 · submitted on 2019-12-21 20:46:04
Question: I've written an LSTM model using Keras with the LeakyReLU advanced activation:

```python
# ADAM optimizer with learning rate decay
opt = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999,
                      epsilon=1e-08, decay=0.0001)

# build the model
model = Sequential()
num_features = data.shape[2]
num_samples = data.shape[1]
model.add(LSTM(16, batch_input_shape=(None, num_samples, num_features),
               return_sequences=True, activation='linear'))
model.add(LeakyReLU(alpha=.001))
model.add(Dropout(0.1))
model.add(LSTM(8, …
```
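A NaN in a TensorBoard summary histogram usually means the weights themselves went NaN, most often because of exploding gradients in the LSTM. One common mitigation, sketched here, is gradient clipping through the optimizer's standard `clipnorm` argument:

```python
from keras import optimizers

# Same Adam settings as the question, plus gradient clipping: gradients
# whose global L2 norm exceeds 1.0 are rescaled before the update, which
# keeps a single bad batch from blowing the weights up to NaN.
opt = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999,
                      epsilon=1e-08, decay=0.0001,
                      clipnorm=1.0)
```

Lowering the learning rate or replacing the `'linear'` LSTM activation with the default `tanh` are other common remedies for the same symptom.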

tensorflow static_rnn error: input must be a sequence

萝らか妹 · submitted on 2019-12-21 20:36:57
Question: I'm trying to feed my own 3D data to an LSTM. The data have height = 365, width = 310, and an unknown/inconsistent time dimension; the values are 0 and 1, and each block of data that produces an output is stored in a separate file.

```python
import tensorflow as tf
import os
from tensorflow.contrib import rnn

filename = "C:/Kuliah/EmotionRecognition/Train1/D2N2Sur.txt"
hm_epochs = 10
n_classes = 12
n_chunk = 443
n_hidden = 500

data = tf.placeholder(tf.bool, name='data')
cat = tf.placeholder("float", [None, n_classes])
…
```
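`static_rnn` raises "input must be a sequence" when it is handed a single tensor instead of a Python list of per-time-step tensors. A minimal sketch of the usual shape handling, with the dimensions assumed from the question:

```python
import tensorflow as tf
from tensorflow.contrib import rnn

# static_rnn wants a list of (batch, features) tensors, one per time step,
# so a 3-D placeholder must be unstacked along the time axis first.
n_steps, n_features, n_hidden = 443, 365 * 310, 500   # assumed dimensions
x = tf.placeholder(tf.float32, [None, n_steps, n_features])
x_seq = tf.unstack(x, n_steps, axis=1)                # list of n_steps tensors

cell = rnn.BasicLSTMCell(n_hidden)
outputs, state = rnn.static_rnn(cell, x_seq, dtype=tf.float32)
```

Note the placeholder is `tf.float32`, not `tf.bool`: LSTM matrix multiplications need a float input, so 0/1 data should be cast before feeding.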

Understanding Keras prediction output of an RNN model in R

半城伤御伤魂 · submitted on 2019-12-21 09:20:25
Question: I'm trying out the Keras package in R by following the tutorial on forecasting temperature. However, the tutorial does not explain how to predict with the trained RNN model, and I wonder how to do this. To train the model I used the following code, copied from the tutorial:

```r
dir.create("~/Downloads/jena_climate", recursive = TRUE)
download.file(
  "https://s3.amazonaws.com/keras-datasets/jena_climate_2009_2016.csv.zip",
  "~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip"
)
unzip("~…
```
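The R interface to Keras mirrors the Python API, so prediction is the same `predict()` call in either language. A hedged Python sketch; the lookback length and feature count are assumptions based on the Jena tutorial, and `model`, `mean`, and `std` come from its training code:

```python
import numpy as np

# predict() takes a batch shaped like the training generator's samples,
# (batch, lookback_steps, n_features), and returns one *normalized*
# temperature per sample. 240 lookback steps and 14 features are the
# tutorial's assumed dimensions; real scaled data would replace the zeros.
batch = np.zeros((1, 240, 14), dtype='float32')
pred = model.predict(batch)              # shape (1, 1)

# Undo the tutorial's normalization; temperature is column 1 of the data.
celsius = pred[0, 0] * std[1] + mean[1]
```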

Understanding ConvLSTM2D by Stacking Convolution2D and LSTM layers using TimeDistributed to get similar results

做~自己de王妃 · submitted on 2019-12-21 05:24:08
Question: I have 950 training video samples and 50 testing video samples. Each video sample has 10 frames, and each frame has shape (n_row=28, n_col=28, n_channels=1). My inputs (x) and outputs (y) have the same shapes: x_train (950, 10, 28, 28, 1), y_train (950, 10, 28, 28, 1), x_test (50, 10, 28, 28, 1), y_test (50, 10, 28, 28, 1). I want to give input video samples (x) to my model to predict output video samples (y). My model so far is:

```python
from keras.layers import Dense…
```
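For sequence-to-sequence frame prediction with these shapes, `ConvLSTM2D` with `return_sequences=True` already maps a frame sequence to a frame sequence, which is the behavior a `TimeDistributed(Conv2D)` + LSTM stack tries to approximate. A minimal sketch matching the stated shapes (the filter counts are assumptions):

```python
from keras.models import Sequential
from keras.layers import ConvLSTM2D, Conv3D

model = Sequential()
# (frames, rows, cols, channels) = (10, 28, 28, 1); return_sequences=True
# keeps the time axis, giving (batch, 10, 28, 28, 16) here.
model.add(ConvLSTM2D(filters=16, kernel_size=(3, 3), padding='same',
                     return_sequences=True,
                     input_shape=(10, 28, 28, 1)))
# Conv3D with padding='same' collapses the channels back to 1, so the
# output shape matches y: (batch, 10, 28, 28, 1).
model.add(Conv3D(filters=1, kernel_size=(3, 3, 3), padding='same',
                 activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
```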

Low GPU Usage & Performance with Tensorflow + RNNs

孤者浪人 · submitted on 2019-12-21 03:01:25
Question: I have implemented a network that tries to predict a word from a sentence. The network is actually pretty complex, but here's a simple version of it:

1. Take the indices of the words in a sentence and convert them to embeddings.
2. Run each sentence through an LSTM.
3. Give each word in the sentence a score with a linear multiplication of the LSTM output.

And here's the code:

```python
# 40 samples with random size up to 500, vocabulary size is 10000 with 50 dimensions
def inference(inputs):
    inputs = tf.constant(inputs)
    word…
```
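Low GPU utilization with RNNs typically comes from feeding sentences one at a time or rebuilding the graph per sample. Padding all sentences into one batch and letting `dynamic_rnn` mask by `sequence_length` keeps the GPU busy with a single large kernel launch. A hedged sketch with the sizes assumed from the question's comment:

```python
import tensorflow as tf

vocab_size, embed_dim, hidden = 10000, 50, 128          # assumed sizes
ids = tf.placeholder(tf.int32, [None, None])            # padded word indices
lengths = tf.placeholder(tf.int32, [None])              # true sentence lengths

emb = tf.get_variable('emb', [vocab_size, embed_dim])
x = tf.nn.embedding_lookup(emb, ids)
cell = tf.nn.rnn_cell.LSTMCell(hidden)
# dynamic_rnn processes the whole padded batch at once and stops each row
# at its true length, instead of looping over sentences in Python.
outputs, _ = tf.nn.dynamic_rnn(cell, x, sequence_length=lengths,
                               dtype=tf.float32)
scores = tf.layers.dense(outputs, 1)                    # one score per word
```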

How to speed up RNN training in TensorFlow?

♀尐吖头ヾ · submitted on 2019-12-21 02:33:34
Question: Starting from tensorflow-char-rnn, I built a word-rnn project to predict the next word, but I found that training on my data set is too slow. Here are my training details:

- Training data size: 1 billion words
- Vocabulary size: 0.75 million
- RNN model: LSTM
- RNN layers: 2
- Cell size: 200
- Sequence length: 20
- Batch size: 40 (a bigger batch size causes an OOM exception)

The machine details: an Amazon p2 instance with a 1-core K80 GPU (16 GB video memory) and a 4-core CPU (60 GB memory). In my test, the time to train one epoch of data…
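With a 0.75-million-word vocabulary, the full softmax at every step dominates the runtime. A standard remedy during training is `tf.nn.sampled_softmax_loss`, which scores only a few hundred sampled classes per step instead of all 750k; the sizes below are assumptions matching the question's setup:

```python
import tensorflow as tf

vocab_size, hidden = 750000, 200
w = tf.get_variable('out_w', [vocab_size, hidden])
b = tf.get_variable('out_b', [vocab_size])

def train_loss(rnn_output, labels):
    # rnn_output: (batch, hidden) LSTM states; labels: (batch, 1) word ids.
    # Only 512 sampled classes (plus the true ones) are scored per step;
    # the full softmax is still used at evaluation time.
    return tf.reduce_mean(tf.nn.sampled_softmax_loss(
        weights=w, biases=b, labels=labels, inputs=rnn_output,
        num_sampled=512, num_classes=vocab_size))
```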

How to use multilayered bidirectional LSTM in Tensorflow?

放肆的年华 · submitted on 2019-12-21 01:09:22
Question: I want to know how to use a multilayered bidirectional LSTM in TensorFlow. I have already implemented a bidirectional LSTM, but I want to compare that model with a multi-layer version. How should I add some code in this part?

```python
x = tf.unstack(tf.transpose(x, perm=[1, 0, 2]))
#print(x[0].get_shape())

# Define lstm cells with tensorflow
# Forward direction cell
lstm_fw_cell = rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
# Backward direction cell
lstm_bw_cell = rnn.BasicLSTMCell(n…
```
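TF 1.x's contrib module ships a helper for exactly this: `tf.contrib.rnn.stack_bidirectional_rnn` takes lists of forward and backward cells and stacks one bidirectional layer per pair. A hedged sketch reusing `x` and `n_hidden` from the snippet above (the layer count is an assumption):

```python
import tensorflow as tf
from tensorflow.contrib import rnn

num_layers = 2   # assumed depth; one fresh cell per layer and direction
fw_cells = [rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
            for _ in range(num_layers)]
bw_cells = [rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
            for _ in range(num_layers)]

# x is the unstacked list of per-time-step tensors from the snippet above;
# each layer's concatenated fw/bw outputs feed the next layer.
outputs, _, _ = rnn.stack_bidirectional_rnn(fw_cells, bw_cells, x,
                                            dtype=tf.float32)
```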

Highlighting important words in a sentence using Deep Learning

限于喜欢 · submitted on 2019-12-20 15:33:29
Question: I am trying to highlight the words in the IMDB dataset that contributed most to the final sentiment-analysis prediction. The dataset looks like: X_train - a review as a string; Y_train - 0 or 1. After embedding X_train with GloVe embeddings I can feed it to a neural net. My question is: how can I highlight the most important words, probability-wise, just like deepmoji.mit.edu? What have I tried: I tried splitting the input sentences into bi-grams and using a 1D CNN to…
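A common way to get per-word importances is to train with a small attention layer over the LSTM outputs and read off the softmaxed attention weights for highlighting. A hedged Keras sketch; the vocabulary size, sequence length, and layer widths are all assumptions:

```python
from keras import backend as K
from keras.layers import (Input, Embedding, LSTM, Dense, Flatten,
                          Activation, Reshape, Lambda)
from keras.models import Model

maxlen, vocab, embed_dim = 200, 20000, 100     # assumed sizes
inp = Input(shape=(maxlen,))                   # padded word ids
emb = Embedding(vocab, embed_dim)(inp)         # slot GloVe weights in here
h = LSTM(64, return_sequences=True)(emb)       # (batch, maxlen, 64)

scores = Dense(1, activation='tanh')(h)        # one raw score per word
weights = Activation('softmax')(Flatten()(scores))  # attention over words
weights = Reshape((maxlen, 1))(weights)

# Weighted sum of LSTM states -> one context vector per review.
context = Lambda(lambda t: K.sum(t[0] * t[1], axis=1))([h, weights])
out = Dense(1, activation='sigmoid')(context)  # sentiment probability

model = Model(inp, out)                        # train this end to end
attention = Model(inp, weights)                # read these to highlight words
model.compile(loss='binary_crossentropy', optimizer='adam')
```

After training, `attention.predict()` returns one weight per token; mapping the largest weights back to the original words gives a deepmoji-style highlight.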