LSTM

LSTM Sequence Prediction in Keras just outputs last step in the input

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-07 23:31:59
Question: I am currently working with Keras using TensorFlow as the backend. I have the LSTM sequence-prediction model shown below, which I am using to predict one step ahead in a data series (input: 30 steps, each with 4 features; output: the predicted step 31).

    model = Sequential()
    model.add(LSTM(input_dim=4, output_dim=75, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(150, return_sequences=False))
    model.add(Dropout(0.2))
    model.add(Dense(output_dim=4))
    model.add(Activation("linear"))
    model
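Below is a minimal runnable sketch of the same architecture in the newer Keras API (input_dim/output_dim are deprecated in favor of input_shape and units); the 30-step, 4-feature window comes from the question, while the compile settings are illustrative:

    from keras.models import Sequential
    from keras.layers import LSTM, Dropout, Dense, Activation

    model = Sequential()
    model.add(LSTM(75, input_shape=(30, 4), return_sequences=True))  # 30 steps x 4 features
    model.add(Dropout(0.2))
    model.add(LSTM(150, return_sequences=False))  # collapse the sequence to one vector
    model.add(Dropout(0.2))
    model.add(Dense(4))
    model.add(Activation("linear"))
    model.compile(loss="mse", optimizer="adam")

The behaviour in the title, predictions that simply track the last input step, is a common failure mode on strongly autocorrelated series: copying the previous value is a low-loss local minimum, so it is worth comparing the model's error against that naive persistence baseline.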

TensorFlow AI Engine Introductory Tutorials: Complete Table of Contents

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-07 21:12:55
Part 1: Basic concepts and how to understand them
Part 2: Basic definitions and understanding of CNN convolutional neural networks
Part 3: Implementing a custom CNN convolutional neural network
Part 4: Visualization and management with the TensorBoard dashboard
Part 5: A simple implementation of AlphaGo's policy network (CNN)
Part 6: Saving a trained model to file and using it
Part 7: Principles and use of DNN deep neural networks
Part 8: A supplementary chapter on the principles and use of MLP multilayer perceptron networks
Part 9: Principles and use of RNN recurrent networks
Part 10: The strongest network: RSNN deep residual networks, average accuracy 96-99%
Part 11: The strongest network: DLSTM bidirectional long short-term memory networks (the implementation behind Alibaba's Xiao AI)
Part 12: Converting between TensorFlow and Caffe
Part 13: TensorFlow RCNN region-based convolutional neural networks
Part 14

Keras LSTM: Error when checking model input dimension

Submitted by 喜夏-厌秋 on 2019-12-07 17:41:01
Question: I am a new user of Keras and am trying to implement an LSTM model. As a test I declared the model as below, but it fails because of an input-dimension mismatch. Although I found similar problems on this site, I could not find my mistake by myself.

    ValueError: Error when checking model input: expected lstm_input_4 to have 3 dimensions, but got array with shape (300, 100)

My environment: Python 3.5.2, Keras 1.2.0 (Theano).

    from keras.layers import Input, Dense
    from keras.models import
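The error itself is the usual symptom of feeding a 2-D array to an LSTM, which expects 3-D input of shape (samples, timesteps, features). A minimal sketch of the common fix, assuming the (300, 100) array holds 300 sequences of 100 scalar steps (the truncated question does not state the intended split):

    import numpy as np
    from keras.layers import Input, LSTM, Dense
    from keras.models import Model

    X = np.random.rand(300, 100)   # stand-in for the asker's data
    X = X.reshape(300, 100, 1)     # (samples, timesteps, features)

    inputs = Input(shape=(100, 1))
    hidden = LSTM(32)(inputs)      # hidden size is illustrative
    outputs = Dense(1)(hidden)
    model = Model(inputs, outputs)
    model.compile(loss='mse', optimizer='adam')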

Tensorflow save final state of LSTM in dynamic_rnn for prediction

Submitted by 前提是你 on 2019-12-07 13:00:47
Question: I want to save the final state of my LSTM so that it is included when I restore the model and can be used for prediction. As explained below, the Saver only has knowledge of the final state when I use tf.assign; however, this throws an error (also explained below). During training I always feed the final LSTM state back into the network, as explained in this post. Here are the important parts of the code.

When building the graph:

    self.init_state = tf.placeholder(tf.float32, [ self.n_layers,
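The error the question alludes to is typically the shape check in tf.assign, since the batch dimension of the state is unknown when the graph is built. One pattern (a sketch under that assumption, not the asker's exact code) mirrors the final state into a non-trainable variable with validate_shape=False so the Saver checkpoints it:

    import tensorflow as tf  # TF 1.x graph-mode API, as in the question

    n_layers, n_hidden = 2, 128  # illustrative sizes

    # Same layout as the question's placeholder: [n_layers, 2, batch, n_hidden]
    init_state = tf.placeholder(tf.float32, [n_layers, 2, None, n_hidden])
    per_layer = tf.unstack(init_state, axis=0)
    rnn_state = tuple(
        tf.nn.rnn_cell.LSTMStateTuple(per_layer[i][0], per_layer[i][1])
        for i in range(n_layers))

    cell = tf.nn.rnn_cell.MultiRNNCell(
        [tf.nn.rnn_cell.LSTMCell(n_hidden) for _ in range(n_layers)])
    inputs = tf.placeholder(tf.float32, [None, None, n_hidden])  # hypothetical inputs
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=rnn_state)

    # Mirror the final state into a variable so the Saver checkpoints it;
    # validate_shape=False because the batch dimension is not known yet.
    stacked = tf.stack([tf.stack([s.c, s.h]) for s in final_state])
    saved_state = tf.Variable(tf.zeros([n_layers, 2, 1, n_hidden]), trainable=False,
                              validate_shape=False, name="saved_lstm_state")
    store_state = tf.assign(saved_state, stacked, validate_shape=False)

Running store_state alongside the train op keeps the variable current; after restoring the checkpoint, its value can be fed back in as init_state for prediction.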

Keras Stateful LSTM fit_generator how to use batch_size > 1

Submitted by 霸气de小男生 on 2019-12-07 12:42:07
Question: I want to train a stateful LSTM network using the functional API in Keras, fitting with fit_generator. I am able to train it using batch_size = 1. My Input layer is:

    Input(shape=(n_history, n_cols), batch_shape=(batch_size, n_history, n_cols),
          dtype='float32', name='daily_input')

The generator is as follows:

    def training_data():
        while 1:
            for i in range(0, pdf_daily_data.shape[0] - n_history, 1):
                x = f(i)  # f(i) shape is (1, n_history, n_cols)
                y = y(i)
                yield (x, y)

And then the fit is:
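For a stateful LSTM every batch must contain exactly batch_size samples matching the declared batch_shape, so the generator has to stack windows instead of yielding them one at a time. A sketch of one way to do this, where get_x and get_y are hypothetical stand-ins for the asker's elided f(i) and y(i) helpers:

    import numpy as np

    def training_data(batch_size, n_rows, n_history, n_cols):
        while True:
            for i in range(0, n_rows - n_history - batch_size, batch_size):
                # Stack batch_size consecutive windows into one batch;
                # get_x(j) is assumed to return shape (1, n_history, n_cols).
                x = np.concatenate([get_x(i + j) for j in range(batch_size)], axis=0)
                y = np.stack([get_y(i + j) for j in range(batch_size)])
                yield (x, y)  # x shape: (batch_size, n_history, n_cols)

Note that a stateful layer also assumes sample j of one batch is the continuation of sample j of the previous batch, so with overlapping sliding windows the ordering of samples across batches matters as much as the batch size.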

Does applying a Dropout Layer after the Embedding Layer have the same effect as applying the dropout through the LSTM dropout parameter?

Submitted by 六月ゝ 毕业季﹏ on 2019-12-07 11:08:45
Question: I am slightly confused about the different ways to apply dropout to my Sequential model in Keras. My model is the following:

    model = Sequential()
    model.add(Embedding(input_dim=64, output_dim=64, input_length=498))
    model.add(LSTM(units=100, dropout=0.5, recurrent_dropout=0.5))
    model.add(Dense(units=1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

Assume that I added an extra Dropout layer after the Embedding layer in the below manner:
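The variant the question describes would look like the sketch below. The two mechanisms are related but not identical: a Dropout layer after the Embedding samples a fresh mask for every timestep of the embedding output, whereas the LSTM's dropout argument masks the layer's inputs and recurrent_dropout masks the recurrent connections, each reusing one mask across all timesteps of a sequence.

    from keras.models import Sequential
    from keras.layers import Embedding, Dropout, LSTM, Dense

    model = Sequential()
    model.add(Embedding(input_dim=64, output_dim=64, input_length=498))
    model.add(Dropout(0.5))  # fresh mask per timestep
    model.add(LSTM(units=100, dropout=0.5, recurrent_dropout=0.5))  # mask reused across timesteps
    model.add(Dense(units=1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])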

Multi-feature causal CNN - Keras implementation

Submitted by 99封情书 on 2019-12-07 07:14:50
Question: I'm currently using a basic LSTM to make regression predictions, and I would like to implement a causal CNN since it should be computationally more efficient. I'm struggling to figure out how to reshape my current data to fit the causal CNN and represent the same data/timestep relationship, as well as what the dilation rate should be set to. My current data is of this shape: (number of examples, lookback, features), and here's a basic example of the LSTM NN I'm using right now:

    lookback = 20 #
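Conveniently, Keras's Conv1D consumes the same (lookback, features) layout the LSTM already uses, so the data needs no reshaping: padding='causal' prevents each output step from seeing the future, and doubling the dilation rate per layer grows the receptive field exponentially. A minimal sketch (filter counts and the feature count are illustrative):

    from keras.models import Sequential
    from keras.layers import Conv1D, Flatten, Dense

    lookback, n_features = 20, 4  # lookback from the question; n_features is hypothetical

    model = Sequential()
    model.add(Conv1D(32, kernel_size=2, padding='causal', dilation_rate=1,
                     activation='relu', input_shape=(lookback, n_features)))
    model.add(Conv1D(32, kernel_size=2, padding='causal', dilation_rate=2, activation='relu'))
    model.add(Conv1D(32, kernel_size=2, padding='causal', dilation_rate=4, activation='relu'))
    model.add(Flatten())
    model.add(Dense(1))  # single regression output
    model.compile(loss='mse', optimizer='adam')

With kernel_size=2 and dilations 1, 2, 4 the receptive field is 8 steps; adding a fourth layer at dilation_rate=8 would widen it to 16, close to the full 20-step lookback.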

How to merge two LSTM layers in Keras

Submitted by 爱⌒轻易说出口 on 2019-12-07 06:13:36
Question: I'm working with Keras on a sentence-similarity task (using the STS dataset) and am having problems merging the layers. The data consists of 1184 sentence pairs, each scored between 0 and 5. Below are the shapes of my numpy arrays. I've padded each of the sentences to 50 words and run them through an embedding layer, using the GloVe embeddings with 100 dimensions. When merging the two networks I'm getting an error:

    Exception: Error when checking model input: the list of Numpy arrays that
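That exception usually means the number of arrays passed to fit does not match the number of Input layers of the model. A sketch of a two-branch setup using the shapes from the question (50-word sentences, 100-dimensional GloVe vectors) in the Keras 2 functional API, where concatenate plays the role of the older Merge layer; vocab_size and the LSTM width are illustrative:

    from keras.layers import Input, Embedding, LSTM, Dense, concatenate
    from keras.models import Model

    vocab_size = 10000  # hypothetical vocabulary size

    left_in = Input(shape=(50,), dtype='int32')
    right_in = Input(shape=(50,), dtype='int32')
    embed = Embedding(input_dim=vocab_size, output_dim=100)  # shared embedding

    left = LSTM(64)(embed(left_in))
    right = LSTM(64)(embed(right_in))
    merged = concatenate([left, right])
    score = Dense(1)(merged)  # similarity score in [0, 5]

    model = Model(inputs=[left_in, right_in], outputs=score)
    model.compile(loss='mse', optimizer='adam')

    # fit needs one array per Input, passed as a list:
    # model.fit([X_left, X_right], y, ...)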

Dimension Mismatch in LSTM Keras

Submitted by 筅森魡賤 on 2019-12-07 05:19:34
Question: I want to create a basic RNN that can add two bytes. Here are the inputs and outputs expected for a simple addition:

    X = [[0, 0], [0, 1], [1, 1], [0, 1], [1, 0], [1, 0], [1, 1], [1, 0]]

That is, X1 = 00101111 and X2 = 01110010.

    Y = [1, 0, 1, 0, 0, 0, 0, 1]

I created the following sequential model:

    model = Sequential()
    model.add(GRU(output_dim=16, input_length=2, input_dim=8))
    model.add(Activation('relu'))
    model.add(Dense(2, activation='softmax'))
    model.compile(loss = 'binary
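The dimension mismatch stems from swapped axes: with two 8-bit operands, the 8 bit positions are the timesteps and the 2 operand bits are the per-step features, so the GRU wants input_shape=(8, 2), and emitting one sum bit per position requires return_sequences=True. A sketch using the question's example pair (modern Keras argument names):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import GRU, TimeDistributed, Dense

    # The question's example: 00101111 + 01110010 = 10100001
    X = np.array([[[0, 0], [0, 1], [1, 1], [0, 1],
                   [1, 0], [1, 0], [1, 1], [1, 0]]])  # (1, 8 timesteps, 2 bits)
    Y = np.array([1, 0, 1, 0, 0, 0, 0, 1]).reshape(1, 8, 1)

    model = Sequential()
    model.add(GRU(16, input_shape=(8, 2), return_sequences=True))
    model.add(TimeDistributed(Dense(1, activation='sigmoid')))  # one sum bit per step
    model.compile(loss='binary_crossentropy', optimizer='adam')
    model.fit(X, Y, epochs=10, verbose=0)

Since carries propagate from the least-significant bit, feeding the bits LSB-first generally makes the task learnable for a unidirectional RNN.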

ctc_loss error “No valid path found.”

Submitted by 旧街凉风 on 2019-12-07 04:20:34
Question: Training a model with tf.nn.ctc_loss produces an error every time the train op is run:

    tensorflow/core/util/ctc/ctc_loss_calculator.cc:144] No valid path found.

Unlike in previous questions about this function, this is not due to divergence: I have a low learning rate, and the error occurs even on the first train op. The model is a CNN -> LSTM -> CTC. Here is the model-creation code:

    # Build Graph
    self.videoInput = tf.placeholder(shape=(None, self.maxVidLen, 50, 100, 3), dtype=tf.float32)
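Independent of divergence, CTC reports "No valid path found." for any example whose labels cannot be aligned to the inputs, most commonly because the label sequence is longer than the per-example input length (repeated adjacent labels each need a blank between them, which tightens the bound further). A minimal TF 1.x sketch of the call and the constraint to check (the class count is illustrative):

    import tensorflow as tf

    num_classes = 28  # illustrative; includes the CTC blank

    # tf.nn.ctc_loss takes sparse labels, time-major logits
    # [max_time, batch, num_classes] by default, and per-example lengths.
    logits = tf.placeholder(tf.float32, [None, None, num_classes])
    labels = tf.sparse_placeholder(tf.int32)
    seq_len = tf.placeholder(tf.int32, [None])

    loss = tf.reduce_mean(tf.nn.ctc_loss(labels, logits, seq_len))

    # Feasibility condition per example i:
    #   seq_len[i] >= label_len[i] + (number of repeated adjacent labels in i)
    # Checking each batch against this usually pinpoints the offending examples.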