lstm

[Deep Learning] A Keras implementation and analysis of the Attention mechanism: LSTM + Attention

你。 Submitted on 2019-12-11 12:11:32
Note: this is a follow-up to the earlier [Deep Learning] A Keras implementation and analysis of the Attention mechanism: Dense + Attention. Reference code source 1: Attention mechanism Implementation for Keras. Most code found online derives from it; when using it directly, mind your Keras version. With a mismatched version an error is raised at merge, and the fix is to import the Multiply layer and replace merge with Multiply(). Reference code source 2: a first look at the ideas behind the Attention Model, which also runs source 1 as a cross-check. Some background is needed before the experiment, such as the basic structure of RNNs and LSTMs and the general idea of Attention; for a quick primer, see the introductory post on RNN & the Attention mechanism & LSTM. Experiment goal: real life is full of sequence problems, and within a sequence the "importance" of each element clearly differs, i.e. the weights differ, which leaves room for an Attention mechanism; this experiment applies Attention on top of an LSTM, and then checks whether Attention really captures the key features, i.e. whether the key features Attention should pick out are assigned higher weights. Experiment design: as with Dense + Attention, we frame this as a binary classification problem and train on given features and labels. Attention focus test: set one column of the features equal to the label
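The merge-vs-Multiply() fix mentioned above boils down to the same operation: attention weights are produced (e.g. via a softmax) and multiplied elementwise into the features. A minimal numpy sketch of that weighting step, with made-up toy values rather than the post's actual data:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# toy feature vector and (pretend-learned) attention scores
features = np.array([0.2, 1.5, 0.3, 0.7])
scores = np.array([0.1, 3.0, 0.2, 0.5])

weights = softmax(scores)       # attention weights, sum to 1
attended = features * weights   # the elementwise product Keras' Multiply() computes

print(weights.argmax())         # index of the most strongly attended feature -> 1
```

If Attention works as intended in the experiment, the column tied to the label should end up with the dominant weight, exactly as index 1 dominates here.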

How to train Keras LSTM with multiple multivariate time-series data?

ぐ巨炮叔叔 Submitted on 2019-12-11 11:48:00
Question: I have a mechanical problem, as a kind of time series, with raw data as follows:

        time           dtime  cur       dcur      type  proc   start          end
122088  1554207711521  3140   0.766106  0.130276  0     87556  1554203520000  1554207720000
122089  1554207714411  1800   0.894529  0.089670  0     87556  1554203520000  1554207720000

For every proc there is a time series whose time instances are not at exactly regular intervals. I have data from a set of different procs, each coming from the same type of mechanical problem. The target is to predict the
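A common way to feed several variable-length multivariate recordings to one Keras LSTM is to slice each recording into fixed-length windows and stack them. A numpy sketch of that preprocessing step (window size, feature count, and recording lengths here are illustrative, not from the question's data):

```python
import numpy as np

def make_windows(series, window, step=1):
    """Slice one multivariate recording of shape (T, F) into
    overlapping fixed-length windows of shape (n_windows, window, F)."""
    out = [series[i:i + window] for i in range(0, len(series) - window + 1, step)]
    return np.stack(out)

# two toy "proc" recordings with 3 features each and different lengths
recordings = [np.random.rand(10, 3), np.random.rand(7, 3)]
windows = np.concatenate([make_windows(r, window=5) for r in recordings])
print(windows.shape)  # 6 windows + 3 windows -> (9, 5, 3)
```

The resulting (samples, timesteps, features) array is the 3-D shape Keras LSTM layers expect; irregular sampling intervals would still need separate handling (e.g. resampling, or feeding dtime as an extra feature).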

DataLayer placement in the .prototxt file generated by Shai's LSTM implementation

爱⌒轻易说出口 Submitted on 2019-12-11 11:10:31
Question: Regarding the answer provided by @Shai in LSTM module for Caffe, where caffe.NetSpec() is used to explicitly unroll LSTM units in time for training: using this code implementation, why does the "DummyData" layer, or any data layer used instead as input X, appear at the end of the t0 time step, just before "t1/lstm/Mx" in the prototxt file? I don't get it... A manipulation (cut / paste) is hence needed. Answer 1: Shai's NetSpec implementation of LSTM unrolls the net in time. Hence for every time

Tensorflow - TypeError: 'int' object is not iterable

我们两清 Submitted on 2019-12-11 10:58:09
Question: I'm getting an error, but it's buried down in the TensorFlow library, so I'm struggling to figure out what's wrong with my model. I'm trying to use an RNN with LSTM. My model looks like this:

model = Sequential()
model.add(LSTM(128, activation='relu', input_shape=1000, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))
opt = tf
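The likely culprit in a model like this is input_shape=1000: Keras expects a shape tuple such as (timesteps, features), and iterating over a bare int is exactly what raises TypeError: 'int' object is not iterable. The distinction can be seen without TensorFlow installed:

```python
# Keras expects input_shape to be an iterable, e.g. (timesteps, features).
bad_shape = 1000         # a bare int: iterating over it raises TypeError
good_shape = (1000, 1)   # 1000 timesteps, 1 feature per step

try:
    iter(bad_shape)             # roughly what Keras attempts internally
    raised = False
except TypeError as e:
    raised = True
    print(e)                    # 'int' object is not iterable

timesteps, features = good_shape  # a tuple unpacks cleanly
```

So input_shape=(1000, 1) (or whatever the real feature count is) should get past this error; the input data must then be 3-D, shaped (samples, 1000, features).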

How to prepare data for stateful LSTM in Keras?

孤街浪徒 Submitted on 2019-12-11 10:45:26
Question: I would like to develop a time series approach for binary classification, with a stateful LSTM in Keras. Here is how my data look: I have a lot, say N, of recordings. Each recording consists of 22 time series of length M_i (i=1,...,N). I want to use a stateful model in Keras, but I don't know how to reshape my data, especially how I should define my batch_size. Here is how I proceeded for the stateless LSTM: I created sequences of length look_back for all the recordings, so that I had data of size
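For a stateful Keras LSTM, batch_size is fixed up front in batch_input_shape=(batch_size, timesteps, features), and the total number of samples must be a multiple of it. A numpy sketch of one common way to prepare such data (all sizes below are invented for illustration; the 22 channels match the question):

```python
import numpy as np

n_recordings, length, n_channels = 8, 100, 22
look_back, batch_size = 10, 4   # sample count must be divisible by batch_size

recordings = np.random.rand(n_recordings, length, n_channels)

# one look_back window per stride, per recording
X = np.concatenate([
    np.stack([rec[i:i + look_back] for i in range(length - look_back)])
    for rec in recordings
])

# trim so len(X) is a multiple of batch_size (stateful requirement)
X = X[: len(X) - len(X) % batch_size]
print(X.shape)   # 8 recordings * 90 windows -> (720, 10, 22)
```

With stateful=True one would additionally keep shuffle=False and call reset_states() at recording boundaries, so the carried state never leaks across unrelated recordings.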

Effect of setting sequence_length on the returned state in dynamic_rnn

こ雲淡風輕ζ Submitted on 2019-12-11 10:36:58
Question: Suppose I have an LSTM network to classify time series of length 10. The standard way to feed the time series to the LSTM is to form a [batch size X 10 X vector size] array and feed it to the LSTM:

self.rnn_t, self.new_state = tf.nn.dynamic_rnn( \
    inputs=self.X, cell=self.lstm_cell, dtype=tf.float32, initial_state=self.state_in)

When using the sequence_length parameter I can specify the length of the time series. My question: for the scenario defined above, if I call dynamic_rnn 10 times with a

Stateful LSTM in Keras: reset with fit, evaluate, and predict?

蓝咒 Submitted on 2019-12-11 10:19:23
Question: I'd like to expand on this question of when to reset states: Stateful LSTM: When to reset states? Suppose I train a stateful model as such:

for i in range(epochs):
    model.fit(X_train, y_train, epochs=1, batch_size=1, shuffle=False)
    model.reset_states()

My training and test sets are from one time-series data set, with the test set following immediately after the training set. Next, I want to evaluate the test set and get an array of the predictions.

score = model.evaluate(X_test, y_test, batch
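The crux of the reset question is that a stateful model's next call continues from whatever hidden state the previous call left behind. A toy stand-in (plain Python, not a real Keras model) makes the consequence visible:

```python
class ToyStatefulModel:
    """Minimal stand-in for a stateful model: its 'hidden state'
    simply accumulates across calls until reset_states() is invoked."""
    def __init__(self):
        self.state = 0.0

    def fit(self, xs):
        for x in xs:
            self.state += x          # state carries across samples and calls

    def evaluate(self, xs):
        return self.state + sum(xs)  # result depends on leftover state

    def reset_states(self):
        self.state = 0.0

m = ToyStatefulModel()
m.fit([1.0, 2.0])
carried = m.evaluate([0.5])   # 3.5: continues from the training-time state
m.reset_states()
fresh = m.evaluate([0.5])     # 0.5: state cleared before evaluation
print(carried, fresh)
```

Because the test set here follows the training set in time, carrying the state into evaluate (no reset in between) is arguably the behavior one wants; resetting first treats the test set as an unrelated sequence.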

How to set the variables of LSTMCell as input instead of letting it create it in Tensorflow?

匆匆过客 Submitted on 2019-12-11 09:39:29
Question: When I create a tf.contrib.rnn.LSTMCell, it creates its kernel and bias trainable variables during initialisation. How the code looks now:

cell_fw = tf.contrib.rnn.LSTMCell(hidden_size_char, state_is_tuple=True)

What I want it to look like:

kernel = tf.get_variable(...)
bias = tf.get_variable(...)
cell_fw = tf.contrib.rnn.LSTMCell(kernel, bias, hidden_size, state_is_tuple=True)

What I want to do is to create those variables myself and give them to the LSTMCell class when instantiating it as input

What is difference between 'call' and '__call__' in TensorFlow BasicLSTMCell implementation?

ε祈祈猫儿з Submitted on 2019-12-11 08:42:29
Question: I am studying TensorFlow's BasicLSTMCell, and I found that there are two similar methods within the class: __call__ and call. The two methods have the same parameters, and the documentation does not state the difference. Referring to the source code does not give me any clue about this. But I am guessing that the __call__ method is inherited from somewhere, and call overrides __call__. If this is the case, why not just use __call__ instead of call in the source code? Answer 1: I ran into a similar problem
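The guess in the question is on the right track: in Keras/TensorFlow layers, __call__ is defined on the base class and wraps call with framework bookkeeping (building variables on first use, managing scopes), so subclasses override call while users invoke the object directly. A stripped-down pure-Python illustration of that pattern (not the actual BasicLSTMCell source):

```python
class BaseLayer:
    def __call__(self, x):
        # framework bookkeeping lives here: build variables once, etc.
        if not getattr(self, "built", False):
            self.build()
            self.built = True
        return self.call(x)   # then delegate to the subclass's logic

    def build(self):
        pass

    def call(self, x):
        raise NotImplementedError

class DoubleLayer(BaseLayer):
    def build(self):
        self.factor = 2       # "variables" created lazily on first call

    def call(self, x):
        return self.factor * x

layer = DoubleLayer()
print(layer(21))   # invokes __call__, which builds, then delegates to call -> 42
```

Overriding call rather than __call__ lets every subclass inherit the one-time build step for free, which is why the source code keeps the two methods separate.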

python - Reading multiple CSVs for Keras LSTM

我的梦境 Submitted on 2019-12-11 07:31:42
Question: I'm trying to implement an LSTM network using Keras, but I'm having problems with taking input. My dataset is in the form of multiple CSV files (all files have the same dimensions, 68x250, with each entry containing 2 values). There are about 200 CSV files across various classes. [Preview of one of the CSVs] How do I take these multiple CSVs as input? Answer 1: I did something similar recently; as Pedro said, you should use fit_generator and write your custom generator. Here is an example of a generator:
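The custom-generator idea the answer describes can be sketched with the standard library alone: cycle through the file list forever, loading one CSV per sample and yielding (batch_x, batch_y) pairs. File names and the label scheme below are made up for illustration:

```python
import csv
import itertools
import os
import tempfile

def csv_batch_generator(files, labels, batch_size):
    """Endlessly cycle through CSV paths, yielding (batch_x, batch_y)."""
    for start in itertools.cycle(range(0, len(files), batch_size)):
        xs = []
        for path in files[start:start + batch_size]:
            with open(path, newline="") as f:
                xs.append([[float(v) for v in row] for row in csv.reader(f)])
        yield xs, labels[start:start + batch_size]

# demo: two tiny CSV "recordings" written to a temp directory
tmp = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(tmp, f"rec{i}.csv")
    with open(p, "w", newline="") as f:
        csv.writer(f).writerows([[i, i + 1], [i + 2, i + 3]])
    paths.append(p)

gen = csv_batch_generator(paths, labels=[0, 1], batch_size=1)
batch_x, batch_y = next(gen)
print(len(batch_x), batch_y)   # 1 [0]
```

In a real pipeline each yielded batch would be converted to numpy arrays shaped (batch_size, 68, 250, ...) and passed to model.fit_generator (or model.fit, which accepts generators in modern Keras).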