LSTM

RNN/LSTM deep learning model?

风流意气都作罢 submitted on 2019-12-30 14:07:54
Question: I am trying to build an RNN/LSTM model for binary classification (0 or 1). Each row of my dataset contains, in order: patient number, time in milliseconds, normalization of X, Y and Z, kurtosis, skewness, pitch, roll and yaw, and the label. A sample:

    1,15,-0.248010047716,0.00378335508419,-0.0152548459993,-86.3738760481,0.872322164158,-3.51314800063,0
    1,31,-0.248010047716,0.00378335508419,-0.0152548459993,-86.3738760481,0.872322164158,-3.51314800063,0
    1,46,-0.267422664673,0.0051143782875,-0.0191247001961,-85.7662354031,1…
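
A minimal sketch of how such a classifier is often wired up in Keras, assuming the rows have already been grouped into fixed-length windows per patient; the window length, feature count, and random stand-in data below are assumptions, not the asker's actual setup:

    # Hedged sketch: windowed sensor features -> LSTM -> sigmoid for 0/1 labels.
    # X must be shaped (samples, timesteps, features); y is (samples,).
    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    timesteps, features = 50, 7                    # assumed window length / feature count
    X = np.random.rand(100, timesteps, features)   # stand-in for the windowed sensor data
    y = np.random.randint(0, 2, size=(100,))       # stand-in 0/1 labels

    model = Sequential()
    model.add(LSTM(64, input_shape=(timesteps, features)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, y, epochs=5, batch_size=16)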

Why can't my ConvLSTM model predict?

China☆狼群 submitted on 2019-12-30 07:43:09
Question: I have built a convolutional LSTM model using TensorFlow's ConvLSTMCell(), tf.nn.dynamic_rnn(), and tf.contrib.legacy_seq2seq.rnn_decoder(). I have 3 layers of encoder and 3 layers of decoder; the initial states of the decoders come from the final states of the encoders. I have 128, 64, and 64 filters for layer 1, layer 2, and layer 3 respectively. Finally, I concatenate the outputs of the decoders and pass them through a convolution layer to decrease the number of channels to one, and then I apply the…
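
The asker's TF1 encoder-decoder is only partially shown, but a stacked ConvLSTM with the same 128/64/64 filter counts, ending in a convolution that collapses the channels back to one, can be sketched in Keras roughly as follows (frame size and sequence length are assumed):

    # Hedged sketch: three ConvLSTM2D layers (128, 64, 64 filters) followed by a
    # Conv3D that reduces the channel dimension to one, next-frame-prediction style.
    # Input shape per sample: (time, height, width, channels).
    from keras.models import Sequential
    from keras.layers import ConvLSTM2D, BatchNormalization, Conv3D

    seq_len, h, w = 10, 64, 64                     # assumed sequence length and frame size
    model = Sequential()
    model.add(ConvLSTM2D(128, kernel_size=(3, 3), padding='same',
                         return_sequences=True, input_shape=(seq_len, h, w, 1)))
    model.add(BatchNormalization())
    model.add(ConvLSTM2D(64, kernel_size=(3, 3), padding='same', return_sequences=True))
    model.add(BatchNormalization())
    model.add(ConvLSTM2D(64, kernel_size=(3, 3), padding='same', return_sequences=True))
    model.add(Conv3D(1, kernel_size=(3, 3, 3), padding='same', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adadelta')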

What is a “cell class” in Keras?

左心房为你撑大大i submitted on 2019-12-30 04:15:07
Question: Or, more specifically: what is the difference between ConvLSTM2D and ConvLSTM2DCell? What is the difference between SimpleRNN and SimpleRNNCell? The same question for GRU and GRUCell. The Keras manuals are not very verbose here. I can see from RTFS (reading those fine sources) that these classes are descendants of different base classes; those whose names end with Cell are subclasses of Layer. In my task I need to classify video sequences. That is, my classifier's input is a sequence of video…
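
In short: a Cell implements a single time step and owns the weights, while the layer of the same name wraps a cell in the generic RNN layer to iterate it over a whole sequence. A sketch of the rough equivalence, using only names from Keras itself:

    # Hedged sketch: an LSTM layer is roughly an LSTMCell driven by the generic
    # RNN wrapper, which loops the cell over the time dimension.
    from keras.layers import LSTM, LSTMCell, RNN

    full_layer = LSTM(32)              # consumes (batch, timesteps, features) directly
    wrapped_cell = RNN(LSTMCell(32))   # roughly equivalent; the cell computes one step

Cells matter mainly when you write a custom step function or stack several cells inside a single RNN wrapper; for ordinary use the full layer is the convenient form.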

How to deal with multi-step time series forecasting in a multivariate LSTM in Keras

六月ゝ 毕业季﹏ submitted on 2019-12-30 01:38:13
Question: I am trying to do multi-step time series forecasting using a multivariate LSTM in Keras. Specifically, I originally have two variables (var1 and var2) for each time step. Having followed the online tutorial here, I decided to use data at times (t-2) and (t-1) to predict the value of var2 at time step t. As the sample data table shows, I am using the first 4 columns as input and Y as output. The code I have developed can be seen here, but I have three questions.

    var1(t-2) var2(t-2) var1(t-1) var2(t…
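
For reference, a minimal sketch of the usual shape handling for this setup, assuming the lagged table has already been built so that the four lag columns form the input and var2(t) is the target (the array sizes and random stand-in data are assumptions):

    # Hedged sketch: four lag columns reshaped to (samples, 2 timesteps, 2 variables)
    # for an LSTM that predicts var2 at time t.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    data = np.random.rand(500, 5)        # stand-in: var1(t-2), var2(t-2), var1(t-1), var2(t-1), var2(t)
    X = data[:, :4].reshape(-1, 2, 2)    # 2 time steps (t-2, t-1) x 2 variables
    y = data[:, 4]

    model = Sequential()
    model.add(LSTM(50, input_shape=(2, 2)))
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)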

Cyclic computational graphs with TensorFlow or Theano

孤街浪徒 submitted on 2019-12-29 09:14:18
Question: Neither TensorFlow nor Theano seems to support cyclic computational graphs; cyclic elements are implemented as recurrent cells with a buffer and unrolling (RNN/LSTM cells), but this limitation is mostly related to the computation of back-propagation. I don't have a particular need to compute back-propagation, only the forward propagation. Is there a way to get around this limitation, or perhaps to break arbitrary computational graphs down into acyclic components? Answer 1: TensorFlow…
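
One forward-only workaround is to cut the cycle at an edge and iterate the forward pass explicitly, feeding each output back in as the next input; in TensorFlow this can be written with tf.while_loop. A minimal sketch in which the update function f is a placeholder for the cyclic sub-graph:

    # Hedged sketch: emulate the cyclic dependency x -> f(x) -> x by cutting the
    # cycle and running a fixed number of forward iterations with tf.while_loop.
    import tensorflow as tf

    def f(x):
        return tf.tanh(x + 1.0)    # placeholder for the sub-graph inside the cycle

    i0 = tf.constant(0)
    x0 = tf.zeros([4])

    def cond(i, x):
        return i < 10              # fixed number of forward passes

    def body(i, x):
        return i + 1, f(x)         # feed the output back in as the next input

    _, x_final = tf.while_loop(cond, body, [i0, x0])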

Keras LSTM predicted timeseries squashed and shifted

做~自己de王妃 submitted on 2019-12-29 07:30:39
Question: I'm trying to get some hands-on experience with Keras during the holidays, and I thought I'd start with the textbook example of time series prediction on stock data. So what I'm trying to do is, given the last 48 hours' worth of average price changes (percent since previous), predict the average price change of the coming hour. However, when verifying against the test set (or even the training set), the amplitude of the predicted series is way off, and it is sometimes shifted to be…
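
A quick sanity check for the "shifted" look is to compare the model against a persistence baseline (predict that the next change equals the last one); if the model barely beats it, the network has most likely just learned to echo the previous value. A sketch, assuming y_test and preds are aligned 1-D arrays (random stand-ins here):

    # Hedged sketch: compare model error with a naive "repeat the last value"
    # baseline to detect predictions that are just a lagged copy of the input.
    import numpy as np

    def mae(a, b):
        return np.mean(np.abs(a - b))

    y_test = np.random.randn(200)    # stand-in for the true hourly changes
    preds = np.random.randn(200)     # stand-in for the model's predictions

    print('model MAE      :', mae(y_test, preds))
    print('persistence MAE:', mae(y_test[1:], y_test[:-1]))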

Mean or max pooling with masking support in Keras

倾然丶 夕夏残阳落幕 submitted on 2019-12-29 04:54:29
Question:

    ...
    print('Build model...')
    model = Sequential()
    model.add(Embedding(max_features, 128))
    model.add(LSTM(size, return_sequences=True, dropout_W=0.2, dropout_U=0.2))
    model.add(GlobalAveragePooling1D())
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    ...

I need to be able to take the mean or max of the vectors over all time steps in a sample after the LSTM layer, before giving this mean or max vector to the dense layer in Keras. I think TimeDistributedMerge was able to do this, but it was…
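
GlobalAveragePooling1D does not consume the mask that Embedding(mask_zero=True) would produce, so a common workaround is a small custom layer that averages only the unmasked time steps. A sketch of such a layer; the class name is ours, not a Keras API:

    # Hedged sketch: mean pooling over time that respects a Keras mask,
    # averaging only the non-padded time steps of each sample.
    from keras import backend as K
    from keras.layers import Layer

    class MaskedMeanPooling1D(Layer):
        def __init__(self, **kwargs):
            super(MaskedMeanPooling1D, self).__init__(**kwargs)
            self.supports_masking = True

        def compute_mask(self, inputs, mask=None):
            return None    # the time dimension is gone after pooling

        def call(self, x, mask=None):
            if mask is not None:
                mask = K.cast(mask, K.floatx())    # (batch, time)
                x = x * K.expand_dims(mask, -1)    # zero out the padded steps
                return K.sum(x, axis=1) / K.sum(mask, axis=1, keepdims=True)
            return K.mean(x, axis=1)

        def compute_output_shape(self, input_shape):
            return (input_shape[0], input_shape[2])

A max variant would instead set masked positions to a large negative value before taking K.max over the time axis.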

How to visualize LSTM attention using the keras-self-attention package?

霸气de小男生 submitted on 2019-12-28 18:44:54
Question: I'm using keras-self-attention to implement an attention LSTM in Keras. How can I visualize the attention part after training the model? This is a time series forecasting case.

    from keras.models import Sequential
    from keras_self_attention import SeqSelfAttention
    from keras.layers import LSTM, Dense, Flatten

    model = Sequential()
    model.add(LSTM(activation='tanh', units=200, return_sequences=True,
                   input_shape=(TrainD[0].shape[1], TrainD[0].shape[2])))
    model.add(SeqSelfAttention())
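
The package's SeqSelfAttention accepts return_attention=True, in which case the layer returns the attention matrix alongside its output (an assumption to verify against your installed version). That is easiest to exploit in the functional API: train as usual, then read the attention tensor out through a side model and plot it. A sketch with assumed input shapes:

    # Hedged sketch: expose the attention matrix with return_attention=True,
    # then extract it after training through a second model on the same graph.
    import numpy as np
    from keras.models import Model
    from keras.layers import Input, LSTM, Dense, Flatten
    from keras_self_attention import SeqSelfAttention

    timesteps, features = 30, 8                       # assumed shapes
    inp = Input(shape=(timesteps, features))
    x = LSTM(200, activation='tanh', return_sequences=True)(inp)
    x, attn = SeqSelfAttention(return_attention=True)(x)
    out = Dense(1)(Flatten()(x))

    model = Model(inp, out)
    model.compile(loss='mse', optimizer='adam')
    # ... fit the model as usual, then:
    attn_model = Model(inp, attn)
    weights = attn_model.predict(np.random.rand(1, timesteps, features))
    print(weights.shape)                              # (1, timesteps, timesteps) to heatmap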

Predicting the next word using the LSTM PTB model TensorFlow example

混江龙づ霸主 submitted on 2019-12-28 06:01:40
Question: I am trying to use the TensorFlow LSTM model to make next-word predictions. As described in this related question (which has no accepted answer), the example contains pseudocode to extract next-word probabilities:

    lstm = rnn_cell.BasicLSTMCell(lstm_size)
    # Initial state of the LSTM memory.
    state = tf.zeros([batch_size, lstm.state_size])
    loss = 0.0
    for current_batch_of_words in words_in_dataset:
        # The value of state is updated after processing each batch of words.
        output, state = lstm(current…
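
To turn output into an actual prediction, the remaining step is to project it through the softmax weights and take the argmax (or sample from the distribution). A TF1-style sketch matching the era of the tutorial; softmax_w and softmax_b stand in for the tutorial's projection variables, and all sizes are assumed:

    # Hedged sketch: map the LSTM output to a distribution over the vocabulary
    # (TF1 graph-mode style, consistent with the PTB example's era).
    import tensorflow as tf

    vocab_size, hidden = 10000, 200                     # assumed PTB-like sizes
    output = tf.zeros([1, hidden])                      # stand-in for the LSTM output
    softmax_w = tf.get_variable('softmax_w', [hidden, vocab_size])
    softmax_b = tf.get_variable('softmax_b', [vocab_size])

    logits = tf.matmul(output, softmax_w) + softmax_b   # (batch, vocab_size)
    probs = tf.nn.softmax(logits)                       # next-word probabilities
    next_word_id = tf.argmax(probs, axis=1)             # most likely next word id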

Using pre-trained word2vec with LSTM for word generation

拥有回忆 submitted on 2019-12-28 03:24:08
Question: LSTM/RNN can be used for text generation. This shows a way to use pre-trained GloVe word embeddings for a Keras model. How can I use pre-trained Word2Vec word embeddings with a Keras LSTM model? This post did help. How do I predict / generate the next word when the model is given a sequence of words as its input? Sample approach tried:

    # Sample code to prepare word2vec word embeddings
    import gensim
    documents = ["Human machine interface for lab abc computer applications",
                 "A survey of user opinion…
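
The usual bridge from gensim to Keras is to copy the trained vectors into the weight matrix of a frozen Embedding layer. A sketch using the gensim 3.x API (size= rather than the later vector_size=); the two stand-in sentences below replace the truncated corpus above:

    # Hedged sketch: train a small word2vec model, copy its vectors into a
    # non-trainable Keras Embedding layer, and predict the next word id.
    import numpy as np
    import gensim
    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense

    documents = ["human machine interface for lab abc computer applications",
                 "a tiny second sentence standing in for the real corpus"]
    sentences = [doc.split() for doc in documents]
    w2v = gensim.models.Word2Vec(sentences, size=100, min_count=1)

    vocab = list(w2v.wv.vocab)                       # gensim 3.x vocabulary access
    embedding_matrix = np.zeros((len(vocab), 100))
    for i, word in enumerate(vocab):
        embedding_matrix[i] = w2v.wv[word]

    model = Sequential()
    model.add(Embedding(len(vocab), 100, weights=[embedding_matrix],
                        trainable=False))            # frozen pre-trained vectors
    model.add(LSTM(128))
    model.add(Dense(len(vocab), activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')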