LSTM

How can I forecast the next day's value using Keras' LSTM?

老子叫甜甜 submitted on 2019-12-02 09:08:11
I seek your advice on forecasting the subsequent values using an LSTM in Keras. I have 62,796 training samples (x_train) and 15,684 test samples (x_test), and I want to predict the values that come after them. Twenty data points correspond to one day, so I set look_back to 20. Here is my code:

...
look_back = 20
train_size = int(len(data) * 0.80)
test_size = len(data) - train_size
train = data[0:train_size]
test = data[train_size:len(data)]
x_train, y_train = create_dataset(train, look_back)
x_test, y_test = create_dataset(test, look_back)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
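A common way to get the values after the end of the data is recursive one-step forecasting: predict one point, append it to the input window, and repeat. A minimal sketch, assuming a trained Keras model with a single-value output and the 20-points-per-day convention above (model and series are stand-ins for the trained network and the raw 1-D data):

import numpy as np

def forecast_next_day(model, series, look_back=20, steps=20):
    # start from the last known window of look_back points
    window = series[-look_back:].reshape(1, look_back, 1)
    preds = []
    for _ in range(steps):
        next_val = model.predict(window, verbose=0)[0, 0]  # one step ahead
        preds.append(next_val)
        # slide the window: drop the oldest point, append the new prediction
        window = np.append(window[:, 1:, :], [[[next_val]]], axis=1)
    return np.array(preds)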

TensorFlow LSTM Regularization

╄→尐↘猪︶ㄣ submitted on 2019-12-02 08:38:35
I was wondering how one can implement L1 or L2 regularization within an LSTM in TensorFlow. TF doesn't give you access to the internal weights of the LSTM, so I'm not certain how one can calculate the norms and add them to the loss. My loss function is just RMS for now. The answers here don't seem to suffice.

The answers in the link you mentioned are the correct way to do it: iterate through tf.trainable_variables and find the variables associated with your LSTM. An alternative, more complicated and possibly more brittle approach is to re-enter the LSTM's variable_scope and set reuse_variables=True
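A minimal sketch of that filtering approach in TF 1.x terms, assuming the LSTM was built inside a variable scope whose name contains 'lstm', that rms_loss is the existing objective, and a hypothetical penalty weight:

import tensorflow as tf  # TF 1.x style, matching the question

l2_lambda = 1e-4  # hypothetical regularization strength
lstm_vars = [v for v in tf.trainable_variables()
             if 'lstm' in v.name and 'bias' not in v.name]  # biases are usually not penalized
l2_penalty = l2_lambda * tf.add_n([tf.nn.l2_loss(v) for v in lstm_vars])
total_loss = rms_loss + l2_penalty  # minimize this instead of rms_loss alone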

Check perplexity of a Language Model

一个人想着一个人 submitted on 2019-12-02 08:29:56
I created a language model with a Keras LSTM and now I want to assess whether it's good, so I want to calculate perplexity. What is the best way to calculate the perplexity of a model in Python? I've come up with two versions and attached their corresponding sources; please feel free to check the links out.

def perplexity_raw(y_true, y_pred):
    """
    The perplexity metric. Why isn't this part of Keras yet?!
    https://stackoverflow.com/questions/41881308/how-to-calculate-perplexity-of-rnn-in-tensorflow
    https://github.com/keras-team/keras/issues/8267
    """
    cross_entropy = K.sparse_categorical_crossentropy(y_true, y_pred)
    perplexity = K.exp(K.mean(cross_entropy))
    return perplexity
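As a framework-free cross-check, perplexity is simply exp of the mean per-token cross-entropy. A small numpy sketch, assuming y_pred_probs holds the model's softmax outputs (tokens x vocabulary) and y_true_ids the integer targets:

import numpy as np

def perplexity_np(y_true_ids, y_pred_probs):
    # probability the model assigned to each true token
    p_true = y_pred_probs[np.arange(len(y_true_ids)), y_true_ids]
    # perplexity = exp(mean negative log-likelihood); epsilon guards log(0)
    return float(np.exp(-np.mean(np.log(p_true + 1e-12))))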

An Introduction to Recurrent Neural Networks

一笑奈何 submitted on 2019-12-02 07:56:26
An Introduction to Recurrent Neural Networks

A recurrent neural network (RNN) is a class of recursive neural networks that takes sequence data as input, recurses along the direction in which the sequence evolves, and connects all of its nodes (recurrent units) in a chain. Bidirectional RNNs (Bi-RNN) and Long Short-Term Memory networks (LSTM) are the most common recurrent architectures. This post introduces several common recurrent network models; most of the material comes from Colah's blog, together with some of my own notes.

Recurrent neural networks (RNNs)

When reading or thinking, people routinely combine earlier information to reach a conclusion. Ordinary feed-forward networks cannot do this, and that is arguably the main shortcoming of traditional neural networks. Recurrent neural networks can: through their recurrent connections, they pass what has been learned at the current step onward, and so learn to reason over time the way a person does.

[Figure: an example recurrent neural network]

As the figure shows, an RNN has input and output layers and a hidden layer, just like an ordinary network. The difference is that an RNN combines the current step's input x_t with the hidden state h_{t-1} produced at the previous step to compute the current hidden state h_t, and passes h_t on to the network at the next time step. At the same time, the rest of the network computes the current output y_t from the current state h_t. The update equations are:

h_t = tanh(W_x x_t + W_h h_{t-1} + b)
y_t = W_y h_t + b_y

To make this easier to picture, we can view a recurrent network as a series of copies of the same network sharing one set of weights, and unroll it over time. Once unrolled, one can feel that recurrent networks seem naturally suited to
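The two update equations translate directly into code; a toy numpy sketch of a single time step, with illustrative weight names:

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b, W_y, b_y):
    # combine the current input with the previous hidden state
    h_t = np.tanh(W_x @ x_t + W_h @ h_prev + b)
    # the per-step output is a linear readout of the new state
    y_t = W_y @ h_t + b_y
    return h_t, y_t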

MATLAB's official example of LSTM sequence classification

℡╲_俬逩灬. submitted on 2019-12-02 06:20:11
Requires MATLAB R2018b or later.

%% Load the sequence data
% Data description: 270 training samples in 9 classes; the number of samples per
% class varies, and each training sample consists of 12 feature sequences.
[XTrain,YTrain] = japaneseVowelsTrainData;

% Visualize the data
figure
plot(XTrain{1}')
xlabel("Time Step")
title("Training Observation 1")
legend("Feature " + string(1:12),'Location','northeastoutside')

%% Training is more efficient when mini-batches contain sequences of equal length.
% If the sequences differ in length, mini-batch splitting should keep the
% lengths within each batch as similar as possible.
% First find the length of every sequence and the number of observations.
numObservations = numel(XTrain);
for i = 1:numObservations
    sequence = XTrain{i};
    sequenceLengths(i) = size(sequence,2);
end

% Plot the sequence lengths before and after sorting
figure
subplot(1,2,1)
bar(sequenceLengths)
ylim([0 30])
xlabel("Sequence")
ylabel("Length")
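The sort-by-length idea in the comments above is not MATLAB-specific; in Python terms it amounts to the following sketch, where XTrain stands for any list of feature-by-time arrays with matching labels YTrain:

# sort sequences by length so each mini-batch needs as little padding as possible
order = sorted(range(len(XTrain)), key=lambda i: XTrain[i].shape[1])
XTrain_sorted = [XTrain[i] for i in order]
YTrain_sorted = [YTrain[i] for i in order]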

Deep learning: classifying electrical loads with an LSTM network in MATLAB

江枫思渺然 submitted on 2019-12-02 06:14:04
I. Overview

The previous article in this LSTM series used an LSTM network to forecast electrical load [LSTM forecasting], which is in essence a sequence-to-sequence problem. This post covers a sequence-to-label classification problem [LSTM classification]. The properties of the LSTM network itself are not repeated here.

The concrete example in this post classifies given electrical load curves. Each load record is a one-dimensional time series of 96 points per day, and each record carries a class label; there are 6 classes in total. For another example, see the japaneseVowelsTrainData case in the official documentation.

The load data is internal data from a power company; for confidentiality reasons only the data format is described here, and the dataset itself is not provided.

Classes: 6
Sequence length: 96
Training records: 9821
Test records: 2456

II. Data format conversion

First, look at the training call that the LSTM network expects:

trainedNet = trainNetwork(C, Y, layers, options);

The network must start with a sequence input layer. C is a cell array containing the sequence or time-series predictors: it has d rows and 1 column, where d is the number of training samples, and each sample is itself an N-by-M matrix, where N is the feature dimension of the sample and M is the sequence length. Y is the categorical vector of class labels.

The training data should therefore be converted into a cell array
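For readers outside MATLAB, the layout described above maps naturally onto a list of per-sample matrices. A rough Python sketch with made-up stand-in data (N = 1 feature, M = 96 time steps, matching the load curves described; the real dataset is not available):

import numpy as np

# hypothetical stand-in for the confidential load data: 9821 curves of 96 points
raw = np.random.rand(9821, 96)
labels = np.random.randint(0, 6, size=9821)  # 6 classes

# mirror the d-by-1 cell array: one N-by-M (1 x 96) matrix per sample
C = [curve.reshape(1, 96) for curve in raw]
Y = labels  # trainNetwork expects a categorical vector; integer labels play that role here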

Error when checking target: expected time_distributed_5 to have 3 dimensions, but got array with shape (14724, 1)

你离开我真会死。 submitted on 2019-12-02 06:12:05
Question

I am trying to build a single-output regression model, but there seems to be a problem in the last layer.

inputs = Input(shape=(48, 1))
lstm = CuDNNLSTM(256, return_sequences=True)(inputs)
lstm = Dropout(dropouts[0])(lstm)

# aux_input
auxiliary_inputs = Input(shape=(48, 7))
auxiliary_outputs = TimeDistributed(Dense(4))(auxiliary_inputs)
auxiliary_outputs = TimeDistributed(Dense(7))(auxiliary_outputs)

# concatenate
output = keras.layers.concatenate([lstm, auxiliary_outputs])
output = TimeDistributed
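The error means the target array is 2-D, (14724, 1), while a model ending in TimeDistributed produces a 3-D (batch, 48, features) output. One common fix, sketched below with a plain LSTM in place of CuDNNLSTM and a hypothetical dropout rate, is to collapse the time axis before the final Dense so the model emits one value per sample:

from tensorflow.keras.layers import (Input, LSTM, Dense, Dropout,
                                     TimeDistributed, Flatten, concatenate)
from tensorflow.keras.models import Model

inputs = Input(shape=(48, 1))
lstm = LSTM(256, return_sequences=True)(inputs)
lstm = Dropout(0.2)(lstm)  # 0.2 is a stand-in for dropouts[0]

auxiliary_inputs = Input(shape=(48, 7))
aux = TimeDistributed(Dense(4))(auxiliary_inputs)
aux = TimeDistributed(Dense(7))(aux)

merged = concatenate([lstm, aux])   # (batch, 48, 256 + 7)
flat = Flatten()(merged)            # collapse the time axis
output = Dense(1)(flat)             # single regression output per sample

model = Model(inputs=[inputs, auxiliary_inputs], outputs=output)
model.compile(optimizer='adam', loss='mse')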

TF LSTM: Save State from training session for prediction session later

≡放荡痞女 submitted on 2019-12-02 05:09:28
I am trying to save the latest LSTM state from training so it can be reused during the prediction stage later. The problem I am encountering is that in the TF LSTM model the state is passed from one training iteration to the next via a combination of a placeholder and a numpy array, and neither of these seems to be included in the graph by default when the session is saved. To work around this, I am creating a dedicated TF variable to hold the latest version of the state so that it gets added to the session graph, like so:

# latest state from the last training iteration:
_, y, ostate, smm = sess.run([train
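A minimal sketch of that workaround in TF 1.x: keep a non-trainable variable shaped like the state, assign the latest numpy state into it, and let the ordinary Saver pick it up. The state shape here is an assumption about the model in the question:

import tensorflow as tf  # TF 1.x style, as in the question

# assumed shape: (num_layers, 2, batch_size, hidden_size) holding (c, h) per layer
state_var = tf.Variable(tf.zeros([num_layers, 2, batch_size, hidden_size]),
                        trainable=False, name='saved_lstm_state')
state_ph = tf.placeholder(tf.float32, state_var.shape)
save_state_op = tf.assign(state_var, state_ph)

# after a training iteration, push the latest numpy state into the variable
sess.run(save_state_op, feed_dict={state_ph: ostate})
tf.train.Saver().save(sess, 'model.ckpt')  # the checkpoint now includes the state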

Convert Sequential to Functional in Keras

喜夏-厌秋 submitted on 2019-12-02 01:41:08
Question

I have Keras code written in the Sequential style, but I am trying to switch to the Functional API because I want to use the merge function. However, I get the error below when declaring Model(x, out). What is wrong with my Functional API code?

# Sequential, this is working
# out_size == 16, seq_len == 1
model = Sequential()
model.add(LSTM(128, input_shape=(seq_len, input_dim), activation='tanh', return_sequences=True))
model.add(TimeDistributed(Dense(out_size, activation='softmax')))

# Functional API
x = Input(
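Since the excerpt cuts off at the Input call, here is a sketch of the Functional equivalent of the Sequential model above. The key points are that every layer is called on the previous tensor and that Model receives the input and output tensors:

from tensorflow.keras.layers import Input, LSTM, TimeDistributed, Dense
from tensorflow.keras.models import Model

x = Input(shape=(seq_len, input_dim))
h = LSTM(128, activation='tanh', return_sequences=True)(x)
out = TimeDistributed(Dense(out_size, activation='softmax'))(h)
model = Model(inputs=x, outputs=out)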

Stock prediction: GRU model predicting the last given values instead of the future stock price

£可爱£侵袭症+ submitted on 2019-12-02 01:16:46
I was just testing this model from a Kaggle post. The model is supposed to predict one day ahead from a given window of recent stock prices. After tweaking a few parameters I got a surprisingly good result, as you can see: the mean squared error was 5.193, so overall it looks good at predicting future stocks, right? Well, it turned out to be horrible once I looked at the results closely. As you can see, the model is simply predicting the last value of the given stocks, which is our current last price. So I shifted the predictions one step back, and now you can clearly see that the model is predicting one step backward, i.e. the last
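A quick way to confirm this diagnosis is to compare the model with a naive persistence baseline that simply repeats the last known price; if the model's MSE is not clearly below the baseline's, it has only learned to copy. A small sketch, assuming y_test is the 1-D array of true prices:

import numpy as np

# persistence baseline: predict tomorrow's price as today's price
naive_mse = np.mean((y_test[1:] - y_test[:-1]) ** 2)
print('persistence-baseline MSE:', naive_mse)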