rnn

Implementing RNN with numpy

Submitted by こ雲淡風輕ζ on 2019-12-05 03:35:38
I'm trying to implement a recurrent neural network with numpy. My current input and output designs are as follows:

    x              : (sequence length, batch size, input dimension)
    h              : (number of layers, number of directions, batch size, hidden size)
    initial weight : (number of directions, 2 * hidden size, input size + hidden size)
    weight         : (number of layers - 1, number of directions, hidden size, directions * hidden size + hidden size)
    bias           : (number of layers, number of directions, hidden size)

I have looked up the PyTorch RNN API as a reference ( https://pytorch.org/docs/stable/nn.html?highlight …
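For reference, a minimal single-layer, unidirectional forward pass under those shape conventions might look like the sketch below. The tanh cell and the concatenated (input size + hidden size) weight layout are assumptions for illustration, not the asker's actual code:

```python
import numpy as np

def rnn_forward(x, h0, W, b):
    """Single-layer, unidirectional Elman RNN forward pass.

    x  : (seq_len, batch, input_dim)
    h0 : (batch, hidden)
    W  : (hidden, input_dim + hidden) -- concatenated input/hidden weights
    b  : (hidden,)
    """
    seq_len, batch, _ = x.shape
    h = h0
    outputs = np.empty((seq_len, batch, h0.shape[1]))
    for t in range(seq_len):
        # Concatenate current input with previous hidden state, matching
        # the (input size + hidden size) weight layout described above.
        combined = np.concatenate([x[t], h], axis=1)
        h = np.tanh(combined @ W.T + b)
        outputs[t] = h
    return outputs, h

# Tiny smoke test with arbitrary sizes.
seq_len, batch, input_dim, hidden = 5, 3, 4, 6
x = np.random.randn(seq_len, batch, input_dim)
h0 = np.zeros((batch, hidden))
W = np.random.randn(hidden, input_dim + hidden) * 0.1
b = np.zeros(hidden)
out, h_final = rnn_forward(x, h0, W, b)
print(out.shape)  # (5, 3, 6)
```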

tensorflow static_rnn error: input must be a sequence

Submitted by 吃可爱长大的小学妹 on 2019-12-04 12:16:36
I'm trying to feed my own 3D data to an LSTM. The data have height = 365, width = 310, time = unknown/inconsistent, consist of 0s and 1s, and each block of data that produces an output is separated into its own file.

    import tensorflow as tf
    import os
    from tensorflow.contrib import rnn

    filename = "C:/Kuliah/EmotionRecognition/Train1/D2N2Sur.txt"
    hm_epochs = 10
    n_classes = 12
    n_chunk = 443
    n_hidden = 500

    data = tf.placeholder(tf.bool, name='data')
    cat = tf.placeholder("float", [None, n_classes])
    weights = {'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))}
    biases = {'out': tf.Variable(tf…
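The error in the title usually means `static_rnn` was handed a single tensor rather than a Python list of per-timestep tensors. A common fix, sketched below under the assumption of a fixed-length float input of shape (batch, time, features), is to unstack along the time axis first (note the placeholder is float here, not bool):

```python
import tensorflow as tf
from tensorflow.contrib import rnn

time_steps, n_features, n_hidden = 365, 310, 500  # illustrative sizes

# static_rnn expects a list of `time_steps` tensors, each (batch, n_features).
x = tf.placeholder(tf.float32, [None, time_steps, n_features])
x_seq = tf.unstack(x, time_steps, axis=1)

cell = rnn.BasicLSTMCell(n_hidden)
outputs, state = rnn.static_rnn(cell, x_seq, dtype=tf.float32)
```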

PyTorch: recurrent layers, RNN, LSTM

Submitted by 爱⌒轻易说出口 on 2019-12-04 06:03:46
Table of contents: 1. RNN (recurrent neural network): parameter details; 2. LSTM (long short-term memory network): parameter details; 3. Word embeddings (Embedding): a small example; training a model with an RNN; part-of-speech tagging of text with an LSTM; an LSTM-based POS-tagging model.

1. RNN (recurrent neural network): parameter details

class torch.nn.RNN(*args, **kwargs) applies a multi-layer Elman RNN, with a tanh or ReLU activation, to an input sequence. For each element of the input sequence, each layer computes

    h_t = \tanh(w_{ih} x_t + b_{ih} + w_{hh} h_{t-1} + b_{hh})

where h_t is the hidden state at time t, and x_t is the previous layer's hidden state at time t, or the input at time t for the first layer. If nonlinearity='relu', ReLU is used in place of tanh as the activation. RNN model parameters: weight_ih_l[k] – the learnable input-hidden weights of layer k, of shape…
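A minimal usage sketch of torch.nn.RNN matching those conventions (the sizes are arbitrary, chosen only to make the shapes visible):

```python
import torch
import torch.nn as nn

# input_size=10, hidden_size=20, num_layers=2; tanh activation by default.
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2)

seq_len, batch = 5, 3
x = torch.randn(seq_len, batch, 10)   # (seq_len, batch, input_size)
h0 = torch.zeros(2, batch, 20)        # (num_layers, batch, hidden_size)

output, hn = rnn(x, h0)
print(output.shape)  # torch.Size([5, 3, 20]) -- last layer's h_t at every t
print(hn.shape)      # torch.Size([2, 3, 20]) -- final hidden state per layer
print(rnn.weight_ih_l0.shape)  # torch.Size([20, 10]) -- weight_ih of layer 0
```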

Multivariate LSTM Forecast Loss and evaluation

Submitted by こ雲淡風輕ζ on 2019-12-03 23:05:08
Question: I have a CNN-RNN model architecture with bidirectional LSTMs for a time-series regression problem. My loss does not converge over 50 epochs. Each epoch has 20k samples. The loss keeps bouncing between 0.001 and 0.01.

    batch_size = 1
    epochs = 50
    model.compile(loss='mean_squared_error', optimizer='adam')
    trainingHistory = model.fit(trainX, trainY, epochs=epochs, batch_size=batch_size, shuffle=False)

I tried to train the model with incorrectly paired X and Y data, for which the loss stays around 0.5. Is it…

Tensorflow Estimator - Periodic Evaluation on Eval Dataset

Submitted by 蓝咒 on 2019-12-03 20:22:29
The tensorflow documentation does not provide any example of how to perform periodic evaluation of the model on an evaluation set. Some people suggested the use of an Experiment, which sounds great but unfortunately does not work (it is deprecated and triggers an error). Others suggested the use of SummarySaverHook, but I don't see how you can use that with an evaluation set (as opposed to the training set). A solution would be to do the following:

    for i in range(number_of_epoch):
        estimator.train(...)     # on training set
        estimator.evaluate(...)  # on evaluation set

This architecture is explicitly…
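One commonly cited alternative to the manual loop, assuming TensorFlow ≥ 1.4, is tf.estimator.train_and_evaluate, which interleaves training with scheduled evaluation. A sketch follows; the toy LinearRegressor and in-memory input_fn are illustrative stand-ins, not from the question:

```python
import numpy as np
import tensorflow as tf

feature_cols = [tf.feature_column.numeric_column('x')]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_cols)

def input_fn():
    # Toy in-memory dataset; stands in for the real train/eval pipelines.
    return tf.data.Dataset.from_tensor_slices(
        ({'x': np.arange(100, dtype=np.float32)},
         np.arange(100, dtype=np.float32))).batch(10).repeat()

train_spec = tf.estimator.TrainSpec(input_fn=input_fn, max_steps=1000)
# throttle_secs sets the minimum interval between evaluation runs.
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=10, throttle_secs=60)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```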

Use of cuDNN RNN

Submitted by Anonymous (unverified) on 2019-12-03 09:06:55
Question: I will first summarize what I think I understood about the cuDNN 5.1 RNN functions. Tensor dimensions:

    x  = [seq_length, batch_size, vocab_size]   # input
    y  = [seq_length, batch_size, hiddenSize]   # output
    dx = [seq_length, batch_size, vocab_size]   # input gradient
    dy = [seq_length, batch_size, hiddenSize]   # output gradient
    hx = [num_layer, batch_size, hiddenSize]    # input hidden state
    hy = [num_layer, batch_size, hiddenSize]    # output hidden state
    cx = [num_layer, batch_size, hiddenSize]    # input cell state
    cy = [num_layer, batch_size, hiddenSize]    # output cell state…

Keras simple RNN implementation

Submitted by Anonymous (unverified) on 2019-12-03 08:56:10
Question: I found problems when trying to compile a network with one recurrent layer. There seems to be an issue with the dimensionality of the first layer, and thus with my understanding of how RNN layers work in Keras. My code sample is:

    model.add(Dense(8, input_dim=2, activation="tanh", use_bias=False))
    model.add(SimpleRNN(2, activation="tanh", use_bias=False))
    model.add(Dense(1, activation="tanh", use_bias=False))

The error is:

    ValueError: Input 0 is incompatible with layer simple_rnn_1: expected ndim=3, found ndim=2

This error is…
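For context, SimpleRNN expects 3-D input of shape (batch, timesteps, features), while Dense(8, input_dim=2) emits 2-D (batch, 8). One way to make this stack compile, sketched below under the assumption that a sequence of, say, 5 timesteps is intended, is to declare a 3-D input and wrap the first Dense layer in TimeDistributed:

```python
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, TimeDistributed

timesteps = 5  # assumed sequence length; not specified in the question

model = Sequential()
# Apply the same Dense(8) to every timestep, keeping the output 3-D.
model.add(TimeDistributed(Dense(8, activation="tanh", use_bias=False),
                          input_shape=(timesteps, 2)))
model.add(SimpleRNN(2, activation="tanh", use_bias=False))  # consumes 3-D, emits 2-D
model.add(Dense(1, activation="tanh", use_bias=False))
model.compile(loss="mse", optimizer="adam")
model.summary()
```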

Cannot replace LSTMBlockCell with LSTMBlockFusedCell in Python TensorFlow

Submitted by Anonymous (unverified) on 2019-12-03 08:48:34
Question: Replacing LSTMBlockCell with LSTMBlockFusedCell throws a TypeError in static_rnn. I'm using TensorFlow 1.2.0-rc1 compiled from source. The full error message:

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-3-2986e054cb6b> in <module>()
         19 enc_cell = tf.contrib.rnn.LSTMBlockFusedCell(rnn_size)
         20 enc_layers = tf.contrib.rnn.MultiRNNCell([enc_cell] * num_layers, state_is_tuple=True)
    ---> 21 _, enc_state = tf.contrib.rnn.static_rnn(enc_layers, enc_input…
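The likely root cause is that LSTMBlockFusedCell is a fused cell, not an RNNCell, so it cannot be wrapped in MultiRNNCell or passed to static_rnn; it is invoked directly on a time-major 3-D tensor and processes the whole sequence in one call. A sketch of how layers might be stacked by hand under that API (the sizes and placeholder are invented for illustration):

```python
import tensorflow as tf

rnn_size, num_layers = 128, 2

# Hypothetical time-major input: (max_time, batch, features).
enc_input = tf.placeholder(tf.float32, [None, None, 64])

output = enc_input
for i in range(num_layers):
    with tf.variable_scope('fused_lstm_%d' % i):
        cell = tf.contrib.rnn.LSTMBlockFusedCell(rnn_size)
        # The fused cell consumes the full sequence at once and returns
        # (outputs, final_state) instead of acting one step at a time.
        output, enc_state = cell(output, dtype=tf.float32)
```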

How to use tensorflow's Dataset API Iterator as an input of a (recurrent) neural network?

Submitted by Anonymous (unverified) on 2019-12-03 07:50:05
Question: When using the tensorflow Dataset API Iterator, my goal is to define an RNN that operates on the iterator's get_next() tensors as its input (see (1) in the code). However, simply defining the dynamic_rnn with get_next() as its input results in an error:

    ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.

Now I know that one workaround is to simply create a placeholder…
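The placeholder workaround the question alludes to might look like the sketch below (sizes invented): keep a placeholder as the graph input and feed it the values produced by the iterator, instead of wiring get_next() directly into dynamic_rnn:

```python
import numpy as np
import tensorflow as tf

batch, time_steps, features, hidden = 4, 10, 8, 16

dataset = tf.data.Dataset.from_tensor_slices(
    np.random.randn(100, time_steps, features).astype(np.float32)).batch(batch)
iterator = dataset.make_initializable_iterator()
next_batch = iterator.get_next()

# The placeholder decouples the RNN's variable creation from the iterator's
# control-flow context, sidestepping the initializer ValueError.
x = tf.placeholder(tf.float32, [None, time_steps, features])
cell = tf.nn.rnn_cell.BasicLSTMCell(hidden)
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), iterator.initializer])
    batch_vals = sess.run(next_batch)
    out_vals = sess.run(outputs, feed_dict={x: batch_vals})
    print(out_vals.shape)  # (4, 10, 16)
```

The cost of this approach is an extra device round-trip per batch, which is why wiring the iterator in directly is usually preferred when the framework allows it.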

Tensorflow: value error with variable_scope

Submitted by Anonymous (unverified) on 2019-12-03 07:36:14
Question: This is my code below:

    '''
    Tensorflow LSTM classification of 16x30 images.
    '''
    from __future__ import print_function
    import tensorflow as tf
    from tensorflow.python.ops import rnn, rnn_cell
    import numpy as np
    from numpy import genfromtxt
    from sklearn.cross_validation import train_test_split
    import pandas as pd

    '''
    a Tensorflow LSTM that will sequentially input several lines from each
    single image, i.e. the Tensorflow graph will take a flat (1,480) features
    image as it was done in the Multi-layer perceptron MNIST Tensorflow
    tutorial, but then…
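For the 16x30 images described in the docstring, the usual way to feed an LSTM "line by line" is to reshape each flat 480-feature vector into a (timesteps, features) sequence. A sketch follows; the 30-rows-by-16-pixels split is an assumption based on the 16x30 description:

```python
import tensorflow as tf

n_steps, n_input = 30, 16  # 30 rows per image, 16 pixels per row (assumed split)

x = tf.placeholder(tf.float32, [None, 480])    # flat images, as in the MLP tutorial
x_seq = tf.reshape(x, [-1, n_steps, n_input])  # (batch, 30, 16): one row per timestep

cell = tf.contrib.rnn.BasicLSTMCell(128)
outputs, state = tf.nn.dynamic_rnn(cell, x_seq, dtype=tf.float32)
# state.h (the final hidden state) can then feed a softmax classification layer.
```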