lstm

How to create input samples from a pandas dataframe for an LSTM model?

Submitted by 老子叫甜甜 on 2020-01-15 06:10:11
Question: I'm trying to create an LSTM model that gives me a binary output: buy or not. I have data in the format [date_time, close, volume], with millions of rows. I'm stuck at formatting the data as 3-D: samples, timesteps, features. I have used pandas to read the data. I want to format it so I get 4000 samples with 400 timesteps each, and two features (close and volume). Can someone advise on how to do this? EDIT: I am using the TimeseriesGenerator as advised, but I am not sure how to
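A minimal sketch of one way to build the (samples, timesteps, features) array with a sliding window over the dataframe. The dataframe contents here are made up to stand in for the question's close/volume data, and the window length is the question's 400 timesteps:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the question's dataframe of [close, volume] rows.
df = pd.DataFrame({
    "close": np.arange(1000, dtype=float),
    "volume": np.arange(1000, dtype=float) * 10,
})

def make_windows(values, timesteps):
    """Slice a (rows, features) array into (samples, timesteps, features)."""
    n_samples = len(values) - timesteps
    return np.stack([values[i:i + timesteps] for i in range(n_samples)])

X = make_windows(df[["close", "volume"]].to_numpy(), timesteps=400)
print(X.shape)  # (600, 400, 2): samples, timesteps, features
```

With millions of rows this copies a lot of memory; `keras.preprocessing.sequence.TimeseriesGenerator` (which the asker mentions) produces the same windows lazily per batch instead.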

Cell configuration when using RNNs

Submitted by 馋奶兔 on 2020-01-15 05:19:34
Processing time-series data with a static RNN:

import tensorflow as tf
from tensorflow.contrib import rnn

x = tf.placeholder("float", [None, time_steps, length])
y = tf.placeholder("float", [None, n_classes])
input = tf.unstack(x, time_steps, 1)
lstm_layer = rnn.BasicLSTMCell(num_units, forget_bias=1)
# lstm_layer = rnn.LSTMCell(num_units, use_peepholes=True, forget_bias=1)
outputs, _ = rnn.static_rnn(lstm_layer, input, dtype="float32")

Processing time-series data with a dynamic RNN:

import tensorflow as tf
from tensorflow.contrib import rnn

x = tf.placeholder("float", [None, time_steps, length])
y = tf.
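Note that `tf.contrib` no longer exists in TensorFlow 2.x, so the snippet above only runs under TF 1.x. The key shape bookkeeping, what `tf.unstack(x, time_steps, 1)` does before `static_rnn`, can be illustrated in plain NumPy (sizes here are made up):

```python
import numpy as np

# Toy batch shaped like the placeholder above: (batch, time_steps, length).
batch, time_steps, length = 4, 5, 3
x = np.random.rand(batch, time_steps, length)

# tf.unstack(x, time_steps, 1) splits the 3-D tensor along the time axis
# into a Python list of time_steps tensors, each (batch, length) --
# exactly the list-of-steps input that static_rnn expects.
unstacked = [x[:, t, :] for t in range(time_steps)]
print(len(unstacked), unstacked[0].shape)  # 5 (4, 3)
```

`dynamic_rnn`, by contrast, consumes the 3-D tensor directly and loops over the time axis inside the graph, which is why no `unstack` appears in the dynamic variant.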

Adding Attention on top of simple LSTM layer in Tensorflow 2.0

Submitted by 拥有回忆 on 2020-01-14 04:08:10
Question: I have a simple network of one LSTM and two Dense layers, as such:

model = tf.keras.Sequential()
model.add(layers.LSTM(20, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(layers.Dense(20, activation='sigmoid'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error')

It is training on data with 3 inputs (normalized 0 to 1.0) and 1 binary output, for the purpose of classification. The data is time series data where there is a relation between time
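The mechanics of the attention step being asked about, scoring each timestep, softmaxing over time, and taking a weighted sum of the LSTM's per-step outputs, can be sketched in NumPy. This is a generic dot-product attention sketch with made-up sizes, not the Keras `Attention` layer itself:

```python
import numpy as np

rng = np.random.default_rng(0)
timesteps, units = 7, 20

# Stand-in for LSTM outputs with return_sequences=True: one state per step.
h = rng.normal(size=(timesteps, units))

# Dot-product attention against a query vector (learned in a real model).
query = rng.normal(size=units)
scores = h @ query                      # (timesteps,) one score per step
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # softmax over the time axis
context = weights @ h                   # (units,) weighted sum of states

print(weights.sum(), context.shape)
```

In Keras terms: the LSTM must be built with `return_sequences=True` so that all per-step states are available for the attention layer, instead of only the last one.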

Increasing Label Error Rate (Edit Distance) and Fluctuating Loss?

Submitted by 烂漫一生 on 2020-01-14 03:04:53
Question: I am training a handwriting recognition model with this architecture:

{
  "network": [
    { "layer_type": "l2_normalize" },
    { "layer_type": "conv2d", "num_filters": 16, "kernel_size": 5, "stride": 1, "padding": "same" },
    { "layer_type": "max_pool2d", "pool_size": 2, "stride": 2, "padding": "same" },
    { "layer_type": "l2_normalize" },
    { "layer_type": "dropout", "keep_prob": 0.5 },
    { "layer_type": "conv2d", "num_filters": 32, "kernel_size": 5, "stride": 1, "padding": "same" },
    { "layer_type": "max
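The "label error rate" in the title is normally the Levenshtein edit distance between predicted and reference label sequences, normalized by the reference length. A self-contained sketch of that metric:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two label sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

def label_error_rate(ref, hyp):
    return edit_distance(ref, hyp) / max(len(ref), 1)

print(label_error_rate("kitten", "sitting"))  # 3 edits / 6 chars = 0.5
```

Because the rate is normalized per reference, it can rise while the raw loss falls, e.g. when the model starts emitting longer but partially wrong sequences, which is one common explanation for the behaviour in the title.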

Sequence labeling problems in natural language processing

Submitted by ≯℡__Kan透↙ on 2020-01-13 20:38:42
Sequence labeling is among the most common problem settings in natural language processing. Before deep learning took off, the usual solutions were HMMs, maximum-entropy models, and CRFs; CRFs in particular were the mainstream method. With the rise of deep learning, RNNs have achieved great results on sequence labeling, and end-to-end training has made these problems simpler still.

Sequence labeling covers NLP tasks such as word segmentation, part-of-speech tagging, named entity recognition, keyword extraction, semantic role labeling, and so on. As long as we fix a task-specific label set, we can perform sequence labeling.

Sequence labeling is the most common problem in NLP because the vast majority of NLP problems can be cast as sequence labeling: many NLP tasks look quite different on the surface, but once converted to sequence labeling they all face the same underlying problem. "Sequence labeling" means that, given a one-dimensional linear input sequence: [formula image omitted] we assign each element of the sequence a label from the label set: [formula image omitted]. In essence, then, it is the problem of classifying each element of a linear sequence according to its context. For NLP tasks, the linear sequence is usually the input text, and each Chinese character can be treated as one element of the sequence. The label sets of different tasks carry different meanings, but the shared question is always the same: how to assign an appropriate label to a character given its context (whether for segmentation, POS tagging, or named entity recognition, the principle is the same).

Sequence labeling for Chinese word segmentation: let us illustrate the sequence labeling process with the Chinese word segmentation task. Suppose the input sentence is
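The per-character tagging described above is commonly done with B/M/E/S tags (begin, middle, end of a word, or single-character word). A small sketch, with a made-up example sentence, of converting between a gold segmentation and its tag sequence:

```python
# Chinese word segmentation as sequence labeling: each character gets one of
# B (begin), M (middle), E (end), or S (single-character word).
def words_to_tags(words):
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def tags_to_words(chars, tags):
    words, cur = [], ""
    for ch, t in zip(chars, tags):
        cur += ch
        if t in ("E", "S"):   # a word ends at E or S
            words.append(cur)
            cur = ""
    if cur:
        words.append(cur)
    return words

segmented = ["中国", "人民", "银行"]        # hypothetical gold segmentation
chars = "".join(segmented)
tags = words_to_tags(segmented)
print(tags)                       # ['B', 'E', 'B', 'E', 'B', 'E']
print(tags_to_words(chars, tags)) # ['中国', '人民', '银行']
```

A sequence model (CRF, RNN, or RNN+CRF) is then trained to predict the tag for each character from its context; decoding the tags recovers the segmentation.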

tensorflow/tflearn input shape

Submitted by 十年热恋 on 2020-01-13 20:19:15
Question: I'm trying to create an LSTM-RNN to generate sequences of music. The training data is a sequence of vectors of size 4, representing various features (including the MIDI note) of each note in some songs to train on. From my reading, it looks like what I need is for each input sample to have as its output sample the next size-4 vector (i.e. the model should try to predict the next note given the current one, with the LSTM incorporating knowledge of the samples that came before). I
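The input/target pairing described, each target being the next note vector, is just a shift by one timestep. A sketch with a made-up song of size-4 note vectors:

```python
import numpy as np

# Hypothetical song: a sequence of 10 size-4 note vectors.
notes = np.arange(40, dtype=float).reshape(10, 4)

# For next-step prediction the target is the input shifted by one note:
X = notes[:-1]   # notes 0..8
y = notes[1:]    # notes 1..9: the "next note" for each input

print(X.shape, y.shape)  # (9, 4) (9, 4)

# For an LSTM the inputs are then reshaped to (samples, timesteps, features),
# e.g. treating the whole song as one sequence: X[np.newaxis] -> (1, 9, 4).
```

The same shift works per song; each song becomes one (or several windowed) sequences in the 3-D training tensor.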

LSTM model just repeats the past in forecasting time series

Submitted by 一世执手 on 2020-01-13 06:35:28
Question: I want to predict one output variable from nine input variables. This data is also a time series, and the goal is to predict the output variable 2 timesteps ahead. I normalised all data using mean normalisation and added some features, so the data now looks like this:

   weekday (weekend vs weekday)      hour  (f_real - 50)*70  ACE [Mwh]  \
0                     -1.579094 -1.341627          0.032171   2.017604
1                     -1.579094 -0.447209          0.032171  -0.543702
2                     -1.579094  0.447209          0.037651   0.204731
3                     -1.579094  1.341627          0.043130  -0.601538
4                     -1
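Aligning the target 2 timesteps ahead is usually done by shifting the target column before windowing. A sketch with hypothetical column names standing in for the question's data:

```python
import numpy as np
import pandas as pd

# Hypothetical frame standing in for the normalized data in the question.
df = pd.DataFrame({"feature": np.arange(10, dtype=float),
                   "target": np.arange(10, dtype=float) * 2})

# To forecast 2 timesteps ahead, pair each input row with the target value
# 2 rows later, then drop the tail rows that have no future target.
df["target_t_plus_2"] = df["target"].shift(-2)
df = df.dropna()

print(df[["feature", "target_t_plus_2"]].head(3))
```

If the model is trained against the current (unshifted) target instead, "repeat the last observed value" is a near-optimal solution, which is one common cause of an LSTM that merely echoes the past.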

Converting state-parameters of Pytorch LSTM to Keras LSTM

Submitted by 耗尽温柔 on 2020-01-12 07:54:20
Question: I was trying to port an existing trained PyTorch model to Keras. During the porting, I got stuck at the LSTM layer. The Keras implementation of an LSTM seems to have three kinds of state matrices, while the PyTorch implementation has four. E.g., for a bidirectional LSTM with hidden_layers=64, input_size=512 & output size=128, the state parameters were as follows. State params of Keras LSTM: [<tf.Variable 'bidirectional_1/forward_lstm_1/kernel:0' shape=(512, 256) dtype=float32_ref>, <tf.Variable
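The mismatch is that PyTorch stores two bias vectors (`bias_ih`, `bias_hh`) where Keras stores one, and PyTorch keeps gate weights as (4*hidden, input) where Keras keeps (input, 4*hidden). Assuming both libraries use the same gate order (i, f, g/c, o, which the respective docs state), the conversion for one direction is a sketch like this, with random arrays standing in for trained weights and the question's sizes:

```python
import numpy as np

hidden, inp = 64, 512
rng = np.random.default_rng(0)

# PyTorch keeps four parameter tensors per LSTM direction:
weight_ih = rng.normal(size=(4 * hidden, inp))     # input -> gates
weight_hh = rng.normal(size=(4 * hidden, hidden))  # hidden -> gates
bias_ih = rng.normal(size=4 * hidden)
bias_hh = rng.normal(size=4 * hidden)

# Keras keeps three: kernel, recurrent_kernel, and a single bias.
kernel = weight_ih.T            # (512, 256), matching the question's shape
recurrent_kernel = weight_hh.T  # (64, 256)
bias = bias_ih + bias_hh        # the two PyTorch biases are simply summed

print(kernel.shape, recurrent_kernel.shape, bias.shape)
```

For a bidirectional layer the same mapping is applied separately to the `_reverse` parameter set and assigned to the backward Keras LSTM.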