LSTM

Training a multi-variate multi-series regression problem with stateful LSTMs in Keras

为君一笑 submitted on 2019-12-10 15:43:34
Question: I have time series from P processes, each of varying length but all having 5 variables (dimensions). I am trying to predict the estimated lifetime of a test process. I am approaching this problem with a stateful LSTM in Keras, but I am not sure my training process is correct. I divide each sequence into batches of length 30, so each sequence has the shape (s_i, 30, 5), where s_i is different for each of the P sequences (s_i = len(P_i)//30). I append all sequences into my training data…
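For reference, a minimal sketch of how stateful per-process training is often organised in Keras (the layer size, batch size of 1, and the hypothetical training_pairs list are assumptions, not the asker's actual code):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# A stateful LSTM needs a fixed batch_input_shape: (batch_size, timesteps, features).
# Here: one process at a time, windows of 30 steps, 5 variables per step.
model = Sequential()
model.add(LSTM(32, batch_input_shape=(1, 30, 5), stateful=True))
model.add(Dense(1))                        # regression output, e.g. remaining lifetime
model.compile(loss='mse', optimizer='adam')

for epoch in range(10):
    for seq, target in training_pairs:     # hypothetical list of ((s_i, 30, 5) array, label) pairs
        for window in seq:                 # feed the windows of one process in order
            model.train_on_batch(window[None, ...], np.array([[target]]))
        model.reset_states()               # state must not leak between different processes
```

The key design point is the reset_states() call at each sequence boundary: statefulness carries the hidden state across the 30-step windows of one process, not across processes.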

How to implement LSTM network with vector input in each time step?

浪子不回头ぞ submitted on 2019-12-10 14:53:03
Question: I am trying to create a generative LSTM network in TensorFlow. I have input vectors like this: [[0 0 1 0 ... 1 0] [0 0 1 0 ... 1 0] ... [0 0 0 1 ... 0 1]] Each vector in this matrix is one time step; in other words, each vector should be one input to the LSTM. The outputs would be the same, except shifted by one time step to the right (I am trying to predict the next output). Then I have a list of these matrices, say five of them: that is one batch. And finally I have a list of those batches…
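A sketch of one way to wire this shape up (the sizes are invented, and Keras layers are used in place of raw TensorFlow ops; sigmoid outputs are chosen because each input vector contains several 1s, i.e. is multi-hot rather than one-hot):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

timesteps, vocab = 20, 50                      # assumed dimensions
x = np.random.randint(0, 2, (5, timesteps, vocab)).astype('float32')  # one batch of five matrices
y = np.roll(x, -1, axis=1)                     # targets = inputs shifted one step (last step wraps)

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(timesteps, vocab)))
model.add(TimeDistributed(Dense(vocab, activation='sigmoid')))  # one output vector per input step
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(x, y, epochs=1, batch_size=5)
```

return_sequences=True is what makes the LSTM emit one prediction per time step instead of a single summary vector.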

Keras - Text Classification - LSTM - How to input text?

谁说我不能喝 submitted on 2019-12-10 14:24:39
Question: I'm trying to understand how to use an LSTM to classify a certain dataset that I have. I researched and found this Keras IMDB example: https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py However, I'm confused about how the dataset must be processed for input. I know Keras has text pre-processing methods, but I'm not sure which to use. The x contains n lines of text and the y classifies each text by happiness/sadness; basically, 1.0 means 100% happy and 0.0 means totally sad.
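The standard path uses keras.preprocessing's Tokenizer and pad_sequences; a sketch where the vocabulary size, sequence length, and the two example texts are all assumptions:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

texts = ["what a wonderful day", "this is awful"]   # stand-ins for the real x
labels = np.array([1.0, 0.0])                       # the happiness scores y

tokenizer = Tokenizer(num_words=20000)              # keep the 20k most frequent words
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=80)

model = Sequential()
model.add(Embedding(20000, 128, input_length=80))   # word indices -> dense vectors
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))           # outputs 0.0 (sad) .. 1.0 (happy)
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(x, labels, epochs=3)
```

Tokenizer turns each text into a list of integer word indices, and pad_sequences makes every list the same length so the batch forms a rectangular tensor.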

Why does my keras LSTM model get stuck in an infinite loop?

℡╲_俬逩灬. submitted on 2019-12-10 12:31:15
Question: I am trying to build a small LSTM that can learn to write code (even if it's garbage code) by training it on existing Python code. I have concatenated a few thousand lines of code from several hundred files into one file, with each file ending in <eos> to signify "end of sequence". As an example, my training file looks like: setup(name='Keras', ... ], packages=find_packages()) <eos> import pyux ... with open('api.json', 'w') as f: json.dump(sign, f) <eos> I am creating tokens from…
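One hedged way to turn such a corpus into finite, fixed-size training windows (the file name, the whitespace tokenisation, and the window length are assumptions); iterating over a bounded range like this is also the usual safeguard against a data pipeline that never terminates:

```python
import numpy as np

with open('corpus.txt') as f:              # assumed corpus file name
    tokens = f.read().split()              # crude whitespace tokenisation; '<eos>' stays one token

vocab = sorted(set(tokens))
idx = {t: i for i, t in enumerate(vocab)}  # token -> integer id

seq_len = 40                               # assumed window length
X, y = [], []
for i in range(len(tokens) - seq_len):     # finite range, so the loop is guaranteed to end
    X.append([idx[t] for t in tokens[i:i + seq_len]])
    y.append(idx[tokens[i + seq_len]])     # target: the token that follows the window
X, y = np.array(X), np.array(y)
```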

Python - RNN LSTM model low accuracy

删除回忆录丶 submitted on 2019-12-10 12:09:47
Question: I have tried to build an LSTM model with this sample of the dataset (the columns are patient number, time in milliseconds, normalised X, Y and Z, kurtosis, skewness, pitch, roll and yaw, and label, respectively):
1,15,-0.248010047716,0.00378335508419,-0.0152548459993,-86.3738760481,0.872322164158,-3.51314800063,0
1,31,-0.248010047716,0.00378335508419,-0.0152548459993,-86.3738760481,0.872322164158,-3.51314800063,0
1,46,-0.267422664673,0.0051143782875,-0.0191247001961,-85.7662354031,1.0928406847,-4.08015176908,0
1,62,-0…
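A sketch of how rows like these are often windowed per patient before an LSTM sees them (the file name, window length, and feature-column slice are assumptions):

```python
import numpy as np
import pandas as pd

df = pd.read_csv('motion.csv', header=None)    # assumed file name; last column is the label
window = 50                                    # assumed window length

X, y = [], []
for _, g in df.groupby(0):                     # column 0 = patient, so windows never cross patients
    feats = g.iloc[:, 2:-1].values             # drop patient id and timestamp, keep sensor features
    labels = g.iloc[:, -1].values
    for i in range(len(g) - window):
        X.append(feats[i:i + window])
        y.append(labels[i + window])           # label at the window's end
X, y = np.array(X), np.array(y)                # X: (samples, 50, n_features), ready for an LSTM
```

Dropping the patient id and raw timestamp from the feature matrix, and keeping windows within one patient, are two of the usual first checks when accuracy is unexpectedly low on data like this.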

Testing the Keras sentiment classification with model.predict

百般思念 submitted on 2019-12-10 11:09:11
Question: I have trained imdb_lstm.py on my PC. Now I want to test the trained network by inputting some text of my own. How do I do it? Thank you! Answer 1: What you basically need to do is the following. Tokenize sequences: convert the string into words (features), for example "hello my name is georgio" into ["hello", "my", "name", "is", "georgio"]. Next, you want to remove stop words (check Google for what stop words are). This stage is optional; it may lead to faulty results, but I think it is worth a try…
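For the IMDB example specifically, the model expects the word indices used by keras.datasets.imdb; a sketch (the saved-model file name and maxlen are assumptions, the index offset is the convention that dataset loader uses):

```python
from keras.datasets import imdb
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences

model = load_model('imdb_lstm.h5')           # assumed: the trained network saved to disk
word_index = imdb.get_word_index()

text = "this movie was surprisingly good"
# the imdb loader shifts all indices by 3 (0=padding, 1=start, 2=out-of-vocabulary)
encoded = [word_index.get(w, -1) + 3 for w in text.lower().split()]
x = pad_sequences([encoded], maxlen=80)      # maxlen must match what training used
print(model.predict(x))                      # probability that the review is positive
```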

keras cnn_lstm input layer not accepting 1-D input

南楼画角 submitted on 2019-12-10 10:23:55
Question: I have sequences of long 1-D vectors (3000 digits) that I am trying to classify. I have previously implemented a simple CNN to classify them with relative success:

```python
def create_shallow_model(shape, repeat_length, stride):
    model = Sequential()
    model.add(Conv1D(75, repeat_length, strides=stride, padding='same',
                     input_shape=shape, activation='relu'))
    model.add(MaxPooling1D(repeat_length))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=…
```
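A cnn_lstm input layer expects an extra time axis, so one hedged adaptation is to cut each 3000-step vector into sub-sequences and wrap the convolutional part in TimeDistributed (the 30 × 100 split and the filter sizes are assumptions):

```python
from keras.models import Sequential
from keras.layers import TimeDistributed, Conv1D, MaxPooling1D, Flatten, LSTM, Dense

# Reshape each (3000, 1) vector to (30, 100, 1) before calling fit:
# X = X.reshape(-1, 30, 100, 1)
model = Sequential()
model.add(TimeDistributed(Conv1D(75, 10, padding='same', activation='relu'),
                          input_shape=(30, 100, 1)))  # the CNN sees one 100-step chunk at a time
model.add(TimeDistributed(MaxPooling1D(2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(64))                                   # the LSTM runs over the 30 chunk summaries
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
```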

How to structure an LSTM neural network for classification

让人想犯罪 __ submitted on 2019-12-10 09:25:25
Question: I have data containing various conversations between two people. Each sentence has some type of classification. I am attempting to use an NLP net to classify each sentence of a conversation. I tried a convolutional net and got decent results (not groundbreaking, though). I figured that since this is a back-and-forth conversation, an LSTM net may produce better results, because what was previously said may have a large impact on what follows. If I follow the structure above, I would assume that I am…
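One hedged way to express "one label per sentence, with context from the earlier sentences" in Keras (the sentence count, encoding size, and class count are invented):

```python
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

# Input: one conversation as a sequence of pre-computed sentence vectors,
# here 20 sentences of 300 dimensions each (both numbers assumed).
model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(20, 300)))
model.add(TimeDistributed(Dense(6, activation='softmax')))  # one class label per sentence
model.compile(loss='categorical_crossentropy', optimizer='adam')
```

return_sequences=True keeps one hidden state per sentence, so each sentence's prediction is conditioned on everything said before it in the conversation.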

Bi-directional LSTM for variable-length sequence in Tensorflow

十年热恋 submitted on 2019-12-10 04:33:56
Question: I want to train a bi-directional LSTM in TensorFlow to perform a sequence classification problem (sentiment classification). Because sequences are of variable length, batches are normally padded with zero vectors. Normally, I use the sequence_length parameter in the uni-directional RNN to avoid training on the padding vectors. How can this be managed with a bi-directional LSTM? Does the sequence_length parameter automatically start from an advanced position in the sequence for the…
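In graph-mode TensorFlow, tf.nn.bidirectional_dynamic_rnn accepts the same sequence_length argument, and the backward pass reverses only the valid prefix of each sequence; a sketch with assumed sizes:

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, None, 128])   # (batch, max_time, features), assumed sizes
seq_len = tf.placeholder(tf.int32, [None])               # true (unpadded) length of each sequence

fw = tf.nn.rnn_cell.LSTMCell(64)
bw = tf.nn.rnn_cell.LSTMCell(64)
# With sequence_length set, the backward cell reverses only the first seq_len[i]
# steps of sequence i, so neither direction is ever trained on the zero padding.
outputs, states = tf.nn.bidirectional_dynamic_rnn(
    fw, bw, inputs, sequence_length=seq_len, dtype=tf.float32)
```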

Reading notes on "A Deep Hierarchical Network Model for Aspect-Based Sentiment Analysis"

余生长醉 submitted on 2019-12-10 00:45:12
A deep hierarchical model combining regional convolutional neural networks with hierarchical LSTM networks to tackle aspect-specific sentiment polarity analysis, mining the long-distance dependencies of a specific aspect across the whole review. Hierarchical attention mechanisms at the word level and the sentence level identify the sentiment polarity of the different aspects in a sentence more effectively. The training framework for a sentence to be classified consists of three main parts (a loose sketch follows below):
(1) Regional CNN: the sentence is split by target words into different fixed-length regions, one region per regional CNN, which extracts the local feature information of each region.
(2) Word-level LSTM: the aspect-specific vector, combined with the hidden-layer output, serves as the sequential input to the word-level LSTM network.
(3) Sentence-level LSTM: the combined outputs of the regional CNNs and the word-level LSTM network serve as the input to the sentence-level LSTM.
Source: CSDN Author: TtingZh Link: https://blog.csdn.net/t_zht/article/details/103461013
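A very loose Keras sketch of that three-part structure (all sizes are invented, and plain LSTM layers stand in for the paper's aspect-conditioned, attention-equipped ones, which are omitted here):

```python
from keras.models import Model
from keras.layers import (Input, TimeDistributed, Conv1D, GlobalMaxPooling1D,
                          LSTM, Dense, concatenate)

n_regions, region_len, dim = 4, 30, 300              # assumed sizes
regions = Input(shape=(n_regions, region_len, dim))  # sentence split into fixed-length regions

# (1) a regional CNN extracts local features from every region
conv = TimeDistributed(Conv1D(100, 3, activation='relu'))(regions)
cnn_feat = TimeDistributed(GlobalMaxPooling1D())(conv)   # -> (n_regions, 100)

# (2) a word-level LSTM summarises the word sequence of each region
word_feat = TimeDistributed(LSTM(100))(regions)          # -> (n_regions, 100)

# (3) the sentence-level LSTM runs over the combined region features
sent = LSTM(100)(concatenate([cnn_feat, word_feat]))
out = Dense(3, activation='softmax')(sent)               # sentiment polarity classes
model = Model(regions, out)
model.compile(loss='categorical_crossentropy', optimizer='adam')
```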