Creating sequence vector from text in Python


Question


I am trying to prepare input data for an LSTM-based NN. I have a large number of text documents, and I want to build a sequence vector for each document so that I can feed them as training data to an LSTM RNN.

My naive approach:

import re
import numpy as np

# raw data
train_docs = ['this is text number one', 'another text that i have']

# put all docs together into one string
train_data = ''
for val in train_docs:
    train_data += ' ' + val

# collect the unique tokens (Latin/Cyrillic letters and digits)
tokens = np.unique(re.findall('[a-zа-я0-9]+', train_data.lower()))

# map each token to an integer index
voc = {v: k for k, v in dict(enumerate(tokens)).items()}

and then brute-force replace each word in each document with its index from the voc dict (a sketch of that step follows).
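
For completeness, a minimal sketch of that replacement step (the helper doc_to_sequence is my name, not from the question):

def doc_to_sequence(doc, voc):
    # replace each token with its integer index from the vocabulary
    return [voc[w] for w in re.findall('[a-zа-я0-9]+', doc.lower())]

X_train = [doc_to_sequence(doc, voc) for doc in train_docs]
# e.g. [[8, 3, 6, 4, 5], [0, 6, 7, 2, 1]] with the vocabulary built above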

Are there any libraries that can help with this task?


Answer 1:


Solved with Keras's text preprocessing classes: http://keras.io/preprocessing/text/

Done like this:

from keras.preprocessing.text import Tokenizer, text_to_word_sequence

train_docs = ['this is text number one', 'another text that i have']

# build the vocabulary from the training documents
tknzr = Tokenizer(lower=True, split=" ")
tknzr.fit_on_texts(train_docs)

# word -> integer index mapping (indices start at 1, ordered by word frequency)
print(tknzr.word_index)

Out[1]:
{'this': 2, 'is': 3, 'one': 4, 'another': 9, 'i': 5, 'that': 6, 'text': 1, 'number': 8, 'have': 7}

#making sequences:
X_train = tknzr.texts_to_sequences(train_docs)
print(X_train)

Out[2]:
[[2, 3, 1, 8, 4], [9, 1, 6, 5, 7]]
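
Since an LSTM typically expects fixed-length inputs, a common next step (not part of the original answer) is to pad these sequences with Keras's pad_sequences:

from keras.preprocessing.sequence import pad_sequences

# pad (or truncate) every sequence to the same length;
# maxlen=10 is an arbitrary choice for this sketch
X_train_padded = pad_sequences(X_train, maxlen=10, padding='pre', value=0)
print(X_train_padded.shape)  # (2, 10)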



Answer 2:


You could use NLTK to tokenise the training documents. NLTK provides a standard word tokeniser and also lets you define your own tokeniser (e.g. RegexpTokenizer; a short sketch of that follows). See the NLTK tokenize documentation for more details about the different tokeniser functions available.
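
For example, a minimal RegexpTokenizer sketch (the pattern \w+ is my choice, not from the original answer):

from nltk.tokenize import RegexpTokenizer

# keep runs of word characters, dropping punctuation
tokenizer = RegexpTokenizer(r'\w+')
print(tokenizer.tokenize('this is text number one'))
# ['this', 'is', 'text', 'number', 'one']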

NLTK's documentation may also be helpful for pre-processing the text.

A quick demo using NLTK's pre-trained word tokeniser is below:

from nltk import word_tokenize

train_docs = ['this is text number one', 'another text that i have']
# join all documents into a single string (they are already strings, so no str() is needed)
train_docs = ' '.join(train_docs)

# tokenise the combined text
tokens = word_tokenize(train_docs)

# map each token to an integer index
# (note: a duplicated token keeps only its last index here)
voc = {v: k for k, v in dict(enumerate(tokens)).items()}
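
To then turn each document into the sequence vectors the question asks for, you could look each token up in voc (a sketch; the docs variable and the comprehension are mine, not from the original answer):

docs = ['this is text number one', 'another text that i have']

# replace every token with its integer index from voc
X_train = [[voc[w] for w in word_tokenize(doc)] for doc in docs]
print(X_train)
# [[0, 1, 6, 3, 4], [5, 6, 7, 8, 9]]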


Source: https://stackoverflow.com/questions/38302280/creating-sequence-vector-from-text-in-python
