How to convert predicted sequence back to text in keras?

Submitted by 天涯浪子 on 2019-11-30 01:13:46

Question


I have a sequence-to-sequence learning model which works fine and is able to predict some outputs. The problem is that I have no idea how to convert the output back to a text sequence.

This is my code.

from keras.preprocessing.text import Tokenizer, base_filter
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense

txt1="""What makes this problem difficult is that the sequences can vary in length,
be comprised of a very large vocabulary of input symbols and may require the model 
to learn the long term context or dependencies between symbols in the input sequence."""

#txt1 is used for fitting 
tk = Tokenizer(nb_words=2000, filters=base_filter(), lower=True, split=" ")
tk.fit_on_texts(txt1)

# convert the text to sequences
t = tk.texts_to_sequences(txt1)

# pad the sequences to feed them to the Keras model
t = pad_sequences(t, maxlen=10)

model = Sequential()
model.add(Dense(10, input_dim=10))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# predict new sequences
pred=model.predict(t)

#Convert predicted sequence to text
pred=??

Answer 1:


Here is a solution I found: reverse tokenizer.word_index to map indices back to their words.

reverse_word_map = dict(map(reversed, tokenizer.word_index.items()))
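For example, assuming you have a list of predicted token indices (pred_sequence below is a hypothetical placeholder), you could map it back through this dictionary:

# Hypothetical sketch: decode one predicted list of indices
pred_sequence = [1, 2, 3]  # placeholder indices
decoded_words = [reverse_word_map.get(i, '') for i in pred_sequence]
print(' '.join(decoded_words))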



Answer 2:


I had to solve the same problem, so here is how I ended up doing it (inspired by @Ben Usemans' reversed dictionary).

# Importing library
from keras.preprocessing.text import Tokenizer

# My texts
texts = ['These are two crazy sentences', 'that I want to convert back and forth']

# Creating a tokenizer
tokenizer = Tokenizer(lower=True)

# Building word indices
tokenizer.fit_on_texts(texts)

# Tokenizing sentences
sentences = tokenizer.texts_to_sequences(texts)

>sentences
>[[1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11, 12, 13]]

# Creating a reverse dictionary
reverse_word_map = dict(map(reversed, tokenizer.word_index.items()))

# Function takes a tokenized sentence and returns the words
def sequence_to_text(list_of_indices):
    # Look up each index in the reverse dictionary
    words = [reverse_word_map.get(index) for index in list_of_indices]
    return words

# Creating texts 
my_texts = list(map(sequence_to_text, sentences))

>my_texts
>[['these', 'are', 'two', 'crazy', 'sentences'], ['that', 'i', 'want', 'to', 'convert', 'back', 'and', 'forth']]
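If you want plain strings rather than lists of words, you could join each result (a small follow-up sketch):

# Joining the recovered words back into sentences
my_strings = [' '.join(words) for words in my_texts]

>my_strings
>['these are two crazy sentences', 'that i want to convert back and forth']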



Answer 3:


You can directly use the inverse function tokenizer.sequences_to_texts:

text = tokenizer.sequences_to_texts(<list of the integer equivalent encodings>)

I have tested the above and it works as expected.

PS: Take extra care that the argument is the list of integer encodings, not the one-hot ones.
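For instance, a minimal round trip might look like this (a sketch with illustrative names):

from keras.preprocessing.text import Tokenizer

texts = ['These are two crazy sentences']
tokenizer = Tokenizer(lower=True)
tokenizer.fit_on_texts(texts)

# Encode to integer sequences, then decode straight back
sequences = tokenizer.texts_to_sequences(texts)  # [[1, 2, 3, 4, 5]]
decoded = tokenizer.sequences_to_texts(sequences)
print(decoded)  # ['these are two crazy sentences']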




Answer 4:


You can build a dictionary that maps each index back to its character.

index_word = {v: k for k, v in tk.word_index.items()} # map back
seqs = tk.texts_to_sequences(txt1)
words = []
for seq in seqs:
    if len(seq):
        words.append(index_word.get(seq[0]))
    else:
        words.append(' ')
print(''.join(words)) # output

>>> 'what makes this problem difficult is that the sequences can vary in length  
>>> be comprised of a very large vocabulary of input symbols and may require the model  
>>> to learn the long term context or dependencies between symbols in the input sequence '

However, in the question you're trying to use a sequence of characters to predict an output of 10 classes, which is not a sequence-to-sequence model. In that case, you cannot simply turn the prediction (or pred.argmax(axis=1)) back into a sequence of characters.
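To make that concrete: with the model above, pred holds one softmax distribution over 10 classes per padded input row, so the most you can recover is a class label per sample, not text (sketch):

# pred has shape (num_samples, 10): one class distribution per row
class_ids = pred.argmax(axis=1)  # one class label per sample
# class_ids are class labels, not token indices, so looking them up
# in index_word would not reconstruct the input text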



Source: https://stackoverflow.com/questions/41971587/how-to-convert-predicted-sequence-back-to-text-in-keras
