How to have an LSTM Autoencoder model predict over the whole vocabulary while representing words as embeddings?
Question: So I have been working on an LSTM Autoencoder model, and I have created various versions of it.

1. Create the model using already-trained word embeddings: in this scenario, I used the weights of pre-trained GloVe vectors as the weights of the features (the text data). This is the structure:

    inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name="input")
    encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)
    encoded = Lambda(rev_entropy)(encoded)
    decoded =
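Since a decoder trained against embedded targets outputs vectors in embedding space rather than a distribution over the vocabulary, one way to recover word predictions is to score each decoded vector against every row of the embedding matrix (e.g. by cosine similarity) and take the argmax. A minimal numpy sketch of that lookup, assuming a hypothetical `embedding_matrix` of shape `(VOCAB_SIZE, EMBED_SIZE)` (the names here are illustrative, not from the original code):

```python
import numpy as np

def decode_to_vocab(decoded_vectors, embedding_matrix):
    """Map decoder outputs of shape (seq_len, embed_size) back to vocab ids
    by cosine similarity against every row of the embedding matrix."""
    # Normalize rows so the dot product equals cosine similarity.
    emb = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    dec = decoded_vectors / np.linalg.norm(decoded_vectors, axis=1, keepdims=True)
    scores = dec @ emb.T          # shape (seq_len, vocab_size)
    return scores.argmax(axis=1)  # best-matching vocab id per timestep

# Toy example: a 4-word vocabulary with 3-dimensional embeddings.
embedding_matrix = np.array([[1., 0., 0.],
                             [0., 1., 0.],
                             [0., 0., 1.],
                             [1., 1., 0.]])
decoded = np.array([[0.9, 0.1, 0.0],   # nearest to word 0
                    [0.1, 0.1, 0.8]])  # nearest to word 2
print(decode_to_vocab(decoded, embedding_matrix))  # → [0 2]
```

The alternative, if you want a true probability over the whole vocabulary, is to end the decoder with a `Dense(VOCAB_SIZE, activation="softmax")` layer and train against integer word targets instead of embedded targets.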