PyTorch LSTM - using word embeddings instead of nn.Embedding()


Question


Is nn.Embedding() essential for an LSTM to learn?

I am using an LSTM in PyTorch for NER (named entity recognition); an example of a similar task is here: https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html

Code-wise, I am using code almost identical to the code in that tutorial.

The only difference is that I am using word2vec vectors instead of nn.Embedding().

So I removed the nn.Embedding() layer and pass the word2vec features to the forward function directly. The LSTM does not learn.
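Roughly, the change I made looks like this (a sketch only; the class mirrors the tutorial's LSTMTagger, but forward() receives pre-computed word2vec vectors instead of token ids):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMTagger(nn.Module):
    """Variant of the tutorial model without nn.Embedding:
    forward() takes pre-computed word vectors directly."""
    def __init__(self, embedding_dim, hidden_dim, tagset_size):
        super().__init__()
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, word_vectors):
        # word_vectors: (seq_len, embedding_dim) tensor built from word2vec
        lstm_out, _ = self.lstm(word_vectors.view(len(word_vectors), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(word_vectors), -1))
        return F.log_softmax(tag_space, dim=1)
```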

Hence the question: is nn.Embedding() essential for an LSTM to learn?


Answer 1:


nn.Embedding provides an embedding layer for you.

This means that the layer takes your word token ids and converts these to word vectors.

You can learn the weights for your nn.Embedding layer during the training process, or you can alternatively load pre-trained embedding weights.

When you want to use a pre-trained word2vec (embedding) model, you just load the pre-trained weights into the nn.Embedding layer.
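A minimal sketch of that, assuming `pretrained_weights` is a (vocab_size, embedding_dim) array of word2vec vectors where row i holds the vector for token id i (that array is an assumption here, not something from the question):

```python
import torch
import torch.nn as nn

# pretrained_weights: (vocab_size, embedding_dim) word2vec matrix, assumed given
weights = torch.FloatTensor(pretrained_weights)

# freeze=True keeps the vectors fixed; set freeze=False to fine-tune them
embedding = nn.Embedding.from_pretrained(weights, freeze=True)

token_ids = torch.LongTensor([2, 17, 5])   # example sentence as token ids
vectors = embedding(token_ids)             # shape: (3, embedding_dim)
```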

You can take a look at how to load a word2vec embedding layer using the gensim library; a sketch is shown below.
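For example (a sketch using gensim 4.x's KeyedVectors API; the file name and the word "dog" are just placeholders):

```python
import torch
import torch.nn as nn
from gensim.models import KeyedVectors

# Placeholder path for whatever pre-trained word2vec file you have
kv = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

# kv.vectors is the (vocab_size, embedding_dim) matrix of word vectors
embedding = nn.Embedding.from_pretrained(torch.FloatTensor(kv.vectors))

# Map a word to its row index, then look it up through the layer
idx = torch.LongTensor([kv.key_to_index["dog"]])
print(embedding(idx).shape)   # torch.Size([1, embedding_dim])
```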

I hope this helps.



Source: https://stackoverflow.com/questions/50340016/pytorch-lstm-using-word-embeddings-instead-of-nn-embedding
