Question
I'm trying to train an NER model using spaCy to identify locations, person names, and organisations. I'm trying to understand how spaCy recognises entities in text, and I've not been able to find an answer. From this issue on GitHub and this example, it appears that spaCy uses a number of features present in the text, such as POS tags, prefixes, suffixes, and other character- and word-based features, to train an Averaged Perceptron.
However, nowhere in the code does it appear that spaCy uses the GloVe embeddings (although each word in the sentence/document appears to have them, if present in the GloVe corpus).
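For what it's worth, this is roughly how I've been confirming that the vectors are there (a minimal sketch, assuming a model that ships with vectors, such as en_core_web_md, is installed):

```python
import spacy

# Needs a model that ships with vectors, e.g. en_core_web_md;
# the small 'sm' models carry no pretrained word vectors.
nlp = spacy.load("en_core_web_md")

doc = nlp("Apple is opening an office in Amsterdam.")
for token in doc:
    # has_vector is True when the word is backed by a row in the vector table
    print(token.text, token.has_vector, token.vector_norm)
```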
My questions are:
- Are these embeddings used in the NER system now?
- If I were to switch out the word vectors for a different set, should I expect performance to change in a meaningful way?
- Where in the code can I find out how (if at all) spaCy is using the word vectors?
I've tried looking through the Cython code, but I'm not able to understand whether the labelling system uses word embeddings.
Answer 1:
spaCy does use word embeddings for its NER model, which is a multilayer CNN. There's quite a nice video that Matthew Honnibal, the creator of spaCy, made about how its NER works here. All three English models use GloVe vectors trained on Common Crawl, but the smaller models "prune" the number of vectors by having similar words mapped to the same vector.
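You can see the pruning directly by comparing the number of keys in the vector table with the number of distinct rows (a rough sketch, assuming spaCy v2+ and a pruned model such as en_core_web_md):

```python
import spacy

nlp = spacy.load("en_core_web_md")

vectors = nlp.vocab.vectors
rows, dims = vectors.shape
# In a pruned model the vocabulary keys outnumber the distinct rows,
# because several similar words are mapped onto the same vector.
print(f"{vectors.n_keys} keys mapped onto {rows} vectors of {dims} dimensions")
```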
It's quite doable to add custom vectors. There's an overview of the process in the spaCy docs, plus some example code on GitHub.
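As a minimal sketch of the programmatic route (assuming spaCy v2 or later; the toy words and 5-dimensional vectors below are placeholders), you can attach vectors with Vocab.set_vector:

```python
import numpy as np
import spacy

nlp = spacy.blank("en")  # bare pipeline with an empty vector table

# Toy 5-dimensional vectors; in practice you'd read these from a
# GloVe/fastText/word2vec file instead.
custom_vectors = {
    "apple":  np.array([0.1, 0.2, 0.3, 0.4, 0.5], dtype="float32"),
    "banana": np.array([0.5, 0.4, 0.3, 0.2, 0.1], dtype="float32"),
}
for word, vector in custom_vectors.items():
    nlp.vocab.set_vector(word, vector)

doc = nlp("apple banana")
print(doc[0].has_vector, doc[0].similarity(doc[1]))
```

For loading a whole vectors file into a packaged model, the docs linked above also cover a command-line route.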
Source: https://stackoverflow.com/questions/44492430/how-does-spacy-use-word-embeddings-for-named-entity-recognition-ner