Implementing a custom POS tagger in spaCy over the existing English model : NLP - Python

Submitted by 帅比萌擦擦* on 2019-12-08 03:00:04

Question


I am trying to retrain the existing POS tagger in spaCy so that it assigns the proper tags to certain misclassified words, using the code below. But it gives me this error:

Warning: Unnamed vectors -- this won't allow multiple vectors models to be loaded. (Shape: (0, 0))

import spacy
from spacy.vocab import Vocab
from spacy.tokens import Doc
from spacy.gold import GoldParse


nlp = spacy.load('en_core_web_sm')
optimizer = nlp.begin_training()
vocab = Vocab(tag_map={})
doc = Doc(vocab, words=['ThermostatFailedOpen', 'ThermostatFailedClose', 'BlahDeBlah'])
gold = GoldParse(doc, tags=['NNP'] * 3)
nlp.update([doc], [gold], drop=0, sgd=optimizer)

Also, when I try to check whether the tags are now assigned correctly, using the code below:

doc = nlp('If ThermostatFailedOpen moves from false to true, we are going to party')
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)

ThermostatFailedOpen thermostatfailedopen VERB VB nsubj XxxxxXxxxxXxxx True False

the words are still not classified correctly (as expected, I guess)! Any insights on how to fix this?


Answer 1:


#!/usr/bin/env python
# coding: utf8


import random
from pathlib import Path
import spacy


# You need to define a mapping from your data's part-of-speech tag names to the
# Universal Part-of-Speech tag set, as spaCy includes an enum of these tags.
# See here for the Universal Tag Set:
# http://universaldependencies.github.io/docs/u/pos/index.html
# You may also specify morphological features for your tags, from the universal
# scheme.
TAG_MAP = {
    'N': {'pos': 'NOUN'},
    'V': {'pos': 'VERB'},
    'J': {'pos': 'ADJ'}
}

# Usually you'll read this in, of course. Data formats vary. Ensure your
# strings are unicode and that the number of tags assigned matches spaCy's
# tokenization. If not, you can always add a 'words' key to the annotations
# that specifies the gold-standard tokenization, e.g.:
# ("Eatblueham", {'words': ['Eat', 'blue', 'ham'], 'tags': ['V', 'J', 'N']})

TRAIN_DATA = [
    ("ThermostatFailedOpen", {'tags': ['V']}),
    ("ThermostatFailedClose", {'tags': ['V']})
]


def main(lang='en', output_dir=None, n_iter=25):
    """Create a new model, set up the pipeline and train the tagger. In order to
    train the tagger with a custom tag map, we're creating a new Language
    instance with a custom vocab.
    """
    nlp = spacy.blank(lang)
    # add the tagger to the pipeline
    # nlp.create_pipe works for built-ins that are registered with spaCy
    tagger = nlp.create_pipe('tagger')
    # Add the tags. This needs to be done before you start training.
    for tag, values in TAG_MAP.items():
        tagger.add_label(tag, values)
    nlp.add_pipe(tagger)
    nlp.vocab.vectors.name = 'spacy_pretrained_vectors'
    optimizer = nlp.begin_training()
    for i in range(n_iter):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, losses=losses)
        print(losses)

    # test the trained model
    test_text = "If ThermostatFailedOpen moves from false to true, we are going to party"
    doc = nlp(test_text)
    print('Tags', [(t.text, t.tag_, t.pos_) for t in doc])

    # save model to output directory
    if output_dir is not None:
        output_dir = Path(output_dir)
        if not output_dir.exists():
            output_dir.mkdir()
        nlp.to_disk(output_dir)
        print("Saved model to", output_dir)

        # test the save model
        print("Loading from", output_dir)
        nlp2 = spacy.load(output_dir)
        doc = nlp2(test_text)
        print('Tags', [(t.text, t.tag_, t.pos_) for t in doc])


if __name__ == '__main__':
    main('en', 'customPOS')

NOTE: you will get the following error if you instead try to add new labels to the pre-trained tagger:

 File "pipeline.pyx", line 550, in spacy.pipeline.Tagger.add_label
ValueError: [T003] Resizing pre-trained Tagger models is not currently supported.

Initially I tried this:

nlp = spacy.load('en_core_web_sm')

tagger = nlp.get_pipe('tagger')
# Add the tags. This needs to be done before you start training.
for tag, values in TAG_MAP.items():
    tagger.add_label(tag, values)

other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'tagger']
with nlp.disable_pipes(*other_pipes):  # only train TAGGER
    nlp.vocab.vectors.name = 'spacy_pretrained_vectors'
    optimizer = nlp.begin_training()
    for i in range(n_iter):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, losses=losses)
        print(losses)



Answer 2:


If you are using the same set of labels and just need to train the tagger further, there is no need to add new labels. However, if you are using a different label set, you need to train a new tagger.

For the first case, call nlp.get_pipe('tagger'), skip the add_label loop, and continue with the training loop as above.

For the second case, you need to create a new tagger, train it, and then add it to the pipeline. For this you will also need to disable the existing tagger when loading the model (since you will be training a new one). I've also answered this here



Source: https://stackoverflow.com/questions/51715439/implementing-custom-pos-tagger-in-spacy-over-existing-english-model-nlp-pyth
