Question
I'm trying to integrate a custom PhraseMatcher() component into my nlp pipeline in a way that will allow me to load the custom spaCy model without having to re-add my custom components to a generic model on each load.
How can I load a spaCy model containing custom pipeline components?
I create the component, add it to my pipeline and save it with the following:
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher
from spacy.tokens import Span

class RESTCountriesComponent(object):
    # Name under which the component appears in the pipeline (and in meta.json)
    name = 'countries'

    def __init__(self, nlp, label='GPE'):
        self.countries = [u'MyCountry', u'MyOtherCountry']
        self.label = nlp.vocab.strings[label]
        # Build PhraseMatcher patterns from the country names
        patterns = [nlp(c) for c in self.countries]
        self.matcher = PhraseMatcher(nlp.vocab)
        self.matcher.add('COUNTRIES', None, *patterns)

    def __call__(self, doc):
        # Match country names and add them to doc.ents
        matches = self.matcher(doc)
        spans = []
        for _, start, end in matches:
            entity = Span(doc, start, end, label=self.label)
            spans.append(entity)
        doc.ents = list(doc.ents) + spans
        for span in spans:
            span.merge()  # merge each matched span into a single token
        return doc

nlp = English()
rest_countries = RESTCountriesComponent(nlp)
nlp.add_pipe(rest_countries)
nlp.to_disk('myNlp')
I then attempt to load my model with:

import spacy

nlp = spacy.load('myNlp')

But I get this error message:
KeyError: u"[E002] Can't find factory for 'countries'. This usually happens when spaCy calls nlp.create_pipe with a component name that's not built in - for example, when constructing the pipeline from a model's meta.json. If you're using a custom component, you can write to Language.factories['countries'] or remove it from the model meta and add it via nlp.add_pipe instead."
I can't just add my custom components to a generic pipeline in my programming environment. How can I do what I'm trying to do?
Answer 1:
When you save out your model, spaCy will serialize all data and store a reference to your pipeline in the model's meta.json. For example: ["ner", "countries"]. When you load your model back in, spaCy will check out the meta and initialise each pipeline component by looking it up in the so-called "factories": functions that tell spaCy how to construct a pipeline component. (The reason for that is that you usually don't want your model to store and eval arbitrary code when you load it back in – at least not by default.)
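You can verify this by inspecting the saved model's meta directly. A quick check, assuming the model was saved to 'myNlp' as in the question:

import json

# The "pipeline" key lists the component names spaCy will try to reconstruct
with open('myNlp/meta.json') as f:
    meta = json.load(f)
print(meta['pipeline'])  # e.g. ['countries']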
In your case, spaCy is trying to look up the component name 'countries' in the factories and fails, because it's not built-in. The Language.factories are a simple dictionary, though, so you can customise it and add your own entries:
from spacy.language import Language
Language.factories['countries'] = lambda nlp, **cfg: RESTCountriesComponent(nlp, **cfg)
A factory is a function that receives the shared nlp object and optional keyword arguments (config parameters). It then initialises the component and returns it. If you add the above code before you load your model, it should load as expected.
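Putting it together, a minimal loading script might look like this (a sketch, assuming the RESTCountriesComponent class from the question is defined or importable in the same module):

import spacy
from spacy.language import Language

# Register the factory under the same name stored in the model's meta.json,
# so spaCy can reconstruct the 'countries' component on load
Language.factories['countries'] = lambda nlp, **cfg: RESTCountriesComponent(nlp, **cfg)

nlp = spacy.load('myNlp')
print(nlp.pipe_names)  # ['countries']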
More advanced approaches
If you want this taken care of automatically, you could also ship your component with your model. This requires wrapping it as a Python package using the spacy package command, which creates all required Python files. By default, the __init__.py only includes a function to load your model – but you can also add custom functions to it or use it to add entries to spaCy's factories.
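To sketch that idea (the .countries module is a hypothetical file you would add to the package; the load() function is what spacy package generates by default), the package's __init__.py could be extended like this:

from spacy.language import Language
from spacy.util import load_model_from_init_py

from .countries import RESTCountriesComponent  # hypothetical module shipped with the package

# Register the custom factory so spaCy can rebuild the 'countries' pipe on load
Language.factories['countries'] = lambda nlp, **cfg: RESTCountriesComponent(nlp, **cfg)

def load(**overrides):
    return load_model_from_init_py(__file__, **overrides)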
As of v2.1.0 (currently available as a nightly version for testing), spaCy will also support providing pipeline component factories via Python entry points. This is especially useful for production setups and/or if you want to modularise your individual components and split them into their own packages. For example, you could create a Python package for your countries component and its factory, upload it to PyPI, version it and test it separately. In its setup.py, your package can define the spaCy factories it exposes and where to find them. spaCy will be able to detect them automatically – all you need to do is install the package in the same environment. Your model package could even require your component package as a dependency so it's installed automatically when you install your model.
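For illustration, such a setup.py could declare the factory via the "spacy_factories" entry point group (the package and function names here are hypothetical):

from setuptools import setup

setup(
    name='spacy-countries',        # hypothetical package name
    version='0.0.1',
    packages=['spacy_countries'],
    entry_points={
        # spaCy v2.1+ scans this group for pipeline component factories
        'spacy_factories': [
            'countries = spacy_countries:create_countries_component',
        ],
    },
)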
Answer 2:
This same issue came up for me, and these are the steps I used:
- 1) Save the pipeline after running the notebook containing all the different nlp pipeline components, e.g. nlp.to_disk('pipeline_model_name')
- 2) Build a package from the saved pipeline with spaCy's spacy package command, then run python setup.py sdist in the generated directory
- 3) Pip install the created package
- 4) Put the custom components in the __init__.py file of the package, as instructed above
- 5) Load the pipeline (see the sketch after this list):
- import spacy
- nlp = spacy_package.load()
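In code, that final step might look like this (a sketch; pipeline_model_name stands in for whatever your installed package is actually called):

import pipeline_model_name  # hypothetical name of the installed model package

nlp = pipeline_model_name.load()
doc = nlp(u'A text mentioning MyCountry.')
print([(ent.text, ent.label_) for ent in doc.ents])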
Source: https://stackoverflow.com/questions/51412095/spacy-save-custom-pipeline