spacy adding special case tokenization rules by regular expression or pattern


Question


I want to add a special case for tokenization in spacy according to the documentation. The documentation shows how specific words can be treated as special cases. I want to be able to specify a pattern (e.g. a suffix) instead. For example, I have a string like this

text = "A sample string with <word-1> and <word-2>"

where <word-i> specifies a single word.

I know I can handle one special case at a time with the following code. But how can I specify a pattern instead?

import spacy
from spacy.symbols import ORTH
nlp = spacy.load('en', vectors=False,parser=False, entity=False) 
nlp.tokenizer.add_special_case(u'<WORD>', [{ORTH: u'<WORD>'}])
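For a finite set of known strings, the snippet above can simply be looped. A minimal sketch of that idea, assuming a current spaCy version where `spacy.blank('en')` replaces the old `spacy.load('en', vectors=False, ...)` call (no model download needed); this still does not cover arbitrary patterns:

```python
import spacy
from spacy.symbols import ORTH

# Tokenizer-only pipeline; special cases are a tokenizer feature.
nlp = spacy.blank('en')

# Register each known string as its own special case.
for word in ['<word-1>', '<word-2>']:
    nlp.tokenizer.add_special_case(word, [{ORTH: word}])

doc = nlp('A sample string with <word-1> and <word-2>')
print([t.text for t in doc])
# ['A', 'sample', 'string', 'with', '<word-1>', 'and', '<word-2>']
```

Without the special cases, the `<`, `>` and `-` characters would each be split off as punctuation.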

Answer 1:


You can use regex matches to find the bounds of your special-case strings, and then use spacy's merge method to merge each match into a single token. add_special_case only works on exact, predefined strings. Here is an example:

>>> import spacy
>>> import re
>>> nlp = spacy.load('en')
>>> my_str = u'Tweet hashtags #MyHashOne #MyHashTwo'
>>> parsed = nlp(my_str)
>>> [(x.text,x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#', u'NOUN'), (u'MyHashOne', u'NOUN'), (u'#', u'NOUN'), (u'MyHashTwo', u'PROPN')]
>>> indexes = [m.span() for m in re.finditer(r'#\w+',my_str,flags=re.IGNORECASE)]
>>> indexes
[(15, 25), (26, 36)]
>>> for start,end in indexes:
...     parsed.merge(start_idx=start,end_idx=end)
... 
#MyHashOne
#MyHashTwo
>>> [(x.text,x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#MyHashOne', u'NOUN'), (u'#MyHashTwo', u'PROPN')]
>>> 
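Note that `Doc.merge` was removed in spaCy v3; the same idea is written there with the `Doc.retokenize` context manager. A sketch under that assumption, using a blank English pipeline so no model download is required (hence no POS tags in the output):

```python
import re
import spacy

nlp = spacy.blank('en')
doc = nlp('Tweet hashtags #MyHashOne #MyHashTwo')

# Find character spans matching the pattern, then merge each span
# into a single token inside the retokenize() context.
with doc.retokenize() as retokenizer:
    for m in re.finditer(r'#\w+', doc.text):
        span = doc.char_span(m.start(), m.end())
        if span is not None:  # None when the match doesn't align to token bounds
            retokenizer.merge(span)

print([t.text for t in doc])
# ['Tweet', 'hashtags', '#MyHashOne', '#MyHashTwo']
```

`Doc.char_span` maps the regex's character offsets back to tokens, which avoids keeping a separate list of indexes.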


Source: https://stackoverflow.com/questions/44594759/spacy-adding-special-case-tokenization-rules-by-regular-expression-or-pattern
