Tokenizing an HTML document

Submitted by 元气小坏坏 on 2021-02-07 14:23:38

Question


I have an HTML document and I'd like to tokenize it using spaCy while keeping HTML tags as a single token. Here's my code:

import spacy
from spacy.symbols import ORTH
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

nlp.tokenizer.add_special_case('<i>', [{ORTH: '<i>'}])
nlp.tokenizer.add_special_case('</i>', [{ORTH: '</i>'}])

doc = nlp('Hello, <i>world</i> !')

print([e.text for e in doc])

The output is:

['Hello', ',', '<', 'i', '>', 'world</i', '>', '!']

If I put spaces around the tags, like this:

doc = nlp('Hello, <i> world </i> !')

The output is as I want it:

['Hello', ',', '<i>', 'world', '</i>', '!']

but I'd like to avoid complicated pre-processing of the HTML.
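The kind of pre-processing I mean would be a regex pass that pads the tags with spaces before tokenizing, roughly like this (a hypothetical sketch; the pad_tags helper and its simplistic tag pattern are mine, not part of any library):

import re

def pad_tags(text):
    # Insert spaces around simple tags such as <i> and </i>
    return re.sub(r'(</?\w+>)', r' \1 ', text)

doc = nlp(pad_tags('Hello, <i>world</i> !'))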

Any idea how I can approach this?


Answer 1:


You need to create a custom Tokenizer.

Your custom tokenizer will be exactly like spaCy's default tokenizer, except that the '<' and '>' symbols are removed from its prefix and suffix rules, and one new prefix rule and one new suffix rule are added for the tags. (Your add_special_case calls don't help here because a special case essentially only matches an entire whitespace-delimited chunk, which is why the version with spaces around the tags works.)

Code:

import spacy
from spacy.tokens import Token

# Custom attribute for marking tag tokens later (optional, not used below)
Token.set_extension('tag', default=False)

def create_custom_tokenizer(nlp):
    from spacy import util
    from spacy.tokenizer import Tokenizer
    from spacy.lang.tokenizer_exceptions import TOKEN_MATCH

    # Drop the bare '<' and '>' rules so the tags are not split apart,
    # and add one prefix rule and one suffix rule for the tags.
    prefixes = [p for p in nlp.Defaults.prefixes if p != '<'] + ['^<i>']
    suffixes = [s for s in nlp.Defaults.suffixes if s != '>'] + ['</i>$']

    prefix_search = util.compile_prefix_regex(prefixes).search
    suffix_search = util.compile_suffix_regex(suffixes).search
    infix_finditer = util.compile_infix_regex(nlp.Defaults.infixes).finditer

    return Tokenizer(nlp.vocab,
                     rules=nlp.Defaults.tokenizer_exceptions,
                     prefix_search=prefix_search,
                     suffix_search=suffix_search,
                     infix_finditer=infix_finditer,
                     token_match=TOKEN_MATCH)



nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = create_custom_tokenizer(nlp)
doc = nlp('Hello, <i>world</i> !')
print([e.text for e in doc])
# ['Hello', ',', '<i>', 'world', '</i>', '!']
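The rules above only handle <i> and </i>. To keep arbitrary tags whole, the same construction works with a regex instead of literal strings. Here is a minimal sketch, assuming a simplistic tag pattern r'</?\w+>' (my assumption; it covers tags like <b> and </i> but not tags with attributes):

import spacy
from spacy import util
from spacy.tokenizer import Tokenizer

TAG = r'</?\w+>'  # hypothetical pattern: simple tags without attributes

def create_tag_tokenizer(nlp):
    # Same idea as above, with one regex covering any simple tag
    prefixes = [p for p in nlp.Defaults.prefixes if p != '<'] + [TAG]
    suffixes = [s for s in nlp.Defaults.suffixes if s != '>'] + [TAG]
    return Tokenizer(nlp.vocab,
                     rules=nlp.Defaults.tokenizer_exceptions,
                     prefix_search=util.compile_prefix_regex(prefixes).search,
                     suffix_search=util.compile_suffix_regex(suffixes).search,
                     infix_finditer=util.compile_infix_regex(nlp.Defaults.infixes).finditer)

nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = create_tag_tokenizer(nlp)
print([t.text for t in nlp('Hello, <b>big</b> <i>world</i> !')])
# expected: ['Hello', ',', '<b>', 'big', '</b>', '<i>', 'world', '</i>', '!']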


Source: https://stackoverflow.com/questions/47549856/tokenizing-an-html-document
