Sentence Segmentation using Spacy

Submitted by 拥有回忆 on 2020-01-13 05:17:06

Question


I am new to spaCy and NLP, and I am facing the issue below while doing sentence segmentation with spaCy.

The text I am trying to tokenise into sentences contains numbered lists, with a space between the numbering and the actual text, like below:

import spacy
nlp = spacy.load('en_core_web_sm')
text = "This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!"
text_sentences = nlp(text)
for sentence in text_sentences.sents:
    print(sentence.text)

Output (1., 2., 3. are treated as separate sentences):

This is first sentence.

Next is numbered list.

1.
Hello World!

2.
Hello World2!

3.
Hello World!

But if there is no space between the numbering and the actual text, then the sentence tokenisation works fine, like below:

import spacy
nlp = spacy.load('en_core_web_sm')
text = "This is first sentence.\nNext is numbered list.\n1.Hello World!\n2.Hello World2!\n3.Hello World!"
text_sentences = nlp(text)
for sentence in text_sentences.sents:
    print(sentence.text)

Output (desired):

This is first sentence.

Next is numbered list.

1.Hello World!

2.Hello World2!

3.Hello World!

Please suggest whether the sentence detector can be customised to handle this.


Answer 1:


When you use a pretrained model with spaCy, sentences are split according to the training data that was provided during the training of the model.

Of course, there are cases like yours where somebody may want to use custom sentence segmentation logic. This is possible by adding a component to the spaCy pipeline.

For your case, you can add a rule that prevents sentence splitting whenever there is a {number}. pattern.

A workaround for your problem:

import spacy
import re

nlp = spacy.load('en_core_web_sm')
# matches a token that consists entirely of digits, e.g. "1", "12"
boundary = re.compile('^[0-9]+$')

def custom_seg(doc):
    prev = doc[0].text
    length = len(doc)
    for index, token in enumerate(doc):
        # if the current token is "." and the previous token is a number,
        # tell the parser not to start a new sentence at the next token
        if token.text == '.' and boundary.match(prev) and index != (length - 1):
            doc[index + 1].is_sent_start = False
        prev = token.text
    return doc

# spaCy v2 API: pass the function itself; it must run before the parser
# so that the parser respects the preset sentence boundaries
nlp.add_pipe(custom_seg, before='parser')
text = u'This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!'
doc = nlp(text)
for sentence in doc.sents:
    print(sentence.text)
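Note that the snippet above uses the spaCy v2 `add_pipe` API, where the function is passed directly. In spaCy v3, components must be registered by name with `@Language.component`. A minimal sketch of the same idea, assuming spacy>=3.0 and using a blank pipeline (so the component sets every boundary itself and no pretrained model download is needed; `numbered_list_seg` is a name chosen here for illustration):

```python
import re
import spacy
from spacy.language import Language

@Language.component("numbered_list_seg")
def numbered_list_seg(doc):
    # Set a boundary for every token: start a new sentence after
    # terminal punctuation, except when "." follows a bare number
    # (the "{number}." pattern of a numbered list).
    for i, token in enumerate(doc):
        if i == 0:
            token.is_sent_start = True
        elif doc[i - 1].text in (".", "!", "?"):
            after_list_number = (
                doc[i - 1].text == "."
                and i >= 2
                and re.fullmatch(r"[0-9]+", doc[i - 2].text) is not None
            )
            token.is_sent_start = not after_list_number
        else:
            token.is_sent_start = False
    return doc

nlp = spacy.blank("en")  # tokenizer only, no trained components
nlp.add_pipe("numbered_list_seg")

text = "This is first sentence.\nNext is numbered list.\n1. Hello World!\n2. Hello World2!\n3. Hello World!"
doc = nlp(text)
for sent in doc.sents:
    print(sent.text.strip())
```

This should keep each list number attached to its line, since the component marks the token after a "{number}." as not starting a sentence.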

Hope it helps!



Source: https://stackoverflow.com/questions/52205475/sentence-segmentation-using-spacy
