Stanford NER with python NLTK fails with strings containing multiple “!!”s?

Submitted by 大兔子大兔子 on 2019-12-11 13:02:26

Question


Suppose this is my filecontent:

When they are over 45 years old!! It would definitely help Michael Jordan.

Below is my code for tagging sentences:

from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.tag.stanford import NERTagger

st = NERTagger('stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz', 'stanford-ner/stanford-ner.jar')
tokenized_sents = [word_tokenize(sent) for sent in sent_tokenize(filecontent)]
taggedsents = st.tag_sents(tokenized_sents)

I would expect both tokenized_sents and taggedsents to contain the same number of sentences.

But here is what they contain:

for ts in tokenized_sents:
    print "tok   ", ts

for ts in taggedsents:
    print "tagged    ",ts

>> tok    ['When', 'they', 'are', 'over', '45', 'years', 'old', '!', '!']
>> tok    ['It', 'would', 'definitely', 'help', 'Michael', 'Jordan', '.']
>> tagged     [(u'When', u'O'), (u'they', u'O'), (u'are', u'O'), (u'over', u'O'), (u'45', u'O'), (u'years', u'O'), (u'old', u'O'), (u'!', u'O')]
>> tagged     [(u'!', u'O')]
>> tagged     [(u'It', u'O'), (u'would', u'O'), (u'definitely', u'O'), (u'help', u'O'), (u'Michael', u'PERSON'), (u'Jordan', u'PERSON'), (u'.', u'O')]

This is due to the double "!" at the end of the supposed first sentence. Do I have to remove double "!"s before using st.tag_sents()?

How should I resolve this?
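(One possible preprocessing workaround, sketched below on plain token lists: collapse runs of consecutive "!" tokens into a single "!" before calling the tagger, so the Stanford tool's internal sentence splitter has no "!!" to re-split on. This is an illustrative helper, not part of NLTK.)

```python
def collapse_exclamations(tokens):
    """Collapse consecutive '!' tokens into a single '!' token.

    Illustrative workaround: the Stanford tool re-splits a sentence
    internally when it sees repeated '!', so we normalize runs of '!'
    to one token before tagging.
    """
    out = []
    for tok in tokens:
        # Skip this '!' if the previous kept token was also '!'
        if tok == "!" and out and out[-1] == "!":
            continue
        out.append(tok)
    return out

tokens = ['When', 'they', 'are', 'over', '45', 'years', 'old', '!', '!']
print(collapse_exclamations(tokens))
# → ['When', 'they', 'are', 'over', '45', 'years', 'old', '!']
```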


Answer 1:


If you follow my solution from the other question, instead of using NLTK, you will get JSON back that properly splits this text into two sentences.

Link to previous question: how to speed up NE recognition with stanford NER with python nltk
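(To illustrate what that JSON-based route gives you: the response groups tokens by sentence, each token carrying a "word" and an "ner" field, so sentence boundaries and tags stay aligned. The JSON below is a hand-written illustration of that shape, not a captured server response.)

```python
import json

# Hand-written illustration of the sentence/token structure returned
# by the JSON route; not actual server output.
response = json.loads("""
{
  "sentences": [
    {"tokens": [
      {"word": "When", "ner": "O"}, {"word": "they", "ner": "O"},
      {"word": "are", "ner": "O"}, {"word": "over", "ner": "O"},
      {"word": "45", "ner": "O"}, {"word": "years", "ner": "O"},
      {"word": "old", "ner": "O"}, {"word": "!", "ner": "O"},
      {"word": "!", "ner": "O"}
    ]},
    {"tokens": [
      {"word": "It", "ner": "O"}, {"word": "would", "ner": "O"},
      {"word": "definitely", "ner": "O"}, {"word": "help", "ner": "O"},
      {"word": "Michael", "ner": "PERSON"},
      {"word": "Jordan", "ner": "PERSON"}, {"word": ".", "ner": "O"}
    ]}
  ]
}
""")

# Rebuild (word, tag) pairs per sentence from the JSON structure.
tagged = [[(t["word"], t["ner"]) for t in s["tokens"]]
          for s in response["sentences"]]
print(len(tagged))  # two sentences, matching the input
```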



Source: https://stackoverflow.com/questions/33755092/stanford-ner-with-python-nltk-fails-with-strings-containing-multiple-s
