Find multi-word terms in a tokenized text in Python

Submitted by 雨燕双飞 on 2019-12-29 01:49:07

Question


I have a text that I have tokenized, or in general a list of words is ok as well. For example:

    >>> from nltk.tokenize import word_tokenize
    >>> s = '''Good muffins cost $3.88\nin New York.  Please buy me
    ... two of them.\n\nThanks.'''
    >>> word_tokenize(s)
    ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York', '.',
    'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']

If I have a Python dict that contains single-word as well as multi-word keys, how can I efficiently and correctly check for their presence in the text? The ideal output would be key:location_in_text pairs, or something similarly convenient. Thanks in advance!

P.S. To explain "correctly": if I have "lease" in my dict, I do not want "Please" marked. Recognizing plurals is also required. I am wondering whether this can be solved elegantly, without many if-else clauses.
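One way to satisfy both constraints is to compare whole tokens rather than substrings, sliding an n-gram window over the token list for multi-word keys. Below is a minimal pure-Python sketch; the function name `find_terms` and the crude trailing-"s" plural rule are my own illustration (in real text a lemmatizer such as nltk's `WordNetLemmatizer` would be safer):

```python
def find_terms(tokens, terms):
    """Return {term: token_index} for each term found in the token list.

    Matching is on whole tokens only, so 'lease' never matches inside
    'Please'.  A trailing 's' is stripped as a crude plural rule.
    """
    # Split multi-word keys into token tuples, longest first, so that
    # longer terms win over their single-word prefixes.
    term_tuples = sorted((tuple(t.lower().split()) for t in terms),
                         key=len, reverse=True)
    normalized = [tok.lower() for tok in tokens]

    def singular(tok):
        # Naive plural stripping; illustration only.
        return tok[:-1] if tok.endswith('s') and len(tok) > 3 else tok

    hits = {}
    for tt in term_tuples:
        n = len(tt)
        for i in range(len(normalized) - n + 1):
            window = tuple(normalized[i:i + n])
            if window == tt or tuple(map(singular, window)) == tt:
                hits[' '.join(tt)] = i
                break  # record the first occurrence only
    return hits

tokens = ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York',
          '.', 'Please', 'buy', 'me', 'two', 'of', 'them', '.']
print(find_terms(tokens, ['New York', 'lease', 'muffin']))
```

Note that 'muffin' matches the plural token 'muffins', 'New York' is found as a two-token window, and 'lease' is not reported even though 'Please' contains it.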


Answer 1:


If you already have a gazetteer (a list of Multi-Word Expressions), you can use MWETokenizer, e.g.:

>>> from nltk.tokenize import MWETokenizer
>>> from nltk import sent_tokenize, word_tokenize

>>> s = '''Good muffins cost $3.88\nin New York.  Please buy me
... two of them.\n\nThanks.'''

>>> mwe = MWETokenizer([('New', 'York'), ('Hong', 'Kong')], separator='_')

>>> [mwe.tokenize(word_tokenize(sent)) for sent in sent_tokenize(s)]
[['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New_York', '.'], ['Please', 'buy', 'me', 'two', 'of', 'them', '.'], ['Thanks', '.']]
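If you also want the key:location pairs the question asks for, the merged tokens can be mapped back to their positions with a small pure-Python post-processing step (the helper `mwe_locations` is my own sketch, assuming the `'_'` separator used above and that ordinary tokens contain no underscore):

```python
def mwe_locations(tokens, separator='_'):
    """Map each merged multi-word token back to its index in the list."""
    return {tok.replace(separator, ' '): i
            for i, tok in enumerate(tokens)
            if separator in tok}

# Tokens as produced by MWETokenizer for the first sentence above.
merged = ['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New_York', '.']
print(mwe_locations(merged))  # → {'New York': 6}
```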


Source: https://stackoverflow.com/questions/47663870/find-multi-word-terms-in-a-tokenized-text-in-python
