What is the best stemming method in Python?

Submitted by 核能气质少年 on 2019-11-27 10:07:56

Question


I tried all the nltk methods for stemming, but they give me weird results with some words.

Examples

It often cuts the end of a word when it shouldn't:

  • poodle => poodl
  • article => articl

or doesn't stem very well:

  • easily and easy are not stemmed to the same word
  • leaves, grows, fairly are not stemmed

Do you know of other stemming libraries in Python, or a good dictionary?

Thank you


Answer 1:


Python implementations of the Porter, Porter2, Paice-Husk, and Lovins stemming algorithms for English are available in the stemming package.




Answer 2:


The results you are getting are (generally) expected for a stemmer in English. You say you tried "all the nltk methods", but when I try your examples, that doesn't seem to be the case.

Here are some examples using the PorterStemmer:

>>> import nltk
>>> ps = nltk.stem.PorterStemmer()
>>> ps.stem('grows')
'grow'
>>> ps.stem('leaves')
'leav'
>>> ps.stem('fairly')
'fairli'

The results are 'grow', 'leav' and 'fairli' which, even if they are not what you wanted, are stemmed versions of the original words.

If we switch to the Snowball stemmer, we have to provide the language as a parameter.

>>> import nltk
>>> sno = nltk.stem.SnowballStemmer('english')
>>> sno.stem('grows')
'grow'
>>> sno.stem('leaves')
'leav'
>>> sno.stem('fairly')
'fair'

The results are as before for 'grows' and 'leaves', but 'fairly' is stemmed to 'fair'.

So in both cases (and there are more than two stemmers available in nltk), the words that you say are not stemmed, in fact, are. The LancasterStemmer will return 'easy' when given either 'easily' or 'easy' as input.

Maybe you really wanted a lemmatizer? That would return 'article' and 'poodle' unchanged.

>>> import nltk
>>> lemma = nltk.stem.WordNetLemmatizer()
>>> lemma.lemmatize('article')
'article'
>>> lemma.lemmatize('leaves')
'leaf'



Answer 3:


All the stemmers discussed here are algorithmic stemmers, so they can always produce unexpected results, such as:

In [3]: from nltk.stem.porter import *

In [4]: stemmer = PorterStemmer()

In [5]: stemmer.stem('identified')
Out[5]: u'identifi'

In [6]: stemmer.stem('nonsensical')
Out[6]: u'nonsens'

To correctly get the root words, one needs a dictionary-based stemmer such as the Hunspell stemmer, for which a Python binding exists. Example code:

>>> import hunspell
>>> hobj = hunspell.HunSpell('/usr/share/myspell/en_US.dic', '/usr/share/myspell/en_US.aff')
>>> hobj.spell('spookie')
False
>>> hobj.suggest('spookie')
['spookier', 'spookiness', 'spooky', 'spook', 'spoonbill']
>>> hobj.spell('spooky')
True
>>> hobj.analyze('linked')
[' st:link fl:D']
>>> hobj.stem('linked')
['link']



Answer 4:


Stemming is all about removing suffixes (usually only suffixes; as far as I have tried, none of the nltk stemmers can remove a prefix, let alone an infix). So we can fairly call stemming a dumb, not-so-intelligent program. It doesn't check whether a word has a meaning before or after stemming. For example, if you try to stem "xqaing", which is not a word, it will still remove "-ing" and give you "xqa".
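That blind suffix removal can be sketched in a few lines of plain Python (a toy illustration only, not any real nltk rule set; the suffix list here is made up):

```python
# Toy suffix-stripping "stemmer": removes the first matching suffix,
# with no check that the result is a real word.
SUFFIXES = ("ing", "ly", "ed", "es", "s")  # hypothetical rule list

def toy_stem(word: str) -> str:
    for suffix in SUFFIXES:
        # Require a few characters to remain after stripping.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

print(toy_stem("xqaing"))  # "xqa" -- not a word, stripped anyway
print(toy_stem("fairly"))  # "fair"
print(toy_stem("leaves"))  # "leav" -- a non-word, just like Porter's output
```

The point is that the rules fire on surface patterns alone, which is exactly why real algorithmic stemmers emit non-words like 'articl' and 'leav'.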

So, in order to use a smarter system, one can use a lemmatizer. Lemmatizers use well-formed lemmas (words) in the form of WordNet and dictionaries, so they always take and return a proper word. However, they are slower, because they search through all words to find the relevant one.
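The dictionary-lookup idea behind lemmatization can be sketched like this (a minimal illustration with a tiny hand-made mapping, not how WordNet is actually stored or queried):

```python
# Toy lemmatizer: look the inflected form up in a dictionary of known lemmas.
# Unknown words are returned unchanged -- a key limitation of lemmatizers.
LEMMAS = {  # hypothetical, hand-made mapping
    "leaves": "leaf",
    "grows": "grow",
    "easily": "easy",
    "articles": "article",
}

def toy_lemmatize(word: str) -> str:
    return LEMMAS.get(word, word)

print(toy_lemmatize("leaves"))  # "leaf" -- a real word, unlike the stem "leav"
print(toy_lemmatize("xqaing"))  # "xqaing" -- unknown, so left unchanged
```

Unlike the suffix rules above, the output is always a real word when the input is known, but nonsense like "xqaing" passes through untouched.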




Answer 5:


Stemmers vary in their aggressiveness. Porter is one of the most aggressive stemmers for English; I find it usually hurts more than it helps. On the lighter side, you can either use a lemmatizer instead, as already suggested, or a lighter algorithmic stemmer. The limitation of lemmatizers is that they cannot handle unknown words.

Personally I like the Krovetz stemmer, which is a hybrid solution combining a dictionary lemmatizer with a lightweight stemmer for out-of-vocabulary words. Krovetz is also available as the kstem or light_stemmer option in Elasticsearch. There is a Python implementation on PyPI (https://pypi.org/project/KrovetzStemmer/), though that is not the one I have used.
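The hybrid idea — dictionary lookup first, with a light algorithmic fallback for out-of-vocabulary words — can be sketched as follows (a toy illustration of the Krovetz-style approach; both the dictionary and the fallback rule here are made up):

```python
# Krovetz-style hybrid (toy sketch): try the dictionary first, then
# fall back to a very light suffix rule for unknown words.
LEMMAS = {"leaves": "leaf", "grows": "grow"}  # hypothetical dictionary

def light_stem(word: str) -> str:
    # Extremely light fallback: strip only a final plural "s".
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]
    return word

def hybrid_stem(word: str) -> str:
    return LEMMAS.get(word, light_stem(word))

print(hybrid_stem("leaves"))   # "leaf" -- found in the dictionary
print(hybrid_stem("widgets"))  # "widget" -- unknown, handled by the fallback
```

The dictionary gives real words where it can, while the fallback keeps unknown vocabulary from being left completely unprocessed — the failure mode of a pure lemmatizer.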

Another option is the lemmatizer in spaCy. After processing with spaCy, every token has a lemma_ attribute (note the underscore: lemma holds a numerical identifier of the lemma, while lemma_ is the string form) - https://spacy.io/api/token

Here are some papers comparing various stemming algorithms:

  • https://www.semanticscholar.org/paper/A-Comparative-Study-of-Stemming-Algorithms-Ms-.-Jivani/1c0c0fa35d4ff8a2f925eb955e48d655494bd167
  • https://www.semanticscholar.org/paper/Stemming-Algorithms%3A-A-Comparative-Study-and-their-Sharma/c3efc7d586e242d6a11d047a25b67ecc0f1cce0c?navId=citing-papers
  • https://www.semanticscholar.org/paper/Comparative-Analysis-of-Stemming-Algorithms-for-Web/3e598cda5d076552f4a9f89aaa9d79f237882afd
  • https://scholar.google.com/scholar?q=related:MhDEzHAUtZ8J:scholar.google.com/&scioq=comparative+stemmers&hl=en&as_sdt=0,5



Answer 6:


In my chatbot project I used PorterStemmer, though LancasterStemmer also serves the purpose. The ultimate objective is to stem each word to its root so that we can search and compare against the input search words.

For example:

from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

ps = PorterStemmer()
stop_words = set(stopwords.words('english'))

def SrchpattrnStmmed(self):
    # Tokenize the input, drop stop words, and stem what remains.
    KeyWords = []
    SrchpattrnTkn = word_tokenize(self.input)
    for token in SrchpattrnTkn:
        if token not in stop_words:
            KeyWords.append(ps.stem(token))
    return KeyWords

Hope this helps.



Source: https://stackoverflow.com/questions/24647400/what-is-the-best-stemming-method-in-python
