Is there a way to convert nltk featuresets into a scipy.sparse array?

Submitted by 故事扮演 on 2019-12-08 08:04:40

Question


I'm trying to use scikit-learn, which needs numpy/scipy arrays as input. The featuresets generated in NLTK consist of unigram and bigram frequencies. I could convert them manually, but that would be a lot of effort. So I'm wondering if there's a solution I've overlooked.
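For reference, NLTK featuresets are plain Python dicts mapping feature names to values, so one direct route (a sketch, assuming scikit-learn is installed; this class is not mentioned in the original answers) is scikit-learn's DictVectorizer, which turns a list of such dicts into a scipy.sparse matrix:

```python
from sklearn.feature_extraction import DictVectorizer

# NLTK-style featuresets: one dict of feature -> count per document
featuresets = [
    {'the': 2, 'the cat': 1},
    {'cat': 1, 'sat': 1},
]

vec = DictVectorizer()
X = vec.fit_transform(featuresets)  # scipy.sparse matrix, one row per featureset
```

Each distinct feature name becomes one column; `vec.get_feature_names_out()` recovers the column-to-feature mapping.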


Answer 1:


Not that I know of, but note that scikit-learn can do n-gram frequency counting itself. Assuming word-level n-grams (the `WordNGramAnalyzer` class from very old scikit-learn versions has since been replaced by the `ngram_range` parameter):

from sklearn.feature_extraction.text import CountVectorizer
v = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = v.fit_transform(files)

where files is a list of strings or file-like objects. After this, X is a scipy.sparse matrix of raw frequency counts.
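A minimal end-to-end run of the above on an in-memory corpus (the two example sentences are illustrative, not from the original post):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the cat sat on the mat"]

# Count unigrams and bigrams in one pass
v = CountVectorizer(ngram_range=(1, 2))
X = v.fit_transform(docs)  # scipy.sparse matrix: 2 rows, one column per n-gram
```

The vocabulary here contains 5 unigrams and 5 bigrams, so `X` has shape (2, 10); `v.vocabulary_` maps each n-gram string to its column index.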




Answer 2:


Jacob Perkins wrote a bridge for training scikit-learn classifiers on NLTK featuresets that does exactly that; here is the source:

https://github.com/japerk/nltk-trainer/blob/master/nltk_trainer/classification/sci.py

The package import lines should be updated if you are using version 0.9+.
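Note that newer NLTK releases ship a similar bridge of their own, `nltk.classify.scikitlearn.SklearnClassifier`, which handles the dict-to-sparse conversion internally (via a DictVectorizer). A minimal sketch, assuming both nltk and scikit-learn are installed; the toy featuresets are illustrative:

```python
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.naive_bayes import MultinomialNB

# NLTK-style training data: (featureset dict, label) pairs
train = [({'hello': 1}, 'greet'), ({'bye': 1}, 'farewell')]

# Wrap any scikit-learn estimator in the NLTK classifier interface
clf = SklearnClassifier(MultinomialNB()).train(train)
label = clf.classify({'hello': 1})
```

This keeps the familiar NLTK `train`/`classify` interface while the actual learning is done by the wrapped scikit-learn estimator.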



Source: https://stackoverflow.com/questions/8394257/is-there-a-way-to-convert-nltk-featuresets-into-a-scipy-sparse-array
