How do I use sklearn CountVectorizer with both 'word' and 'char' analyzer? - python

You can pass a callable as the analyzer argument to get full control over the tokenization, e.g.

>>> from pprint import pprint
>>> import re
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> x = ['this is a foo bar', 'you are a foo bar black sheep']
>>> def words_and_char_bigrams(text):
...     # words of at least 3 characters
...     words = re.findall(r'\w{3,}', text)
...     for w in words:
...         # yield the word itself, then its character bigrams
...         # (note: range(len(w) - 2) stops before the word's final bigram)
...         yield w
...         for i in range(len(w) - 2):
...             yield w[i:i+2]
...
>>> v = CountVectorizer(analyzer=words_and_char_bigrams)
>>> pprint(v.fit(x).vocabulary_)
{'ac': 0,
 'ar': 1,
 'are': 2,
 'ba': 3,
 'bar': 4,
 'bl': 5,
 'black': 6,
 'ee': 7,
 'fo': 8,
 'foo': 9,
 'he': 10,
 'hi': 11,
 'la': 12,
 'sh': 13,
 'sheep': 14,
 'th': 15,
 'this': 16,
 'yo': 17,
 'you': 18}
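
Once fitted, the vectorizer maps documents to one count column per vocabulary entry above; a quick usage sketch with the same v and x:

>>> v.transform(x).shape  # 2 documents x 19 vocabulary entries (words plus character bigrams)
(2, 19)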

You can combine arbitrary feature extraction steps with the FeatureUnion estimator: http://scikit-learn.org/dev/modules/pipeline.html#featureunion-combining-feature-extractors

This is probably less efficient than larsmans' solution, but it might be easier to use.
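
As a rough sketch of that FeatureUnion approach (the sub-vectorizer names and the ngram_range=(2, 2) choice below are illustrative assumptions, not part of the original answer), you can stack a word-level and a character-bigram CountVectorizer side by side:

>>> from sklearn.feature_extraction.text import CountVectorizer
>>> from sklearn.pipeline import FeatureUnion
>>> union = FeatureUnion([
...     ('words', CountVectorizer(analyzer='word')),
...     ('chars', CountVectorizer(analyzer='char', ngram_range=(2, 2))),
... ])
>>> X = union.fit_transform(x)  # word-count and char-bigram columns concatenated

Each sub-vectorizer is fitted independently and keeps its own vocabulary_; FeatureUnion simply concatenates their output columns, so the two feature blocks can also be weighted separately via its transformer_weights parameter.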
