How do I properly combine numerical features with text (bag of words) in scikit-learn?


Question


I am writing a classifier for web pages, so I have a mixture of numerical features, and I also want to classify the text. I am using the bag-of-words approach to transform the text into a (large) numerical vector. The code ends up looking like this:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
import numpy as np

numerical_features = [
  [1, 0],
  [1, 1],
  [0, 0],
  [0, 1]
]
corpus = [
  'This is the first document.',
  'This is the second second document.',
  'And the third one',
  'Is this the first document?',
]
bag_of_words_vectorizer = CountVectorizer(min_df=1)
X = bag_of_words_vectorizer.fit_transform(corpus)
words_counts = X.toarray()
tfidf_transformer = TfidfTransformer()
tfidf = tfidf_transformer.fit_transform(words_counts)

bag_of_words_vectorizer.get_feature_names()  # on newer scikit-learn this is get_feature_names_out()
combinedFeatures = np.hstack([numerical_features, tfidf.toarray()])

This works, but I'm concerned about the accuracy. Notice that there are 4 samples and only two numerical features per sample, while even this trivial corpus produces a vector with nine text features (because there are nine distinct words in the corpus). With real text there will be hundreds or thousands of distinct words, so the final feature vector would contain fewer than 10 numerical features but well over 1000 word-based ones.

Because of this, won't the classifier (an SVM) end up weighting the words far more heavily than the numerical features, by roughly 100 to 1? If so, how can I compensate so that the bag of words is weighted fairly against the numerical features?
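
One possible way to compensate, shown here only as a sketch (MaxAbsScaler and scipy.sparse.hstack are my assumptions, not something from the original post), is to rescale the numerical columns to the same [-1, 1] range as the tf-idf values and stack the two groups without densifying the sparse matrix:

import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MaxAbsScaler

corpus = [
  'This is the first document.',
  'This is the second second document.',
  'And the third one',
  'Is this the first document?',
]
numerical_features = np.array([[1, 0], [1, 1], [0, 0], [0, 1]], dtype=float)

# tf-idf rows are L2-normalised by default, so every text value is already in [0, 1]
tfidf = TfidfVectorizer().fit_transform(corpus)

# put the numerical columns on a comparable [-1, 1] scale (MaxAbsScaler preserves zeros)
scaled_numerical = MaxAbsScaler().fit_transform(numerical_features)

# stack without calling .toarray(), so the combined matrix stays sparse
combined = hstack([csr_matrix(scaled_numerical), tfidf]).tocsr()
print(combined.shape)  # (4, 11): 2 numerical columns + 9 vocabulary columns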


Answer 1:


You can weight the counts using tf-idf:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

np.set_printoptions(linewidth=200)

corpus = [
  'This is the first document.',
  'This is the second second document.',
  'And the third one',
  'Is this the first document?',
]

vectorizer = CountVectorizer(min_df=1)
X = vectorizer.fit_transform(corpus)

words = vectorizer.get_feature_names()  # on newer scikit-learn: get_feature_names_out()
print(words)
words_counts = X.toarray()
print(words_counts)

transformer = TfidfTransformer()
tfidf = transformer.fit_transform(words_counts)
print(tfidf.toarray())

The output is this:

# words
[u'and', u'document', u'first', u'is', u'one', u'second', u'the', u'third', u'this']

# words_counts
[[0 1 1 1 0 0 1 0 1]
 [0 1 0 1 0 2 1 0 1]
 [1 0 0 0 1 0 1 1 0]
 [0 1 1 1 0 0 1 0 1]]

# tfidf transformation
[[ 0.          0.43877674  0.54197657  0.43877674  0.          0.          0.35872874  0.          0.43877674]
 [ 0.          0.27230147  0.          0.27230147  0.          0.85322574  0.22262429  0.          0.27230147]
 [ 0.55280532  0.          0.          0.          0.55280532  0.          0.28847675  0.55280532  0.        ]
 [ 0.          0.43877674  0.54197657  0.43877674  0.          0.          0.35872874  0.          0.43877674]]

With this representation you should be able to merge in further binary features and train an SVC.
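
For instance, a minimal sketch of that last step, reusing the tfidf matrix from the snippet above (the extra binary columns and the labels y below are made up purely for illustration):

from scipy.sparse import csr_matrix, hstack
from sklearn.svm import SVC

# hypothetical extra binary features, one row per document
binary_features = csr_matrix([[1, 0], [1, 1], [0, 0], [0, 1]])

# append them to the tf-idf matrix computed above, keeping everything sparse
X_combined = hstack([tfidf, binary_features]).tocsr()

y = [0, 1, 1, 0]  # made-up class labels, for illustration only
clf = SVC(kernel='linear')
clf.fit(X_combined, y)
print(clf.predict(X_combined))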



Source: https://stackoverflow.com/questions/39445051/how-do-i-properly-combine-numerical-features-with-text-bag-of-words-in-scikit
