how to force scikit-learn DictVectorizer not to discard features?

删除回忆录丶 submitted on 2019-12-25 02:18:08

Question


I'm trying to use scikit-learn for a classification task. My code extracts features from the data and stores them in a dictionary like so:

feature_dict['feature_name_1'] = feature_1
feature_dict['feature_name_2'] = feature_2

When I split the data in order to test it using sklearn.cross_validation, everything works as it should. The problem I'm having is when the test data is a new set, not part of the learning set (although it has exactly the same features for each sample). After I fit the classifier on the learning set and then call clf.predict, I get this error:

ValueError: X has different number of features than during model fitting.

I am assuming this has to do with the following (from the DictVectorizer docs):

Named features not encountered during fit or fit_transform will be silently ignored.

I guess DictVectorizer has removed some of the features... How do I disable or work around this behavior?

Thanks

=== EDIT ===

The problem was, as larsMans suggested, that I was fitting the DictVectorizer twice.
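
For illustration, the mistaken pattern looks roughly like this (a minimal sketch; train_dicts and test_dicts are hypothetical names, not from the original code). Calling fit_transform on the test dictionaries re-fits the vectorizer and builds a new vocabulary from the test set alone, so the resulting matrix no longer has the same columns the classifier was fitted on:

from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer()
X_train = vec.fit_transform(train_dicts)  # learns the feature vocabulary from the training data
X_test = vec.fit_transform(test_dicts)    # wrong: re-fits and learns a different vocabulary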


Answer 1:


You should use fit_transform on the training set, and only transform on the test set.
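
A minimal sketch of what that looks like for DictVectorizer (variable names such as train_dicts, test_dicts, y_train and the LogisticRegression classifier are assumptions, not from the question). fit_transform learns the feature-to-column mapping from the training dictionaries; transform reuses that same mapping on the test dictionaries, so the test matrix has exactly the columns the model was fitted on:

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

vec = DictVectorizer()
X_train = vec.fit_transform(train_dicts)  # fit the vocabulary and encode the training data
clf = LogisticRegression().fit(X_train, y_train)

X_test = vec.transform(test_dicts)        # reuse the training vocabulary; do not re-fit
predicted = clf.predict(X_test)           # feature count now matches model fitting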




Answer 2:


Are you making sure to apply the previously fitted scaler and selector transforms to the test data?

from sklearn import preprocessing
from sklearn.feature_selection import SelectPercentile, f_classif

# fit the scaler and the feature selector on the training data only
scaler = preprocessing.StandardScaler().fit(trainingData)
selector = SelectPercentile(f_classif, percentile=90)
selector.fit(scaler.transform(trainingData), labelsTrain)
...
...
# apply the already-fitted transforms to the test data before predicting
predicted = clf.predict(selector.transform(scaler.transform(testingData)))


Source: https://stackoverflow.com/questions/19770147/how-to-force-scikit-learn-dictvectorizer-not-to-discard-features
