Why does the classifier.predict() method expect the number of features in the test data to be the same as in the training data?

emiguevara

To ensure that you have the same feature representation, you should not fit_transform your test data, but only transform it.

x_train = vectorizer.fit_transform(f1)  # fit: learns the vocabulary from the training text
x_test = vectorizer.transform(data2)    # transform only: reuses that vocabulary on the test text

The same principle applies to your labels: fit any label encoding on the training labels and then only apply it to the test labels, so both sides share one homogeneous representation.
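
Putting the two together, here is a minimal end-to-end sketch. The toy documents and labels are made up for illustration; CountVectorizer, LabelEncoder, and LinearSVC are standard scikit-learn classes:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import LinearSVC

# Illustrative stand-ins for f1/data2 and their labels.
f1 = ["I love this product", "terrible quality", "works great"]
data2 = ["great value", "awful experience"]
labels_train = ["pos", "neg", "pos"]
labels_test = ["pos", "neg"]

vectorizer = CountVectorizer()
x_train = vectorizer.fit_transform(f1)   # fit: learn the vocabulary
x_test = vectorizer.transform(data2)     # transform only: same columns, same meaning

encoder = LabelEncoder()
y_train = encoder.fit_transform(labels_train)  # fit the label mapping on training data
y_test = encoder.transform(labels_test)        # reuse it unchanged on test data

assert x_train.shape[1] == x_test.shape[1]     # feature spaces now match
clf = LinearSVC().fit(x_train, y_train)
print(clf.predict(x_test))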

SVM works by assuming all of your training data lives in an n-dimensional space and then performing a kind of geometric optimization on that set. To make that concrete, if n=2 then SVM is picking a line which optimally separates the (+) examples from the (-) examples.

What this means is that the result of training an SVM is tied to the dimensionality it was trained in. This dimensionality is exactly the size of your feature set (modulo kernels and other transformations, but in any case all of that information together uniquely sets the problem space). You thus cannot just apply this trained model to new data which exists in a space of a different dimensionality.
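
You can watch scikit-learn enforce this directly. In the sketch below the data is random and exists only to trigger the mismatch:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
x_train = rng.normal(size=(20, 5))  # the model is tied to these 5 dimensions
y_train = np.arange(20) % 2         # two alternating classes
clf = SVC().fit(x_train, y_train)

try:
    clf.predict(rng.normal(size=(3, 7)))  # 7 features instead of 5
except ValueError as err:
    print(err)  # scikit-learn reports the feature-count mismatch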

(You might suggest projecting or embedding the training space into the test space, and that might even work in some circumstances, but it is not valid in general.)

This situation gets even trickier when you really analyze it, though. Not only must the test data's dimensionality match the training data's, but the meaning of each dimension must stay constant. For instance, back in our n=2 example, suppose we're classifying people's moods (happy/sad), where the x dimension is "enjoyment of life" and the y dimension is "time spent listening to sad music". We'd expect greater x and lesser y values to raise the likelihood of being happy, so a good decision boundary for the SVM to find would be the line y = x: people closer to the x axis tend to be happy, and people closer to the y axis tend to be sad.

But then let's say someone bumbles and mixes up the x and y dimensions when they drop the test data in. Boom: suddenly you've got an incredibly inaccurate predictor, as the sketch below illustrates.
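
A small simulation of that mix-up, with synthetic mood data generated to match the y = x boundary described above:

import numpy as np
from sklearn.svm import LinearSVC

# x = enjoyment of life, y = hours of sad music; label 1 (happy) when x > y.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(int)

clf = LinearSVC().fit(X[:100], y[:100])
X_test, y_test = X[100:], y[100:]

print(clf.score(X_test, y_test))           # near-perfect: axes line up with training
print(clf.score(X_test[:, ::-1], y_test))  # columns swapped: predictions invert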


So in particular, the observation space of the test data must match the observation space of the training data. Matching dimensionality is an important step in this regard, but the match must actually be perfect: each feature must mean the same thing in both.

Which is a long way of saying that you need to either do some feature engineering or find an algorithm without this kind of dependency (which will also involve some feature engineering).

Do we have to explicitly set the number of features in the test data to be equal to 9451 in this case?

Yes, you do. The SVM needs to see the same dimensionality as the training set. What people tend to do when working with documents is use a bag-of-words approach and cap the vocabulary, e.g. keeping only the x most frequent words, as in the sketch below.
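
For instance, CountVectorizer's max_features parameter keeps only the top terms by corpus frequency, which fixes the feature count up front (the corpus here is a toy example):

from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat", "the dog sat on the mat", "the cat ran"]  # toy corpus
vectorizer = CountVectorizer(max_features=3)  # keep only the 3 most frequent terms
x = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())  # the retained vocabulary
print(x.shape)                             # (3, 3): dimensionality is now fixed at 3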
