Using a support vector classifier with polynomial kernel in scikit-learn


Question


I'm experimenting with different classifiers implemented in the scikit-learn package for an NLP task. The code I use to perform the classification is the following:

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC, SVC

def train_classifier(self, argcands):
        # Extract the necessary features from the argument candidates
        train_argcands_feats = []
        train_argcands_target = []

        for argcand in argcands:
            train_argcands_feats.append(self.extract_features(argcand))
            train_argcands_target.append(argcand["info"]["label"]) 

        # Transform the features to the format required by the classifier
        self.feat_vectorizer = DictVectorizer()
        train_argcands_feats = self.feat_vectorizer.fit_transform(train_argcands_feats)

        # Transform the target labels to the format required by the classifier
        self.target_names = list(set(train_argcands_target))
        train_argcands_target = [self.target_names.index(target) for target in train_argcands_target]

        # Train the appropriate supervised model
        self.classifier = LinearSVC()
        #self.classifier = SVC(kernel="poly", degree=2)

        self.classifier.fit(train_argcands_feats, train_argcands_target)

        return

def execute(self, argcands_test):
        # Extract features
        test_argcands_feats = [self.extract_features(argcand) for argcand in argcands_test]

        # Transform the features to the format required by the classifier
        test_argcands_feats = self.feat_vectorizer.transform(test_argcands_feats)

        # Classify the candidate arguments 
        test_argcands_targets = self.classifier.predict(test_argcands_feats)

        # Get the correct label names
        test_argcands_labels = [self.target_names[int(label_index)] for label_index in test_argcands_targets]

        return zip(argcands_test, test_argcands_labels)

As can be seen in the code, I'm testing two implementations of a Support Vector Machine classifier: LinearSVC and SVC with a polynomial kernel. Now, for my "problem": when using LinearSVC, I get a classification with no problems; the test instances are tagged with various labels. However, if I use the polynomial SVC, ALL test instances are tagged with the SAME label. I know that one possible explanation is simply that the polynomial SVC is not the appropriate classifier for my task, and that's fine. I just want to make sure that I'm using the polynomial SVC appropriately.

Thanks for all the help/advice you could give me.

UPDATE: Following the recommendation given in the answers, I've changed the code that trains the classifier as follows:

from sklearn.grid_search import GridSearchCV  # module path in old scikit-learn versions
from sklearn.metrics import f1_score

# Train the appropriate supervised model
parameters = [{'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001], 'kernel': ['poly'], 'degree': [2]}]
self.classifier = GridSearchCV(SVC(C=1), parameters, score_func=f1_score)

Now I get the following message:

ValueError: The least populated class in y has only 1 members, which is too few. The minimum number of labels for any class cannot be less than k=3.

This has something to do with the uneven distribution of the classes' instances in my training data, right? Or am I calling the procedure incorrectly?
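For reference, in current scikit-learn releases GridSearchCV lives in sklearn.model_selection and the old score_func argument has been replaced by scoring, so the search above would nowadays be written roughly like this (classifier standing in for self.classifier):

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

parameters = {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.0001],
              'kernel': ['poly'], 'degree': [2]}
# 'f1_macro' averages the F1 score over classes, a reasonable choice
# for a multi-class problem like this one
classifier = GridSearchCV(SVC(), parameters, scoring='f1_macro')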


Answer 1:


In both cases you should tune the value of the regularization parameter C using grid search. Otherwise you cannot compare the results, since a value of C that is good for one model may yield poor results for the other.

For the polynomial kernel you can also grid-search the optimal value of the degree (e.g. 2, 3, or more); in that case you should search over both C and degree at the same time, as in the sketch below.
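Here is a minimal sketch of such a search, assuming the vectorized training matrix X and integer labels y built as in the question (the parameter ranges are only illustrative):

from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in very old releases
from sklearn.svm import LinearSVC, SVC

# Tune C for the linear model
linear_search = GridSearchCV(LinearSVC(), {'C': [0.01, 0.1, 1, 10, 100]}, cv=3)
linear_search.fit(X, y)

# Tune C and degree jointly for the polynomial kernel
poly_search = GridSearchCV(
    SVC(kernel='poly'),
    {'C': [0.01, 0.1, 1, 10, 100], 'degree': [2, 3]},
    cv=3)
poly_search.fit(X, y)

print(linear_search.best_params_, linear_search.best_score_)
print(poly_search.best_params_, poly_search.best_score_)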

Edit:

This has something to do with the uneven distribution of the classes' instances in my training data, right? Or am I calling the procedure incorrectly?

Check that you have at least 3 samples per class so that StratifiedKFold cross-validation with k == 3 is possible (I think this is the default CV used by GridSearchCV for classification); a quick way to check is shown below. If you have fewer, don't expect the model to predict anything useful. I would recommend at least 100 samples per class (as a somewhat arbitrary rule-of-thumb minimum, unless you are working on toy problems with fewer than 10 features and a lot of regularity in the decision boundaries between classes).
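A minimal way to run that check, assuming train_argcands_target is the raw label list built in the question's train_classifier:

from collections import Counter

label_counts = Counter(train_argcands_target)
print(label_counts)

# Any class with fewer than 3 samples will break 3-fold stratified CV
too_rare = [label for label, count in label_counts.items() if count < 3]
print("Classes with fewer than 3 samples:", too_rare)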

BTW, please always paste the complete traceback in questions / bug reports; otherwise readers may lack the information needed to diagnose the root cause.



Source: https://stackoverflow.com/questions/12163362/using-a-support-vector-classifier-with-polynomial-kernel-in-scikit-learn
