svm

Generate a confusion matrix for svm in e1071 for CV results

Submitted by 浪尽此生 on 2021-02-19 05:59:05
Question: I did a classification with svm from e1071. The goal is to predict type from all other variables in dtm.

    dtm[140:145] %>% str()
    'data.frame': 385 obs. of 6 variables:
     $ think   : num 0 0 0 0 0 0 0 0 0 0 ...
     $ actually: num 0 0 0 0 0 0 0 0 0 0 ...
     $ comes   : num 0 0 0 0 0 0 0 0 0 0 ...
     $ able    : num 0 0 0 0 0 0 0 0 0 0 ...
     $ hours   : num 0 0 0 0 0 0 0 0 0 0 ...
     $ type    : Factor w/ 4 levels "-1","0","1","9": 4 3 3 3 4 1 4 4 4 3 ...

To train/test the model I used 10-fold cross-validation.
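For reference, here is a minimal sketch of the same idea in Python/scikit-learn rather than e1071, on synthetic stand-in data: collect the out-of-fold predictions from all 10 folds with cross_val_predict, then tabulate them against the true labels.

    from sklearn.datasets import make_classification
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict
    from sklearn.svm import SVC

    # Synthetic stand-in for the asker's document-term matrix and 4-level factor.
    X, y = make_classification(n_samples=385, n_features=20, n_informative=5,
                               n_classes=4, random_state=0)

    # One out-of-fold prediction per observation across the 10 folds.
    y_pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=10)

    # Rows are true classes, columns are predicted classes.
    print(confusion_matrix(y, y_pred))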

sklearn multiclass svm function

Submitted by 久未见 on 2021-02-18 08:30:48
Question: I have multi-class labels and want to compute the accuracy of my model. I am confused about which sklearn function I need to use. As far as I understand, the code below is only used for binary classification.

    # dividing X, y into train and test data
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # training a linear SVM classifier
    from sklearn.svm import SVC
    svm_model_linear = SVC(kernel='linear', C=1).fit(X_train, y_train)
    svm
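A short sketch, using the iris data as an illustrative stand-in for the asker's X and y: SVC supports multi-class targets out of the box, and accuracy_score is the same call whether there are two classes or twenty.

    from sklearn.datasets import load_iris
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)  # three classes
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # SVC handles multi-class targets natively (one-vs-one under the hood).
    svm_model_linear = SVC(kernel='linear', C=1).fit(X_train, y_train)

    # accuracy_score is class-count agnostic: the fraction of exact label matches.
    print(accuracy_score(y_test, svm_model_linear.predict(X_test)))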

What is the difference between LinearSVC and SVC(kernel="linear")?

Submitted by 孤人 on 2021-02-17 10:46:17
Question: I found sklearn.svm.LinearSVC and sklearn.svm.SVC(kernel='linear') and they seem very similar to me, but I get very different results on Reuters:

    sklearn.svm.LinearSVC: 81.05% in 28.87s train / 9.71s test
    sklearn.svm.SVC:       33.55% in 6536.53s train / 2418.62s test

Both have a linear kernel. The tolerance of LinearSVC is higher than that of SVC:

    LinearSVC(C=1.0, tol=0.0001, max_iter=1000, penalty='l2', loss='squared_hinge',
              dual=True, multi_class='ovr', fit_intercept=True, intercept
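For context: the two are not the same estimator under the hood. LinearSVC is built on liblinear (squared hinge loss, one-vs-rest, the intercept is also regularized), while SVC uses libsvm (hinge loss, one-vs-one). A minimal sketch on synthetic data showing they generally learn close but non-identical hyperplanes:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC, LinearSVC

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # liblinear: squared hinge loss, one-vs-rest, penalized intercept.
    lin = LinearSVC(C=1.0, dual=True, max_iter=10000).fit(X, y)

    # libsvm: hinge loss, one-vs-one, unpenalized intercept.
    svc = SVC(kernel='linear', C=1.0).fit(X, y)

    # Coefficients are similar but usually not identical.
    print(np.abs(lin.coef_ - svc.coef_).max())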

OpenCV SVM predict probability

Submitted by 别说谁变了你拦得住时间么 on 2021-02-16 10:30:22
Question: I am developing an image-classification project using a bag-of-words (BOW) model and SVM. I want to get the SVM's predicted probability, but there is no such function in OpenCV's SVM. Is there any way to do this? I want the predicted probability for an n-class SVM.

Answer 1: No, you can't do this with CvSVM. OpenCV's SVM implementation is based on a very old version of libsvm. Download the latest version of libsvm and use it instead. Of course, you will have to write a wrapper to convert the data formats. See http:/
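If switching libraries is acceptable, scikit-learn also wraps libsvm and exposes Platt-scaled class probabilities directly; a minimal sketch on illustrative data (not the asker's BOW features):

    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # probability=True turns on Platt scaling, fitted via internal
    # cross-validation -- the same mechanism as libsvm's probability output.
    clf = SVC(kernel='rbf', probability=True, random_state=0).fit(X, y)

    # One column per class; each row sums to 1.
    print(clf.predict_proba(X[:3]))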

SVM-OVO vs SVM-OVA in a very basic example

Submitted by 泄露秘密 on 2021-02-10 22:42:17
Question: Trying to understand how SVM-OVR (one-vs-rest) works, I was testing the following code:

    import matplotlib.pyplot as plt
    import numpy as np
    from sklearn.svm import SVC

    x = np.array([[1, 1.1], [1, 2], [2, 1]])
    y = np.array([0, 100, 250])

    classifier = SVC(kernel='linear', decision_function_shape='ovr')
    classifier.fit(x, y)

    print(classifier.predict([[1, 2]]))
    print(classifier.decision_function([[1, 2]]))

The outputs are:

    [100]
    [[ 1.05322128  2.1947332  -0.20488118]]

It means that the sample [1,2] is
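One detail worth a sketch: SVC always trains one-vs-one classifiers internally, and decision_function_shape only controls how the scores are reported. Reusing the question's data:

    import numpy as np
    from sklearn.svm import SVC

    x = np.array([[1, 1.1], [1, 2], [2, 1]])
    y = np.array([0, 100, 250])

    # Same underlying one-vs-one model; only the reported scores differ.
    ovr = SVC(kernel='linear', decision_function_shape='ovr').fit(x, y)
    ovo = SVC(kernel='linear', decision_function_shape='ovo').fit(x, y)

    # 'ovr': one column per class (n_classes = 3).
    print(ovr.decision_function([[1, 2]]).shape)
    # 'ovo': one column per class pair; n_classes*(n_classes-1)/2 = 3,
    # equal to n_classes only by coincidence with three classes.
    print(ovo.decision_function([[1, 2]]).shape)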

Getting probability of each new observation being an outlier when using scikit-learn OneClassSVM

Submitted by 大憨熊 on 2021-02-10 14:18:32
Question: I'm new to scikit-learn and to SVM methods in general. I've got my data set working well with scikit-learn's OneClassSVM for detecting outliers: I train the OneClassSVM on observations that are all 'inliers' and then use predict() to generate binary inlier/outlier predictions on my test set. However, to continue with my analysis I'd like the probability associated with each new observation in my test set, e.g. the probability of being an outlier associated
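OneClassSVM has no predict_proba, but it does expose decision_function (the signed distance to the boundary), which can be squashed into a heuristic pseudo-probability. A minimal sketch on synthetic data; the sigmoid mapping here is an assumption for illustration, not a calibrated probability:

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.RandomState(0)
    X_train = rng.normal(size=(200, 2))                          # inliers only
    X_test = np.vstack([rng.normal(size=(5, 2)), [[6.0, 6.0]]])  # last point is far out

    ocsvm = OneClassSVM(gamma='auto', nu=0.05).fit(X_train)

    # Signed distance to the learned boundary: positive = inlier side.
    scores = ocsvm.decision_function(X_test)

    # Heuristic only: sigmoid of the distance, NOT a calibrated probability.
    p_outlier = 1.0 / (1.0 + np.exp(scores))
    print(np.round(p_outlier, 3))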

R - mlr: Is there an easy way to get the variable importance of tuned support vector machine models in nested resampling (spatial)?

Submitted by 烂漫一生 on 2021-02-09 11:46:24
Question: I am trying to get the variable importance for all predictors (or variables, or features) of a tuned support vector machine (svm) model, using e1071::svm through the mlr package in R, but I am not sure whether I am doing the assessment right. First, the idea: to get an honestly tuned svm model, I am following the nested-resampling tutorial, using repeated spatial cross-validation (SpRepCV) in the outer loop and spatial cross-validation (SpCV) in the inner loop. As tuning parameter gamma
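The question itself concerns mlr in R, but the underlying idea of permutation-based variable importance for an SVM translates directly; a minimal scikit-learn sketch on synthetic data (names and parameters are illustrative, not the asker's setup):

    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = SVC(kernel='rbf', C=1.0, gamma='scale').fit(X_train, y_train)

    # Importance = drop in held-out score when one feature's values are shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
        print(f"feature {i}: {mean:.3f} +/- {std:.3f}")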