svm

Speed of SVM Kernels? Linear vs RBF vs Poly

十年热恋 · Submitted on 2019-12-05 06:44:39
I'm using scikit-learn in Python to create some SVM models while trying different kernels. The code is pretty simple, and follows the form of:

from time import time
from sklearn import svm

clf = svm.SVC(kernel='rbf', C=1, gamma=0.1)
clf = svm.SVC(kernel='linear', C=1, gamma=0.1)
clf = svm.SVC(kernel='poly', C=1, gamma=0.1)
t0 = time()
clf.fit(X_train, y_train)
print("Training time:", round(time() - t0, 3), "s")
pred = clf.predict(X_test)

The data has 8 features and a little over 3,000 observations. I was surprised to see that rbf was fitted in under a second, whereas linear took 90 seconds and poly took hours. …
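A minimal, self-contained way to reproduce this kind of kernel timing comparison (using synthetic data from `make_classification` as a stand-in for the asker's actual 3,000 x 8 dataset, which is not available):

```python
from time import time

from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in: 8 features, a few hundred rows.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

for kernel in ("rbf", "linear", "poly"):
    clf = SVC(kernel=kernel, C=1, gamma=0.1)
    t0 = time()
    clf.fit(X, y)
    print(f"{kernel}: fitted in {time() - t0:.3f} s")
```

Large timing gaps between kernels usually come down to convergence behavior on unscaled data rather than the kernel itself, so standardizing features first is worth trying before comparing wall-clock times.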

Datasets to test Nonlinear SVM

喜欢而已 · Submitted on 2019-12-04 21:08:13
Question: I'm implementing a nonlinear SVM and I want to test my implementation on simple data that is not linearly separable. Google didn't help me find what I want. Can you please advise me where I can find such data, or at least how I can generate such data manually? Thanks.

Answer 1: Well, SVMs are two-class classifiers, i.e., these classifiers place data on either side of a single decision boundary. Therefore, I would suggest a data set comprised of just two classes (that's not strictly necessary because …
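For a Python workflow, scikit-learn ships two classic non-linearly-separable toy generators that fit this need; a sketch comparing a linear and an RBF SVM on concentric circles:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: impossible to split with a straight line.
X, y = make_circles(n_samples=300, noise=0.05, factor=0.5, random_state=0)

linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)
print(linear_acc, rbf_acc)  # linear near chance level, rbf near perfect
```

`make_moons` from the same module gives a second, crescent-shaped option.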

MultiClass using LIBSVM

给你一囗甜甜゛ · Submitted on 2019-12-04 20:19:11
I have a multiclass SVM classification problem (6 classes). I would like to classify it using LIBSVM. The following are the approaches I have tried, and I have some questions regarding them.

Method 1 (one vs. one):

model = svmtrain(TrainLabel, TrainVec, '-c 1 -g 0.00154 -b 0.9');
[predict_label, accuracy, dec_values] = svmpredict(TestLabel, TestVec, model);

Two questions about this method: 1) is that all I need to do for a multiclass problem? 2) what value should n be in '-b n'? I'm not sure.

Method 2 (one vs. rest):

u=unique(TrainLabel);
N=length(u);
if(N>2)
    itr=1;
    classes=0;
    while((classes~=1)&&(itr< …
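The thread uses MATLAB LIBSVM, but the two multiclass schemes being compared can be sketched compactly in Python with scikit-learn's wrappers (the iris data and the `-g`-style gamma value are illustrative, not the asker's data):

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 3 classes

# One vs. one: k*(k-1)/2 pairwise classifiers (LIBSVM does this internally).
ovo = OneVsOneClassifier(SVC(C=1, gamma=0.00154)).fit(X, y)
# One vs. rest: k classifiers, each separating one class from all others.
ovr = OneVsRestClassifier(SVC(C=1, gamma=0.00154)).fit(X, y)

print(len(ovo.estimators_))  # 3 pairwise classifiers for 3 classes
print(len(ovr.estimators_))  # 3 one-vs-rest classifiers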

labeling data in SVM opencv c++

戏子无情 · Submitted on 2019-12-04 19:45:52
I'm trying to implement an SVM in OpenCV (C++) for features that I have extracted using SIFT. I have extracted features for 2 different objects (each object has features from 10 different images, which gives more than 3000 features per object in total), and I put those features in one YAML file per object. My problem is: I don't know how to label them. I need to label these two files (as I said, each file contains a matrix: 3260x128 for the first object, 3349x128 for the second). So please help me by showing how to label these files in order to train …
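The usual pattern is to stack both descriptor matrices and build a parallel label vector, one label per row. A NumPy sketch (random arrays stand in for the real YAML matrices; the matrix sizes are taken from the question):

```python
import numpy as np

# Stand-ins for the two loaded YAML matrices of SIFT descriptors.
feats_obj1 = np.random.rand(3260, 128).astype(np.float32)
feats_obj2 = np.random.rand(3349, 128).astype(np.float32)

# One training matrix, one label per row: 0 for object 1, 1 for object 2.
X = np.vstack([feats_obj1, feats_obj2])
y = np.concatenate([np.zeros(len(feats_obj1), dtype=np.int32),
                    np.ones(len(feats_obj2), dtype=np.int32)])

print(X.shape, y.shape)  # (6609, 128) (6609,)
```

In OpenCV C++ the same idea applies: a `Mat` of all rows plus a single-column `Mat` of integer labels passed to the SVM's train call.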

Find right features in multiclass svm without PCA

吃可爱长大的小学妹 · Submitted on 2019-12-04 19:15:35
I'm classifying users with a multiclass SVM (one-against-one), 3 classes. In the binary case, I would be able to plot the distribution of the weight of each feature in the hyperplane equation for different training sets. In that case, I don't really need a PCA to see the stability of the hyperplane and the relative importance of the features (reduced and centered, by the way). What would the alternative be in a multiclass SVM, since for each training set you have 3 classifiers and you choose one class according to the result of the three classifiers (what is it again? the class that appears the maximum number of times, or the …
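One option, sketched below under the assumption of a linear kernel: a one-vs-one linear SVM exposes one weight vector per pairwise classifier, so feature weights can be inspected per class pair rather than for a single hyperplane (iris stands in for the user data):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 3 classes, 4 features

clf = SVC(kernel="linear").fit(X, y)
# One row per pairwise classifier: 3 classes -> 3*(3-1)/2 = 3 hyperplanes.
print(clf.coef_.shape)  # (3, 4)
for i, w in enumerate(clf.coef_):
    print(f"classifier {i} weights: {w}")
```

Plotting each row's weights across different training sets gives the multiclass analogue of the binary stability plot described above.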

How to import a trained SVM detector in OpenCV 2.4.13

流过昼夜 · Submitted on 2019-12-04 18:11:30
So I have followed this guide to train my own pedestrian HOG detector: https://github.com/DaHoC/trainHOG/wiki/trainHOG-Tutorial

It was successful, with 4 files generated: cvHOGClassifier.yaml, descriptorvector.dat, features.dat, svmlightmodel.dat.

Does anyone know how to load the descriptorvector.dat file as a vector? I've tried this, but it failed:

vector<float> detector;
std::ifstream file;
file.open("descriptorvector.dat");
file >> detector;
file.close();

This is something I would like to use eventually:

gpu::HOGDescriptor hog(Size(64, 128), Size(16, 16), Size(8, 8), Size(8, 8), 9);
hog …
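The quoted C++ fails because `operator>>` has no overload for `vector<float>`; the file has to be read token by token. A Python sketch of the same parsing logic, assuming (as trainHOG's output roughly is) a plain-text file of whitespace-separated numbers; the demo file below is created on the fly since the real descriptorvector.dat isn't available here:

```python
import os
import tempfile

def load_descriptor(path):
    """Read a whitespace-separated text file into a flat list of floats."""
    with open(path) as f:
        return [float(tok) for tok in f.read().split()]

# Demo with a throwaway file standing in for descriptorvector.dat.
tmp = tempfile.NamedTemporaryFile("w", suffix=".dat", delete=False)
tmp.write("0.1 -0.2 0.3\n0.4")
tmp.close()
detector = load_descriptor(tmp.name)
os.unlink(tmp.name)
print(detector)  # [0.1, -0.2, 0.3, 0.4]
```

The C++ equivalent is a loop of `float v; while (file >> v) detector.push_back(v);` into the `vector<float>`.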

Nonlinear Support Vector Machine (SVM)

左心房为你撑大大i · Submitted on 2019-12-04 17:32:18
For datasets that are not linearly separable, we introduce kernels (see: kernel methods, the kernel trick, and kernel functions). The linear SVM algorithm is as follows: [formula image missing from the original post]. To convert a linear SVM into a nonlinear SVM, one only needs to replace … with a kernel function: [formula image missing]. The nonlinear SVM algorithm is as follows: [formula image missing]. Source: https://www.cnblogs.com/hichens/p/11875506.html
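The substitution described above can be made concrete in code: compute a kernel matrix explicitly and hand it to an SVM in place of raw inner products. A scikit-learn sketch (the toy dataset and gamma value are illustrative):

```python
from sklearn.datasets import make_circles
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# A linearly inseparable dataset: two concentric rings.
X, y = make_circles(n_samples=200, noise=0.05, factor=0.5, random_state=0)

# K[i, j] = exp(-gamma * ||x_i - x_j||^2): the kernel replaces the dot product.
K = rbf_kernel(X, X, gamma=1.0)
clf = SVC(kernel="precomputed").fit(K, y)  # same algorithm, kernel swapped in
print(clf.score(K, y))
```

This is exactly the kernel trick: the optimization problem is unchanged, only the similarity function between points differs.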

support vector machines - a simple explanation?

做~自己de王妃 · Submitted on 2019-12-04 16:39:56
Question: So, I'm trying to understand how the SVM algorithm works, but I just cannot figure out how you transform some datasets into points in an n-dimensional space that would have a mathematical meaning, in order to separate the points with a hyperplane and classify them. There's an example here: they are trying to classify pictures of tigers and elephants, and they say "We digitize them into 100x100 pixel images, so we have x in n-dimensional plane, where n=10,000", but my question is how do they transform …
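The transformation in that example is just flattening: each pixel becomes one coordinate, so a 100x100 image becomes a single point in a 10,000-dimensional space. A sketch (random pixels stand in for an actual tiger or elephant photo):

```python
import numpy as np

# A grayscale 100x100 "image" of 8-bit pixel intensities.
image = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)

# Flatten row by row: the image is now one point in R^10000.
x = image.reshape(-1)
print(x.shape)  # (10000,)
```

A dataset of many such images becomes a matrix with one 10,000-dimensional row per image, which is what the SVM's hyperplane then separates.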

Opencv: Train SVM with FAST keypoints and BRIEF features

坚强是说给别人听的谎言 · Submitted on 2019-12-04 16:15:33
I want to train an SVM for object detection. At this point I have a Python script which detects FAST keypoints and extracts BRIEF features at those locations. Now I don't know how to use these descriptors to train an SVM. Would you please tell me: How do I use the descriptors to train the SVM (as far as I know, these descriptors should be my training data)? What are labels used for, and how can I get them?

To train an SVM you need a matrix X with your features and a vector y with your labels. It should look like this for 3 images and two features:

>>> from sklearn import svm
>>> X = [[0, 0], …
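One wrinkle the excerpt glosses over: each image yields a *variable* number of BRIEF descriptors, while the SVM needs one fixed-length row per image. A crude way to bridge that gap is mean-pooling the descriptors per image (a simplified stand-in for a proper bag-of-visual-words pipeline; the random arrays below stand in for real 32-byte BRIEF descriptors):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Three "images" with differing numbers of 32-dimensional descriptors.
descriptor_sets = [rng.random((n, 32)) for n in (50, 80, 65)]
labels = [0, 1, 1]  # e.g. 0 = background, 1 = object of interest

# Mean-pool each image's descriptors into one fixed-length feature row.
X = np.array([d.mean(axis=0) for d in descriptor_sets])
y = np.array(labels)

clf = SVC(kernel="linear").fit(X, y)
print(X.shape)  # (3, 32): one row per image
```

The labels answer the second question: they are the ground-truth class of each image, assigned by you when building the training set.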

Can I get a list of wrong predictions in SVM score function in scikit-learn?

↘锁芯ラ · Submitted on 2019-12-04 15:11:47
We can use svm.SVC.score() to evaluate the accuracy of the SVM model. I want to get the predicted class and the actual class in case of wrong predictions. How can I achieve this in scikit-learn?

The simplest approach is just to iterate over your predictions (and correct classifications) and do whatever you want with the output (in the following example I will just print it to stdout). Let's assume that your data is in inputs and labels, and your trained SVM is in clf; then you can just do:

predictions = clf.predict(inputs)
for input, prediction, label in zip(inputs, predictions, labels):
    if …
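A complete, runnable version of that pattern (the iris dataset stands in for the asker's data; the `inputs`/`labels`/`clf` names from the answer are kept):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

inputs, labels = load_iris(return_X_y=True)
clf = SVC().fit(inputs, labels)

# Collect every (predicted, actual) pair where the model was wrong.
predictions = clf.predict(inputs)
wrong = [(p, t) for p, t in zip(predictions, labels) if p != t]
for predicted, actual in wrong:
    print(f"predicted {predicted}, actual {actual}")
```

Shadowing the builtin `input` (as the original snippet does) is worth avoiding, hence the tuple comprehension here.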