svm

Get recall (sensitivity) and precision (PPV) values of a multi-class problem in PyML

你离开我真会死。 Submitted on 2019-12-06 13:55:55
Question: I am using PyML for SVM classification. However, I noticed that when I evaluate a multi-class classifier using leave-one-out (LOO) cross-validation, the results object does not report the sensitivity and PPV values; instead they are 0.0:

```python
from PyML import *
from PyML.classifiers import multi

mc = multi.OneAgainstRest(SVM())
data = VectorDataSet('iris.data', labelsColumn=-1)
result = mc.loo(data)

result.getSuccessRate()   # 0.95333333333333337
result.getPPV()           # 0.0
result.getSensitivity()   # 0.0
```

I have looked at the code
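Since the excerpt breaks off mid-thought, here is a hedged workaround: compute per-class PPV (precision) and sensitivity (recall) yourself from lists of true and predicted labels. This sketch is plain Python and independent of PyML; `y_true` and `y_pred` are made-up placeholders (with PyML you would fill them from the result object, whose exact accessors vary by version).

```python
def per_class_metrics(y_true, y_pred):
    """Return {class: (PPV, sensitivity)} computed one-vs-rest per class."""
    metrics = {}
    for cls in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        ppv = tp / (tp + fp) if tp + fp else 0.0          # precision
        sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall
        metrics[cls] = (ppv, sensitivity)
    return metrics

# Placeholder labels standing in for LOO output on the iris classes:
y_true = ['setosa', 'setosa', 'versicolor', 'virginica', 'virginica']
y_pred = ['setosa', 'versicolor', 'versicolor', 'virginica', 'virginica']
print(per_class_metrics(y_true, y_pred))
```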

Weird SVM prediction performance in scikit-learn (LIBSVM)

半腔热情 Submitted on 2019-12-06 13:21:45
I am using SVC from scikit-learn on a large dataset of 10000x1000 (10000 objects with 1000 features). I have already seen in other sources that LIBSVM doesn't scale well beyond ~10000 objects, and I indeed observe this:

- training time for 10000 objects: 18.9 s
- training time for 12000 objects: 44.2 s
- training time for 14000 objects: 92.7 s

You can imagine what happens when I try 80000. However, what I found very surprising is that the SVM's predict() takes even more time than the training fit():

- prediction time for 10000 objects (model was also trained on those objects): 49.0 s
- prediction
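A likely explanation (worth verifying on your own data): with an RBF kernel, the cost of predict() scales with the number of support vectors times the number of features, and on hard, high-dimensional data most training points can end up as support vectors. A minimal sketch, using small synthetic data in place of the 10000x1000 set, that checks the support-vector count and shows the linear alternative, which avoids kernel evaluations at predict time:

```python
from sklearn.svm import SVC, LinearSVC
from sklearn.datasets import make_classification

# Synthetic stand-in for the large dataset described above.
X, y = make_classification(n_samples=2000, n_features=50, random_state=0)

clf = SVC(kernel='rbf').fit(X, y)
# Each support vector adds one kernel evaluation per feature at predict time.
print('support vectors:', clf.n_support_.sum())

# A linear model predicts with a single dot product per sample instead:
lin = LinearSVC(dual=False).fit(X, y)
print('linear training accuracy:', lin.score(X, y))
```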

Can I get a list of wrong predictions in SVM score function in scikit-learn?

纵饮孤独 Submitted on 2019-12-06 12:49:28
Question: We can use svm.SVC.score() to evaluate the accuracy of an SVM model. I want to get the predicted class and the actual class for the wrong predictions. How can I achieve this in scikit-learn?

Answer 1: The simplest approach is just to iterate over your predictions (and correct classifications) and do whatever you want with the output (in the following example I will just print it to stdout). Let's assume that your data is in inputs and labels, and your trained SVM is in clf; then you can just do
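The iteration the answer describes can be sketched like this; the iris data and a freshly trained SVC stand in for the asker's inputs, labels, and clf:

```python
from sklearn import svm, datasets

# Stand-ins for the asker's variables so the snippet is self-contained.
iris = datasets.load_iris()
inputs, labels = iris.data, iris.target
clf = svm.SVC().fit(inputs, labels)

# Predict everything once, then collect (index, actual, predicted) mismatches.
predictions = clf.predict(inputs)
wrong = [(i, actual, predicted)
         for i, (actual, predicted) in enumerate(zip(labels, predictions))
         if actual != predicted]

for i, actual, predicted in wrong:
    print(f'sample {i}: predicted {predicted}, actual {actual}')
```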

Opencv: Train SVM with FAST keypoints and BRIEF features

戏子无情 Submitted on 2019-12-06 11:34:42
Question: I want to train an SVM for object detection. At this point I have a Python script which detects FAST keypoints and extracts BRIEF features at those locations. Now I don't know how to use these descriptors to train an SVM. Would you please tell me: How do I use the descriptors to train the SVM (as far as I know, these descriptors should be my training data)? What are the labels used for, and how can I get them?

Answer 1: To train an SVM you need a matrix X with your features and a vector y with your labels.
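A minimal sketch of that X / y layout, with random integers standing in for real BRIEF descriptors (one row per descriptor; label 1 for patches from object images, 0 for background patches):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data: a 32-byte BRIEF descriptor per keypoint.
pos = rng.integers(0, 256, size=(50, 32))   # descriptors from object images
neg = rng.integers(0, 256, size=(50, 32))   # descriptors from background images

X = np.vstack([pos, neg]).astype(np.float64)      # one descriptor per row
y = np.concatenate([np.ones(50), np.zeros(50)])   # 1 = object, 0 = background

clf = SVC(kernel='linear').fit(X, y)
print(X.shape, y.shape)  # (100, 32) (100,)
```

With real descriptors you would replace pos/neg by the BRIEF output from labeled positive and negative training images; the labels simply tell the SVM which rows belong to which class.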

Find confidence of prediction in SVM

北慕城南 Submitted on 2019-12-06 09:45:04
Question: I am doing English digit classification using the SVM classifier in OpenCV. I am able to predict the classes using the predict() function, but I want to get the confidence of a prediction between 0 and 1. Can somebody provide a method to do this using OpenCV?

```cpp
// SVM parameters used
m_params.svm_type = CvSVM::C_SVC;
m_params.kernel_type = CvSVM::RBF;
m_params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 500, 1e-8);

// for training
svmob.train_auto(m_features, m_labels, cv::Mat(), cv::Mat(), m_params, 10);

// for
```
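One common heuristic (the idea behind Platt scaling): take the SVM's signed decision value and squash it through a sigmoid to get a number in (0, 1). In the OpenCV 2.x C++ API, CvSVM::predict can return that raw decision value when called with returnDFVal=true; the sketch below uses scikit-learn's decision_function as a stand-in for it:

```python
import math
from sklearn import svm, datasets

# Binary subset of iris standing in for a two-class digit problem.
X, y = datasets.load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]
clf = svm.SVC(kernel='rbf').fit(X, y)

df = clf.decision_function(X[:1])[0]      # signed distance to the margin
confidence = 1.0 / (1.0 + math.exp(-df))  # sigmoid squash into (0, 1)
print(confidence)
```

Note this is only a monotone rescaling, not a calibrated probability; for calibrated values you would fit the sigmoid's parameters on held-out data (Platt scaling proper).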

SVM for Text Mining using scikit

拜拜、爱过 Submitted on 2019-12-06 09:22:37
Can someone share a code snippet that shows how to use SVM for text mining with scikit-learn? I have seen an example of SVM on numerical data, but I am not quite sure how to deal with text. I looked at http://scikit-learn.org/stable/auto_examples/document_classification_20newsgroups.html but couldn't find SVM. In text-mining problems, text is represented by numeric values. Each feature represents a word, and the values are binary numbers. That gives a matrix with lots of zeros and a few 1s, which mean that the corresponding words exist in the text. Words can be given weights according to their frequency
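A minimal scikit-learn sketch of the approach described above: vectorize the text into a (mostly zero) term matrix, weighting words by frequency, then fit a linear SVM on it. The toy corpus and labels are made up:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Made-up toy corpus: two spam-like and two ham-like documents.
texts = ["cheap pills buy now", "meeting at noon today",
         "buy cheap meds", "lunch meeting rescheduled"]
labels = ["spam", "ham", "spam", "ham"]

# TfidfVectorizer builds the sparse word matrix; LinearSVC classifies it.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["buy now"]))
```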

What's the difference between ANN, SVM and KNN classifiers?

牧云@^-^@ Submitted on 2019-12-06 08:59:47
I know this is a very general question without specifics about my actual project, but my question is: I am doing remote-sensing image classification. I am using the object-oriented method: first I segment the image into different regions, then I extract features from the regions such as color, shape, and texture. The number of features in a region may be 30, and commonly there are 2000 regions in all; I will choose 5 classes with 15 samples for every class. In summary: sample data 1530, test data 197530. How do I choose the proper classifier? If there are 3 classifiers (ANN, SVM, and KNN),
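In practice, a common way to answer "which classifier?" for a fixed feature set is to compare cross-validated accuracy on the labeled samples. A sketch, with MLPClassifier standing in for a small ANN and the iris data standing in for the region features:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder feature matrix; substitute your region features and labels.
X, y = load_iris(return_X_y=True)

for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("ANN", MLPClassifier(max_iter=2000, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold CV accuracy
    print(f"{name}: {scores.mean():.3f}")
```

With only 15 samples per class, KNN and SVM are usually safer starting points than an ANN, which tends to need more training data; but the honest answer is to measure, as above.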

How to speed up svm.predict?

前提是你 Submitted on 2019-12-06 08:59:40
Question: I'm writing a sliding window to extract features and feed them into CvSVM's predict function. However, I've stumbled upon the fact that svm.predict is relatively slow. Basically the window slides through the image with a fixed stride length, over a number of image scales. Traversing the image plus extracting features for each window takes around 1000 ms (1 s); including weak classifiers trained by AdaBoost raises that to around 1200 ms (1.2 s). However, when I pass the features
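One optimization that often helps here, independent of the feature-extraction cost: collect the feature vectors for all window positions first and call predict() once on the whole batch rather than once per window, since per-call overhead dominates for single samples. A sketch with random features standing in for the real window descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder training set: 200 labeled 64-dimensional feature vectors.
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 2, size=200)
clf = SVC(kernel='rbf').fit(X_train, y_train)

windows = rng.normal(size=(5000, 64))  # one row per sliding-window position

# Slow pattern: one call per window.
#   results = [clf.predict(w.reshape(1, -1)) for w in windows]
# Fast pattern: a single vectorized call over all windows.
preds = clf.predict(windows)
print(preds.shape)  # (5000,)
```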

OpenCV SVM train_auto Insufficient Memory

梦想的初衷 Submitted on 2019-12-06 06:15:51
This is my first post here so I hope to be able to ask my question properly :-) I want to do "elephant detection" by classifying color samples (I was inspired by this paper). This is the pipeline of my "solution" up to the training of the classifier:

1. Loading a set of 4 training images (all containing an elephant), and then splitting each into two images: one containing the environment surrounding the elephant (the "background"), and one containing the elephant (the "foreground");
2. Mean-shift segmentation of the backgrounds and the foregrounds;
3. RGB -> Luv color space conversion and pixel values

kernel matrix computation outside SVM training in kernlab

丶灬走出姿态 Submitted on 2019-12-06 06:14:31
Question: I was developing a new algorithm that generates a modified kernel matrix for training with an SVM and encountered a strange problem. For testing purposes I was comparing the SVM models learned using the kernelMatrix interface and the normal kernel interface. For example:

```r
# Model with kernelMatrix computation within ksvm
svp1 <- ksvm(x, y, type="C-svc", kernel=vanilladot(), scaled=F)

# Model with kernelMatrix computed outside ksvm
K <- kernelMatrix(vanilladot(), x)
svp2 <- ksvm(K, y, type="C-svc")
```
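For illustration, the same internal-vs-external kernel comparison translated into scikit-learn terms (kernlab's vanilladot is a plain dot-product kernel, and SVC(kernel='precomputed') plays the role of ksvm's kernelMatrix interface). Note ksvm scales data by default (hence scaled=F above), which is one common source of mismatches between the two code paths:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# Kernel computed internally by the SVM:
svp1 = SVC(kernel='linear').fit(X, y)

# Gram matrix computed outside the SVM, then passed in precomputed:
K = X @ X.T
svp2 = SVC(kernel='precomputed').fit(K, y)

# The two code paths should give essentially the same model:
print(np.mean(svp1.predict(X) == svp2.predict(K)))
```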