svm

R: using my own model in RFE (recursive feature elimination) to pick important features

Posted by 落花浮王杯 on 2019-12-13 21:01:13
Question: Using RFE, you can get an importance ranking of the features, but right now I can only use the models and parameters built into the package, such as lmFuncs (linear model) and rfFuncs (random forest). It seems that caretFuncs allows some custom settings for your own model and parameters, but I don't know the details, and the official documentation doesn't give them. I want to apply SVM and GBM to this RFE process, because these are the models I currently train with. Does anyone have any idea? Answer 1: I tried to recreate working…
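
As an illustration of the idea being asked about (wrapping RFE around an SVM), here is a minimal sketch using scikit-learn's RFE rather than caret's caretFuncs, so it is an analogue of the question's goal and not the caret answer itself; the iris data is just a placeholder:

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# RFE needs an estimator exposing coef_ or feature_importances_,
# so a linear-kernel SVM works here; an RBF-kernel SVM would not.
svm = SVC(kernel="linear")
rfe = RFE(estimator=svm, n_features_to_select=2, step=1)
rfe.fit(X, y)

print(rfe.ranking_)   # rank 1 = selected; larger ranks were eliminated earlier
print(rfe.support_)   # boolean mask of the kept features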

The accuracy rate of an SVM classifier varies across different machines

Posted by 蓝咒 on 2019-12-13 17:07:34
Question: I am using the LIBSVM library, a library for Support Vector Machines compatible with both Python and MATLAB, to perform classification in a digit recognition algorithm and also a face recognition algorithm. I am facing a very weird problem while performing SVM classification. The accuracy rate of both training and testing varies drastically when I run the program on different computers using the same code base, the same interpreter (Python in my case), and the same training and testing data. Here is…

Which support vectors are returned in multiclass SVM in SKLearn?

Posted by 两盒软妹~` on 2019-12-13 15:33:21
Question: By default, SKLearn uses a one-vs-one classification scheme when training SVMs in the multiclass case. I'm a bit confused as to which support vectors you're getting when you call attributes such as svm.n_support_ or svm.support_vectors_. For instance, in the case of the iris dataset, there are 3 classes, so there should be a total of 3*(3-1)/2 = 3 different SVM classifiers built. From which classifier are you getting the support vectors back? Answer 1: Update: dual_coef_ is the key, giving you the…
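
A short sketch of what those attributes contain for the three-class iris case described above (scikit-learn's SVC with its default one-vs-one scheme); the point is that the support vectors are stored pooled, not per sub-classifier:

from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="linear").fit(X, y)

# support_vectors_ pools the support vectors of all one-vs-one sub-classifiers,
# grouped by the class each vector belongs to; n_support_ gives the per-class counts.
print(clf.n_support_)              # one count per class (length 3 for iris)
print(clf.support_vectors_.shape)  # (total number of support vectors, n_features)

# dual_coef_ has shape (n_classes - 1, n_SV): each support vector's coefficients
# in the one-vs-one problems it takes part in.
print(clf.dual_coef_.shape)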

Want genuine suggestions for building a Support Vector Machine in Python without using Scikit-Learn [closed]

Posted by 点点圈 on 2019-12-13 11:29:54
Question: Closed. This question needs to be more focused and is not currently accepting answers. I know how to build a Support Vector Machine using Scikit-Learn, but now I want to build one from scratch in Python without using Scikit-Learn. Since I am confused and lack knowledge about the internal process, I would greatly appreciate any help working it out. Answer 1: …
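
Since the excerpt cuts off before the answer, here is one common from-scratch approach as a minimal sketch rather than a reference implementation: a linear soft-margin SVM trained by sub-gradient descent on the hinge loss, using only NumPy (hyperparameters and toy data are arbitrary):

import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=1000):
    # Minimal linear SVM: minimize lam * ||w||^2 + mean hinge loss.
    # X has shape (n_samples, n_features); y holds labels in {-1, +1}.
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                        # samples violating the margin
        grad_w = 2 * lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# toy usage on a linearly separable problem
X = np.array([[2.0, 2.0], [2.5, 3.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(predict(X, w, b))   # expected: [ 1.  1. -1. -1.]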

Python: ValueError: The number of classes has to be greater than one; got 1

Posted by 本秂侑毒 on 2019-12-13 09:35:40
Question: Following Tonechas' suggestion from this post, the code to compute the red-channel histogram of a set of images and then classify them into the correct type is this:

import cv2
import os
import glob
import numpy as np
from skimage import io

root = "C:/Users/joasa/data/train"
folders = ["Type_1", "Type_2", "Type_3"]
extension = "*.jpg"

# skip errors caused by corrupted files
def file_is_valid(filename):
    try:
        io.imread(filename)
        return True
    except:
        return False

def compute_red_histogram(root,
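
The error in the title is what scikit-learn's SVC.fit raises when the label vector it receives contains only a single class. A hedged sanity-check sketch, where X and y are hypothetical stand-ins for the feature matrix and Type labels built by the code above:

import numpy as np
from sklearn.svm import SVC

# X, y are hypothetical placeholders for the histograms and labels built above
classes, counts = np.unique(y, return_counts=True)
print(dict(zip(classes, counts)))   # check how many samples each Type actually got

if len(classes) < 2:
    raise ValueError("need samples from at least two Types before fitting the SVM")

clf = SVC(kernel="linear").fit(X, y)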

MATLAB predict function not working

Posted by 狂风中的少年 on 2019-12-13 09:07:36
Question: I am trying to train a linear SVM on data which has 100 dimensions. I have 80 instances for training. I train the SVM using the fitcsvm function in MATLAB and check it by calling predict on the training data. When I classify the training data with the SVM, all the data points are classified into only one class.

SVM = fitcsvm(votes, b, 'ClassNames', unique(b)');
predict(SVM, votes);

This gives outputs that are all 0's, which corresponds to the 0th class. b contains 1's and 0's indicating the class to…

OpenCV SVR auto_train error “Assertion failed (sv_count != 0) in do_train”

Posted by 左心房为你撑大大i on 2019-12-13 07:00:39
Question: I am using OpenCV SVM for regression, and it worked fine when I tuned the parameters manually. But when I use the grid-search method to optimize the parameters:
1. It kept giving me the "Assertion failed (sv_count != 0) in do_train" error when I used it on a large data set (about 10000 instances and 10 features).
2. I got good results when I tried the same code with a smaller data set (100 instances and 5 features).
Could someone please help me figure out what the problem could be? Thank you.
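
The excerpt ends before any answer, so the following is only an illustrative sketch of the manually tuned set-up the question says worked, written against the OpenCV Python bindings (the asker may be using the C++ API, and the parameter values and toy data are arbitrary placeholders); it is not a fix for the auto-training assertion:

import cv2
import numpy as np

# toy regression data; OpenCV's SVM expects float32 row samples
X = np.random.rand(100, 10).astype(np.float32)
y = X.sum(axis=1, keepdims=True).astype(np.float32)

svr = cv2.ml.SVM_create()
svr.setType(cv2.ml.SVM_EPS_SVR)   # epsilon-SVR for regression
svr.setKernel(cv2.ml.SVM_RBF)
svr.setC(1.0)                     # placeholder values chosen by hand
svr.setGamma(0.1)
svr.setP(0.1)                     # epsilon of the insensitive tube
svr.train(X, cv2.ml.ROW_SAMPLE, y)

_, pred = svr.predict(X)
print(pred[:5].ravel())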

Scaling test data to 0 and 1 using MinMaxScaler

Posted by こ雲淡風輕ζ on 2019-12-13 06:38:53
Question: Using the MinMaxScaler from sklearn, I scale my data as below.

min_max_scaler = preprocessing.MinMaxScaler()
X_train_scaled = min_max_scaler.fit_transform(features_train)
X_test_scaled = min_max_scaler.transform(features_test)

However, when printing X_test_scaled.min(), I get some negative values (the values do not fall between 0 and 1). This is due to the fact that the lowest value in my test data was lower than that in the train data, on which the min-max scaler was fit. How much effect does not…
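
The behaviour described is expected: transform on the test set reuses the minimum and maximum learned from the training set, so test values outside that range map outside [0, 1]. A small sketch, where the tiny arrays stand in for features_train / features_test, with clipping shown only as an option if hard [0, 1] bounds are required:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# hypothetical stand-ins for features_train / features_test from the question
features_train = np.array([[1.0], [5.0], [10.0]])
features_test = np.array([[0.0], [6.0], [12.0]])   # 0.0 lies below the training minimum

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(features_train)   # fit on the training data only
X_test_scaled = scaler.transform(features_test)         # reuses the training min/max

print(X_test_scaled.min())                       # negative: expected, not a bug
X_test_clipped = np.clip(X_test_scaled, 0, 1)    # only if strict [0, 1] bounds are needed

Newer scikit-learn releases also accept MinMaxScaler(clip=True) for the same clipping effect.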

LIBSVM training data format (x values in svm_node for svm_problem)

Posted by 那年仲夏 on 2019-12-13 06:27:08
Question: I am using LIBSVM to do a simple XOR classification programmatically, trying to understand how the functions work. I have set up the problem following the instructions in the README as closely as possible. Still, I get the wrong output when using svm_predict (always 1 or -1). In a related question somebody suggested that the problem might arise when using very few training examples. I tried increasing the number of examples to 20, but this did not help. I suspect that the problem is somewhere in…
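
Because the excerpt stops before the actual code, here is only a hedged sketch of the XOR set-up through LIBSVM's bundled Python wrapper (svmutil), where per-example dicts play the role of the svm_node arrays; the kernel and parameter choices are illustrative:

# depending on the installation this may be: from libsvm.svmutil import svm_train, svm_predict
from svmutil import svm_train, svm_predict

# XOR: feature indices start at 1; one sparse dict per training example
x = [{1: 0, 2: 0}, {1: 0, 2: 1}, {1: 1, 2: 0}, {1: 1, 2: 1}]
y = [-1, 1, 1, -1]

# XOR is not linearly separable, so a linear kernel will misclassify;
# an RBF kernel (-t 2) with a fairly large C and gamma fits these four points
model = svm_train(y, x, '-t 2 -c 100 -g 10')
labels, acc, vals = svm_predict(y, x, model)
print(labels)   # ideally [-1.0, 1.0, 1.0, -1.0]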

How to plot ROC and calculate AUC for a binary classifier with no probabilities (SVM)?

Posted by 流过昼夜 on 2019-12-13 06:14:09
Question: I have an SVM classifier (LinearSVC) outputting final classifications for every sample in the test set, something like 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1 and so on. The "truth" labels are also something like 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1. I would like to run that SVM with some parameters, generate points for the ROC curve, and calculate the AUC. I could do this by myself, but I am sure someone has done it before me for cases like this. Unfortunately, everything I can find is for cases where…
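
The usual route, sketched below: keep the hard 0/1 predictions for accuracy, but feed the continuous decision_function scores of LinearSVC into roc_curve. X_train, y_train, X_test and y_test are placeholders for the asker's data:

import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve
from sklearn.svm import LinearSVC

# X_train, y_train, X_test, y_test are placeholders for the actual data
clf = LinearSVC().fit(X_train, y_train)

# use the signed distance to the hyperplane as the score, not the 0/1 predictions
scores = clf.decision_function(X_test)

fpr, tpr, thresholds = roc_curve(y_test, scores)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label="AUC = %.3f" % roc_auc)
plt.plot([0, 1], [0, 1], linestyle="--")   # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()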