SVM

How to find important factors in support vector machine

时光毁灭记忆、已成空白 submitted on 2021-01-28 03:23:26
Question: The original data are large, so I cannot post them here. I use the package e1071 in R to do the support vector machine analysis. The original data have 100 factors and the prediction result is 1 or 0. For example, I generate a random data frame with 10 factors:

factor <- c()
for (i in 1:10) {
  factor <- c(factor, runif(10, 5, 10))
}
value <- matrix(factor, nrow = 10)
y <- sample(0:1, 10, replace = TRUE)
data <- as.data.frame(cbind(y, value))

I did the prediction part, but I wonder how to determine which…
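
One common way to get at feature importance, sketched below in Python with scikit-learn rather than e1071 (the data are random stand-ins for the real factors, and every name is illustrative): fit the SVM with a linear kernel and rank the factors by the absolute size of their entries in the weight vector w, since the decision function is w·x + b and a larger |w_i| means factor i moves the prediction more.

import numpy as np
from sklearn.svm import SVC

# Illustrative random data standing in for the real factors.
rng = np.random.default_rng(0)
X = rng.uniform(5, 10, size=(100, 10))   # 100 samples, 10 factors
y = rng.integers(0, 2, size=100)         # 0/1 outcome as in the question

clf = SVC(kernel="linear")
clf.fit(X, y)

# For a linear kernel the fitted weights are available directly; rank the
# factors by how large their weight is in absolute value.
weights = np.abs(clf.coef_).ravel()
for idx in np.argsort(weights)[::-1]:
    print(f"factor {idx}: |weight| = {weights[idx]:.3f}")

For a non-linear kernel there is no single weight per factor, so a permutation-style check (shuffle one factor and see how much accuracy drops) is the usual fallback.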

How to plot a Python 3-dimensional level set?

≯℡__Kan透↙ submitted on 2021-01-27 08:58:51
Question: I have some trouble plotting the image that is in my head. I want to visualize the kernel trick with Support Vector Machines, so I made some two-dimensional data consisting of two circles (an inner and an outer circle) that should be separated by a hyperplane. Obviously this isn't possible in two dimensions, so I transformed the data into 3D. Let n be the number of samples. Now I have an (n,3)-array X of data points (3 columns, n rows) and an (n,1)-array y with labels. Using sklearn I get the…
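
A minimal sketch of one way to draw that picture, assuming the third coordinate is the common lift z = x1² + x2² (the data below come from make_circles, not the asker's data): in the lifted space a linear SVM separates the two circles with a plane, which can be plotted as the zero level set of the decision function.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles in 2D, lifted to 3D by adding z = x1^2 + x2^2.
X2, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
X3 = np.column_stack([X2, (X2 ** 2).sum(axis=1)])

clf = SVC(kernel="linear").fit(X3, y)
w, b = clf.coef_[0], clf.intercept_[0]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(X3[:, 0], X3[:, 1], X3[:, 2], c=y)

# Separating plane w.x + b = 0, solved for the third coordinate.
gx, gy = np.meshgrid(np.linspace(-1.2, 1.2, 20), np.linspace(-1.2, 1.2, 20))
gz = -(w[0] * gx + w[1] * gy + b) / w[2]
ax.plot_surface(gx, gy, gz, alpha=0.3)
plt.show()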

SVM in R: “Predictor must be numeric or ordered.”

有些话、适合烂在心里 submitted on 2020-12-31 06:23:30
Question: I'm new to R and I've run into this problem: I want to compare two prediction techniques (Support Vector Machines and Neural Networks) by applying them to some data and comparing their performance. To do this, I use ROC curves. The code is supposed to compute the area under the ROC curve, but it is not working. The neural network code works fine, but when the SVM part executes I get this error:

> aucs <- auc((dtest$recid=="SI")*1, lr.pred)
Error in roc.default(response, predictor, auc…
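
That error text usually means the predictor handed to roc()/auc() is a factor of predicted class labels rather than a numeric score. The same point is sketched below in Python with scikit-learn (all data and names are invented), since the principle carries over: give the ROC routine numeric scores such as decision-function values, not the predicted classes.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

# Made-up binary classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)

labels = clf.predict(X_te)            # class labels: not what a ROC curve wants
scores = clf.decision_function(X_te)  # numeric scores: what the AUC needs

print("AUC from scores:", roc_auc_score(y_te, scores))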

Predict probabilities using SVM

眉间皱痕 submitted on 2020-12-29 07:14:29
Question: I wrote this code and wanted to obtain the classification probabilities:

from sklearn import svm

X = [[0, 0], [10, 10], [20, 30], [30, 30], [40, 30], [80, 60], [80, 50]]
y = [0, 1, 2, 3, 4, 5, 6]
clf = svm.SVC()
clf.probability = True
clf.fit(X, y)
prob = clf.predict_proba([[10, 10]])
print(prob)

I obtained this output:

[[0.15376986 0.07691205 0.15388546 0.15389275 0.15386348 0.15383004 0.15384636]]

which is very weird, because the probability should have been [0 1 0 0 0 0 0] (observe that the sample…
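
A short note on why this happens, with a sketch on the same data (the only change is passing probability to the constructor): SVC's probability estimates come from Platt scaling fit by internal cross-validation, and with a single sample per class there is essentially nothing to calibrate on, so predict_proba comes out near-uniform; predict, which uses the decision function directly, still returns the expected class.

from sklearn import svm

X = [[0, 0], [10, 10], [20, 30], [30, 30], [40, 30], [80, 60], [80, 50]]
y = [0, 1, 2, 3, 4, 5, 6]

# Request probability estimates up front instead of setting the attribute later.
clf = svm.SVC(probability=True)
clf.fit(X, y)

print(clf.predict([[10, 10]]))        # class 1, matching the identical training sample
print(clf.predict_proba([[10, 10]]))  # near-uniform: calibration saw only ~1 point per class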

Time Series Forecasting using Support Vector Machine (SVM) in R

余生长醉 submitted on 2020-12-01 07:25:25
Question: I've tried searching but couldn't find a specific answer to this question. So far I've learned that time series forecasting is possible using SVM. I've gone through a few papers/articles that perform it but don't include any code; instead they explain the algorithm (which I didn't quite understand), and some have done it using Python. My problem is this: I have company data (say univariate) of sales from 2010 to 2017, and I need to forecast the sales value for 2018 using…
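
One common recipe, sketched here in Python with scikit-learn's SVR and entirely made-up monthly sales figures: turn the univariate series into a supervised problem by using the previous k observations as features and the next one as the target, fit the SVM regressor on those lagged windows, and then roll the forecast forward one step at a time for 2018.

import numpy as np
from sklearn.svm import SVR

# Fake monthly sales for 2010-2017 (96 months); replace with the real series.
rng = np.random.default_rng(0)
sales = 100 + np.cumsum(rng.normal(1.0, 5.0, size=96))

k = 12  # the previous 12 months predict the next month
X = np.array([sales[i:i + k] for i in range(len(sales) - k)])
y = sales[k:]

model = SVR(kernel="rbf", C=100.0, epsilon=1.0)
model.fit(X, y)

# Recursive one-step-ahead forecast for the 12 months of "2018".
window = list(sales[-k:])
forecast = []
for _ in range(12):
    nxt = model.predict(np.array(window[-k:]).reshape(1, -1))[0]
    forecast.append(nxt)
    window.append(nxt)

print(np.round(forecast, 1))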