svm

python sklearn non linear svm penalty

Submitted by 给你一囗甜甜゛ on 2020-01-14 04:15:02
Question: I am using Python 2.7 with sklearn and sklearn.svm.SVC with an RBF kernel, and I am suffering from overfitting. I tried tuning C and gamma as explained here, and it did not do the trick. If I understand correctly, C and gamma are not l1 and l2 penalties: C is the penalty for classifying a sample wrongly, and gamma controls how far the influence of individual samples reaches. I am looking for something that will penalize the model for complexity, like l1 and l2. I want to use regularization and l1 or…
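
A minimal sketch of the two usual options, assuming a recent scikit-learn (older 0.1x releases expose GridSearchCV under sklearn.grid_search rather than sklearn.model_selection) and toy data generated on the spot: cross-validating C and gamma of the RBF SVC, and switching to LinearSVC if an explicit l1 penalty on the weights is what is really wanted.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC, LinearSVC

    # Toy stand-in for the asker's data
    X, y = make_classification(n_samples=300, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Option 1: shrink C and gamma via cross-validation; small C and small gamma
    # both act as regularization for the RBF SVC.
    param_grid = {"C": [0.01, 0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]}
    grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    grid.fit(X_train, y_train)
    print(grid.best_params_, grid.score(X_test, y_test))

    # Option 2: an explicit l1/l2 penalty on the weights exists only for the *linear* SVM.
    lin = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1)
    lin.fit(X_train, y_train)
    print(lin.score(X_test, y_test))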

Replication of scikit.svm.SVR.predict(X)

Submitted by 99封情书 on 2020-01-14 02:59:06
Question: I'm trying to replicate scikit-learn's svm.SVR.predict(X) and don't know how to do it correctly. The reason is that, after training the SVM with an RBF kernel, I would like to implement the prediction in another programming language (Java), so I need to be able to export the model's parameters and use them to predict unknown cases. On scikit-learn's documentation page I see that there are support_ and support_vectors_ attributes, but I don't understand how to replicate the…
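
A sketch of how the RBF prediction can be reproduced by hand from the fitted attributes (support_vectors_, dual_coef_, intercept_), with gamma fixed explicitly so the same constants are easy to export to Java; the data and names here are made up for illustration.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.RandomState(0)
    X = rng.rand(50, 3)
    y = np.sin(X[:, 0]) + 0.1 * rng.randn(50)

    gamma = 0.5                                   # fix gamma so it is trivial to export
    model = SVR(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)

    def rbf_predict(x_new, sv, dual_coef, intercept, gamma):
        # f(x) = sum_j dual_coef[j] * exp(-gamma * ||x - sv_j||^2) + intercept
        k = np.exp(-gamma * np.sum((sv - x_new) ** 2, axis=1))
        return float(np.dot(dual_coef, k) + intercept)

    x_new = np.array([0.2, 0.4, 0.6])
    print(rbf_predict(x_new, model.support_vectors_, model.dual_coef_[0],
                      model.intercept_[0], gamma))
    print(model.predict(x_new.reshape(1, -1))[0])  # the two numbers should agree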

OpenCV SVM train_auto Insufficient Memory

Submitted by 爱⌒轻易说出口 on 2020-01-13 19:34:15
Question: This is my first post here, so I hope I manage to ask my question properly :-) I want to do "elephant detection" by classifying color samples (I was inspired by this paper). This is the pipeline of my "solution" up to the training of the classifier: load a set of 4 training images (all containing an elephant) and split each of them into two images, one containing the environment surrounding the elephant (the "background") and one containing the elephant itself (the "foreground"); mean shift…
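
The excerpt cuts off before the error, but the usual cause of train_auto running out of memory in this kind of pipeline is feeding it every pixel as a color sample. A sketch of the subsampling fix, assuming OpenCV's Python bindings (cv2.ml, OpenCV 3.x+) rather than the CvSVM C++ API in the question, and with randomly generated stand-in data; the same subsampled arrays can be handed to the cross-validated auto-training routine where it is available.

    import cv2
    import numpy as np

    rng = np.random.RandomState(0)
    # Stand-ins for the (N, 3) BGR color samples taken from the foreground/background masks
    fg_samples = rng.randint(0, 256, size=(500000, 3))
    bg_samples = rng.randint(0, 256, size=(500000, 3))

    def subsample(a, n, rng):
        # Training on every pixel is what exhausts memory during the k-fold search;
        # a few tens of thousands of color samples per class is usually plenty.
        idx = rng.choice(len(a), size=min(n, len(a)), replace=False)
        return a[idx]

    fg = subsample(fg_samples, 20000, rng)
    bg = subsample(bg_samples, 20000, rng)

    samples = np.vstack([fg, bg]).astype(np.float32)   # SVM input must be 32-bit float
    labels = np.hstack([np.ones(len(fg)), -np.ones(len(bg))]).astype(np.int32)

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.setC(2.5)        # placeholder values; the auto-training search would pick these
    svm.setGamma(0.5)
    svm.train(samples, cv2.ml.ROW_SAMPLE, labels)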

Support Vector Machines (SVM) for Nonlinear Functions

Submitted by 陌路散爱 on 2020-01-13 00:30:54
The core idea is to map the data from the low-dimensional input space into a higher-dimensional one, chosen through an optimization problem, so that a nonlinear problem becomes linearly separable. Mathematically this is stated as a minimization problem subject to constraints, with γ weighting the penalty term; both formulas were inserted as images in the original post and are not reproduced here. The low-to-high-dimensional mapping is then illustrated with a small worked example, also given as images, in which points 1 and 2 belong to class C1 and points 3 and 4 to class C2. Through this construction a nonlinear problem is turned into a linear one, which is then handled exactly like the linear case. The function used to evaluate inner products in the high-dimensional space is called the kernel function; commonly used kernels are the Gaussian (RBF) kernel, the polynomial kernel, and the sigmoid kernel. (The original author apologizes that the formulas, edited in Word and pasted in as images, do not display well.) Source: CSDN. Author: wx-咸鱼. Link: https://blog.csdn.net/weixin_45885232/article/details/103949224
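
The formulas themselves did not survive extraction from the original post. As a point of reference only, and as an assumption about what the images showed rather than the author's exact notation, the standard soft-margin SVM primal is:

    \min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{N}\xi_{i}
    \quad\text{subject to}\quad y_{i}\bigl(w^{\top}\varphi(x_{i}) + b\bigr) \ge 1 - \xi_{i},\qquad \xi_{i}\ge 0,

and the kernels listed above evaluate the inner product in the high-dimensional feature space directly:

    K_{\text{RBF}}(x_i,x_j)=\exp\!\bigl(-\gamma\lVert x_i-x_j\rVert^{2}\bigr),\qquad
    K_{\text{poly}}(x_i,x_j)=(x_i^{\top}x_j + c)^{d},\qquad
    K_{\text{sigmoid}}(x_i,x_j)=\tanh\!\bigl(\kappa\, x_i^{\top}x_j + c\bigr).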

Plotting 3D Decision Boundary From Linear SVM

Submitted by 北慕城南 on 2020-01-12 16:26:11
Question: I've fit a 3-feature data set using sklearn.svm.SVC(). I can plot the point for each observation using matplotlib and Axes3D. I want to plot the decision boundary to see the fit. I've tried adapting the 2D examples for plotting the decision boundary, to no avail. I understand that clf.coef_ is a vector normal to the decision boundary. How can I plot this to see where it divides the points? Answer 1: Here is an example on a toy dataset. Note that plotting in 3D is funky with matplotlib. Sometimes…
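
A minimal sketch of one way to do this (not necessarily the original answer's code): since clf.coef_ is the normal vector w and clf.intercept_ the offset b, the boundary is the plane w·x + b = 0, so z can be solved for over a grid of (x, y) values and drawn with plot_surface. Toy data is generated on the spot.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the "3d" projection on older matplotlib
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_features=3, n_informative=3, n_redundant=0, random_state=0)
    clf = SVC(kernel="linear").fit(X, y)

    # The boundary is the plane w.x + b = 0; solve for z over a grid of (x, y)
    w, b = clf.coef_[0], clf.intercept_[0]
    xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 20),
                         np.linspace(X[:, 1].min(), X[:, 1].max(), 20))
    zz = (-b - w[0] * xx - w[1] * yy) / w[2]

    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y)
    ax.plot_surface(xx, yy, zz, alpha=0.3)
    plt.show()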

text classification methods? SVM and decision tree

Submitted by 爱⌒轻易说出口 on 2020-01-12 03:32:08
Question: I have a training set and I want to use a classification method for classifying other documents according to it. My documents are news articles and the categories are sports, politics, economics and so on. I understand Naive Bayes and KNN completely, but SVM and decision trees are vague to me, and I don't know whether I can implement these methods myself or whether there are existing tools for them. What is the best method I can use for classifying docs in this way? Thanks! Answer 1: Naive Bayes. Though this…
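
For the scikit-learn route, a linear SVM on tf-idf features is a common baseline for this kind of news categorization. A minimal sketch with made-up documents and labels (everything here is illustrative, not the asker's corpus):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny made-up training set; in practice the docs and labels come from the corpus
    docs = [
        "the team won the championship final last night",
        "stock markets fell sharply amid inflation fears",
        "parliament passed the new election law today",
        "the striker scored twice in the second half",
    ]
    labels = ["sports", "economic", "politics", "sports"]

    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(docs, labels)
    print(clf.predict(["the central bank raised interest rates"]))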

How to train an SVM with opencv based on a set of images?

Submitted by 那年仲夏 on 2020-01-12 02:05:43
Question: I have a folder of positive images and another of negative images in JPG format, and I want to train an SVM based on those images. I've done the following but I receive an error: Mat classes = new Mat(); Mat trainingData = new Mat(); Mat trainingImages = new Mat(); Mat trainingLabels = new Mat(); CvSVM clasificador; for (File file : new File(path + "positives/").listFiles()) { Mat img = Highgui.imread(file.getAbsolutePath()); img.reshape(1, 1); trainingImages.push_back(img); trainingLabels.push…
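
Two things commonly go wrong with code like the above: reshape() returns a new Mat rather than modifying img in place, and the training matrix and labels must be converted to the numeric types the SVM expects (32-bit float samples, integer labels). A sketch of the same idea using OpenCV's Python bindings (cv2.ml, OpenCV 3.x+) instead of the question's Java/CvSVM API; the folder layout and image size are assumptions.

    import glob
    import cv2
    import numpy as np

    samples, labels = [], []
    for label, folder in ((1, "positives"), (-1, "negatives")):   # hypothetical folders of JPGs
        for path in glob.glob(folder + "/*.jpg"):
            img = cv2.imread(path)
            img = cv2.resize(img, (64, 64))      # every sample must flatten to the same length
            samples.append(img.reshape(1, -1))   # reshape returns the flattened copy; keep it
            labels.append(label)

    samples = np.vstack(samples).astype(np.float32)   # SVM training data must be CV_32F
    labels = np.array(labels, dtype=np.int32)         # class labels as integers

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
    svm.save("classifier.xml")                        # reload later with cv2.ml.SVM_load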

Machine Learning: Applying SVM

Submitted by 烈酒焚心 on 2020-01-12 01:37:27
This post focuses on the practical, algorithmic side; the underlying theory is not covered in much depth, and interested readers can consult other references and articles on their own.

1. What is an SVM?
A Support Vector Machine (SVM) is a generalized linear classifier that performs binary classification of data in a supervised-learning fashion; its decision boundary is the maximum-margin hyperplane obtained from the training samples.

1.1 Support vectors and the hyperplane
Before looking at the SVM algorithm itself, we first need to understand the notion of a linear classifier. Suppose we are given a set of data samples, each with a corresponding label. To keep the description intuitive we use the two-dimensional plane for the explanation; the principle is the same in higher-dimensional spaces. For example, take a two-dimensional, linearly separable data set (illustrated by a figure in the original post that is not included here). We want to find a line, called a hyperplane, that separates the two groups of data. This line could be H1, H2 or H3 in the figure, but which line is best, that is, which one achieves the best classification? It is the one that makes the space between the two classes as large as possible, i.e. the hyperplane whose gap between the two classes is widest. In the two-dimensional plane this hyperplane is just a straight line, in three-dimensional space it is a plane, and so on in higher dimensions; we therefore refer to the decision boundary that separates the data generically as a hyperplane. The points closest to this hyperplane are called support vectors, and the distance from a point to the hyperplane is called the margin…
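
Since the rest of this listing is scikit-learn-centric, here is a small sketch (toy blobs, illustrative parameter values) showing the objects the passage describes: after fitting a linear SVC, support_vectors_ holds the points that pin down the hyperplane, and the width of the gap is 2/||w||.

    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X, y = make_blobs(n_samples=40, centers=2, random_state=6)
    clf = SVC(kernel="linear", C=1000).fit(X, y)   # a large C approximates the hard margin

    print(clf.support_vectors_)                    # the samples closest to the separating hyperplane
    w = clf.coef_[0]
    print("margin width:", 2 / np.linalg.norm(w))  # distance between the two margin lines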