pca

sklearn PCA.transform gives different results for different trials

醉酒当歌 submitted on 2019-12-19 04:55:36
Question: I am doing some PCA using sklearn.decomposition.PCA. I found that if the input matrix X is big, the results of PCA.transform from two different PCA instances will not be the same. For example, when X is a 100x200 matrix there is no problem, but when X is a 1000x200 or a 100x2000 matrix, the results of two different PCA instances differ. I am not sure what causes this: I assumed there are no random elements in sklearn's PCA solver? I am using sklearn version 0.18.1. with
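
A likely culprit (worth checking against the actual shapes and n_components used) is that the default svd_solver='auto', introduced in sklearn 0.18, can fall back to a randomized SVD for large inputs, and that solver is seeded differently per instance unless random_state is fixed. A minimal sketch of two deterministic alternatives, with made-up shapes and component counts:

    import numpy as np
    from sklearn.decomposition import PCA

    X = np.random.RandomState(0).rand(1000, 200)   # stand-in for the large input

    # Option 1: force the exact LAPACK solver -- no randomness at all.
    T1 = PCA(n_components=10, svd_solver='full').fit_transform(X)
    T2 = PCA(n_components=10, svd_solver='full').fit_transform(X)
    assert np.allclose(T1, T2)

    # Option 2: keep the fast randomized solver but pin its seed.
    T3 = PCA(n_components=10, svd_solver='randomized', random_state=0).fit_transform(X)
    T4 = PCA(n_components=10, svd_solver='randomized', random_state=0).fit_transform(X)
    assert np.allclose(T3, T4)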

SVM Visualization in MATLAB

十年热恋 submitted on 2019-12-19 01:27:09
Question: How do I visualize the SVM classification once I perform SVM training in Matlab? So far, I have only trained the SVM with:

    % Labels are -1 or 1
    groundTruth = Ytrain;
    d = xtrain;
    model = svmtrain(groundTruth, d);

Answer 1: If you are using LIBSVM, you can plot the classification results:

    % Labels are -1 or 1
    groundTruth = Ytrain;
    d = xtrain;
    figure
    % plot training data
    hold on;
    pos = find(groundTruth==1);
    scatter(d(pos,1), d(pos,2), 'r')
    pos = find(groundTruth==-1);
    scatter(d(pos,1), d(pos,2), 'b')
    %

OpenCV PCA Compute in Python

こ雲淡風輕ζ submitted on 2019-12-18 13:16:48
Question: I'm loading a set of test images via OpenCV (in Python) which are 128x128 in size, reshaping them into vectors (1, 128x128), and stacking them all into a matrix to calculate PCA. I'm using the new cv2 libraries... The code:

    import os
    import cv2 as cv
    import numpy as np

    matrix_test = None
    for image in os.listdir('path_to_dir'):
        imgraw = cv.imread(os.path.join('path_to_dir', image), 0)
        imgvector = imgraw.reshape(128*128)
        try:
            matrix_test = np.vstack((matrix_test, imgvector))
        except:
            matrix_test = imgvector
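
From here, one plausible next step (an assumption, not part of the original snippet; check the PCACompute signature in your cv2 build) is to hand the stacked matrix to OpenCV's PCA routines, which expect floating-point rows:

    # Hypothetical continuation: compute PCA over the stacked image vectors.
    data = matrix_test.astype(np.float32)
    mean, eigenvectors = cv.PCACompute(data, mean=None, maxComponents=10)

    # Project each image onto the top components (centering is handled via 'mean').
    projected = cv.PCAProject(data, mean, eigenvectors)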

Dimensionality reduction in Matlab

血红的双手。 submitted on 2019-12-18 09:08:49
Question: I want to reduce the dimension of data to ndim dimensions in MATLAB. I am using pcares to reduce the dimension, but the result (i.e. residuals, reconstructed) has the same dimensions as the data and not ndim. How can I project the residuals onto ndim dimensions only?

    [residuals,reconstructed] = pcares(X,ndim)

Sample code:

    MU = [0 0];
    SIGMA = [4/3 2/3; 2/3 4/3];
    X = mvnrnd(MU,SIGMA,1000);
    [residuals,reconstructed] = pcares(X,1)

Now I expect the residuals to have 1 dimension, i.e. the data X projected
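
For context, pcares reports residuals and reconstructions in the original coordinate system; a genuinely ndim-dimensional representation comes from the component scores instead (in MATLAB: [coeff,score] = pca(X); score(:,1:ndim)). A minimal NumPy sketch of that same projection, using the sample data above:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.multivariate_normal([0, 0], [[4/3, 2/3], [2/3, 4/3]], size=1000)

    Xc = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ndim = 1
    scores = Xc @ Vt[:ndim].T                     # 1000 x ndim: the reduced data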

PCA FactoMineR plot data

对着背影说爱祢 submitted on 2019-12-18 02:59:33
Question: I'm running an R script that generates plots of a PCA analysis using FactoMineR. I'd like to output the coordinates for the generated PCA plots, but I'm having trouble finding the right coordinates. I found results1$ind$coord and results1$var$coord, but neither looks like the default plot. I found http://www.statistik.tuwien.ac.at/public/filz/students/seminar/ws1011/hoffmann_ausarbeitung.pdf and http://factominer.free.fr/classical-methods/principal-components-analysis.html but neither describes

What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?

那年仲夏 submitted on 2019-12-17 18:33:40
Question: Suppose there is a matrix B whose size is 500x1000 double (here, 500 is the number of observations and 1000 is the number of features). sigma is the covariance matrix of B, and D is a diagonal matrix whose diagonal elements are the eigenvalues of sigma. Assume A holds the eigenvectors of the covariance matrix sigma. I have the following questions: I need to select the first k = 800 eigenvectors corresponding to the eigenvalues with the largest magnitude to rank the
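
A sketch of the selection step in NumPy follows; one caveat worth noting is that with only 500 observations the covariance of B has rank at most 499, so a top-800 selection necessarily includes (near-)zero eigenvalues:

    import numpy as np

    B = np.random.randn(500, 1000)                # stand-in for the data matrix
    sigma = np.cov(B, rowvar=False)               # 1000 x 1000 covariance
    eigvals, eigvecs = np.linalg.eigh(sigma)      # eigh: ascending order, symmetric input

    order = np.argsort(eigvals)[::-1]             # largest first (covariance eigenvalues are >= 0)
    k = 800
    A_k = eigvecs[:, order[:k]]                   # top-k eigenvectors as columns
    D_k = np.diag(eigvals[order[:k]])             # matching eigenvalues on the diagonal

    scores = (B - B.mean(axis=0)) @ A_k           # data expressed in the k selected directions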

How to implement ZCA Whitening? Python

你。 submitted on 2019-12-17 12:32:53
Question: I'm trying to implement ZCA whitening and found some articles about it, but they are a bit confusing... can someone shine a light on this for me? Any tip or help is appreciated! Here are the articles I read: http://courses.media.mit.edu/2010fall/mas622j/whiten.pdf http://bbabenko.tumblr.com/post/86756017649/learning-low-level-vision-feautres-in-10-lines-of I tried several things, but most of them I didn't understand and I got stuck at some step. Right now I have this as a base to start again: dtype = np
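
For orientation, here is a minimal ZCA-whitening sketch following the usual recipe in the linked notes (the epsilon and data shapes are illustrative assumptions):

    import numpy as np

    X = np.random.randn(200, 64)                  # rows = examples, cols = features
    Xc = X - X.mean(axis=0)                       # center each feature

    sigma = Xc.T @ Xc / Xc.shape[0]               # feature covariance
    U, S, _ = np.linalg.svd(sigma)                # sigma = U * diag(S) * U^T
    eps = 1e-5                                    # guards against tiny eigenvalues
    W_zca = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T

    X_white = Xc @ W_zca                          # ZCA-whitened data

Unlike plain PCA whitening, the trailing rotation back by U keeps X_white as close as possible to the original data, which is why ZCA is usually preferred for images.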

PCA: Principle Analysis and Matlab Implementation Methods (Part 3)

╄→гoц情女王★ submitted on 2019-12-16 05:05:14
Principal Component Analysis (PCA): Principle Analysis and Matlab Implementation Methods (Part 3)

[Please respect original work; credit the source when reposting] http://blog.csdn.net/guyuealian/article/details/68487833

There are many blog posts online analyzing the principle of PCA (principal component analysis). This post does not attempt a lengthy derivation of PCA theory; instead, it explains my understanding of PCA in the most concise terms possible and ends with three methods for computing PCA in Matlab, to make PCA easier to grasp. PS: all source code for this post can be found in the attachment, downloadable at http://download.csdn.net/detail/guyuealian/9799160

For articles on the principle of PCA, see:
[1] http://blog.csdn.net/guyuealian/article/details/68483384
[2] http://blog.csdn.net/guyuealian/article/details/68483213
[3] Zhang Zheng, "Mastering Digital Image Processing and Recognition with Matlab"

1. A brief explanation of the PCA principle

The PCA algorithm is mainly used for dimensionality reduction: it projects sample data from a high-dimensional space into a low-dimensional space while representing the original data in that low-dimensional space as faithfully as possible. The geometric meaning of PCA can be explained simply as follows: 0-dimensional PCA projects all sample information onto a single point, so differences between samples cannot be reflected; if a single point is to represent all the sample data as well as possible, that point must be the sample mean. 1-dimensional
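
The projection just described is only a few lines of linear algebra; as a generic NumPy illustration of the idea (this is not the post's Matlab code, which is in the linked download, and the shapes are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))            # 500 samples in a 10-D space

    Xc = X - X.mean(axis=0)                   # the 0-D "projection" is this mean itself
    C = np.cov(Xc, rowvar=False)              # sample covariance
    eigvals, eigvecs = np.linalg.eigh(C)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:2]]   # top-2 principal directions

    Z = Xc @ W                                # samples projected into a 2-D space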

How to make a PCoA with 95% confidence polygons using ggplot2 in R?

荒凉一梦 submitted on 2019-12-14 02:27:08
Question: I have a dataframe (a site-by-species matrix) that looks like this:

         SP1  SP2  SP3  SP4
    US     5    6    2    5
    US     5    6    2    5
    UK     5    6    2    5
    AUS    5    6    2    5

I'm trying to create a PCoA (Principal Coordinate Analysis) plot with 95% confidence polygons/ellipses using ggplot2. I have the code for base graphics, but I want it in ggplot2. I need to uniquely color-code each country, with each ellipse given the corresponding country's color, plus the legends.

    # My current code
    require(vegan)
    df <- as.matrix(df[,-1])
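
The PCoA computation itself (separate from the ggplot2 styling the question asks about) boils down to classical multidimensional scaling of a distance matrix. A hedged NumPy/SciPy sketch, assuming Bray-Curtis dissimilarity as is common for species matrices (vegan's vegdist default):

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    X = np.random.rand(12, 4)                     # stand-in site-by-species matrix
    D = squareform(pdist(X, metric='braycurtis')) # pairwise site dissimilarities

    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(G)
    order = np.argsort(eigvals)[::-1][:2]         # two largest eigenvalues
    coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
    # 'coords' holds the 2-D principal coordinates one would feed to a plot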