PCA Dimensionality Reduction

Submitted anonymously (unverified) on 2019-12-03 02:50:02

Question:

I am trying to perform PCA to reduce 900 dimensions down to 10. So far I have:

covariancex = cov(labels);
[V, d] = eigs(covariancex, 40);
pcatrain = (trainingData - repmat(mean(trainingData), 699, 1)) * V;
pcatest = (test - repmat(mean(trainingData), 225, 1)) * V;

Here labels is a 1x699 vector of character labels (1-26), trainingData is 699x900 (900-dimensional feature vectors for the images of 699 characters), and test is 225x900 (225 characters, each 900-dimensional).

Basically I want to reduce this down to 225x10, i.e. 10 dimensions, but I am kind of stuck at this point.

Answer 1:

The covariance is supposed to be computed from your trainingData:

X = bsxfun(@minus, trainingData, mean(trainingData, 1));   % center the training data
covariancex = (X' * X) ./ (size(X, 1) - 1);                % 900x900 sample covariance
[V, D] = eigs(covariancex, 10);                            % reduce to 10 dimensions
Xtest = bsxfun(@minus, test, mean(trainingData, 1));       % center the test data with the training mean
pcatest = Xtest * V;                                       % 225x10 projected test data
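The code above only projects the test set. As a small sketch (not part of the original answer), the reduced 699x10 training representation the question asks for can be obtained from the same centered X and eigenvector matrix V:

pcatrain = X * V;    % sketch: 699x10 projected training data
size(pcatrain)       % expected: [699 10]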


Answer 2:

From your code it seems like you are taking the covariance of the labels, not of the trainingData. The point of PCA is to find the N directions (N = 10 here) along which your data has the greatest variance and to project the data onto that N-dimensional subspace.

Your covariance matrix should be 900x900 (if 900 is the dimension of each image, presumably the result of 30x30 pixel images). The diagonal elements [i,i] of covariancex give the variance of pixel i across all training samples, and the off-diagonal elements [i,j] give the covariance between pixel i and pixel j. The matrix is symmetric, since [i,j] == [j,i].
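As a quick sketch (using the variable names from the question, and assuming the covariance is built from trainingData as in Answer 1), these properties are easy to verify:

X = bsxfun(@minus, trainingData, mean(trainingData, 1));   % center each pixel column
covariancex = (X' * X) ./ (size(X, 1) - 1);                % sample covariance
size(covariancex)                                          % expected: [900 900]
norm(covariancex - covariancex', 'fro')                    % ~0 up to round-off: symmetric
var_per_pixel = diag(covariancex);                         % per-pixel variances on the diagonal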

Furthermore, when calling eigs(covariancex, N), N should be 10 instead of 40 if you want to reduce the dimensionality to 10.
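A minimal sketch of that call, assuming covariancex is built from trainingData as above; the explained-variance line is an extra sanity check, not part of the original answer:

[V, D] = eigs(covariancex, 10);                              % V: 900x10, D: 10 largest eigenvalues on its diagonal
explained = sum(diag(D)) / trace(covariancex);               % fraction of total variance kept by the 10 components
pcatest = bsxfun(@minus, test, mean(trainingData, 1)) * V;   % 225x10, the shape the question asks for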


