How to use scikit-learn PCA for features reduction and know which features are discarded


The directions that your PCA object has determined during fitting are stored in pca.components_. The vector space orthogonal to the one spanned by pca.components_ is discarded.

Please note that PCA does not "discard" or "retain" any of your pre-defined features (encoded by the columns you specify). It mixes all of them (by weighted sums) to find orthogonal directions of maximum variance.
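To make this concrete, here is a minimal sketch (not from the original answer, using assumed toy data): each row of pca.components_ is a principal axis expressed as a weighted sum of all original features, so no single input column is kept or dropped.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 4))   # 100 samples, 4 pre-defined features

pca = PCA(n_components=2).fit(X)
print(pca.components_.shape)    # (2, 4): 2 retained directions
print(pca.components_)          # every row mixes all 4 original features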

If this is not the behaviour you are looking for, then PCA dimensionality reduction is not the way to go. For some simple, general feature selection methods, you can take a look at sklearn.feature_selection.
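For example, here is a minimal sketch (an assumed illustration, not part of the answer) of univariate selection with sklearn.feature_selection, which, unlike PCA, keeps or discards whole original columns:

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectKBest(f_classif, k=2).fit(X, y)
print(selector.get_support())   # boolean mask: which original features survive
X_reduced = selector.transform(X)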

Projecting the features onto the principal components retains the important information (the axes with maximum variance) and drops the axes with small variance. This behavior is more like compression than discarding.

X_proj would be a better name than X_new, because it is the projection of X onto the principal components.

You can then reconstruct X_rec as

X_rec = pca.inverse_transform(X_proj)  # X_proj is what was called X_new above

Here, X_rec is close to X, but the less important information has been dropped by PCA, so we can say that X_rec is a denoised version of X.
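A minimal sketch of this round trip (using assumed toy data): project X onto the principal components, reconstruct it, and check that the reconstruction error is small; the variance in the dropped directions is exactly what gets "denoised" away.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))

pca = PCA(n_components=3).fit(X)
X_proj = pca.transform(X)                 # the array called X_new above
X_rec = pca.inverse_transform(X_proj)
print(np.mean((X - X_rec) ** 2))          # small reconstruction error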

In my opinion, it is the noise that is discarded.

The answer marked above is incorrect. The sklearn documentation clearly states that the components_ array is sorted, so it cannot be used to identify the important features.

components_ : array, [n_components, n_features]
    Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.

http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
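A minimal sketch illustrating the quoted documentation (the scaling factors are an assumed construction to make the variances differ): the rows of components_ are ordered by explained_variance_, i.e. by direction importance, not by the order of the input features.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 4)) * np.array([10.0, 3.0, 1.0, 0.1])

pca = PCA().fit(X)
print(pca.explained_variance_)        # monotonically decreasing
print(pca.explained_variance_ratio_)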
