eigenvector

Python eigenvectors

Question: With eigenvalues, eigenvectors = linalg.eig(K), how can I print just len(K) eigenvectors? If K is a 2x2 matrix, it looks like I get 4 eigenvectors; how can I print just 2 of them when len(K) = 2? Many thanks.

Answer 1: You are getting two vectors of length two, not four vectors. For example:

In [1]: import numpy as np
In [2]: K = np.random.normal(size=(2, 2))
In [3]: eigenvalues, eigenvectors = np.linalg.eig(K)
In [4]: eigenvectors
Out[4]:
array([[ 0.83022467+0.j        ,  0.83022467+0.j        ],
       [ 0.09133956+0.54989461j,  0.09133956-0.54989461j]])
In [5]: eigenvectors.shape
Out[5]: (2, 2)

The first vector is the first column, eigenvectors[:, 0]: numpy stores one eigenvector per column, paired with the eigenvalue of the same index.
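To print exactly len(K) eigenvectors, iterate over the columns. A minimal sketch (the matrix here is made up for illustration):

import numpy as np

K = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(K)
for i in range(len(K)):
    # Column i of `eigenvectors` is the eigenvector for eigenvalues[i].
    print(eigenvalues[i], eigenvectors[:, i])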

Finding generalized eigenvectors numerically in Matlab

Question: I have a matrix such as this example (my actual matrices can be much larger):

A = [-1 -2 -0.5; 0 0.5 0; 0 0 -1];

It has only two linearly independent eigenvectors (the eigenvalue -1 is repeated), and I would like to obtain a complete basis of generalized eigenvectors. One way I know of is MATLAB's jordan function in the Symbolic Math Toolbox, but I'd prefer something designed for numeric inputs (indeed, with two outputs, jordan fails for large matrices: "Error in MuPAD command: Similarity matrix too large."). I don't need the Jordan canonical form itself, which is notoriously unstable in numerical work.
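Not a MATLAB answer, but the defining relations are easy to apply numerically. For this matrix the eigenvalue λ = -1 is defective: an ordinary eigenvector v satisfies (A - λI)v = 0, and a generalized eigenvector w satisfies (A - λI)w = v. A minimal numpy/scipy sketch of that chain-building solve (illustrative, not production-grade):

import numpy as np
from scipy.linalg import null_space

A = np.array([[-1.0, -2.0, -0.5],
              [ 0.0,  0.5,  0.0],
              [ 0.0,  0.0, -1.0]])
lam = -1.0                       # the repeated (defective) eigenvalue
B = A - lam * np.eye(3)

v = null_space(B)[:, 0]          # ordinary eigenvector for lam
# Solve (A - lam*I) w = v; for a rank-2 Jordan chain this has a solution.
w, *_ = np.linalg.lstsq(B, v, rcond=None)

print(np.allclose(B @ v, 0))     # True: v is an eigenvector
print(np.allclose(B @ w, v))     # True: w is a generalized eigenvector

For larger defective blocks the same solve is repeated up the chain; for production-sized problems a Schur-decomposition-based method is generally more robust than these raw solves.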

What is the difference between 'eig' and 'eigs'?

Question: I've searched a lot for this, but I can't find any answer on how the two methods eig and eigs differ. What is the difference between the eigenvalues and eigenvectors received from them?

Answer 1: They use different algorithms, tailored to different problems and different goals. eig is a good, fast, general-purpose eigenvalue/eigenvector solver. It is appropriate when your matrix is of a realistic size that fits well in memory and when you need all of the eigenvalues/vectors. eigs, by contrast, targets large sparse matrices: it is an iterative solver that computes only a requested number of eigenvalues/vectors.
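The same split exists in Python, which makes it easy to illustrate: numpy.linalg.eig is the dense all-at-once solver, while scipy.sparse.linalg.eigs wraps ARPACK, an iterative Krylov solver in the same spirit as MATLAB's eigs. A sketch (size and density are arbitrary):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigs

A = sparse_random(2000, 2000, density=0.01, format='csr', random_state=0)

# Dense route: materializes the matrix and computes ALL 2000 eigenvalues.
w_all = np.linalg.eigvals(A.toarray())

# Iterative route: only the 5 eigenvalues of largest magnitude, no dense copy.
w_few = eigs(A, k=5, which='LM', return_eigenvectors=False)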

Quickly and efficiently calculating an eigenvector for known eigenvalue

Question: Short version: what would be the optimal way of calculating an eigenvector for a matrix A if we already know the eigenvalue belonging to that eigenvector?

Longer explanation: I have a large stochastic matrix A which, because it is stochastic, has a non-negative left eigenvector x (such that A^T x = x). I'm looking for quick and efficient methods of numerically calculating this vector. (Preferably in MATLAB or numpy/scipy; since both wrap ARPACK/LAPACK, either one would be fine.) I know that 1 is the largest eigenvalue of A.
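Knowing the eigenvalue is exactly what inverse iteration exploits: factor A^T - σI once, with the shift σ placed just off the known eigenvalue, and each back-solve amplifies the wanted eigenvector. A numpy/scipy sketch with a made-up row-stochastic matrix (sizes and iteration count are illustrative):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.random((500, 500))
A /= A.sum(axis=1, keepdims=True)     # row-stochastic: each row sums to 1

sigma = 1.0 - 1e-8                    # shift just off the known eigenvalue 1
lu = lu_factor(A.T - sigma * np.eye(500))   # factor once, reuse every solve

x = np.ones(500)
for _ in range(20):
    # Near-singularity is the point: each solve blows up along the
    # eigenvector whose eigenvalue is closest to sigma.
    x = lu_solve(lu, x)
    x /= np.linalg.norm(x)

x = np.abs(x) / np.abs(x).sum()       # scale to a probability vector
print(np.allclose(A.T @ x, x))        # True: left eigenvector for eigenvalue 1

For sparse A, scipy.sparse.linalg.eigs(A.T, k=1, sigma=1.0) performs the same shift-invert trick internally.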

C++ eigenvalue/vector decomposition, only need first n vectors fast

Question: I have a ~3000x3000 covariance-like matrix on which I compute the eigenvalue/eigenvector decomposition (it's an OpenCV matrix, and I use cv::eigen() to get the job done). However, I actually only need, say, the first 30 eigenvalues/vectors, and I don't care about the rest. Theoretically, this should speed the computation up significantly, right? It means there are 2970 fewer eigenvectors to compute. Which C++ library will allow me to do that? Please note that OpenCV's …
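On the C++ side, Spectra (header-only, built on Eigen) and ARPACK++ are designed for exactly this top-k problem. The idea is easiest to sketch with SciPy's equivalent ARPACK binding, eigsh, which never touches the unwanted 2970 eigenpairs (the matrix here is a made-up stand-in):

import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
X = rng.random((3000, 300))
C = (X @ X.T) / X.shape[1]        # 3000x3000 symmetric, covariance-like

# Compute only the 30 algebraically largest eigenpairs.
vals, vecs = eigsh(C, k=30, which='LA')
vals, vecs = vals[::-1], vecs[:, ::-1]   # eigsh returns ascending; flip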

How to use the princomp() function in R when the covariance matrix has zeros?

Question: While using the princomp() function in R, the following error is encountered: "covariance matrix is not non-negative definite". I think this is due to some values being zero (actually close to zero, but becoming zero during rounding) in the covariance matrix. Is there a workaround to proceed with PCA when the covariance matrix contains zeros? [FYI: obtaining the covariance matrix is an intermediate step within the princomp() call. A data file to reproduce this error can be downloaded from here: http://tinyurl.com/6rtxrc3]

Answer 1: The first strategy might be to decrease the tolerance argument. It looks to me …
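The error means the estimated covariance matrix has eigenvalues that are numerically negative (round-off pushes exact zeros slightly below zero). A language-agnostic workaround is to clip those eigenvalues and rebuild the matrix before running PCA; a numpy sketch of the idea (names and data are illustrative):

import numpy as np

def nearest_psd(cov):
    # Symmetrize, then clip numerically negative eigenvalues to zero.
    cov = (cov + cov.T) / 2
    w, V = np.linalg.eigh(cov)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(0)
X = rng.random((20, 50))            # fewer rows than columns: rank-deficient
C = np.cov(X, rowvar=False)         # many eigenvalues at (or just below) zero
C_psd = nearest_psd(C)
print(np.linalg.eigvalsh(C_psd).min())   # ~0: no longer meaningfully negative

In R terms, the equivalent is to eigen-decompose the covariance matrix, zero out the negative eigenvalues, and reconstruct it, or to pass the raw data to prcomp, which works from the SVD of the data and avoids forming the covariance matrix at all.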

Eigenvectors computed with numpy's eigh and svd do not match

Question: Consider the singular value decomposition M = USV*. Then the eigenvalue decomposition of M*M gives M*M = VS*U*USV* = V(S*S)V*. I wish to verify this equality with numpy by showing that the eigenvectors returned by the eigh function are the same as those returned by the svd function:

import numpy as np
np.random.seed(42)

# create mean-centered data
A = np.random.randn(50, 20)
M = A - np.array(A.mean(0), ndmin=2)

# svd
U1, S1, V1 = np.linalg.svd(M)
S1 = np.square(S1)
V1 = V1.T

# eig
S2, V2 = np.linalg.eigh(np.dot(M.T, M))
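As written, the comparison fails for two benign reasons: eigh returns eigenvalues in ascending order while svd returns singular values in descending order, and each eigenvector is only determined up to sign. A sketch of the reconciliation, continuing the question's variables:

# Reorder eigh's ascending output into svd's descending convention.
idx = np.argsort(S2)[::-1]
S2, V2 = S2[idx], V2[:, idx]

# Eigenvectors are defined only up to sign: flip V2's columns to match V1.
signs = np.sign(np.einsum('ij,ij->j', V1, V2))
V2 = V2 * signs

print(np.allclose(S1, S2))   # True
print(np.allclose(V1, V2))   # True, barring repeated eigenvalues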

What is the fastest way to calculate first two principal components in R?

Question: I am using princomp in R to perform PCA. My data matrix is huge (10K x 10K, with each value up to 4 decimal points). It takes ~3.5 hours and ~6.5 GB of physical memory on a Xeon 2.27 GHz processor. Since I only want the first two components, is there a faster way to do this? Update: in addition to speed, is there a memory-efficient way to do it? It takes ~2 hours and ~6.3 GB of physical memory to calculate the first two components using svd(,2,).

Answer 1 (Dirk Eddelbuettel): You sometimes get access to so-called 'economical' decompositions which allow you to cap the number of eigenvalues / eigenvectors.
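That 'economical' route is a truncated SVD: compute only the top-2 singular triplets and never form the rest. In R this is what the irlba package provides; the same idea in numpy/scipy terms (sizes shrunk for illustration):

import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)
X = rng.random((1000, 500))          # stand-in for the 10K x 10K matrix
Xc = X - X.mean(axis=0)              # PCA needs column-centered data

U, s, Vt = svds(Xc, k=2)             # only the top-2 singular triplets
order = np.argsort(s)[::-1]          # svds returns ascending order; flip
scores = U[:, order] * s[order]      # first two principal component scores
loadings = Vt[order].T               # the corresponding principal axes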
