eigenvector

Finding smallest eigenvectors of large sparse matrix, over 100x slower in SciPy than in Octave

Submitted by 左心房为你撑大大i on 2020-06-13 19:11:21
Question: I am trying to compute a few (5-500) eigenvectors corresponding to the smallest eigenvalues of large symmetric square sparse matrices (up to 30000x30000) with less than 0.1% of the values being non-zero. I am currently using scipy.sparse.linalg.eigsh in shift-invert mode (sigma=0.0), which, from various posts on the topic, I gathered is the preferred solution. However, it takes up to an hour to solve the problem in most cases. On the other hand, the function is very fast if I ask for the largest …
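For reference, a minimal sketch of the shift-invert call described above (the matrix here is a small random stand-in for the questioner's data; its construction, size, and density are assumptions):

    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Small random symmetric sparse matrix standing in for the
    # questioner's 30000x30000 data.
    n = 2000
    A = sp.random(n, n, density=0.001, format="csc", random_state=0)
    A = (A + A.T) * 0.5  # symmetrize

    # Shift-invert around sigma=0.0: which="LM" then targets the
    # eigenvalues closest to sigma, i.e. the smallest-magnitude ones.
    vals, vecs = eigsh(A, k=5, sigma=0.0, which="LM")
    print(vals)

Shift-invert converges quickly per iteration but must factorize A - sigma*I up front, and on large sparse problems that factorization is often where most of the time goes.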

Eigendecomposition makes me wonder in numpy

Submitted by 丶灬走出姿态 on 2020-05-14 07:15:44
Question: I am testing the theorem that A = Q * Lambda * Q_inverse, where Q is the matrix of eigenvectors and Lambda is the diagonal matrix with the eigenvalues on the diagonal. My code is the following:

    import numpy as np
    from numpy import linalg as lg

    Eigenvalues, Eigenvectors = lg.eigh(np.array([[1, 3], [2, 5]]))
    Lambda = np.diag(Eigenvalues)
    Eigenvectors @ Lambda @ lg.inv(Eigenvectors)

which returns:

    array([[ 1.,  2.],
           [ 2.,  5.]])

Shouldn't the returned matrix be the same as the original one that was …
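The likely cause (an observation consistent with the output shown, not quoted from the thread): numpy.linalg.eigh assumes a Hermitian matrix and, with the default UPLO='L', reads only the lower triangle, so for the non-symmetric input [[1, 3], [2, 5]] it silently decomposes [[1, 2], [2, 5]], which is exactly the matrix that comes back. A minimal sketch contrasting eigh with the general-purpose eig:

    import numpy as np
    from numpy import linalg as lg

    A = np.array([[1, 3], [2, 5]])  # not symmetric

    # eigh reads only the lower triangle (UPLO='L'), so it
    # effectively decomposes [[1, 2], [2, 5]] instead of A.
    w, Q = lg.eigh(A)
    print(Q @ np.diag(w) @ lg.inv(Q))  # [[1., 2.], [2., 5.]]

    # eig handles general (non-symmetric) matrices.
    w, Q = lg.eig(A)
    print(Q @ np.diag(w) @ lg.inv(Q))  # [[1., 3.], [2., 5.]]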

Tracking eigenvectors of a 1-parameter family of matrices

Submitted by 自作多情 on 2020-01-15 08:54:09
Question: My problem is this: I am attempting a spectral decomposition of a random process via a (truncated) Karhunen-Loève transform, but my covariance matrix is actually a 1-parameter family of matrices, and I need a way to estimate/visualize how my random process depends on this parameter. To do this, I need a way to track the eigenvectors produced by numpy.linalg.eigh(). To give you an idea of my issue, here is a sample toy problem: suppose I have a set of points {xs} and a random process R with …
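One common approach to the tracking itself (a generic sketch, not taken from the thread): sweep the parameter in small steps and, at each step, reorder and re-sign the columns returned by eigh so that each one has maximal overlap with its predecessor. The callable cov(t) below is a hypothetical stand-in for the 1-parameter family of covariance matrices:

    import numpy as np

    def tracked_eigh(cov, ts):
        """Eigendecompose cov(t) for each t in ts, permuting and
        re-signing eigenvectors for continuity between steps."""
        prev, out = None, []
        for t in ts:
            w, V = np.linalg.eigh(cov(t))
            if prev is not None:
                # overlap[i, j] = |<old vector i, new vector j>|
                overlap = np.abs(prev.T @ V)
                # Greedy matching; assumes small parameter steps and
                # reasonably well-separated eigenvalues.
                order = np.argmax(overlap, axis=1)
                w, V = w[order], V[:, order]
                # Undo arbitrary sign flips, column by column.
                signs = np.sign(np.sum(prev * V, axis=0))
                V = V * signs
            out.append((w, V))
            prev = V
        return out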

How to use eig with the nobalance option as in MATLAB?

Submitted by 南楼画角 on 2020-01-14 08:49:21
Question: In MATLAB I can issue the command:

    [X,L] = eig(A,'nobalance');

in order to compute the eigenvalues without the balance option. What is the equivalent command in NumPy? When I run the NumPy version of eig, it does not produce the same result as the MATLAB result with nobalance turned on.
Answer 1: NumPy can't currently do this. As horchler said, there has been an open ticket for this for a while now. It is, however, possible to do it using external libraries. Here I write up how to do it using …
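For context (an illustration of what the option controls, not the external-library workaround the answer goes on to describe): 'nobalance' disables a diagonal similarity transform that LAPACK applies before the QR iteration. SciPy exposes that transform as scipy.linalg.matrix_balance, which makes it easy to see how balancing reshapes a given problem; the example matrix below is an assumption:

    import numpy as np
    from scipy.linalg import matrix_balance

    A = np.array([[1.0,  1e6],
                  [1e-6, 1.0]])  # badly scaled example

    # matrix_balance returns B and T with A = T @ B @ inv(T);
    # B is what a balancing eigensolver actually decomposes.
    B, T = matrix_balance(A)

    wB, VB = np.linalg.eig(B)
    VA = T @ VB  # map eigenvectors of B back to eigenvectors of A
    print(np.allclose(A @ VA, VA * wB))  # True: eigenvalues unchanged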

Compute eigenvector using a dominant eigenvalue

Submitted by 二次信任 on 2020-01-05 07:25:55
Question: I want to ask a question about eigenvector centrality. I have to compute an eigenvalue using power iteration. This is my code to compute the eigenvalue:

    v = rand(165,1);
    for k = 1:5
        w = data_table*v;
        lamda = norm(w);
        v = w/lamda;
    end

Having obtained a single eigenvalue, I am confused about how to compute the eigenvector score from it. For example, in my code I get the dominant eigenvalue = 78.50. With this eigenvalue score, I want to compute the eigenvector score. …
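In power iteration the eigenvector needs no separate computation: the iterate v itself converges to the dominant eigenvector, so the final v is the eigenvector score. A minimal Python sketch (the matrix is a placeholder; the norm-based eigenvalue estimate assumes the dominant eigenvalue is positive):

    import numpy as np

    def power_iteration(A, iters=100, seed=0):
        """Return the dominant eigenvalue and its eigenvector."""
        rng = np.random.default_rng(seed)
        v = rng.random(A.shape[0])
        for _ in range(iters):
            w = A @ v
            lam = np.linalg.norm(w)  # eigenvalue estimate
            v = w / lam              # converges to the dominant eigenvector
        return lam, v

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    lam, v = power_iteration(A)
    print(lam, v)  # ~3.618 and the corresponding unit eigenvector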

Rfast hd.eigen() returns NAs but base eigen() does not

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-01-05 04:12:12
Question: I am having problems with hd.eigen in Rfast. It gives results extremely close to eigen with most data, but sometimes hd.eigen returns an empty $vectors, NAs, or other undesirable results. For example:

    > set.seed(123)
    > bigm <- matrix(rnorm(2000*2000, mean = 0, sd = 3), 2000, 2000)
    >
    > e3 = eigen(bigm)
    > length(e3$values)
    [1] 2000
    > length(e3$vectors)
    [1] 4000000
    > sum(is.na(e3$vectors) == TRUE)
    [1] 0
    > sum(is.na(e3$vectors) == FALSE)
    [1] 4000000
    >
    > e4 = hd.eigen(bigm, vectors = TRUE)
    > length(e4 …

Unexpected eigenvectors in NumPy

Submitted by 狂风中的少年 on 2020-01-03 15:33:23
Question: I have seen this question, and it is relevant to my attempt to compute the dominant eigenvector in Python with NumPy. I am trying to compute the dominant eigenvector of an n x n matrix without having to get into too much heavy linear algebra. I did cursory research on determinants, eigenvalues, eigenvectors, and characteristic polynomials, but I would prefer to rely on the NumPy implementation for finding eigenvalues, as I believe it is more efficient than my own would be. The problem I …
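The usual pattern here (a generic sketch, not the thread's accepted answer) is to let np.linalg.eig compute all eigenpairs and then select the pair whose eigenvalue has the largest magnitude. A frequent source of "unexpected" results is that the eigenvectors are the columns, not the rows, of the returned array:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])  # placeholder matrix

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(np.abs(vals))  # index of largest-magnitude eigenvalue
    dominant_val = vals[k]
    dominant_vec = vecs[:, k]    # eigenvectors are the COLUMNS of vecs
    print(dominant_val, dominant_vec)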

How to use the princomp() function in R when the covariance matrix has zeros?

Submitted by 放肆的年华 on 2020-01-01 04:36:07
Question: While using the princomp() function in R, the following error is encountered: "covariance matrix is not non-negative definite". I think this is due to some values being zero (actually close to zero, but they become zero during rounding) in the covariance matrix. Is there a workaround to proceed with PCA when the covariance matrix contains zeros? [FYI: obtaining the covariance matrix is an intermediate step within the princomp() call. A data file to reproduce this error can be downloaded from here - …
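A generic remedy (illustrated in Python; not taken from the R thread): compute the PCA from an SVD of the centered data rather than from an eigendecomposition of the covariance matrix, since the SVD never forms the covariance matrix and so cannot trip over its rounded-to-zero eigenvalues. In R the analogous switch is from princomp() to the SVD-based prcomp(). A minimal sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    X[:, 4] = 0.0             # zero-variance column, mimicking the problem

    Xc = X - X.mean(axis=0)   # center the data

    # SVD-based PCA: no covariance matrix is ever formed.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained_var = s**2 / (len(X) - 1)  # covariance-matrix eigenvalues
    scores = Xc @ Vt.T                   # projected data (PC scores)
    print(explained_var)                 # last entry is exactly 0, no error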