matrix-decomposition

Cholesky decomposition failure for my correlation matrix

Submitted by 末鹿安然 on 2021-01-28 06:10:05

Question: I am trying to use chol() to find the Cholesky decomposition of the correlation matrix below. Is there a maximum size that function can handle? I ask because I get the following: d <- chol(corrMat) Error in chol.default(corrMat) : the leading minor of order 61 is not positive definite. Yet I can decompose a submatrix of fewer than 60 elements without a problem (even one that contains the 61st element of the original): > d <- chol(corrMat[10:69, 10:69]) > d <- chol(corrMat[10:70, 10:70]) Error in
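The error usually means the matrix stops being numerically positive definite at that order, not that chol() has a size limit. A minimal NumPy sketch (the rank-deficient correlation matrix below is a hypothetical stand-in for the question's corrMat) showing the usual diagnose-then-jitter workaround:

```python
import numpy as np

# Hypothetical stand-in for the question's corrMat: correlations of 80
# variables estimated from only 5 observations, so the matrix is
# rank-deficient and a plain Cholesky factorization fails on it.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5))
corr = np.corrcoef(X)                      # 80 x 80, rank <= 4

eigvals = np.linalg.eigvalsh(corr)
print(eigvals.min())                       # <= 0 up to roundoff

# Common workaround: add a small jitter to the diagonal before factoring.
jitter = 1e-8 - min(eigvals.min(), 0.0)
L = np.linalg.cholesky(corr + jitter * np.eye(corr.shape[0]))
```

The same check in R is eigen(corrMat, only.values = TRUE): if any eigenvalue is non-positive, chol() must fail at the corresponding leading minor.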

Eigendecomposition makes me wonder in numpy

Submitted by 丶灬走出姿态 on 2020-05-14 07:15:44

Question: I am testing the theorem that A = Q * Lambda * Q_inverse, where Q is the matrix of eigenvectors and Lambda is the diagonal matrix with the eigenvalues on its diagonal. My code is the following: import numpy as np from numpy import linalg as lg Eigenvalues, Eigenvectors = lg.eigh(np.array([ [1, 3], [2, 5] ])) Lambda = np.diag(Eigenvalues) Eigenvectors @ Lambda @ lg.inv(Eigenvectors) Which returns: array([[ 1., 2.], [ 2., 5.]]) Shouldn't the returned matrix be the same as the original one that was
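The reconstruction is not wrong; the input is. lg.eigh is for symmetric (Hermitian) matrices and by default reads only the lower triangle, so it silently factored [[1, 2], [2, 5]], which is exactly the matrix the product returns. A short sketch of the fix:

```python
import numpy as np

A = np.array([[1., 3.], [2., 5.]])   # not symmetric

# eigh assumes symmetry and, by default, reads only the lower triangle,
# so it actually factors [[1, 2], [2, 5]] instead of A.
w, Q = np.linalg.eigh(A)
print(Q @ np.diag(w) @ np.linalg.inv(Q))   # reconstructs [[1, 2], [2, 5]]

# For a general (non-symmetric) matrix, use eig instead.
w2, Q2 = np.linalg.eig(A)
print(Q2 @ np.diag(w2) @ np.linalg.inv(Q2))  # reconstructs A itself
```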

Rfast hd.eigen() returns NAs but base eigen() does not

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-01-05 04:12:12

Question: I am having problems with hd.eigen in Rfast. It gives results extremely close to eigen on most data, but sometimes hd.eigen returns an empty $vectors, NAs, or other undesirable results. For example: > set.seed(123) > bigm <- matrix(rnorm(2000*2000, mean = 0, sd = 3), 2000, 2000) > e3 = eigen(bigm) > length(e3$values) [1] 2000 > length(e3$vectors) [1] 4000000 > sum(is.na(e3$vectors) == TRUE) [1] 0 > sum(is.na(e3$vectors) == FALSE) [1] 4000000 > e4 = hd.eigen(bigm, vectors = TRUE) > length(e4
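hd.eigen is specific to R's Rfast package, but the failure mode (empty or NA eigenvectors) can be caught with backend-agnostic sanity checks. A hedged NumPy sketch of the checks worth running on any eigensolver's output, using a smaller 200 x 200 stand-in for the question's 2000 x 2000 matrix:

```python
import numpy as np

# Stand-in for the question's bigm, scaled down to keep the run cheap.
rng = np.random.default_rng(123)
bigm = rng.normal(0.0, 3.0, size=(200, 200))

w, v = np.linalg.eig(bigm)

# 1. The output should be complete and free of NaNs ...
assert v.shape == bigm.shape and not np.isnan(v).any()

# 2. ... and each column should satisfy the defining identity A v = lambda v.
residual = np.linalg.norm(bigm @ v - v * w, ord=np.inf)
print(residual)   # tiny relative to the norm of bigm
```

The same two checks, written with is.na() and a norm of bigm %*% v - v %*% diag(w), would have flagged the bad hd.eigen output immediately.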

Correct use of pivot in Cholesky decomposition of positive semi-definite matrix

Submitted by 主宰稳场 on 2020-01-02 03:42:13

Question: I don't understand how to use the chol function in R to factor a positive semi-definite matrix. (Or I do, and there's a bug.) The documentation states: If pivot = TRUE, then the Choleski decomposition of a positive semi-definite x can be computed. The rank of x is returned as attr(Q, "rank"), subject to numerical errors. The pivot is returned as attr(Q, "pivot"). It is no longer the case that t(Q) %*% Q equals x. However, setting pivot <- attr(Q, "pivot") and oo <- order(pivot), it is true
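The documented convention is that t(Q) %*% Q reproduces x[pivot, pivot], not x itself. A minimal NumPy re-implementation of diagonal-pivoted Cholesky (an illustrative sketch, not R's actual LAPACK routine) that makes the pivot bookkeeping explicit:

```python
import numpy as np

def pivoted_cholesky(x, tol=1e-10):
    """Diagonal-pivoted Cholesky of a PSD matrix: returns (L, piv, rank)
    with x[np.ix_(piv, piv)] ~= L @ L.T.  R's chol(x, pivot = TRUE)
    returns the upper factor, i.e. Q = t(L), under the same convention."""
    B = np.array(x, dtype=float, copy=True)
    n = B.shape[0]
    L = np.zeros((n, n))
    piv = np.arange(n)
    rank = n
    for k in range(n):
        # Bring the largest remaining diagonal entry to position k.
        q = k + int(np.argmax(np.diag(B)[k:]))
        B[[k, q]] = B[[q, k]]
        B[:, [k, q]] = B[:, [q, k]]
        L[[k, q], :k] = L[[q, k], :k]
        piv[[k, q]] = piv[[q, k]]
        if B[k, k] <= tol:           # remaining block is numerically zero
            rank = k
            break
        L[k, k] = np.sqrt(B[k, k])
        L[k + 1:, k] = B[k + 1:, k] / L[k, k]
        B[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return L, piv, rank

# Rank-3 positive semi-definite 6 x 6 test matrix.
G = np.random.default_rng(0).standard_normal((6, 3))
A = G @ G.T
L, piv, rank = pivoted_cholesky(A)
```

The permutation is what lets the factorization run to completion on a semi-definite matrix: zero pivots are pushed to the end, and the detected rank is where the diagonal falls below tolerance.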

Inconsistent results between LU decomposition in R and Python

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-24 00:26:42

Question: I have the following matrix A in R: # [,1] [,2] [,3] [,4] # [1,] -1.1527778 0.4444444 0.375 0.3333333 # [2,] 0.5555556 -1.4888889 0.600 0.3333333 # [3,] 0.6250000 0.4000000 -1.825 0.8000000 # [4,] 0.6666667 0.6666667 0.200 -1.5333333 A <- structure(c(-1.15277777777778, 0.555555555555556, 0.625, 0.666666666666667, 0.444444444444444, -1.48888888888889, 0.4, 0.666666666666667, 0.375, 0.6, -1.825, 0.2, 0.333333333333333, 0.333333333333333, 0.8, -1.53333333333333), .Dim = c(4L, 4L), .Dimnames =
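A likely source of such inconsistencies: both R and SciPy factor with partial (row) pivoting, but nothing forces the two libraries to choose the same permutation, so L and U need not match entry by entry. The consistent check, sketched with SciPy on the question's matrix, is to compare the reassembled product against A:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([
    [-1.1527778,  0.4444444,  0.375,  0.3333333],
    [ 0.5555556, -1.4888889,  0.600,  0.3333333],
    [ 0.6250000,  0.4000000, -1.825,  0.8000000],
    [ 0.6666667,  0.6666667,  0.200, -1.5333333],
])

# scipy.linalg.lu returns P, L, U with A = P @ L @ U.  Compare this
# product across implementations, not the individual factors.
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))   # True
```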

Are eigenvectors returned by R function eigen() wrong?

Submitted by 别说谁变了你拦得住时间么 on 2019-12-17 17:12:17

Question: #eigen values and vectors a <- matrix(c(2, -1, -1, 2), 2) eigen(a) I am trying to find eigenvalues and eigenvectors in R. The eigen function works for the eigenvalues, but the eigenvector values look wrong to me. Is there any way to fix that? Answer 1: Some paper-and-pencil work tells you the eigenvector for eigenvalue 3 is (-s, s) for any non-zero real value s; the eigenvector for eigenvalue 1 is (t, t) for any non-zero real value t. Scaling eigenvectors to unit length gives s = ± sqrt(0.5) = ±0.7071068 t = ± sqrt
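The answer's point can be verified numerically: eigenvectors are only determined up to sign and scale, so instead of comparing entries to a hand calculation, check the defining identity. A short NumPy sketch on the same matrix:

```python
import numpy as np

a = np.array([[2., -1.], [-1., 2.]])
w, v = np.linalg.eigh(a)

# Eigenvectors are determined only up to sign (and scale), so do not
# compare entries against a hand calculation; check A v = lambda v instead.
for lam, vec in zip(w, v.T):
    assert np.allclose(a @ vec, lam * vec)

print(w)                              # eigenvalues in ascending order: 1, 3
print(np.linalg.norm(v, axis=0))      # columns are unit length
```

R's eigen() normalizes the same way, so its (±0.7071068, ∓0.7071068) columns are correct eigenvectors, just with signs the solver is free to choose.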

MvNormal Error with Symmetric & Positive Semi-Definite Matrix

Submitted by 丶灬走出姿态 on 2019-12-13 01:43:32

Question: In summary, I am trying to replicate the Matlab function call mvnrnd(mu', sigma, 200) in Julia using rand( MvNormal(mu, sigma), 200)'; the result should be a 200 x 7 matrix, essentially 200 random return time-series draws. Matlab works; Julia doesn't. My input matrices are: mu = [0.15; 0.03; 0.06; 0.04; 0.1; 0.02; 0.12] sigma = [0.0035 -0.0038 0.0020 0.0017 -0.0006 -0.0028 0.0009; -0.0038 0.0046 -0.0011 0.0001 0.0003 0.0054 -0.0024; 0.0020 -0.0011 0.0041 0.0068 -0
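A common cause: Julia's MvNormal typically requires a strictly positive definite sigma, while Matlab's mvnrnd tolerates semi-definite ones. A hedged NumPy sketch of the usual repair, clipping the eigenvalues before sampling (the 2 x 2 sigma below is a hypothetical stand-in, since the question's 7 x 7 matrix is truncated above):

```python
import numpy as np

def nearest_psd(sigma, eps=1e-10):
    """Symmetrize, then clip eigenvalues from below so that a sampler
    requiring strict positive definiteness accepts the matrix."""
    s = (sigma + sigma.T) / 2.0
    w, v = np.linalg.eigh(s)
    return v @ np.diag(np.clip(w, eps, None)) @ v.T

# Hypothetical rank-1 stand-in: its smallest eigenvalue is 0, so a
# strict positive-definiteness check rejects it as given.
sigma = np.array([[1.0, 1.0], [1.0, 1.0]])
mu = np.zeros(2)
fixed = nearest_psd(sigma)

rng = np.random.default_rng(0)
draws = rng.multivariate_normal(mu, fixed, size=200)   # 200 x 2 draws
```

The same clipping in Julia (eigen, then reassembling with max.(values, eps)) usually makes MvNormal accept a sigma that Matlab already accepted.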

One of the eigenvalues of my covariance matrix is negative in R

Submitted by 只谈情不闲聊 on 2019-12-13 00:11:57

Question: I have a data set x and use cov(x) to calculate its covariance matrix. I want to calculate the inverse square root of cov(x), but I get a negative eigenvalue of cov(x). Here is my code: S11 = cov(x) S = eigen(S11, symmetric = TRUE) R = solve(S$vectors %*% diag(sqrt(S$values)) %*% t(S$vectors)) These are the eigenvalues of S: c(0.897249923338732, 0.814314811717616, 0.437109871173458, 0.334921280373883, 0.291910583884559, 0.257388456770167, 0.166787180227719, 0.148268784967556, 0.121401731579852, 0
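Tiny negative eigenvalues are roundoff from a rank-deficient (or nearly so) covariance matrix; sqrt of them produces NaNs, and solve() then fails or amplifies noise. A sketch of a pseudo-inverse-style inverse square root that clips them, in NumPy:

```python
import numpy as np

def inv_sqrtm_psd(S11, tol=1e-10):
    """Inverse square root of a PSD matrix via eigh, treating the tiny
    negative eigenvalues produced by roundoff as exact zeros
    (a pseudo-inverse, so null directions map to 0 instead of NaN)."""
    w, v = np.linalg.eigh(S11)
    w = np.where(w > tol, w, np.inf)      # 1/sqrt(inf) == 0.0
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

# Rank-deficient covariance: 5 observations of 10 variables.
rng = np.random.default_rng(1)
x = rng.standard_normal((5, 10))
S11 = np.cov(x, rowvar=False)             # 10 x 10, rank <= 4
R = inv_sqrtm_psd(S11)                    # finite, unlike solve(... sqrt ...)
```

The equivalent in R is to replace sqrt(S$values) with sqrt(pmax(S$values, 0)) and invert only over the positive eigenvalues.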

Time complexity of Cholesky Decomposition for the LDL form

Submitted by 谁说胖子不能爱 on 2019-12-12 14:53:02

Question: There are two different forms of the Cholesky decomposition: A = M * ctranspose(M), and the LDL form A = L * D * ctranspose(L), where ctranspose is the conjugate transpose. I want to know the number of floating-point operations for each form. Wikipedia references a paper, Matrix Inversion Using Cholesky Decomposition, which says: When efficiently implemented, the complexity of the LDL decomposition is same (sic) as Cholesky decomposition. The paper says Cholesky decomposition requires n^3/6 + O(n^2)
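As a quick numerical companion to the flop-count question (a sanity check, not a proof of the counts): SciPy exposes both factorizations, and for a positive definite A they reconstruct the same matrix, with LDL^T avoiding the n square roots that Cholesky takes on top of its ~n^3/6 multiplications:

```python
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0 * np.eye(5)        # symmetric positive definite

# Classical Cholesky: A = M @ M.T, with n square roots on the diagonal.
M = np.linalg.cholesky(A)

# LDL^T: same leading-order flop count, but no square roots are taken.
L, D, perm = ldl(A)

print(np.allclose(M @ M.T, A))       # True
print(np.allclose(L @ D @ L.T, A))   # True
```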