sparse-matrix

How can I calculate the inverse of a sparse matrix in the Eigen library?

爷,独闯天下 submitted on 2019-12-09 12:33:40
Question: I have a question about the Eigen library in C++. I want to calculate the inverse of a sparse matrix. With a dense matrix in Eigen I can use the .inverse() method, but for a sparse matrix I cannot find an inverse operation anywhere. Does anyone know how to calculate the inverse of a sparse matrix? Answer 1: You cannot do it directly, but you can always compute it using one of the sparse solvers. The idea is to solve A*X = I, where I is the identity matrix.
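The answer refers to Eigen's C++ sparse solvers (for example SparseLU) applied to an identity right-hand side. As a language-neutral sketch of the same solve-against-identity idea, here is a minimal SciPy version; the test matrix, density, and tolerance below are made up for illustration, and it assumes A is well conditioned and invertible.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import splu

n = 200
# Hypothetical test matrix: a random sparse part plus a strong diagonal so the
# factorization is well behaved. splu expects CSC format.
A = (sparse_random(n, n, density=0.02) + 10 * identity(n)).tocsc()

lu = splu(A)                # sparse LU factorization of A
X = lu.solve(np.eye(n))     # column i of X solves A*x = e_i, so X approximates inv(A)

print(np.abs(A.toarray() @ X - np.eye(n)).max())  # residual should be tiny
```

Note that the explicit inverse of a sparse matrix is generally dense, so in practice it is usually better to keep the factorization around and solve against specific right-hand sides rather than materializing X.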

Sparse Matrix: ValueError: matrix type must be 'f', 'd', 'F', or 'D'

早过忘川 submitted on 2019-12-09 12:07:53
Question: I want to do SVD on a sparse matrix using scipy: from svd import compute_svd print("The size of raw matrix: "+str(len(raw_matrix))+" * "+str(len(raw_matrix[0]))) from scipy.sparse import dok_matrix dok = dok_matrix(raw_matrix) matrix = compute_svd( dok ) The function compute_svd is my own module, like this: def compute_svd( matrix ): from scipy.sparse import linalg from scipy import dot, mat # e.g., matrix = [[2,1,0,0], [4,3,0,0]] # matrix = mat( matrix ); # print "Original matrix:"
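The excerpt cuts off before the traceback, but this ValueError typically comes from SciPy's sparse linear-algebra routines when the matrix has an integer dtype: they require single/double real or complex data ('f', 'd', 'F', 'D'). A minimal sketch of the usual fix, assuming the goal is a truncated sparse SVD via scipy.sparse.linalg.svds (the small raw_matrix below is a made-up stand-in):

```python
import numpy as np
from scipy.sparse import dok_matrix
from scipy.sparse.linalg import svds

# Hypothetical small input standing in for raw_matrix.
raw_matrix = [[2, 1, 0, 0],
              [4, 3, 0, 0],
              [0, 0, 5, 0]]

# Build the sparse matrix with a floating-point dtype (or call .asfptype()
# afterwards) so the solver accepts it.
dok = dok_matrix(np.array(raw_matrix, dtype=float))

u, s, vt = svds(dok.tocsc(), k=2)   # k must be smaller than min(matrix.shape)
print(s)
```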

Storing scipy sparse matrix as HDF5

倾然丶 夕夏残阳落幕 submitted on 2019-12-09 10:55:12
Question: I want to compress and store a humongous Scipy matrix in HDF5 format. How do I do this? I've tried the code below: a = csr_matrix((dat, (row, col)), shape=(947969, 36039)) f = h5py.File('foo.h5','w') dset = f.create_dataset("init", data=a, dtype = int, compression='gzip') I get errors like these: TypeError: Scalar datasets don't support chunk/filter options IOError: Can't prepare for writing data (No appropriate function for conversion path) I can't convert it to a numpy array as there will be
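h5py cannot store a SciPy sparse matrix object directly; passing it to create_dataset makes h5py treat it as an opaque scalar, which is where those errors come from. A common workaround (sketched below with hypothetical helper names) is to store the three CSR component arrays plus the shape, and rebuild the matrix on load:

```python
import numpy as np
import h5py
from scipy.sparse import csr_matrix

def save_csr(path, a):
    # A CSR matrix is fully described by data, indices, indptr and its shape;
    # each component is a plain 1-D array that h5py can gzip-compress.
    with h5py.File(path, "w") as f:
        for name in ("data", "indices", "indptr"):
            f.create_dataset(name, data=getattr(a, name), compression="gzip")
        f.attrs["shape"] = a.shape

def load_csr(path):
    with h5py.File(path, "r") as f:
        return csr_matrix((f["data"][:], f["indices"][:], f["indptr"][:]),
                          shape=tuple(f.attrs["shape"]))

a = csr_matrix(np.eye(5))          # small stand-in for the 947969 x 36039 matrix
save_csr("foo_csr.h5", a)
b = load_csr("foo_csr.h5")
assert (a != b).nnz == 0
```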

How to create fast and efficient filestream writes on large sparse files

北城以北 submitted on 2019-12-09 09:50:00
Question: I have an application that writes large files in multiple segments. I use FileStream.Seek to position each write. It appears that when I call FileStream.Write at a deep position in a sparse file, the write triggers a "backfill" operation (writing 0s) over all preceding bytes, which is slow. Is there a more efficient way of handling this situation? The code below demonstrates the problem. The initial write takes about 370 ms on my machine. public void WriteToStream() { DateTime dt; using
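The question is about .NET FileStream on Windows; there, one common fix is to mark the file as sparse (DeviceIoControl with FSCTL_SET_SPARSE) before seeking far ahead, so NTFS records a hole instead of physically writing zeros. As a rough illustration of the behaviour being asked for, here is a small Python sketch that relies on a POSIX filesystem (for example ext4) punching a hole automatically; the path and offset are made up:

```python
import os

path = "sparse_demo.bin"
with open(path, "wb") as f:
    f.seek(4 * 1024 ** 3)              # jump 4 GiB into the empty file
    f.write(b"segment written at a deep offset")

st = os.stat(path)
print("apparent size :", st.st_size)           # about 4 GiB
print("bytes on disk  :", st.st_blocks * 512)  # only a few KiB if a hole was punched
```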

Sparse array support in HDF5

一笑奈何 submitted on 2019-12-09 09:07:07
Question: I need to store a 512^3 array on disk in some way, and I'm currently using HDF5. Since the array is sparse, a lot of disk space gets wasted. Does HDF5 provide any support for sparse arrays? Answer 1: Chunked datasets (H5D_CHUNKED) allow sparse storage, but depending on your data the overhead may be significant. Take a typical array, try both sparse and non-sparse storage, compare the file sizes, and you will see whether it is really worth it. Answer 2: One workaround is to create the dataset with a compression
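A short h5py sketch of the chunked-dataset approach from Answer 1 (the chunk size and the written region are arbitrary choices for illustration): chunks that are never written are not allocated in the file, and gzip compression shrinks the mostly-zero chunks that are written.

```python
import numpy as np
import h5py

with h5py.File("sparse_grid.h5", "w") as f:
    dset = f.create_dataset("grid", shape=(512, 512, 512), dtype="f4",
                            chunks=(64, 64, 64), compression="gzip")
    # Only the chunks touched by this assignment get allocated on disk.
    dset[100:110, 200:210, 300:310] = np.random.rand(10, 10, 10)
```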

Calculate the euclidean distance in scipy csr matrix

回眸只為那壹抹淺笑 submitted on 2019-12-09 00:46:45
Question: I need to calculate the Euclidean distance between all points stored in a CSR sparse matrix and some lists of points. It would be easier for me to convert the CSR matrix to a dense one, but I can't due to lack of memory, so I need to keep it as CSR. So for example I have this data_csr sparse matrix (shown both as CSR and dense): data_csr (0, 2) 4 (1, 0) 1 (1, 4) 2 (2, 0) 2 (2, 3) 1 (3, 5) 1 (4, 0) 4 (4, 2) 3 (4, 3) 2 data_csr.todense() [[0, 0, 4, 0, 0, 0] [1, 0, 0, 0, 2, 0] [2, 0, 0, 1,
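One way to do this without densifying the matrix is sklearn.metrics.pairwise.euclidean_distances, which accepts sparse input directly (this assumes scikit-learn is available, which the excerpt does not mention). A sketch using the example data above and some hypothetical query points:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import euclidean_distances

rows = np.array([0, 1, 1, 2, 2, 3, 4, 4, 4])
cols = np.array([2, 0, 4, 0, 3, 5, 0, 2, 3])
vals = np.array([4, 1, 2, 2, 1, 1, 4, 3, 2])
data_csr = csr_matrix((vals, (rows, cols)), shape=(5, 6))

# Hypothetical query points; each row is one point in the same 6-D space.
points = np.array([[0, 0, 0, 0, 0, 0],
                   [1, 1, 1, 1, 1, 1]])

dist = euclidean_distances(data_csr, points)   # shape (5, 2); data_csr stays sparse
print(dist)
```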

Doesn't Matlab optimize the following?

我是研究僧i submitted on 2019-12-08 19:31:52
Question: I have a very long 1xr vector v, a very long 1xs vector w, and an rxs matrix A which is sparse (but very large in both dimensions). I was expecting the following to be optimized by Matlab so that I wouldn't run into memory trouble: A./(v'*w) but it seems that Matlab actually tries to generate the full v'*w matrix, because I run into an out-of-memory issue. Is there a way to overcome this? Note that there is no need to compute all of v'*w, because many values of A are 0. EDIT: If that were
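The usual trick is to evaluate v(i)*w(j) only at the nonzero positions of A rather than forming the full outer product. The question is about MATLAB, but the same idea is easy to sketch with SciPy's COO format (the sizes and density below are made up):

```python
import numpy as np
from scipy import sparse

r, s = 20000, 15000
v = np.random.rand(r)
w = np.random.rand(s)
A = sparse.random(r, s, density=1e-4, format="coo")

# Elementwise A ./ (v' * w), computed only at the stored (row, col) pairs,
# so the dense r-by-s outer product is never materialized.
result = A.copy()
result.data = A.data / (v[A.row] * w[A.col])
```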

Matlab: how can I perform row operations without a brute-force for loop?

99封情书 submitted on 2019-12-08 18:28:27
I need to write a function that works like this: N1 = size(X,1); N2 = size(Xtrain,1); Dist = zeros(N1,N2); for i=1:N1 for j=1:N2 Dist(i,j)=D-sum(X(i,:)==Xtrain(j,:)); end end (X and Xtrain are sparse logical matrices.) It works fine and passes the tests, but I believe it is not a very optimal or well-written solution. How can I improve this function using some built-in Matlab functions? I'm absolutely new to Matlab, so I don't know if there really is a way to make it better. You wanted to learn about vectorization, so here is some code to study comparing different implementations of this pair
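The key observation for vectorizing this is that, for logical rows, the number of matching positions between X(i,:) and Xtrain(j,:) is the count of positions where both are 1 plus the count where both are 0, and each of those counts is a matrix product. The question and its answer are MATLAB; here is the same idea sketched in NumPy with arbitrary sizes:

```python
import numpy as np

D = 20                                   # number of columns
X = np.random.rand(5, D) > 0.5           # stand-ins for the sparse logical matrices
Xtrain = np.random.rand(7, D) > 0.5

# matches[i, j] = number of positions where row i of X equals row j of Xtrain:
# ones matching ones, plus zeros matching zeros, via two matrix products.
matches = X.astype(int) @ Xtrain.astype(int).T \
        + (~X).astype(int) @ (~Xtrain).astype(int).T
Dist = D - matches                       # same result as the nested loops
```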

How to iterate non-zeroes in a sparse matrix in Chapel

帅比萌擦擦* submitted on 2019-12-08 15:42:28
I have a matrix A still hanging around. It's large, sparse, and now symmetric. I've created a sparse domain called spDom that contains the non-zero entries. Now I want to iterate along row r and find the non-zero entries there, along with their indices. My goal is to build another domain that is essentially row r's non-zeroes. Here's an answer that will work with Chapel 1.15 as long as you're willing to store your sparse domain/array in CSR format: First, I'll establish my (small, non-symmetric) sparse matrix for demonstration purposes: use LayoutCS; // use the CSR/CSC layout module config const
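The answer excerpt is Chapel code (the LayoutCS module provides the CSR/CSC layout). For comparison only, the same "walk the stored entries of row r" idea looks like this against a SciPy CSR matrix, where indptr[r]:indptr[r+1] delimits row r's nonzeros (the matrix below is a made-up example):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[0, 4, 0],
                         [1, 0, 2],
                         [0, 0, 3]]))

r = 1
for k in range(A.indptr[r], A.indptr[r + 1]):
    col, val = A.indices[k], A.data[k]      # column index and stored value
    print(f"A[{r}, {col}] = {val}")
```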

Numpy matrix product - sparse matrices

纵然是瞬间 submitted on 2019-12-08 15:09:28
Let us consider a matrix A that is diagonal and a matrix B that is a random matrix, both of size N x N. We want to use the sparse properties of matrix A to optimize the dot product, i.e. dot(B,A). However, if we compute the product using the sparsity properties of matrix A, we see no advantage (it is in fact much slower). import numpy as np from scipy.sparse import csr_matrix # Matrix sizes N = 1000 #-- matrices generation -- A = np.zeros((N,N), dtype=complex) for i in range(N): A[i][i] = np.random.rand() B = np.random.rand(N,N) #product %time csr_matrix(B).dot(A) %time np.dot(B,A)
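Part of the slowdown in the snippet is that csr_matrix(B) converts the dense matrix B, not the diagonal matrix A, so nothing sparse is actually exploited. When A is diagonal, dot(B, A) only scales column j of B by A[j, j], which broadcasting does in O(N^2) with no conversion at all. A small sketch of that comparison:

```python
import numpy as np

N = 1000
d = np.random.rand(N).astype(complex)   # the diagonal of A
A = np.diag(d)
B = np.random.rand(N, N)

C_dense = np.dot(B, A)   # full O(N^3) dense matrix product
C_fast = B * d           # broadcasting: column j of B scaled by d[j]

assert np.allclose(C_dense, C_fast)
```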