sparse-matrix

Add column to a sparse matrix

房东的猫 submitted on 2019-12-21 09:12:07
Question: When I execute the following code I get a sparse matrix:

import numpy as np
from scipy.sparse import csr_matrix

row = np.array([0, 0, 1, 2, 2, 2])
col = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6])
sp = csr_matrix((data, (row, col)), shape=(3, 3))
print(sp)

  (0, 0)    1
  (0, 2)    2
  (1, 2)    3
  (2, 0)    4
  (2, 1)    5
  (2, 2)    6

I want to add another column to this sparse matrix so that the output is:

  (0, 0)    1
  (0, 2)    2
  (0, 3)    7
  (1, 2)    3
  (1, 3)    7
  (2, 0)    4
  (2, 1)    5
  (2, 2)    6
  (2, 3)    6

Basically I …
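One way to append a column without densifying anything is scipy.sparse.hstack. A minimal sketch, assuming the new column values [7, 7, 6] shown in the desired output above:

import numpy as np
from scipy.sparse import csr_matrix, hstack

row = np.array([0, 0, 1, 2, 2, 2])
col = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6])
sp = csr_matrix((data, (row, col)), shape=(3, 3))

# the extra column, taken from the desired output in the question
new_col = csr_matrix(np.array([[7], [7], [6]]))

# hstack keeps the result sparse; ask for CSR output explicitly
sp_ext = hstack([sp, new_col], format="csr")
print(sp_ext)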

Computing eigenvectors of a sparse matrix in R

我与影子孤独终老i submitted on 2019-12-21 05:21:15
Question: I am trying to compute the first m eigenvectors of a large sparse matrix in R. Using eigen() is not realistic because "large" means N > 10^6 here. So far I have figured out that I should use ARPACK from the igraph package, which can deal with sparse matrices. However, I can't get it to work on a very simple (3x3) matrix:

library(Matrix)
library(igraph)

TestDiag <- Diagonal(3, 3:1)
TestMatrix <- t(sparseMatrix(i = c(1, 1, 2, 2, 3),
                             j = c(1, 2, 1, 2, 3),
                             x = c(3/5, 4/5, -4/5, 3/5, 1)))
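For comparison with the scipy-based questions elsewhere in this collection, the same ARPACK/Lanczos computation is available in Python as scipy.sparse.linalg.eigsh. A minimal sketch, assuming a symmetric matrix (the random matrix here is a small stand-in, not the question's TestMatrix):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

n, m = 1000, 5
A = sparse.random(n, n, density=1e-3, format="csr")
A = (A + A.T) * 0.5                       # symmetrize so eigsh applies

# m largest-magnitude eigenpairs via ARPACK's Lanczos iteration
vals, vecs = eigsh(A, k=m, which="LM")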

What is the most efficient way of setting a row to zeros for a sparse scipy matrix?

你。 submitted on 2019-12-21 04:06:51
Question: I'm trying to convert the following MATLAB code to Python and am having trouble finding a solution that works in any reasonable amount of time.

M = diag(sum(a)) - a;
where = vertcat(in, out);
M(where,:) = 0;
M(where,where) = 1;

Here, a is a sparse matrix and where is a vector (as are in/out). The solution I have using Python is:

M = scipy.sparse.diags([degs], [0]) - A
where = numpy.hstack((inVs, outVs)).astype(int)
M = scipy.sparse.lil_matrix(M)
M[where, :] = 0  # This is the slowest line
M …
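Row-slicing a LIL matrix is what makes the line above slow; operating directly on the CSR arrays is usually much faster. A sketch of that approach, assuming the intent of the MATLAB code is to zero the given rows and put 1 on their diagonal entries (the names M and where follow the question; zero_rows_set_diag is a made-up helper):

import numpy as np
import scipy.sparse as sp

def zero_rows_set_diag(M, where):
    # Work on the CSR data array in place instead of fancy-indexing a LIL matrix.
    M = M.tocsr()
    where = np.asarray(where, dtype=int)
    for r in where:
        M.data[M.indptr[r]:M.indptr[r + 1]] = 0   # wipe the stored entries of row r
    M.eliminate_zeros()                            # drop the explicit zeros
    # add a 1 at (r, r) for every zeroed row
    ones = sp.csr_matrix((np.ones(len(where)), (where, where)), shape=M.shape)
    return M + ones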

Local maxima in a point cloud

旧巷老猫 submitted on 2019-12-21 02:56:12
Question: I have a point cloud C, where each point has an associated value. Let's say the points are in 2-D space, so each point can be represented by the triplet (x, y, v). I'd like to find the subset of points which are local maxima. That is, for some radius R, I would like to find the subset of points S in C such that for any point Pi (with value vi) in S, there is no point Pj in C within distance R of Pi whose value vj is greater than vi. I see how I could do this in O(N^2) time, but that seems …
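A spatial index avoids the all-pairs comparison: each point is only checked against the points inside its radius-R ball. A minimal sketch using scipy.spatial.cKDTree (the array names xy and v are illustrative, not from the question):

import numpy as np
from scipy.spatial import cKDTree

def local_maxima(xy, v, R):
    # xy: (N, 2) coordinates, v: (N,) values
    tree = cKDTree(xy)
    keep = []
    for i, neighbours in enumerate(tree.query_ball_point(xy, r=R)):
        # Pi is a local maximum if no point within R has a strictly greater value
        if all(v[j] <= v[i] for j in neighbours):
            keep.append(i)
    return np.array(keep)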

Sparse constrained linear least-squares solver

放肆的年华 submitted on 2019-12-20 20:19:12
Question: This great SO answer points to a good sparse solver for Ax=b, but I've got constraints on x such that each element in x is >= 0 and <= N. Also, A is huge (around 2e6 x 2e6) but very sparse, with <= 4 elements per row. Any ideas/recommendations? I'm looking for something like MATLAB's lsqlin but with huge sparse matrices. I'm essentially trying to solve the large-scale bounded-variable least squares problem on sparse matrices:

EDIT: In CVX:

cvx_begin
    variable x(n)
    minimize( norm(A*x-b) );
    subject …
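scipy.optimize.lsq_linear accepts sparse matrices and box bounds, so it is one candidate for exactly this bounded-variable least-squares setup. A minimal sketch on a small random stand-in for the huge system (the sizes and data here are illustrative only):

import numpy as np
from scipy import sparse
from scipy.optimize import lsq_linear

n, N = 1000, 5.0
A = sparse.random(n, n, density=4.0 / n, format="csr")   # roughly 4 entries per row
b = np.random.rand(n)

# minimize ||Ax - b|| subject to 0 <= x <= N
res = lsq_linear(A, b, bounds=(0, N), lsmr_tol="auto")
x = res.x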

Correlation coefficients for sparse matrix in python?

╄→尐↘猪︶ㄣ submitted on 2019-12-20 18:35:32
Question: Does anyone know how to compute a correlation matrix from a very large sparse matrix in Python? Basically, I am looking for something like numpy.corrcoef that will work on a scipy sparse matrix.

Answer 1: You can compute the correlation coefficients fairly straightforwardly from the covariance matrix, like this:

import numpy as np
from scipy import sparse

def sparse_corrcoef(A, B=None):
    if B is not None:
        A = sparse.vstack((A, B), format='csr')
    A = A.astype(np.float64)
    n = A.shape[1]
    # Compute the …
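The answer is cut off above; what remains is the standard covariance-then-normalize step, so a self-contained completion might look like the sketch below (a reconstruction from those standard formulas, not necessarily the answerer's exact code):

import numpy as np
from scipy import sparse

def sparse_corrcoef(A, B=None):
    if B is not None:
        A = sparse.vstack((A, B), format='csr')
    A = A.astype(np.float64)
    n = A.shape[1]

    # covariance of the rows: C = (A A^T - n * mean mean^T) / (n - 1)
    rowsum = A.sum(axis=1)                      # dense column vector of row sums
    centering = rowsum.dot(rowsum.T) / n
    C = (A.dot(A.T) - centering) / (n - 1)      # dense (rows x rows) result

    # normalize by the standard deviations to get correlation coefficients
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)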

Best way to store a sparse matrix in .NET

半世苍凉 submitted on 2019-12-20 12:34:18
Question: We have an application that stores a sparse matrix. This matrix has entries that mostly exist around the main diagonal. I was wondering whether there are any algorithms (or existing libraries) that can efficiently handle sparse matrices of this kind? Preferably, this would be a generic implementation where each matrix entry can be a user-defined type.

Edit in response to a question/response: When I say "mostly around the main diagonal" I mean that the characteristics of most …
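The structure being described is essentially a banded matrix, and the usual storage trick is to keep each diagonal as its own array together with its offset from the main diagonal (the DIA format). A quick Python/scipy illustration of that layout, shown only to make the storage concept concrete rather than to suggest a .NET library:

import numpy as np
from scipy.sparse import diags

main = np.array([4.0, 4.0, 4.0, 4.0])
off = np.array([-1.0, -1.0, -1.0])

# store three diagonals plus their offsets instead of the full 4x4 matrix
A = diags([off, main, off], offsets=[-1, 0, 1], format="dia")
print(A.toarray())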

Another Game of Life question (infinite grid)?

倖福魔咒の submitted on 2019-12-20 11:49:23
Question: I have been playing around with Conway's Game of Life and recently discovered some amazingly fast implementations such as Hashlife and Golly. (Download Golly here: http://golly.sourceforge.net/) One thing that I can't get my head around is: how do coders implement the infinite grid? We can't keep an infinite array of anything. If you run Golly, get a few gliders to fly off past the edges, wait for a few minutes and zoom right out, you will see the gliders still there out in space running away, …
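Hashlife itself is more involved, but the basic answer to "how can the grid be infinite?" is to store only the live cells, typically as a set of coordinates, so memory grows with the population rather than with the area. A minimal sketch of that sparse representation (not Golly's actual implementation):

from collections import Counter

def step(live):
    # count the live neighbours of every cell that touches a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # standard rules: birth on 3 neighbours, survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(8):
    glider = step(glider)    # keeps gliding, with no grid boundary to hit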

How to use tf.nn.embedding_lookup_sparse in TensorFlow?

独自空忆成欢 submitted on 2019-12-20 09:38:14
Question: We have tried using tf.nn.embedding_lookup and it works. But it needs dense input data, and now we need tf.nn.embedding_lookup_sparse for sparse input. I have written the following code but get some errors.

import tensorflow as tf
import numpy as np

example1 = tf.SparseTensor(indices=[[4], [7]], values=[1, 1], shape=[10])
example2 = tf.SparseTensor(indices=[[3], [6], [9]], values=[1, 1, 1], shape=[10])
vocabulary_size = 10
embedding_size = 1
var = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0 …
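For reference, tf.nn.embedding_lookup_sparse expects the vocabulary ids as the values of a single 2-D SparseTensor covering the whole batch, with the indices giving (example, position-within-example). A sketch of that usage in the TF 1.x-era API the question is written against (the ids 4, 7 and 3, 6, 9 are taken from the question's two examples; everything else is illustrative):

import tensorflow as tf

vocabulary_size = 10
embedding_size = 3
embeddings = tf.Variable(tf.random_uniform([vocabulary_size, embedding_size]))

# example 0 uses ids 4 and 7; example 1 uses ids 3, 6 and 9
sp_ids = tf.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0], [1, 1], [1, 2]],
    values=tf.constant([4, 7, 3, 6, 9], dtype=tf.int64),
    dense_shape=[2, 3])

# one combined (here: summed) embedding row per example -> shape [2, embedding_size]
embedded = tf.nn.embedding_lookup_sparse(embeddings, sp_ids, None, combiner="sum")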

reshape scipy csr matrix

為{幸葍}努か submitted on 2019-12-20 07:16:09
Question: How can I efficiently reshape a scipy.sparse csr_matrix? I need to add zero rows at the end. Using:

from scipy.sparse import csr_matrix

data = [1, 2, 3, 4, 5, 6]
col = [0, 0, 0, 1, 1, 1]
row = [0, 1, 2, 0, 1, 2]
a = csr_matrix((data, (row, col)))
a.reshape(3, 5)

I get this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/scipy/sparse/base.py", line 129, in reshape
    self.__class__.__name__)
NotImplementedError: Reshaping not …
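Since the stated goal is extra all-zero rows rather than a true reshape, two common workarounds are to declare the larger shape when the matrix is built, or to vstack an empty sparse block onto the existing one. A minimal sketch, assuming the question's 3x2 matrix should gain two zero rows (the target sizes are illustrative):

from scipy.sparse import csr_matrix, vstack

data = [1, 2, 3, 4, 5, 6]
col = [0, 0, 0, 1, 1, 1]
row = [0, 1, 2, 0, 1, 2]
a = csr_matrix((data, (row, col)))                    # shape (3, 2)

# Option 1: give the final shape up front; the extra rows are implicit zeros
b = csr_matrix((data, (row, col)), shape=(5, 2))

# Option 2: stack an all-zero sparse block under the existing matrix
c = vstack([a, csr_matrix((2, a.shape[1]))], format="csr")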