sparse-matrix

SparseArray vs HashMap

…衆ロ難τιáo~ submitted on 2019-11-29 19:23:34
I can think of several reasons why HashMaps with integer keys are much better than SparseArrays: The Android documentation for SparseArray says "It is generally slower than a traditional HashMap". If you write code using HashMaps rather than SparseArrays, your code will work with other implementations of Map and you will be able to use all of the Java APIs designed for Maps. If you write code using HashMaps rather than SparseArrays, your code will work in non-Android projects. Map overrides equals() and hashCode(), whereas SparseArray doesn't. Yet whenever I try to use a HashMap with …

Scipy sparse matrix become dense matrix after assignment

蓝咒 submitted on 2019-11-29 18:17:57
alpha = csr_matrix((1000, 1000), dtype=np.float32)
beta = csr_matrix((1, 1000), dtype=np.float32)
alpha[0, :] = beta

After initialization, alpha and beta should be sparse matrices with no elements stored. But after assigning beta to the first row of alpha, alpha becomes non-sparse, with 1000 explicit zeros stored in it. I know I can use eliminate_zeros() to turn alpha back into a sparse matrix, but is there a better way to do this? When I copy your steps I get

In [131]: alpha[0,:] = beta
/usr/lib/python3/dist-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of …
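A small sketch of the eliminate_zeros() workaround the asker mentions, plus an illustrative guard that avoids the problem entirely when the assigned row has no stored entries (the guard is my own suggestion, not the accepted answer; variable names mirror the question):

import numpy as np
from scipy.sparse import csr_matrix

alpha = csr_matrix((1000, 1000), dtype=np.float32)
beta = csr_matrix((1, 1000), dtype=np.float32)

# Guard: only assign when beta actually has stored entries, so no explicit
# zeros ever get written into alpha.
if beta.nnz:
    alpha[0, :] = beta
print(alpha.nnz)          # 0

# If the assignment has already happened (and triggered the
# SparseEfficiencyWarning), the stored zeros can be dropped afterwards:
alpha[0, :] = beta
alpha.eliminate_zeros()
print(alpha.nnz)          # back to 0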

Row Division in Scipy Sparse Matrix

谁说胖子不能爱 submitted on 2019-11-29 14:52:21
I want to divide a sparse matrix's rows by scalars given in an array. For example, I have a csr_matrix C:

C = [[2, 4, 6],
     [5, 10, 15]]
D = [2, 5]

I want the result of C after the division to be:

result = [[1, 2, 3],
          [1, 2, 3]]

I have tried the method that we use for numpy arrays:

result = C / D[:, None]

But this seems really slow. How can I do this efficiently with sparse matrices? Approach #1: Here's a sparse-matrix solution using manual replication with indexing:

from scipy.sparse import csr_matrix
r, c = C.nonzero()
rD_sp = csr_matrix(((1.0/D)[r], (r, c)), shape=C.shape)
out = C.multiply …
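For reference, a short sketch of another approach that keeps everything sparse (this is not the truncated answer itself, just a common alternative): scale the rows by left-multiplying with a sparse diagonal matrix holding the reciprocals.

import numpy as np
from scipy.sparse import csr_matrix, diags

C = csr_matrix([[2, 4, 6], [5, 10, 15]], dtype=np.float64)
D = np.array([2.0, 5.0])

# Left-multiplying by diag(1/D) divides row i of C by D[i]; the result
# stays sparse throughout.
result = diags(1.0 / D) @ C
print(result.toarray())
# [[1. 2. 3.]
#  [1. 2. 3.]]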

Fast computation of a gradient of an image in matlab

半世苍凉 submitted on 2019-11-29 14:41:07
Question: I was trying to optimize my code and found that one function was a bottleneck. My code was:

function [] = one(x)
I = imread('coins.png');
I = double(I);
I = imresize(I, [x x]);
sig = .8;                          % scale parameter in Gaussian kernel
G = fspecial('gaussian', 15, sig); % Gaussian kernel
Img_smooth = conv2(I, G, 'same');  % smooth image by Gaussian convolution
[Ix, Iy] = gradient(Img_smooth);
f = Ix.^2 + Iy.^2;
g = 1./(1+f);                      % edge indicator function
end

I tried to run it like this:

clear all; close all;
x = 4000; N = 1;
tic …

How to get sparse matrices into H2O?

元气小坏坏 submitted on 2019-11-29 13:59:49
I am trying to get a sparse matrix into H2O and I was wondering whether that is possible. Suppose we have the following:

test <- Matrix(c(1,0,0,1,1,1,1,0,1), nrow = 3, sparse = TRUE)

and assuming my local H2O instance is localH2O, I can't seem to do the following:

as.h2o(test)

It gives the error: cannot coerce class "structure("dgCMatrix", package = "Matrix")" to a data.frame. That seems pretty logical; however, assuming that test is so big that I can't transform it into a data frame, how am I supposed to load it into H2O? Using a sparse matrix representation it is only 500 MB or so. How can I …

How to perform efficient sparse matrix multiplication by using tf.matmul?

会有一股神秘感。 submitted on 2019-11-29 12:30:16
I'm trying to perform a sparse matrix multiplication using tf.matmul(). However, the inference speed is much slower than dense matrix multiplication. According to the description of tf.sparse_matmul(): "The breakeven for using this versus a dense matrix multiply on one platform was 30% zero values in the sparse matrix." Thus, I made the sparse matrix with 7/8 zero values. Here is my code:

import tensorflow as tf
import numpy as np
import time

a = tf.Variable(np.arange(1000).reshape(250, 4), dtype=tf.float32)  # dense matrix
b = tf.Variable(np.array([0,0,0,0,0,0,0,1], dtype=np.float32) …
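The excerpt cuts off before the resolution. As one illustrative alternative (a sketch only, using the TF 2.x API rather than the question's tf.matmul flags, with made-up shapes): represent the mostly-zero operand as a tf.SparseTensor and multiply with tf.sparse.sparse_dense_matmul, which skips the zero entries.

import numpy as np
import scipy.sparse as sp
import tensorflow as tf

a = tf.constant(np.arange(1000, dtype=np.float32).reshape(250, 4))   # dense operand

# Build a genuinely sparse right-hand operand (~7/8 zeros) as a SparseTensor.
b_sp = sp.random(4, 100, density=0.125, format="coo", dtype=np.float32)
b = tf.sparse.reorder(tf.SparseTensor(
    indices=np.stack([b_sp.row, b_sp.col], axis=1).astype(np.int64),
    values=b_sp.data,
    dense_shape=b_sp.shape))

# sparse_dense_matmul expects the sparse operand first, so compute
# a @ b as (b^T @ a^T)^T.
result = tf.transpose(
    tf.sparse.sparse_dense_matmul(tf.sparse.transpose(b), tf.transpose(a)))
print(result.shape)   # (250, 100)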

Spark - How to create a sparse matrix from item ratings

我与影子孤独终老i submitted on 2019-11-29 10:10:16
Question: My question is equivalent to the R-related post Create Sparse Matrix from a data frame, except that I would like to perform the same thing on Spark (preferably in Scala). Sample of data in the data.txt file from which the sparse matrix is being created:

UserID  MovieID  Rating
2       1        1
3       2        1
4       2        1
6       2        1
7       2        1

So in the end the columns are the movie IDs and the rows are the user IDs:

    1 2 3 4 5 6 7
1   0 0 0 0 0 0 0
2   1 0 0 0 0 0 0
3   0 1 0 0 0 0 0
4   0 1 0 0 0 0 0
5   0 0 0 0 0 0 0
6   0 1 0 0 0 0 0
7   0 1 0 0 …

Use coo_matrix in TensorFlow

时光总嘲笑我的痴心妄想 submitted on 2019-11-29 08:55:26
I'm doing matrix factorization in TensorFlow, and I want to use coo_matrix from scipy.sparse because it uses less memory and makes it easy to put all my data into my matrix for training. Is it possible to use coo_matrix to initialize a variable in TensorFlow? Or do I have to create a session and feed the data into TensorFlow using sess.run() with feed_dict? I hope that you understand my question and my problem; otherwise comment and I will try to fix it. The closest thing TensorFlow has to scipy.sparse.coo_matrix is tf.SparseTensor, which is the sparse equivalent of tf.Tensor. It …
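As an illustration of the tf.SparseTensor route the answer starts to describe, here is a minimal sketch (the example data is my own, not from the question): the COO format's row/col/data arrays map directly onto the SparseTensor's indices/values/dense_shape.

import numpy as np
import scipy.sparse as sp
import tensorflow as tf

# A tiny 3x4 COO matrix with three stored values.
coo = sp.coo_matrix(([1.0, 2.0, 3.0], ([0, 1, 2], [0, 2, 1])),
                    shape=(3, 4), dtype=np.float32)

sparse_tensor = tf.SparseTensor(
    indices=np.stack([coo.row, coo.col], axis=1).astype(np.int64),
    values=coo.data,
    dense_shape=coo.shape)
sparse_tensor = tf.sparse.reorder(sparse_tensor)   # canonical row-major order

print(tf.sparse.to_dense(sparse_tensor))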

Set row of csr_matrix

Deadly submitted on 2019-11-29 08:23:39
I have a sparse csr_matrix, and I want to change the values of a single row to different values. I can't find an easy and efficient implementation, however. This is what it has to do:

A = csr_matrix([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
new_row = np.array([-1, -1, -1])
print(set_row_csr(A, 2, new_row).todense())
>>> [[ 0,  1,  0],
     [ 1,  0,  1],
     [-1, -1, -1]]

This is my current implementation of set_row_csr:

def set_row_csr(A, row_idx, new_row):
    A[row_idx, :] = new_row
    return A

But this gives me a SparseEfficiencyWarning. Is there a way of getting this done without manual index juggling, or is this …
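One common workaround (again just a sketch, not necessarily the accepted answer) is to route the assignment through the LIL format, which supports cheap row assignment without the efficiency warning:

import numpy as np
from scipy.sparse import csr_matrix

def set_row_csr(A, row_idx, new_row):
    # Convert to LIL for the structural change, then back to CSR.
    A = A.tolil()
    A[row_idx, :] = new_row
    return A.tocsr()

A = csr_matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=np.float64)
new_row = np.array([-1.0, -1.0, -1.0])
print(set_row_csr(A, 2, new_row).toarray())
# [[ 0.  1.  0.]
#  [ 1.  0.  1.]
#  [-1. -1. -1.]]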

Efficiently Subtract Vector from Matrix (Scipy)

£可爱£侵袭症+ submitted on 2019-11-29 07:16:37
I've got a large matrix stored as a scipy.sparse.csc_matrix and want to subtract a column vector from each of the columns in the large matrix. This is a pretty common task when you're doing things like normalization/standardization, but I can't seem to find the proper way to do it efficiently. Here's an example to demonstrate:

# mat is a 3x3 matrix
mat = scipy.sparse.csc_matrix([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
# vec is a 3x1 matrix (or a column vector)
vec = scipy.sparse.csc_matrix([1, 2, 3]).T

"""
I want to subtract `vec` from each of the columns in `mat`, yielding...
[[0, 1, 2],
 [0, 1, 2 …
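A note and an illustrative sketch (not the truncated answer): subtracting a mostly non-zero vector from every column makes almost every entry of the result non-zero, so the result cannot usefully stay sparse; with that caveat, plain NumPy broadcasting on the materialized arrays reproduces the expected output.

import numpy as np
from scipy.sparse import csc_matrix

mat = csc_matrix([[1, 2, 3], [2, 3, 4], [3, 4, 5]], dtype=np.float64)
vec = csc_matrix([[1], [2], [3]], dtype=np.float64)   # 3x1 column vector

# Broadcasting subtracts vec from every column of mat.
result = mat.toarray() - vec.toarray()
print(result)
# [[0. 1. 2.]
#  [0. 1. 2.]
#  [0. 1. 2.]]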