sparse-matrix

Initializing Tensors

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-11 03:38:30
Question: Running

    tf_coo = tf.SparseTensor(indices=np.array([[0, 0, 0, 1, 1, 2, 3, 9], [1, 4, 9, 9, 9, 9, 9, 9]]).T,
                             values=[1, 2, 3, 5, 1, 1, 1, 1],
                             shape=[10, 10])

I get the error message:

    InvalidArgumentError (see above for traceback): indices[4] = [1,9] is repeated
    [[Node: SparseToDense = SparseToDense[T=DT_INT32, Tindices=DT_INT64, validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](SparseTensor/indices, SparseToDense/output_shape, SparseTensor/values, SparseToDense/default_value)]]

Isn't …
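One way around the repeated (1, 9) entry is to aggregate duplicates before constructing the SparseTensor, for example with scipy.sparse. A minimal sketch, assuming the duplicate values should be summed (recent TensorFlow releases call the last argument dense_shape; the older release in the question accepts shape):

    import numpy as np
    import scipy.sparse as sp
    import tensorflow as tf

    rows = np.array([0, 0, 0, 1, 1, 2, 3, 9])
    cols = np.array([1, 4, 9, 9, 9, 9, 9, 9])
    vals = np.array([1, 2, 3, 5, 1, 1, 1, 1])

    # coo_matrix.sum_duplicates() merges repeated (row, col) pairs by adding
    # their values and leaves the remaining indices in canonical row-major order.
    coo = sp.coo_matrix((vals, (rows, cols)), shape=(10, 10))
    coo.sum_duplicates()

    tf_coo = tf.SparseTensor(
        indices=np.column_stack([coo.row, coo.col]).astype(np.int64),
        values=coo.data,
        dense_shape=coo.shape,
    )

If the duplicates are a data error rather than something to be summed, de-duplicating the stacked index pairs up front (e.g. with np.unique) is the alternative.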

C++: How to implement sparse matrices with very large indices?

Submitted by 强颜欢笑 on 2019-12-11 03:34:19
Question: I am trying to implement the dynamic programming presented in this article to solve the Shortest Hamiltonian Path problem. This solution requires storing values in a 2D array called DP of size n x 2^n, where n is the number of nodes of the graph. My graph has more than 100 nodes, but it is very sparse, so most of the elements of the matrix DP are +infinity. Therefore I can store it using a sparse matrix library (treating zero elements as +infinity). For example, using Eigen:

    Eigen::SparseMatrix …
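The question asks for C++, but the underlying idea is language-independent: key a hash map by the (subset bitmask, node) pair and treat absent entries as +infinity, so only reachable states are ever stored and the 2^n column count is never allocated. A minimal sketch in Python, with helper names chosen purely for illustration:

    import math

    INF = math.inf

    # Sparse DP table for a Held-Karp style recursion over subsets of nodes.
    # Only finite entries are stored; a missing (mask, node) key means +infinity.
    DP = {}

    def dp_get(mask, node):
        return DP.get((mask, node), INF)

    def dp_relax(mask, node, value):
        # Keep the minimum cost of reaching `node` having visited exactly `mask`.
        key = (mask, node)
        if value < DP.get(key, INF):
            DP[key] = value

    # Example: the path starts at node 0, so only node 0 has been visited, at cost 0.
    dp_relax(1 << 0, 0, 0.0)

One common C++ analogue of this layout is an std::unordered_map keyed by a packed (mask, node) integer, which avoids indexing a matrix with 2^n columns altogether.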

How can I determine which routines MATLAB uses to solve a sparse matrix?

Submitted by 无人久伴 on 2019-12-11 03:31:08
Question: I'm trying to solve a sparse matrix equation of the form A * x = b, where A is a known square, sparse matrix, b is a known column vector, and x is the column vector to be determined. The standard MATLAB syntax for solving this is:

    x = A\b;

Behind the scenes, the \ operator is shorthand for "use whatever algorithm seems best to solve this equation." MATLAB accordingly chooses what it thinks will be an optimal algorithm for solving that equation and solves the system of equations using that …

Scikit-learn (sklearn) PCA throws Type Error on sparse matrix

Submitted by 我们两清 on 2019-12-11 03:25:38
Question: According to the documentation of sklearn's RandomizedPCA, sparse matrices are accepted as input. However, when I called it with a sparse matrix, I got a TypeError:

    > sklearn.__version__
    '0.16.1'
    > pca = RandomizedPCA(n_components=2)
    > pca.fit(my_sparce_mat)
    TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.

I obtained the same error using fit_transform. Any suggestion on how to make it work?

Answer 1: The answer is that it is not possible …
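The answer is cut off above. A commonly suggested workaround (an assumption here, not necessarily what the truncated answer goes on to recommend) is TruncatedSVD, which does accept scipy sparse input:

    from scipy import sparse
    from sklearn.decomposition import TruncatedSVD

    X = sparse.random(1000, 50, density=0.01, format="csr")  # stand-in for my_sparce_mat
    svd = TruncatedSVD(n_components=2)
    X_reduced = svd.fit_transform(X)   # operates directly on the sparse matrix
    print(X_reduced.shape)             # (1000, 2)

Unlike PCA, TruncatedSVD does not center the data first, which is precisely why it can work on sparse input without densifying it.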

MATLAB Matching Pursuit wmpdictionary using Gabor or customized atoms

Submitted by 陌路散爱 on 2019-12-11 03:22:47
Question: I'm using MATLAB 2013, which now includes the Matching Pursuit algorithm. It has a function called wmpdictionary for creating a dictionary. As far as I know, it is capable of using the following functions to create atoms in the dictionary:

- Discrete cosine transform-II basis
- Sine
- Cosine
- Polynomial
- The shifted Kronecker delta
- A valid orthogonal or biorthogonal wavelet family

I want/need to use Gabor atoms. Does someone know how to use Gabor in wmpdictionary, or alternatively a way to customize new kinds of …

How to combine or merge two sparse vectors in Spark using Java?

Submitted by 筅森魡賤 on 2019-12-11 03:18:05
Question: I used the Java API, i.e. Apache Spark 1.2.0, and created two sparse vectors as follows:

    Vector v1 = Vectors.sparse(3, new int[]{0, 2}, new double[]{1.0, 3.0});
    Vector v2 = Vectors.sparse(2, new int[]{0, 1}, new double[]{4, 5});

How can I get a new vector v3 that is formed by combining v1 and v2, so that the result is:

    (5, [0,2,3,4], [1.0, 3.0, 4.0, 5.0])

Answer 1: I found that this problem has been open for a year and is still pending. Here, I solved the problem by writing a helper function myself, as …
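The Java helper in the answer is cut off. Since the expected v3 simply stacks v2 after v1 with the second vector's indices offset by v1's size, here is a sketch of such a helper in PySpark rather than Java (concat_sparse is an assumed name, not part of Spark's API):

    import numpy as np
    from pyspark.mllib.linalg import Vectors

    def concat_sparse(v1, v2):
        # Concatenate two SparseVectors into one of size v1.size + v2.size,
        # shifting the indices of the second vector past the end of the first.
        indices = np.concatenate([v1.indices, v2.indices + v1.size])
        values = np.concatenate([v1.values, v2.values])
        return Vectors.sparse(v1.size + v2.size, indices.tolist(), values.tolist())

    v1 = Vectors.sparse(3, [0, 2], [1.0, 3.0])
    v2 = Vectors.sparse(2, [0, 1], [4.0, 5.0])
    print(concat_sparse(v1, v2))  # SparseVector(5, {0: 1.0, 2: 3.0, 3: 4.0, 4: 5.0})

The same index-offsetting logic carries over directly to a Java helper built on org.apache.spark.mllib.linalg.Vectors.sparse.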

Fastest way to create a sparse matrix of the form A.T * diag(b) * A + C?

Submitted by 大兔子大兔子 on 2019-12-11 03:16:07
Question: I'm trying to optimize a piece of code that solves a large sparse nonlinear system using an interior point method. During the update step, this involves computing the Hessian matrix H and the gradient g, then solving for d in H * d = -g to get the new search direction. The Hessian matrix has a symmetric tridiagonal structure of the form:

    A.T * diag(b) * A + C

I've run line_profiler on the particular function in question:

    Line #    Hits    Time    Per Hit    % Time    Line Contents
    =============================================================
    …
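A minimal scipy-based sketch of forming this product sparsely (A, b and C are the names from the formula; using scipy.sparse.diags is one common approach, not necessarily the asker's code):

    import numpy as np
    import scipy.sparse as sp

    def build_hessian(A, b, C):
        # Form A.T * diag(b) * A + C without ever materializing diag(b) densely.
        D = sp.diags(b)        # sparse diagonal matrix built from the vector b
        H = A.T @ D @ A + C    # sparse-sparse products keep the result sparse
        return H.tocsc()       # CSC is convenient for a subsequent sparse solve

    # Example with small random sparse inputs
    A = sp.random(50, 20, density=0.05, format="csr")
    b = np.random.rand(50)
    C = sp.identity(20, format="csr")
    H = build_hessian(A, b, C)

Whether scaling A's rows by b first and then taking the transpose product beats the explicit triple product depends on the sparsity pattern, so profiling both forms is worthwhile.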

How does MATLAB solve large, symmetric and sparse linear systems

Submitted by 谁说胖子不能爱 on 2019-12-11 02:54:56
Question: That is, when I do A\b for a very large, symmetric and sparse A, what algorithm does MATLAB use?

Answer 1: The answer depends on some properties of A (diagonal/square/banded? etc.). CHOLMOD, UMFPACK and QR factorization are some of the options. The documentation explains it. Here are links to online snapshots of the docs (this may be outdated):

- http://amath.colorado.edu/computing/Matlab/OldTechDocs/ref/arithmeticoperators.html
- http://www.maths.lth.se/na/courses/NUM115/NUM115-11/backslash

Getting “node stack overflow” when cbind multiple sparse matrices

Submitted by 瘦欲@ on 2019-12-11 02:25:07
Question: I have 100,000 sparse matrices ("dgCMatrix") stored in a list object. The row count of every matrix is the same (8,000,000) and the size of the list is approximately 25 GB. Now when I do

    do.call(cbind, theListofMatrices)

to combine all matrices into one big sparse matrix, I get "node stack overflow". Actually, I can't even do this with only 500 elements out of that list, which should output a sparse matrix with a size of only 100 MB. My speculation is that the cbind() function …

PySpark matrix accumulator

Submitted by [亡魂溺海] on 2019-12-11 02:22:35
Question: I want to additively populate a matrix with values inferred from an RDD using a PySpark accumulator; I found the docs a bit unclear. Adding a bit of background, just in case it's relevant. My rddData contains lists of indexes for which one count has to be added to the matrix. For example, this list maps to these index pairs:

    [1, 3, 4] -> (1,1), (1,3), (1,4), (3,3), (3,4), (4,4)

Now, here is my accumulator:

    from pyspark.accumulators import AccumulatorParam

    class MatrixAccumulatorParam(AccumulatorParam):
        def zero …
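The class definition above is cut off at zero. A minimal sketch of what a complete, dense NumPy-backed accumulator might look like; the addInPlace merge and the count_pairs helper below are assumptions for illustration, not taken from the original post:

    import numpy as np
    from pyspark.accumulators import AccumulatorParam

    class MatrixAccumulatorParam(AccumulatorParam):
        def zero(self, initial_value):
            # Identity element: a zero matrix with the same shape as the initial value.
            return np.zeros(initial_value.shape)

        def addInPlace(self, m1, m2):
            # Merge two partial results by element-wise addition.
            m1 += m2
            return m1

    # Hypothetical usage, assuming an n x n count matrix and a SparkContext sc:
    # matrix_acc = sc.accumulator(np.zeros((n, n)), MatrixAccumulatorParam())
    #
    # def count_pairs(idx_list):
    #     update = np.zeros((n, n))
    #     for i in idx_list:
    #         for j in idx_list:
    #             if j >= i:          # upper-triangular pairs, e.g. (1,3) but not (3,1)
    #                 update[i, j] += 1
    #     matrix_acc.add(update)
    #
    # rddData.foreach(count_pairs)

For very large matrices, a dense np.zeros accumulator may be too heavy; accumulating (i, j) -> count dictionaries and building a sparse matrix on the driver is a lighter variant of the same pattern.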