matrix-multiplication

CSR Matrix - Matrix multiplication

*爱你&永不变心* submitted on 2019-12-04 07:52:04
I have two square matrices A and B. I must convert B to CSR format and determine the product C = A * B_csr. I have found a lot of information online regarding CSR matrix-vector multiplication. The algorithm is:

    for (i = 0; i < N; i = i + 1)
        result[i] = 0;

    for (i = 0; i < N; i = i + 1) {
        for (k = RowPtr[i]; k < RowPtr[i+1]; k = k + 1) {
            result[i] = result[i] + Val[k] * d[Col[k]];
        }
    }

However, I require matrix-matrix multiplication. Further, it seems that most algorithms apply A_csr * vector multiplication, where I require A * B_csr. My solution is to transpose the two matrices before …
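One way to form C = A * B_csr directly, without transposing either matrix, is to walk over B's rows in CSR order and scatter each nonzero into the matching column of C. Below is a minimal NumPy sketch of that idea (val, col and row_ptr are the usual CSR value, column-index and row-pointer arrays; the function name is illustrative, not from the question):

    import numpy as np

    def dense_times_csr(A, val, col, row_ptr):
        """C = A * B, where B is given by its CSR arrays (val, col, row_ptr).

        Iterating over B's rows means neither matrix needs to be transposed:
        the nonzero B[k, j] contributes A[:, k] * B[k, j] to column j of C.
        """
        n = A.shape[0]
        C = np.zeros((n, n))
        for k in range(n):                                # row k of B
            for idx in range(row_ptr[k], row_ptr[k + 1]):
                j = col[idx]                              # column index of this nonzero
                C[:, j] += A[:, k] * val[idx]
        return C

In practice, scipy.sparse already implements this: csr_matrix(A).dot(csr_matrix(B)) returns the product without any transposition.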

Make efficient - A symmetric matrix multiplication with two vectors in c#

送分小仙女□ submitted on 2019-12-04 07:09:19
Following the initial thread "make efficient the copy of symmetric matrix in c-sharp" from cMinor, I would be quite interested in some input on how to build a symmetric square matrix multiplication with one line (row) vector and one column vector, using an array implementation of the matrix instead of the classical:

    double s = 0;
    List<double> columnVector = new List<double>(N);
    List<double> lineVector = new List<double>(N);
    // ... init. vectors and symmetric square matrix m
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            s += lineVector[i] * columnVector[j] * m[i, j];
        }
    }

Thanks for your …
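The question is about C#, but the symmetry trick itself is language-agnostic. A small Python sketch of the idea (the packed upper-triangular layout and the index formula are assumptions of this sketch, not the asker's storage scheme): each off-diagonal entry M[i, j] is read once and applied to both cross terms.

    def bilinear_form_symmetric(line, col, m_packed, n):
        """s = line^T * M * col for a symmetric M stored as a flat array.

        Assumes row-major upper-triangular packing: M[i, j] (i <= j) lives at
        index i*n - i*(i-1)//2 + (j - i).  Each off-diagonal entry is read once
        and reused for both (i, j) and (j, i), roughly halving the array accesses.
        """
        s = 0.0
        for i in range(n):
            row_start = i * n - i * (i - 1) // 2
            s += line[i] * col[i] * m_packed[row_start]          # diagonal term M[i, i]
            for j in range(i + 1, n):
                mij = m_packed[row_start + (j - i)]              # M[i, j] == M[j, i]
                s += mij * (line[i] * col[j] + line[j] * col[i])
        return s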

Matrix power sum

安稳与你 submitted on 2019-12-04 05:23:47
What is the best way to calculate a sum of matrix powers such as A^i + A^(i+1) + A^(i+2) + ... + A^n for very large n? I have thought of two possible ways:

1) Use logarithmic matrix exponentiation (LME) for A^i, then calculate the subsequent matrices by multiplying by A. Problem: this doesn't really take advantage of the LME algorithm, as I am using it only for the lowest power!

2) Use LME to find A^n and memoize the intermediate calculations. Problem: too much space is required for large n.

Is there a third way? Notice that:

    A + A^2             = A(I + A)
    A + A^2 + A^3       = A(I + A) + A^3
    A + A^2 + A^3 + A^4 = (A + A^2)(I …
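One possible third way, following the identities hinted at above, is to split the sum in half: with S(k) = A + A^2 + ... + A^k, we have S(2k) = (I + A^k) S(k), and S(n) = A + A·S(n-1) for odd n, so only O(log n) levels of recursion are needed. A hedged NumPy sketch (it computes the sum starting at A; a sum starting at A^i is just A^(i-1) times this result):

    import numpy as np

    def mat_pow(A, k):
        """A^k by repeated squaring, O(log k) matrix multiplications."""
        result = np.eye(A.shape[0], dtype=A.dtype)
        base = A.copy()
        while k:
            if k & 1:
                result = result @ base
            base = base @ base
            k >>= 1
        return result

    def power_sum(A, n):
        """S(n) = A + A^2 + ... + A^n via S(2k) = (I + A^k) S(k)."""
        if n == 0:
            return np.zeros_like(A)
        if n == 1:
            return A.copy()
        if n % 2 == 0:
            half = power_sum(A, n // 2)
            return (np.eye(A.shape[0], dtype=A.dtype) + mat_pow(A, n // 2)) @ half
        return A + A @ power_sum(A, n - 1)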

Binary matrix multiplication bit twiddling hack

拟墨画扇 submitted on 2019-12-04 05:21:08
Abstract: Hi, suppose you have two different, independent 64-bit binary matrices A and T (T is the transposed version of the second matrix; using the transposed version lets the multiplication operate on T's rows rather than its columns, which is super cool for binary arithmetic), and you want to multiply these matrices. The only thing is that the matrix multiplication result is truncated to 64 bits: if a given cell of the product would be greater than 1, the resulting cell contains 1, otherwise 0.

Example (A and T shown side by side; the excerpt is truncated):

    A          T
    00000001   01111101
    01010100   01100101
    10010111   00010100
    10110000   …
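With T holding the rows of B's transpose, each output bit reduces to a single AND-and-test between one row of A and one row of T. A Python sketch of that reduction (the MSB-first row packing and the helper name are assumptions for illustration, not the asker's layout):

    def binary_mat_mult(a_rows, t_rows, width=8):
        """Boolean product of binary matrices packed as row bitmasks.

        a_rows[i] is row i of A; t_rows[j] is row j of T (the transpose of B), so
        C[i][j] = OR over k of (A[i][k] AND B[k][j]) = 1 iff a_rows[i] & t_rows[j] != 0.
        Result rows are packed MSB-first (leftmost printed bit = column 0).
        """
        c_rows = []
        for ra in a_rows:
            row = 0
            for j, rt in enumerate(t_rows):
                if ra & rt:
                    row |= 1 << (width - 1 - j)   # set column j of the result row
            c_rows.append(row)
        return c_rows

    # toy usage: 4x4 matrices packed as 4-bit row masks (arbitrary values)
    A = [0b1000, 0b1100, 0b0010, 0b0001]
    T = [0b1010, 0b0100, 0b0001, 0b1001]
    print([format(r, "04b") for r in binary_mat_mult(A, T, width=4)])

An 8x8 binary matrix is then just eight such 8-bit row masks packed into a single 64-bit word.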

Matrix multiplication in R: requires numeric/complex matrix/vector arguments

守給你的承諾、 submitted on 2019-12-04 04:09:17
Question: I'm using the dataset BreastCancer in the mlbench package, and I am trying to do the following matrix multiplication as part of a logistic regression. I put the features in the first 10 columns and create a vector of parameters called theta:

    X <- BreastCancer[, 1:10]
    theta <- data.frame(rep(1, 10))

Then I did the following matrix multiplication:

    constant <- as.matrix(X) %*% as.vector(theta[, 1])

However, I got the following error:

    Error in as.matrix(X) %*% as.vector(theta[, 1]) : requires …

Numpy efficient matrix self-multiplication (gram matrix)

旧城冷巷雨未停 submitted on 2019-12-04 03:33:30
Question: I want to compute B = A @ A.T in numpy. Obviously, the answer would be a symmetric matrix (i.e. B[i, j] == B[j, i]). However, it is not clear to me how to leverage this easily to cut the computation time down by half (by only computing the lower triangle of B and then using it to get the upper triangle for free). Is there a way to perform this optimally?

Answer 1: As noted in @PaulPanzer's link, dot can detect this case. Here's the timing proof:

    In [355]: A = np.random.rand(1000,1000)
    In [356] …
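If you want the one-triangle computation to be explicit rather than relying on dot's detection, SciPy exposes the underlying BLAS routine. A hedged sketch, assuming a double-precision array (hence dsyrk) and the default behaviour of filling the upper triangle:

    import numpy as np
    from scipy.linalg.blas import dsyrk

    A = np.random.rand(1000, 1000)

    # syrk computes alpha * A @ A.T but only writes one triangle of the output
    # (the upper triangle with the default lower=0); the other triangle is not valid.
    C_tri = dsyrk(1.0, A)

    # mirror the computed triangle into a full symmetric matrix
    C = np.triu(C_tri) + np.triu(C_tri, 1).T

    assert np.allclose(C, A @ A.T)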

MATLAB - efficient way of computing distances between points in a graph/network using the adjacency matrix and coordinates

半腔热情 submitted on 2019-12-04 02:11:38
Question: I have a network representation in a 2D coordinate space. I have an adjacency matrix Adj (which is sparse) and a coordinate matrix with the x, y values of all the points/nodes/vertices of the graph that are drawn. I would like to compute the distances between these points as efficiently as possible, and I would like to avoid cycling through the entries of the matrix and computing the pairwise distances one by one.

Answer 1:

    [n, d] = size(coordinate);
    assert(d == 2);
    resi = sparse(Adj * diag(1:n));
    …
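For comparison, a similar result can be obtained in NumPy/SciPy by pulling out the index pairs of Adj's nonzeros and computing all edge lengths in one vectorised step. This is a sketch with illustrative names, not a port of the MATLAB answer:

    import numpy as np
    from scipy.sparse import csr_matrix, find

    def edge_distances(adj, coords):
        """Distances only for connected node pairs, with no explicit pairwise loop.

        adj is a sparse adjacency matrix, coords is an (n, 2) array of x, y positions.
        Returns a sparse matrix with the same pattern as adj holding edge lengths.
        """
        i, j, _ = find(adj)                                   # endpoints of every edge
        d = np.linalg.norm(coords[i] - coords[j], axis=1)     # vectorised Euclidean length
        return csr_matrix((d, (i, j)), shape=adj.shape)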

How is convolution done with RGB channel?

半世苍凉 submitted on 2019-12-03 23:49:46
Say we have a single-channel image (5x5):

    A = [ 1 2 3 4 5
          6 7 8 9 2
          1 4 5 6 3
          4 5 6 7 4
          3 4 5 6 2 ]

and a filter K (2x2):

    K = [ 1 1
          1 1 ]

An example of applying the convolution (let us take the first 2x2 patch of A) would be 1*1 + 2*1 + 6*1 + 7*1 = 16. This is very straightforward. But let us introduce a depth factor to matrix A, i.e. an RGB image with 3 channels, or even conv layers in a deep network (with depth = 512, maybe). How would the convolution operation be done with the same filter? A similar worked example would be really helpful for the RGB case.

They will be just the same as how you do with a single …
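A sketch of how the single-channel arithmetic extends to depth (the RGB values below are random placeholders): the kernel grows a matching depth axis, and each output value is the sum of the element-wise products over height, width and depth.

    import numpy as np

    # Single-channel case from the text: the first 2x2 patch of A with the all-ones kernel.
    patch = np.array([[1, 2],
                      [6, 7]])
    K = np.ones((2, 2))
    print(np.sum(patch * K))           # 16.0, as in the worked example

    # Multi-channel case: both the patch and the kernel carry a depth axis, and the
    # products are summed over height, width AND depth, giving ONE number per filter.
    depth = 3                          # e.g. RGB; could just as well be 512
    patch_rgb = np.random.rand(2, 2, depth)
    K_rgb = np.ones((2, 2, depth))     # the kernel has one 2x2 slice per input channel
    print(np.sum(patch_rgb * K_rgb))   # a single scalar, not one value per channel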

Why did matrix multiplication using python's numpy become so slow after upgrading ubuntu from 12.04 to 14.04?

陌路散爱 submitted on 2019-12-03 17:30:54
I used to have Ubuntu 12.04 and recently did a fresh installation of Ubuntu 14.04. The stuff I'm working on involves multiplications of big matrices (~2000 x 2000), for which I'm using numpy. The problem I'm having is that now the calculations are taking 10-15 times longer. Going from Ubuntu 12.04 to 14.04 implied going from Python 2.7.3 to 2.7.6 and from numpy 1.6.1 to 1.8.1. However, I think that the issue might have to do with the linear algebra libraries that numpy is linked to. Instead of libblas.so.3gf and liblapack.so.3gf, I can only find libblas.so.3 and liblapack.so.3. I also …
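A quick way to check whether the slowdown comes from the BLAS/LAPACK libraries NumPy is linked against (a diagnostic sketch, not a fix):

    import numpy as np

    # Prints the BLAS/LAPACK build information for this NumPy installation.
    # An optimized backend typically shows up as 'openblas', 'atlas' or 'mkl';
    # seeing only the generic 'blas'/'lapack' libraries usually means the slow
    # reference implementation is being used.
    np.show_config()

On Ubuntu the active BLAS implementation can typically be switched with sudo update-alternatives --config libblas.so.3 (and similarly for liblapack.so.3).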

Does scipy support multithreading for sparse matrix multiplication when using MKL BLAS?

萝らか妹 submitted on 2019-12-03 14:15:44
According to the MKL BLAS documentation, "All matrix-matrix operations (level 3) are threaded for both dense and sparse BLAS." (http://software.intel.com/en-us/articles/parallelism-in-the-intel-math-kernel-library) I have built Scipy with MKL BLAS. Using the test code below, I see the expected multithreaded speedup for dense, but not sparse, matrix multiplication. Are there any changes to Scipy that would enable multithreaded sparse operations?

    # test dense matrix multiplication
    from numpy import *
    import time
    x = random.random((10000, 10000))
    t1 = time.time()
    foo = dot(x.T, x)
    print time.time() - t1
    # test …
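For context, SciPy's own sparse matrix product is implemented in its C++ sparsetools code and does not go through BLAS at all, so MKL's threading would not apply to it. A sparse counterpart of the dense test might look like the sketch below (size and density are arbitrary; this is not the question's omitted code):

    import time
    import scipy.sparse as sp

    # build a random sparse CSR matrix and time its self-product
    x = sp.random(10000, 10000, density=0.001, format='csr')
    t1 = time.time()
    foo = x.T.dot(x)          # handled by scipy's sparsetools, not by MKL BLAS
    print(time.time() - t1)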