matrix-multiplication

Optimizing NumPy with Cython

老子叫甜甜 submitted on 2019-12-10 16:03:14
Question: I am currently trying to optimize code that I had written in pure Python. This code uses NumPy very heavily, as I am working with NumPy arrays. Below you can see the simplest of my classes, which I converted to Cython; it only does a multiplication of two NumPy arrays. Here: bendingForces = self.matrixPrefactor * membraneHeight My question is if and how I can optimize this, as the C code that "cython -a" generates has a lot of NumPy calls, which does not look very …
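A minimal sketch of the same operation in plain NumPy, assuming matrix_prefactor and membrane_height stand in for the asker's self.matrixPrefactor and membraneHeight: the elementwise product already executes inside NumPy's compiled loop, so one cheap optimisation that needs no Cython at all is writing into a preallocated output buffer instead of allocating a new array on every call.

    import numpy as np

    # Hypothetical stand-ins for the attributes used in the question.
    matrix_prefactor = np.random.rand(512, 512)
    membrane_height = np.random.rand(512, 512)

    # Reuse one output buffer across calls to avoid a temporary allocation;
    # the multiplication itself already runs in NumPy's C loop.
    bending_forces = np.empty_like(membrane_height)
    np.multiply(matrix_prefactor, membrane_height, out=bending_forces)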

R right matrix division

杀马特。学长 韩版系。学妹 submitted on 2019-12-10 15:29:50
Question: What's the most succinct, fastest, most numerically stable, most R-idiomatic way to do left and right matrix division in R? I understand left division inv(A)*B is usually done with solve(a,b), but what about B*inv(A)? Is the best way really to compute t(solve(t(A),t(B)))? Answer 1: I don't have a solution better than B %*% solve(A), but I did want to point out that in general solve(A,B) is faster and more numerically stable than solve(A) %*% B. > A = matrix(rnorm(10000),100,100) > B = matrix …
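For readers who want to see the identity behind t(solve(t(A), t(B))) worked through, here is a sketch in Python/NumPy (not R, and not taken from the thread): X = B * inv(A) satisfies X A = B, which transposes to t(A) t(X) = t(B), so one linear solve on the transposed system replaces the explicit inverse.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100))
    B = rng.standard_normal((100, 100))

    # Right division B * inv(A) via a solve on the transposed system,
    # mirroring the R expression t(solve(t(A), t(B))).
    X = np.linalg.solve(A.T, B.T).T

    # Explicit-inverse version, shown only to check that the two agree.
    assert np.allclose(X, B @ np.linalg.inv(A))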

R: initialise empty dgCMatrix given by matrix multiplication of two Quanteda DFM sparse matrices?

隐身守侯 submitted on 2019-12-10 11:57:03
Question: I have a for loop like this, trying to implement the solution here, with dummy vars such that aaa <- DFM %*% t(DFM) #DFM is a Quanteda dfm sparse matrix for(i in 1:nrow(aaa)) aaa[i,] <- aaa[i,][order(aaa[i,], decreasing = TRUE)] but now for(i in 1:nrow(mmm)) mmm[i,] <- aaa[i,][order(aaa[i,], decreasing = TRUE)] where mmm does not exist yet; the goal is to do the same thing as mmm <- t(apply(a, 1, sort, decreasing = TRUE)). But before the for loop I need to initialise mmm, otherwise Error: …
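A side-by-side sketch in Python (SciPy/NumPy rather than Matrix/quanteda, and assuming it is acceptable to densify the product): the row-sorted result is generally dense anyway, so allocating a dense array sidesteps the question of how to initialise an empty dgCMatrix.

    import numpy as np
    from scipy import sparse

    # Hypothetical small stand-in for the quanteda dfm.
    DFM = sparse.random(5, 8, density=0.4, format="csr", random_state=0)
    aaa = (DFM @ DFM.T).toarray()

    # Equivalent of mmm <- t(apply(aaa, 1, sort, decreasing = TRUE)):
    # sort every row in descending order, with no preallocation of mmm needed.
    mmm = -np.sort(-aaa, axis=1)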

SSE matrix-matrix multiplication

让人想犯罪 __ submitted on 2019-12-10 11:30:34
Question: I'm having trouble doing matrix-matrix multiplication with SSE in C. Here is what I have so far: #define N 1000 void matmulSSE(int mat1[N][N], int mat2[N][N], int result[N][N]) { int i, j, k; __m128i vA, vB, vR; for(i = 0; i < N; ++i) { for(j = 0; j < N; ++j) { vR = _mm_setzero_si128(); for(k = 0; k < N; k += 4) { //result[i][j] += mat1[i][k] * mat2[k][j]; vA = _mm_loadu_si128((__m128i*)&mat1[i][k]); vB = _mm_loadu_si128((__m128i*)&mat2[k][j]); //how well does the k += 4 work here? Should it …
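The quoted load fetches four consecutive elements of a row of mat2, not the column entries mat2[k..k+3][j] that the commented scalar code needs. One common way around that, shown here as a NumPy sketch of the loop order rather than as intrinsics, is to iterate i-k-j and broadcast the scalar mat1[i][k] across a contiguous row of mat2; in SSE terms that would be _mm_set1_epi32 for the scalar and _mm_loadu_si128 for mat2[k][j..j+3].

    import numpy as np

    N = 8
    mat1 = np.random.randint(0, 10, (N, N))
    mat2 = np.random.randint(0, 10, (N, N))
    result = np.zeros((N, N), dtype=mat1.dtype)

    # i-k-j order: every inner step touches a contiguous row of mat2,
    # which is the access pattern that vectorises cleanly.
    for i in range(N):
        for k in range(N):
            result[i, :] += mat1[i, k] * mat2[k, :]

    assert np.array_equal(result, mat1 @ mat2)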

Finding index positions after (spatial) matrix multiplication, bsxfun implemented

烂漫一生 submitted on 2019-12-10 11:12:24
Question: I need help finding some index positions of a matrix and two vectors after a complicated matrix multiplication. Please bear with me and read what I have first; my question comes at the end. I have two matrices, L1 and L2: L1 = firstMatrix; L2 = secondMatrix; I need to compute the difference (column-wise) of every single value from L1 with all the values of L2, again in column-wise form. This is done as follows: step one lib1 = bsxfun(@minus, L1(:,1)',L2(:,1)); lib1=lib1(:); lib2 = bsxfun( …
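The same pairwise construction translates directly to NumPy broadcasting; the excerpt cuts off before the actual question, so the index-recovery step below (finding which pair of rows produced a given entry of the flattened vector) is only a hypothetical illustration of how positions map back after the (:) flatten.

    import numpy as np

    L1 = np.random.rand(4, 3)   # stand-in for firstMatrix
    L2 = np.random.rand(5, 3)   # stand-in for secondMatrix

    # bsxfun(@minus, L1(:,1)', L2(:,1)): element (i, j) is L1(j,1) - L2(i,1).
    lib1 = L1[:, 0][np.newaxis, :] - L2[:, 0][:, np.newaxis]

    # lib1 = lib1(:) flattens column-major in MATLAB; order="F" matches that.
    lib1_flat = lib1.flatten(order="F")

    # Example of recovering index positions from the flattened vector.
    pos = np.argmin(np.abs(lib1_flat))
    row_L2, row_L1 = np.unravel_index(pos, lib1.shape, order="F")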

CUDA Matrix Multiplication writes to wrong memory location

夙愿已清 submitted on 2019-12-10 10:33:06
Question: The idea of the simple program I've been trying to write is to take input from the user for how large a matrix to multiply. dd@cuda-Linux:~/Desktop/multi$ ./program What is the rowSize of a? 33 What is the colSize of a? 33 What is the rowSize of b? 33 What is the colSize of b? 33 Would you like to write the results to a file?(y or n) y Creating the random numbers now Writing Matrix A to file now... Writing Matrix B to file now... Starting it on the device Writing Matrix C to file …
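The kernel itself is not shown in the excerpt, so the cause of the bad writes can only be guessed at; one frequent culprit with sizes such as 33 that are not a multiple of the block width is threads outside the matrix writing anyway. A bounds-checked kernel sketch (Numba CUDA rather than the asker's code, and it needs a CUDA-capable GPU to run) looks like this:

    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def matmul_kernel(A, B, C):
        row, col = cuda.grid(2)
        # Guard against threads that fall outside the 33x33 matrix; without
        # this check they would write past the end of C.
        if row < C.shape[0] and col < C.shape[1]:
            acc = 0.0
            for k in range(A.shape[1]):
                acc += A[row, k] * B[k, col]
            C[row, col] = acc

    n = 33
    A = np.random.rand(n, n); B = np.random.rand(n, n); C = np.zeros((n, n))
    threads = (16, 16)
    blocks = (math.ceil(n / threads[0]), math.ceil(n / threads[1]))
    matmul_kernel[blocks, threads](A, B, C)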

Multiply columns of a matrix with 2d matrix slices of a 3d matrix in MatLab

核能气质少年 submitted on 2019-12-10 06:40:30
Question: Basically, I want to perform the following computation: G is m x n x k, S is n x k, Answer=zeros(m,d) for Index=1:k Answer(:,Index)=G(:,:,Index)*S(:,Index) end So, Answer is a matrix whose columns are the result of multiplying each layer of a 3D matrix with a column of another matrix. This really seems like a straightforward type of operation, and I was hoping to find out if there is a native or vectorized (or at least >> faster) way of performing this type of computation in MATLAB. Thanks.
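For comparison, here is the same slice-by-slice product written without a loop in Python/NumPy (not the MATLAB answer from the thread); np.einsum contracts the shared n dimension for every layer k at once.

    import numpy as np

    m, n, k = 4, 3, 5
    G = np.random.rand(m, n, k)
    S = np.random.rand(n, k)

    # Answer(:,Index) = G(:,:,Index) * S(:,Index) for every Index, in one call.
    Answer = np.einsum('mnk,nk->mk', G, S)

    # Explicit loop, kept only to verify the vectorised form.
    ref = np.zeros((m, k))
    for idx in range(k):
        ref[:, idx] = G[:, :, idx] @ S[:, idx]
    assert np.allclose(Answer, ref)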

How to force tensorflow tensors to be symmetric?

﹥>﹥吖頭↗ submitted on 2019-12-10 03:21:17
Question: I have a set of MxM symmetric matrix Variables in a graph whose values I'd like to optimize. Is there a way to enforce the symmetry condition? I've thought about adding a term to the loss function to enforce it, but this seems awkward and roundabout. What I'd hoped for is something like tf.matmul(A,B,symmA=True), where only a triangular portion of A would be used and learned, or maybe something like tf.upperTriangularToFull(A), which would create a dense matrix from a triangular part. Answer 1: …
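The answer is cut off above; one construction that keeps a matrix symmetric without touching the loss, sketched here in TF2 style and not claimed to be what the thread proposed, is to learn a free Variable but only ever consume a symmetrised view of it, so gradients flow back through the triangular part.

    import tensorflow as tf

    M = 4
    raw = tf.Variable(tf.random.normal((M, M)))   # free parameters

    def symmetric(raw):
        # Keep the lower triangle (including the diagonal) and mirror it.
        lower = tf.linalg.band_part(raw, -1, 0)
        return lower + tf.transpose(lower) - tf.linalg.diag(tf.linalg.diag_part(lower))

    A = symmetric(raw)   # symmetric by construction; use A wherever the graph needs it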

Error using simple matrix multiplication

余生颓废 submitted on 2019-12-10 02:04:37
Question: I stumbled upon an error during a simple multiplication that rather surprised me. What is happening here? I always assumed * was only for matrix multiplication. x = 2; y = zeros(1,4); y(1) = 1 *x; y(2) = x* 1; y(3) = (x *1); y(4) = x *1; y x *1 will give the following output: y = 2 2 2 1 Error: "x" was previously used as a variable, conflicting with its use here as the name of a function or command. See MATLAB Programming, "How MATLAB Recognizes Function Calls That Use Command Syntax" for …

Binary matrix multiplication bit twiddling hack

江枫思渺然 submitted on 2019-12-09 17:35:08
Question: Abstract: Suppose you have two different independent 64-bit binary matrices A and T (T is stored already transposed; using the transposed matrix lets the multiplication operate on T's rows rather than its columns, which is very convenient for binary arithmetic), and you want to multiply these matrices. The only thing is that the matrix multiplication result is truncated to 64 bits, and if you get a value greater than 1 in some specific matrix cell, the resulting matrix cell …
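A plain reference version of this saturating (boolean) product, with each 64-bit row stored as a Python integer, is sketched below; it is the straightforward formulation the question wants to accelerate with bit twiddling, not the trick itself. Because T already holds the second matrix transposed, entry (i, j) is simply "do row i of A and row j of T share any set bit".

    # A_rows and T_rows: lists of 64 integers, one bitmask per row.
    def bool_matmul(A_rows, T_rows):
        out = []
        for a in A_rows:                     # row i of A
            bits = 0
            for j, t in enumerate(T_rows):   # row j of T == column j of the original B
                if a & t:                    # any overlapping 1-bit saturates to 1
                    bits |= 1 << j
            out.append(bits)
        return out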