matrix-multiplication

From Matlab to C++ Eigen matrix operations - vector normalization

Submitted by 拜拜、爱过 on 2019-12-06 07:46:17
Converting some Matlab code to C++ with Eigen. Questions (how to do this in C++):

1. Concatenate two vectors into a matrix (already found the solution).
2. Normalize each column of pts by dividing it by its 3rd value.

Matlab code for 1 and 2:

    % 1. A is a 3x1 vector; d0, d1 are doubles. B is 3x2.
    B = [d0*A (d0+d1)*A];
    % 2. Normalize a set of 3D points: divide each col by its 3rd value.
    % pts is 3xN, C is 3xN.
    C = pts./pts(3);                      % if N == 1
    C = bsxfun(@rdivide, pts, pts(3,:));  % otherwise

C++ code for 1 and 2:

    // 1. Found the solution for that one!
    B << d0*A, (d0 + d1)*A;
    // 2.
    for (int i = 0; i < N; i++) { // Something like this, but
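The question targets Eigen, but the intended result of both steps can be sketched language-neutrally with NumPy broadcasting, which plays the role of bsxfun here (the data values are made up for illustration):

```python
import numpy as np

# Hypothetical stand-ins for the question's variables.
A = np.array([[1.0], [2.0], [3.0]])   # 3x1 vector
d0, d1 = 2.0, 3.0

# 1. Concatenate two scaled copies of A into a 3x2 matrix B.
B = np.hstack([d0 * A, (d0 + d1) * A])

# 2. Normalize a 3xN point set column-wise by its 3rd row, the
#    broadcasting analogue of bsxfun(@rdivide, pts, pts(3,:)).
pts = np.array([[2.0, 4.0],
                [4.0, 8.0],
                [2.0, 4.0]])
C = pts / pts[2, :]   # every column's 3rd entry becomes 1
```

In Eigen the same column-wise division is expressed with array (coefficient-wise) operations rather than a hand-written loop over columns.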

Row major vs Column Major Matrix Multiplication

Submitted by 爱⌒轻易说出口 on 2019-12-06 06:48:22
Question: I am currently working on a C program computing matrix multiplication. I have approached this task by looping through each column of the second matrix, as seen below, with size set to 1000.

    for (i = 0; i < size; i++) {
        for (j = 0; j < size; j++) {
            for (k = 0; k < size; k++) {
                matC[i][j] += matA[i][k] * matB[k][j];
            }
        }
    }

I wanted to know what the problematic access pattern in this implementation is. What makes row access more efficient than column access, or vice versa? I am trying to understand this in terms of logic from
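The locality issue in the ijk loop above can be sketched in plain Python: with row-major storage, the innermost k loop of the ijk order strides down a column of matB (one cache line touched per element), while swapping the j and k loops (ikj order) streams along rows of both matB and matC. The two orderings compute the same product, which is the point of this sketch; the cache behavior only matters in a compiled language like the question's C:

```python
def matmul_ijk(A, B, n):
    # Innermost loop walks down column j of B: poor locality in row-major C.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_ikj(A, B, n):
    # Innermost loop walks along row k of B and row i of C: sequential access.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            for j in range(n):
                C[i][j] += aik * B[k][j]
    return C
```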

tensorflow element-wise matrix multiplication

Submitted by 拥有回忆 on 2019-12-06 05:03:08
Question: Say I have two tensors in TensorFlow, with the first dimension representing the index of a training example in a batch and the others representing vectors or matrices of data. E.g.

    vector_batch = tf.ones([64, 50])
    matrix_batch = tf.ones([64, 50, 50])

I'm curious about the most idiomatic way to perform a vector*matrix multiply for each pair of vectors and matrices that share an index along the first dimension, i.e. the most idiomatic way to write:

    result = tf.empty([64,50])
    for i in
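One way to express the per-example product without a loop is a batched einsum; the sketch below uses NumPy, but tf.einsum accepts the same subscript string:

```python
import numpy as np

batch, d = 64, 50
vector_batch = np.ones((batch, d))
matrix_batch = np.ones((batch, d, d))

# For each b: result[b] = vector_batch[b] @ matrix_batch[b].
# 'bi,bij->bj' contracts the shared i axis independently per batch index b.
result = np.einsum('bi,bij->bj', vector_batch, matrix_batch)
```

An alternative in TensorFlow is batched matmul: expand the vector to shape [64, 1, 50], multiply with the [64, 50, 50] batch, and squeeze the middle axis back out.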

Multiplication of two arrays with dimension=5 in a vectorize way

Submitted by 假装没事ソ on 2019-12-06 04:28:01
I have a three-dimensional domain in MATLAB, and three arrays of size (NX,NY,NZ) defined at each point of the domain:

    A1; % size(A1) = [NX NY NZ]
    A2; % size(A2) = [NX NY NZ]
    A3; % size(A3) = [NX NY NZ]

For each element, I am trying to construct an array which holds the values of A1, A2, and A3. Would the following be a good candidate for having a 1x3 vector at each point?

    B = [A1(:) A2(:) A3(:)];
    B = reshape(B, [size(A1) 1 3]);

If the 1x3 array is named C, I am trying to find C'*C at each point:

    C = [A1(i,j,k) A2(i,j,k) A3(i,j,k)]; % size(C) = [1 3]
    D = C'*C;
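The per-point outer product C'*C can be vectorized over the whole grid at once. A NumPy sketch of the idea (stack the three fields into a trailing length-3 axis, then take a batched outer product); the same reshape-then-broadcast approach carries over to MATLAB:

```python
import numpy as np

NX, NY, NZ = 2, 3, 4
rng = np.random.default_rng(0)
A1, A2, A3 = (rng.standard_normal((NX, NY, NZ)) for _ in range(3))

# Stack the three fields into a trailing length-3 axis: shape (NX, NY, NZ, 3).
B = np.stack([A1, A2, A3], axis=-1)

# Outer product C' * C at every grid point in one shot: shape (NX, NY, NZ, 3, 3).
D = np.einsum('...i,...j->...ij', B, B)
```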

Tensor multiplication in Tensorflow

Submitted by 大兔子大兔子 on 2019-12-06 04:19:50
Question: I am trying to carry out tensor multiplication in NumPy/TensorFlow. I have three tensors: A (M x h), B (h x N x s), and C (s x T). I believe that A x B x C should produce a tensor D (M x N x T). Here's the code (using both NumPy and TensorFlow):

    M = 5
    N = 2
    T = 3
    h = 2
    s = 3
    A_np = np.random.randn(M, h)
    C_np = np.random.randn(s, T)
    B_np = np.random.randn(h, N, s)
    A_tf = tf.Variable(A_np)
    C_tf = tf.Variable(C_np)
    B_tf = tf.Variable(B_np)
    # Tensorflow
    with tf.Session() as sess:
        sess.run(tf.global
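The double contraction (A's h axis against B's first, then B's s axis against C's first) fits in a single einsum; tf.einsum accepts the identical subscript string:

```python
import numpy as np

M, N, T, h, s = 5, 2, 3, 2, 3
rng = np.random.default_rng(1)
A = rng.standard_normal((M, h))
B = rng.standard_normal((h, N, s))
C = rng.standard_normal((s, T))

# Contract h between A and B, and s between B and C, in one expression.
D = np.einsum('mh,hns,st->mnt', A, B, C)   # shape (M, N, T)
```

The same result can be built from two tensordot calls, contracting one shared axis at a time.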

Where is strassen's matrix multiplication useful?

Submitted by 我的未来我决定 on 2019-12-06 04:01:24
Question: Strassen's algorithm for matrix multiplication gives only a marginal improvement over the conventional O(N^3) algorithm. It has higher constant factors and is much harder to implement. Given these shortcomings, is Strassen's algorithm actually useful, and is it implemented in any library for matrix multiplication? Moreover, how is matrix multiplication implemented in libraries?

Answer 1: Generally, Strassen's method is not preferred for practical applications, for the following reasons. The constants used
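For reference, a minimal Strassen recursion (restricted to square matrices whose size is a power of two, with no cutoff to a base-case multiply, which a practical implementation would add):

```python
import numpy as np

def strassen(A, B):
    """Strassen multiply for square matrices whose size is a power of two."""
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # Seven half-size products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    top = np.hstack([M1 + M4 - M5 + M7, M3 + M5])
    bottom = np.hstack([M2 + M4, M1 - M2 + M3 + M6])
    return np.vstack([top, bottom])
```

The extra additions and the recursion overhead are exactly the constant factors the answer refers to; real libraries switch to a tuned conventional kernel below some crossover size.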

Tensor contraction in Matlab [duplicate]

Submitted by 梦想与她 on 2019-12-06 03:02:39
Question: This question already has answers here (closed 8 years ago). Possible duplicate: MATLAB: How to vector-multiply two arrays of matrices?

Is there a way to contract higher-dimensional tensors in Matlab? For example, suppose I have two 3-dimensional arrays with these sizes:

    size(A) == [M,N,P]
    size(B) == [N,Q,P]

I want to contract A and B on the second and first indices, respectively. In other words, I want to consider A to be an array of matrices of size [M,N] and B to be an equal-length array of
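The page-wise contraction described here (an [M,N] matrix times an [N,Q] matrix for each of the P pages) is a one-liner with einsum; the sketch is NumPy, while in MATLAB the same per-page product is what pagemtimes computes in recent releases:

```python
import numpy as np

M, N, P, Q = 4, 3, 5, 2
rng = np.random.default_rng(2)
A = rng.standard_normal((M, N, P))
B = rng.standard_normal((N, Q, P))

# Contract A's 2nd index with B's 1st, independently for each page p.
C = np.einsum('mnp,nqp->mqp', A, B)   # shape (M, Q, P)
```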

Matrix power sum

Submitted by 两盒软妹~` on 2019-12-06 01:57:13
Question: What is the best way to calculate a sum of matrix powers such as A^i + A^(i+1) + A^(i+2) + ... + A^n for very large n? I have thought of two possible ways:

1. Use logarithmic matrix exponentiation (LME) for A^i, then calculate the subsequent matrices by multiplying by A. Problem: this doesn't really take advantage of the LME algorithm, since it is used only for the lowest power.
2. Use LME for finding A^n and memoize the intermediate calculations. Problem: too much space is required for large n.

Is there a
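A third option avoids both drawbacks: compute the geometric series S(n) = A + A^2 + ... + A^n by range-doubling, using the identity S(2m) = S(m) + A^m S(m), which costs O(log n) matrix multiplications and O(1) extra matrices. The full sum is then A^(i-1) S(n-i+1). A sketch (function names are my own):

```python
import numpy as np

def geom_sum(A, n):
    """Return (A + A^2 + ... + A^n, A^n) in O(log n) matrix multiplications."""
    if n == 1:
        return A.copy(), A.copy()
    S, P = geom_sum(A, n // 2)     # S = sum up to A^(n//2), P = A^(n//2)
    S = S + P @ S                  # doubles the range: sum up to A^(2*(n//2))
    P = P @ P
    if n % 2:                      # odd n: tack on the final power
        P = P @ A
        S = S + P
    return S, P

def power_range_sum(A, i, n):
    """A^i + A^(i+1) + ... + A^n, assuming 1 <= i <= n."""
    S, _ = geom_sum(A, n - i + 1)
    return np.linalg.matrix_power(A, i - 1) @ S
```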

Fortran matrix multiplication performance in different optimization

Submitted by 别来无恙 on 2019-12-06 00:45:01
Question: I'm reading the book "Scientific Software Development with Fortran", and there is an exercise in it that I find very interesting: "Create a Fortran module called MatrixMultiplyModule. Add three subroutines to it called LoopMatrixMultiply, IntrinsicMatrixMultiply, and MixMatrixMultiply. Each routine should take two real matrices as argument, perform a matrix multiplication, and return the result via a third argument. LoopMatrixMultiply should be written entirely with do loops, and no array
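A language-neutral sketch of the comparison the exercise sets up, written in Python with names mirroring the exercise's subroutines (the only claim being tested is that the explicit-loop version and the intrinsic/library version agree):

```python
import numpy as np

def loop_matrix_multiply(A, B):
    """Triple-loop version, analogous to LoopMatrixMultiply's do loops."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

def intrinsic_matrix_multiply(A, B):
    """Library routine, analogous to Fortran's matmul intrinsic."""
    return A @ B
```

In Fortran the interesting part of the exercise is then timing the two under different optimization levels, where the intrinsic typically wins unless the compiler vectorizes the loops well.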

Scipy sparse matrices element wise multiplication

Submitted by 梦想的初衷 on 2019-12-05 18:15:10
I am trying to do an element-wise multiplication of two large sparse matrices. Both are of size around 400K x 500K, with around 100M non-zero elements. However, they might not have non-zero elements in the same positions, and they might not have the same number of non-zero elements. In either situation, I'm okay with the product of a non-zero value in one matrix and a zero value in the other being zero. I keep running out of memory (8 GB) in every approach, which doesn't make much sense; I shouldn't be. These are the approaches I've tried, where A and B are sparse matrices (I've tried both COO and CSC formats):

    # I
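The usual memory trap here is that anything which materializes a dense array (a 400K x 500K float64 matrix is about 1.6 TB) blows up, while SciPy's sparse .multiply method stays sparse: the result's nonzeros are at most the intersection of the two sparsity patterns. A sketch on small stand-in matrices:

```python
from scipy import sparse

# Small stand-ins for the question's 400K x 500K matrices.
A = sparse.random(1000, 1200, density=0.01, format='csr', random_state=42)
B = sparse.random(1000, 1200, density=0.01, format='csr', random_state=43)

# Element-wise (Hadamard) product; result stays sparse, and entries where
# either operand is zero are simply absent.
C = A.multiply(B)
```

Note that for the legacy sparse matrix classes, A * B means matrix multiplication rather than element-wise multiplication, which is another common source of unexpected blow-ups.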