matrix-multiplication

Non-Square Matrix Multiplication in CUDA

Submitted by 房东的猫 on 2019-12-02 01:47:40
Question: The code I use for matrix multiplication in CUDA lets me multiply both square and non-square matrices; however, both Width and Height MUST be multiples of blocksize. So, for example, I can multiply [3][6] * [6][3] (using blocksize=3), but I can't multiply [3][2] * [2][3]. Does anyone know a way to do that? This is my kernel:

#include <stdio.h>
#include <limits.h>
#include <stdlib.h>
#define blocksize 3
#define HM (1*blocksize)
#define WM (2*blocksize)
#define WN (1*blocksize)
#define HN WM
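The usual fix (not shown in the truncated question) is to round the grid size up and guard out-of-range loads and stores inside the kernel. A NumPy sketch of the same idea, using a hypothetical `tiled_matmul` helper that zero-pads partial tiles — the host-side analogue of an in-bounds check on the GPU:

```python
import numpy as np

def tiled_matmul(A, B, bs=3):
    """Tiled matrix product that works when the dimensions are NOT
    multiples of the tile size, by zero-padding partial tiles --
    the same effect as boundary guards in a CUDA kernel."""
    H, K = A.shape
    K2, W = B.shape
    assert K == K2
    C = np.zeros((H, W))
    for i0 in range(0, H, bs):
        for j0 in range(0, W, bs):
            acc = np.zeros((bs, bs))
            for k0 in range(0, K, bs):
                # "Load" tiles, zero-padding out-of-range entries
                # (what a guarded shared-memory load provides on the GPU).
                ta = np.zeros((bs, bs)); tb = np.zeros((bs, bs))
                a = A[i0:i0+bs, k0:k0+bs]; ta[:a.shape[0], :a.shape[1]] = a
                b = B[k0:k0+bs, j0:j0+bs]; tb[:b.shape[0], :b.shape[1]] = b
                acc += ta @ tb
            # Guarded store: write back only the in-bounds part of the tile.
            c = C[i0:i0+bs, j0:j0+bs]
            c[...] = acc[:c.shape[0], :c.shape[1]]
    return C

A = np.arange(6).reshape(3, 2).astype(float)   # a [3][2] matrix
B = np.arange(6).reshape(2, 3).astype(float)   # a [2][3] matrix
print(np.allclose(tiled_matmul(A, B), A @ B))  # True
```

In the CUDA kernel itself this corresponds to launching `ceil(W/blocksize) × ceil(H/blocksize)` blocks and wrapping every global load and the final store in `if (row < H && col < W)`-style checks.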

OpenCV matrix multiplication

Submitted by 穿精又带淫゛_ on 2019-12-02 00:38:18
I need to multiply a matrix by its transpose, but I get the following error:

"OpenCV Error: Assertion failed (type == B.type() && (type == CV_32FC1 || type == CV_64FC1 || type == CV_32FC2 || type == CV_64FC2)) in unknown function, file ..\..\..\src\opencv\modules\core\src\matmul.cpp, line 711"

Here is the code:

int dA[] = {
    1, 2, 3,
    4, 5, 6,
    6, 5, 4,
};
Mat A = Mat(3, 3, CV_32S, dA);
Mat C = A.t() * A;

Answer: OpenCV only supports matrix multiplication for matrices of floating-point real or complex types, and you are creating a matrix of signed integer type. Supported types are: CV_32FC1 // real float CV
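Consistent with the assertion message, the fix is to create the matrix with a float type (e.g. `CV_32F` instead of `CV_32S`). A NumPy sketch of the same product on the question's data, with the float dtype playing the role of `CV_32F`:

```python
import numpy as np

# Same data as the OpenCV question; float32 mirrors CV_32F.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [6, 5, 4]], dtype=np.float32)

C = A.T @ A        # the equivalent of Mat C = A.t() * A
print(C[0, 0])     # 1*1 + 4*4 + 6*6 = 53.0
```

In OpenCV itself the one-line change would be `Mat A = Mat(3, 3, CV_32F, fA);` with a `float fA[]` array (or converting via `A.convertTo(A, CV_32F)` first).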

Matrix multiplication resulting in different values in MATLAB and NUMPY(?) [duplicate]

Submitted by 旧城冷巷雨未停 on 2019-12-01 22:58:41
Question: This question already has answers here: Matrix multiplication problems - Numpy vs Matlab? (2 answers). Closed 3 years ago.

Here's the matrix:

>> x = [2 7 5 9 2; 8 3 1 6 10; 4 7 3 10 1; 6 7 10 1 8; 2 8 2 5 9]

MATLAB gives me:

>> mtimes(x', x)
ans =
   124   124    94   122   154
   124   220   145   198   179
    94   145   139   101   121
   122   198   101   243   141
   154   179   121   141   250

However, the same operation (on the same data) in Python (NumPy) produces a different result. I'm unable to understand why.

import numpy as np
a = [[2, 7, 5, 9, 2
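The usual cause (per the linked duplicate) is that `*` on NumPy arrays is elementwise, not matrix multiplication, so a literal transcription of MATLAB's `x' * x` goes wrong. Using `@` (or `np.dot`) reproduces MATLAB's `mtimes` exactly:

```python
import numpy as np

x = np.array([[2, 7, 5, 9, 2],
              [8, 3, 1, 6, 10],
              [4, 7, 3, 10, 1],
              [6, 7, 10, 1, 8],
              [2, 8, 2, 5, 9]])

# MATLAB's mtimes(x', x) is true matrix multiplication: x.T @ x.
# A common mistake is x.T * x, which multiplies ELEMENTWISE in NumPy.
print((x.T @ x)[0])   # [124 124  94 122 154] -- matches MATLAB's first row
```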

Matrix multiplication in R: requires numeric/complex matrix/vector arguments

Submitted by 房东的猫 on 2019-12-01 21:59:54
I'm using the BreastCancer dataset from the mlbench package, and I am trying to do the following matrix multiplication as part of a logistic regression. I take the features in the first 10 columns and create a vector of parameters called theta:

X <- BreastCancer[, 1:10]
theta <- data.frame(rep(1, 10))

Then I do the following matrix multiplication:

constant <- as.matrix(X) %*% as.vector(theta[, 1])

However, I get the following error:

Error in as.matrix(X) %*% as.vector(theta[, 1]) :
  requires numeric/complex matrix/vector arguments

Do I need to cast the matrix to double using as.numeric(X) first?
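In R the usual cause is that BreastCancer's feature columns are factors, so `as.matrix(X)` produces a character matrix; converting the columns to numeric first (e.g. with `data.matrix(X)` or `as.numeric(as.character(...))` per column) resolves the error. A NumPy analogue of the failure and the fix, with made-up toy data:

```python
import numpy as np

# as.matrix on factor columns yields a character matrix; the NumPy
# analogue is an array of strings, which @ cannot multiply.
X = np.array([["1", "2"], ["3", "4"]])
theta = np.ones(2)

# X @ theta would raise a TypeError here, much as R refuses
# non-numeric arguments; converting to numeric first fixes it:
constant = X.astype(float) @ theta
print(constant)   # [3. 7.]
```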

Numpy efficient matrix self-multiplication (gram matrix)

Submitted by 混江龙づ霸主 on 2019-12-01 19:54:43
I want to compute B = A @ A.T in NumPy. Obviously, the answer is a symmetric matrix (i.e. B[i, j] == B[j, i]). However, it is not clear to me how to leverage this to cut the computation time in half (by computing only the lower triangle of B and then using it to get the upper triangle for free). Is there a way to perform this optimally?

Answer: As noted in @PaulPanzer's link, dot can detect this case. Here's the timing proof:

In [355]: A = np.random.rand(1000,1000)
In [356]: timeit A.dot(A.T)
57.4 ms ± 960 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [357]: B = A.T
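`np.dot` dispatches to BLAS, which can recognize the `A @ A.T` pattern and use a symmetric rank-k routine (`syrk`) that only computes one triangle. If you do end up with just one triangle yourself, mirroring it into the full Gram matrix is cheap. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 3))
B = A @ A.T            # BLAS can exploit symmetry here internally

# Reconstructing the full Gram matrix from one triangle is cheap:
L = np.tril(B)                         # pretend only the lower half exists
full = L + L.T - np.diag(np.diag(B))   # mirror, without doubling the diagonal
print(np.allclose(full, B))            # True
```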

Element-by-element matrix multiplication in MATLAB

Submitted by 自古美人都是妖i on 2019-12-01 14:00:28
So I have the following matrices:

A = [1 2 3; 4 5 6];
B = [0.5 2 3];

I'm writing a function in MATLAB that will multiply a vector and a matrix element by element, as long as the number of elements in the vector matches the number of columns of the matrix. A has 3 columns:

1 2 3
4 5 6

B also has 3 elements, so this should work. I'm trying to produce the following output from A and B:

0.5  4  9
  2 10 18

My code is below. Does anyone know what I'm doing wrong?

function C = lab11(mat, vec)
C = zeros(2,3);
[a, b] = size(mat);
[c, d] = size(vec);
for i = 1:a
    for k = 1:b
        for j = 1
            C(i,k) = C(i,k) + A
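The loop isn't needed at all: in MATLAB this is `bsxfun(@times, A, B)` (or simply `A .* B` with implicit expansion in R2016b and later). The same row-wise elementwise product via NumPy broadcasting, on the question's data:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=float)
B = np.array([0.5, 2, 3])

# Broadcasting multiplies each row of A elementwise by B --
# the analogue of MATLAB's A .* B or bsxfun(@times, A, B).
C = A * B
print(C)   # [[0.5  4.  9.], [2. 10. 18.]] -- the desired output
```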

MATLAB - efficient way of computing distances between points in a graph/network using the adjacency matrix and coordinates

Submitted by 狂风中的少年 on 2019-12-01 12:32:41
I have the network representation in a 2D coordinate space: a sparse adjacency matrix Adj, and a coordinate matrix with the x,y values of all the points/nodes/vertices of the graph. I would like to compute the distances between these points as efficiently as possible, avoiding a loop over the matrix entries that computes the pairwise distances one by one.

[n, d] = size(coordinate);
assert(d == 2);
resi = sparse(Adj * diag(1:n));
resj = sparse(diag(1:n) * Adj);
res = sparse(zeros(n));
f = find(Adj)
res(f) = sqrt((coordinate(resi(f), 1) -
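The core idea — index the edge endpoints once with `find` and compute all edge lengths in one vectorized expression — looks like this in NumPy, on a small hypothetical graph (`np.nonzero` plays the role of MATLAB's `find(Adj)`):

```python
import numpy as np

# Hypothetical 3-node graph: adjacency matrix and 2-D coordinates.
Adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
coord = np.array([[0.0, 0.0],
                  [3.0, 4.0],
                  [3.0, 0.0]])

# Vectorized over edges only: np.nonzero yields the (i, j) index
# pairs of the nonzero entries of Adj.
i, j = np.nonzero(Adj)
dist = np.sqrt(((coord[i] - coord[j]) ** 2).sum(axis=1))
print(dist)   # edge (0,1) has length 5.0, edge (1,2) has length 4.0
```

Since Adj is symmetric, each undirected edge appears twice; restricting to `i < j` would compute each length once.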

Numpy tensor: Tensordot over frontal slices of tensor

Submitted by 烂漫一生 on 2019-12-01 12:29:25
I'm trying to perform a matrix multiplication with the frontal slices of a 3D tensor. If X.shape == (N, N) and Y.shape == (N, N, Y), the resulting tensor should have shape (N, N, Y). What's the proper np.tensordot syntax to achieve this? I'm limiting myself to np.tensordot rather than np.einsum, because I want to later translate this solution to Theano, and unfortunately Theano does not implement np.einsum yet. (Graphics adapted from this paper about tensor multiplication.) The non-tensordot answer is equivalent to the following:

tensor = np.random.rand(3, 3, 2)
X = np
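Contracting X's second axis against Y's first axis gives exactly the per-slice products, so `np.tensordot(X, Y, axes=([1], [0]))` is the syntax being asked for. A self-contained check against an explicit slice-by-slice loop:

```python
import numpy as np

N, K = 3, 2
rng = np.random.default_rng(1)
X = rng.random((N, N))
Y = rng.random((N, N, K))

# Contract X's axis 1 with Y's axis 0:
# out[i, m, k] = sum_j X[i, j] * Y[j, m, k]
out = np.tensordot(X, Y, axes=([1], [0]))   # shape (N, N, K)

# Same result as multiplying each frontal slice: X @ Y[:, :, k]
check = np.stack([X @ Y[:, :, k] for k in range(K)], axis=2)
print(out.shape, np.allclose(out, check))   # (3, 3, 2) True
```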