matrix-multiplication

R CVXR matrix multiplication %*% Error in mul_dims_promote(lh_dim, rh_dim) : Incompatible dimensions

爱⌒轻易说出口 submitted on 2020-04-14 11:52:46
Question: Hello, I am trying to run the example from here: http://rtutorial.altervista.org/lp_solvers.html. A snippet and a test of where it goes wrong:

    library(CVXR)
    # create Variable objects that can be manipulated by the solver
    x <- Variable(3)
    # coefficients for the objective function
    C <- c(2, 4, 3)
    # problem:
    C %*% x
    Error in mul_dims_promote(lh_dim, rh_dim) : Incompatible dimensions
    > x
    [1] "Variable((3, 1), nonneg=FALSE, nonpos=FALSE, pos=FALSE, neg=FALSE, complex=FALSE, imag=FALSE, symmetric=FALSE, diag …
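
Variable(3) is a 3x1 column, while c(2, 4, 3) is a dimensionless R vector, so CVXR cannot promote it for %*%; giving C explicit row dimensions (e.g. t(C) or matrix(C, nrow = 1)) resolves the mismatch. The same shape rule sketched in numpy, with illustrative values:

    import numpy as np

    C = np.array([2.0, 4.0, 3.0])   # shape (3,): no explicit row/column dims
    x = np.ones((3, 1))             # a 3x1 column, like CVXR's Variable(3)

    # A strict matrix product needs conformable 2-D shapes:
    # (1,3) @ (3,1) -> (1,1), i.e. the scalar objective value.
    obj = C.reshape(1, 3) @ x
    print(obj)                      # [[9.]] = 2 + 4 + 3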

Why can GPU do matrix multiplication faster than CPU?

谁都会走 submitted on 2020-04-10 04:00:46
Question: I've been using GPUs for a while without questioning it, but now I'm curious: why can a GPU do matrix multiplication much faster than a CPU? Is it because of parallel processing? I didn't write any parallel-processing code; does it do it automatically by itself? Any intuition / high-level explanation would be appreciated! Thanks.

Answer 1: How do you parallelize the computations? GPUs are able to do a lot of parallel computations, a lot more than a CPU could do. Look at this example of vector …
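
The parallelism lives in the hardware and the math library, not in user code: a matrix multiply decomposes into many independent dot products, and a GPU runs thousands of them concurrently. Even on a CPU, routing the same work through a tuned, vectorized library instead of interpreted loops shows the effect; a rough timing sketch (numbers are machine-dependent):

    import time
    import numpy as np

    n = 200
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    # Naive triple loop: one scalar multiply-add at a time, no parallelism.
    def matmul_loops(a, b):
        c = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                s = 0.0
                for k in range(n):
                    s += a[i, k] * b[k, j]
                c[i, j] = s
        return c

    t0 = time.perf_counter(); c1 = matmul_loops(a, b); t1 = time.perf_counter()
    t2 = time.perf_counter(); c2 = a @ b; t3 = time.perf_counter()

    print(f"loops: {t1 - t0:.3f} s   BLAS: {t3 - t2:.5f} s")
    print("results agree:", np.allclose(c1, c2))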

Reducing NbClust memory usage

ぐ巨炮叔叔 submitted on 2020-03-23 17:49:30
Question: I need some help with the massive memory usage of the NbClust function. On my data, memory balloons to 56 GB, at which point R crashes with a fatal error. Using debug(), I was able to trace the error to these lines:

    if (any(indice == 23) || (indice == 32)) {
        res[nc - min_nc + 1, 23] <- Index.sPlussMoins(cl1 = cl1, md = md)$gamma

Debugging of Index.sPlussMoins revealed that the crash happens during a for loop. The iteration that it crashes at varies, and during the loop memory usage varies …
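
The gamma index computed by Index.sPlussMoins compares pairs of within-cluster and between-cluster dissimilarities, so its working set grows roughly quadratically with the number of observations. A back-of-the-envelope check (assuming 8-byte doubles) shows how quickly a full pairwise matrix alone reaches tens of gigabytes:

    # Memory for an n x n double-precision pairwise matrix, in GB.
    def pairwise_gb(n, bytes_per=8):
        return n * n * bytes_per / 1e9

    for n in (10_000, 50_000, 80_000):
        print(f"n = {n:>6}: {pairwise_gb(n):6.1f} GB")
    # n =  10000:    0.8 GB
    # n =  50000:   20.0 GB
    # n =  80000:   51.2 GB

Common workarounds along those lines are to run NbClust on a random subsample, or to restrict its index argument so the pair-based indices are skipped.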

What is difference between the function numpy.dot(), @, and method .dot() for matrix-matrix multiplication?

…衆ロ難τιáo~ submitted on 2020-03-23 08:19:35
Question: Is there any difference? If not, which is preferred by convention? The performance seems to be almost the same:

    a = np.random.rand(1000, 1000)
    b = np.random.rand(1000, 1000)
    %timeit a.dot(b)      # 14.3 ms ± 374 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
    %timeit np.dot(a, b)  # 14.7 ms ± 315 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
    %timeit a @ b         # 15.1 ms ± 779 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Answer 1: They are all basically doing the same thing. In terms of …
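
For 2-D float arrays, all three spellings dispatch to the same underlying BLAS matrix multiply, which is why the timings match. They diverge only for scalars (which @ rejects) and for arrays with more than two dimensions, where np.matmul broadcasts over the leading axes while np.dot performs a different contraction. A short sketch of the N-D difference:

    import numpy as np

    a = np.random.rand(2, 3, 4)
    b = np.random.rand(2, 4, 5)

    # '@' (np.matmul) treats the inputs as stacks of matrices and
    # broadcasts over the leading axis.
    print((a @ b).shape)        # (2, 3, 5)

    # np.dot contracts a's last axis with b's second-to-last axis and
    # keeps *all* remaining axes.
    print(np.dot(a, b).shape)   # (2, 3, 2, 5)

    # For plain 2-D arrays, all three spellings agree exactly:
    x = np.random.rand(3, 4)
    y = np.random.rand(4, 5)
    print(np.allclose(x @ y, np.dot(x, y)), np.allclose(x @ y, x.dot(y)))  # True True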

Difference between numpy.dot and a.dot(b)

偶尔善良 submitted on 2020-02-26 14:08:02
Question: Is there a difference between

    import numpy as np
    np.dot(a, b)

and a.dot(b) internally? I wasn't able to find any documentation on the latter method.

Answer 1: If a is an array, they're equivalent. The docs you couldn't find for the dot method are here, and they boil down to "see numpy.dot". If type(a) is not numpy.ndarray, then numpy.dot will convert a to an array and use the array for the multiplication, while a.dot will do whatever a's type says it does, or raise an AttributeError if a doesn't …
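
That last point is easy to demonstrate: the function form converts its arguments to arrays, while the method form only exists on objects whose type defines it:

    import numpy as np

    a = [[1, 2], [3, 4]]    # plain nested lists, not ndarrays
    b = [[5, 6], [7, 8]]

    print(np.dot(a, b))     # works: np.dot converts both inputs to arrays

    try:
        a.dot(b)            # lists define no .dot method
    except AttributeError as e:
        print("AttributeError:", e)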

How to multiply multi-dimensional arrays/matrices in Julia

半腔热情 submitted on 2020-02-05 13:12:27
Question: Multiplying two multi-dimensional arrays, say, a 1-dimensional with a 3-dimensional array:

    [1 2] * reshape(1:8,2,2,2)

gives me the error message:

    LoadError: MethodError: `*` has no method matching *(::Array{Int64,2}, ::Array{Int64,3})
    Closest candidates are:
      *(::Any, ::Any, !Matched::Any, !Matched::Any...)
      *{TA,TB}(::Union{DenseArray{TA,1},DenseArray{TA,2},SubArray{TA,1,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD},SubArray{TA,2,A<:DenseArray{T,N},I<:Tuple{Vararg …
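
Julia's * is only defined between vectors and matrices, so a 3-D array needs an explicit contraction (or slicing into 2-D pages). The intended operation, sketched in numpy terms with shapes mirroring the Julia example (element order differs, since Julia stores arrays column-major):

    import numpy as np

    v = np.array([[1, 2]])                  # a 1x2 row, like Julia's [1 2]
    t = np.arange(1, 9).reshape(2, 2, 2)    # like reshape(1:8, 2, 2, 2)

    # Contract v's last axis with t's first axis: (1,2) x (2,2,2) -> (1,2,2)
    out = np.tensordot(v, t, axes=([1], [0]))
    print(out.shape)                        # (1, 2, 2)

    # The same contraction written explicitly as an index sum:
    out2 = np.einsum('ij,jkl->ikl', v, t)
    print(np.array_equal(out, out2))        # True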

Efficient matrix transpose matrix multiplication in Eigen

半城伤御伤魂 submitted on 2020-01-30 05:42:09
Question: I have access to a number of matrix libraries, but for this project I am using Eigen, due to its compile-time definition and its inclusion of SVD. Now, I am doing the following operation:

    Eigen::Matrix<double,M,N> A; // populated in the code
    Eigen::Matrix<double,N,N> B = A.transpose() * A;

As I understand it, this makes a copy of A and forms the transpose, which is multiplied by A again. This operation is being performed on relatively small matrices (M=20-30, N=3), but many millions of times per …
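
Because A^T * A is symmetric, only one triangle of B actually needs computing; that is exactly what the BLAS SYRK (symmetric rank-k update) routine does, and Eigen exposes the same idea through selfadjointView<>().rankUpdate(...). A numpy/scipy sketch of the SYRK route, with illustrative sizes:

    import numpy as np
    from scipy.linalg.blas import dsyrk

    M, N = 25, 3
    A = np.asfortranarray(np.random.rand(M, N))

    # trans=1 computes alpha * A^T A, filling only the upper triangle of C.
    Bu = dsyrk(1.0, A, trans=1)            # shape (3, 3)

    # Mirror the upper triangle to recover the full symmetric matrix.
    B = np.triu(Bu) + np.triu(Bu, 1).T
    print(np.allclose(B, A.T @ A))         # True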

Numpy - Finding Nearest Neighbors of a Matrix Multiplication

怎甘沉沦 submitted on 2020-01-29 05:40:26
Question: I have a dataset of a thousand 128-dimensional features, i.e. of shape (1000, 128). I want to find the sorted nearest neighbors of a 128-dimensional feature of shape (128, 1). The distance is calculated via a matrix multiplication between the dataset (1000, 128) and the feature (128, 1), which gives an array of similarities of shape (1000, 1):

    DATASET (1000, 128) x FEATURE (128, 1) = SIMILARITIES (1000, 1)

This is done via:

    # features.shape=(1000,128) ; feature.shape=(128,1) ; …
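
Treating the dot product as the similarity (it matches cosine similarity when the vectors are unit-normalized, with larger meaning closer), the sorted neighbors fall out of one matrix product and one argsort. A minimal sketch with random placeholder data:

    import numpy as np

    features = np.random.rand(1000, 128)    # the dataset
    feature = np.random.rand(128, 1)        # the query

    sims = features @ feature               # shape (1000, 1)
    order = np.argsort(sims.ravel())[::-1]  # indices, most similar first

    print(order[:5])                        # the 5 nearest neighbors
    print(sims[order[:5], 0])               # their scores, descending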

Matrix Multiplication giving wrong output [duplicate]

不羁的心 submitted on 2020-01-25 07:13:11
Question: This question already has an answer here: Unable to execute device kernel in CUDA (1 answer). Closed 4 years ago.

What I am attempting to do is multiply matrix A by matrix B, and then from the product matrix get the index of the maximum value per column. But unfortunately, only the first 128x128 values of the matrix multiplication are correct, while the others are just garbage. I do not quite understand how this works. I request you to kindly guide me with this.

    #include<stdio.h>
    #include "cuda …
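
A result that is correct only in the first 128x128 block usually means the kernel launch covers a fixed 128x128 tile (for example, a hard-coded 8x8 grid of 16x16 thread blocks) rather than scaling with the matrix size; elements outside the covered tile keep whatever garbage was already in the output buffer. A quick check of the coverage arithmetic, with hypothetical launch parameters:

    import math

    N = 512          # assumed matrix dimension
    threads = 16     # 16x16 threads per block

    # Buggy launch: a hard-coded 8x8 grid covers only 8 * 16 = 128 rows/cols.
    fixed_blocks = 8
    print("covered:", fixed_blocks * threads, "of", N)   # covered: 128 of 512

    # Fixed launch: scale the grid with N (ceiling division), and also
    # bounds-check row < N and col < N inside the kernel.
    blocks = math.ceil(N / threads)
    print("covered:", blocks * threads, ">=", N)         # covered: 512 >= 512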