matrix-multiplication

Numpy ndarray multiplication switching to matrix multiplication

喜欢而已 submitted on 2019-12-13 05:31:54
Question: This is the error I am getting:

File "/data/eduardoj/linear.py", line 305, in _fit_model
    de_dl = (dl_dt + de_dt) * dt_dl
File "/data/eduardoj/MSc-env/lib/python3.4/site-packages/numpy/matrixlib/defmatrix.py", line 343, in __mul__
    return N.dot(self, asmatrix(other))
ValueError: shapes (1,53097) and (1,53097) not aligned: 53097 (dim 1) != 1 (dim 0)

And this is the piece of numpy code where it is crashing:

340 def __mul__(self, other):
341     if isinstance(other, (N.ndarray, list, tuple)):
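The crash comes from np.matrix overloading * as matrix multiplication (N.dot), so two (1, 53097) row matrices cannot be aligned. Below is a minimal sketch of the two usual intents, assuming the operands really are row vectors as in the traceback; the variable names are made up for illustration:

```python
import numpy as np

# Two row "vectors" stored as np.matrix, shaped (1, 53097) as in the traceback.
a = np.matrix(np.random.rand(1, 53097))
b = np.matrix(np.random.rand(1, 53097))

# For np.matrix, * means matrix multiplication (np.dot), so this raises
# ValueError: shapes (1,53097) and (1,53097) not aligned.
# bad = a * b

# If an elementwise product was intended, use np.multiply (or plain ndarrays):
elementwise = np.multiply(a, b)   # shape (1, 53097)

# If an inner product was intended, transpose one operand:
inner = a * b.T                   # shape (1, 1)
```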

Creating a v7.3 .mat file in Python

*爱你&永不变心* submitted on 2019-12-13 02:21:20
Question: I need to perform multiplication involving a 60000x70000 matrix, either in Python or MATLAB. I have 16 GB of RAM and am able to load each row of the matrix easily (which is what I require). I am able to create the matrix as a whole in Python, but not in MATLAB. Is there any way I can save the array as a v7.3 .mat file using h5py or scipy so that I can load each row separately?

Answer 1: For MATLAB v7.3 you can use hdf5storage, which requires h5py; download the file here, extract it, then type: python setup
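A minimal sketch of the hdf5storage/h5py route suggested in the answer, with a much smaller illustrative matrix and a hypothetical file name; since v7.3 .mat files are HDF5 underneath, h5py can slice out single rows without loading the whole array:

```python
import numpy as np
import h5py
import hdf5storage  # pip install hdf5storage (needs h5py)

# Illustrative size only -- the real matrix in the question is 60000 x 70000.
M = np.random.rand(600, 700)

# Write a MATLAB v7.3 (HDF5-based) .mat file.
hdf5storage.savemat('big.mat', {'M': M}, format='7.3')

# Because v7.3 files are plain HDF5, h5py can read individual rows lazily,
# without loading the whole matrix into memory.
with h5py.File('big.mat', 'r') as f:
    dset = f['M']
    # Note: MATLAB stores arrays column-major, so the dataset may appear
    # transposed here; slice along the other axis if the shape looks flipped.
    row0 = dset[0, :]
```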

RcppParallel or OpenMP for matrix-vector product

大兔子大兔子 submitted on 2019-12-12 12:27:05
Question: I am trying to program a naive parallel version of conjugate gradient, so I started with the simple Wikipedia algorithm, and I want to replace the dot products and matrix-vector products with appropriate parallel versions. The RcppParallel documentation has the code for the dot product using parallelReduce; I think I'm going to use that version in my code. But I am trying to write the matrix-vector multiplication, and I haven't achieved good results compared to base R (no parallel). Some
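For reference, a serial sketch of the Wikipedia conjugate-gradient loop (written in Python/NumPy rather than Rcpp, purely for illustration); it makes explicit the one matrix-vector product and two dot products per iteration that the question wants to parallelize:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain serial CG from the Wikipedia algorithm; A symmetric positive definite."""
    x = np.zeros_like(b)
    r = b - A @ x          # matrix-vector product: the expensive kernel to parallelize
    p = r.copy()
    rs_old = r @ r         # dot product
    for _ in range(max_iter):
        Ap = A @ p         # matrix-vector product, once per iteration
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r     # dot product
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x
```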

Is it slower to mix logical variables with double?

爷,独闯天下 submitted on 2019-12-12 12:26:44
Question: I have 0-1 valued vectors that I need to do some matrix operations on. They are not very sparse (only half of the values are 0), but saving them as a logical variable instead of double uses 8 times less memory: 1 byte for a logical versus 8 for a double-precision float. Would it be any slower to do matrix multiplications of a logical vector and a double matrix than to use both as double? See my preliminary results below:

>> x = [0 1 0 1 0 1 0 1]; A = rand(numel(x)); xl = logical(x);
>> tic; for k =
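The question is about MATLAB, but the same experiment can be sketched as a NumPy analogue (not the poster's code): a bool vector takes 1 byte per element instead of 8, and multiplying it with a double matrix forces an implicit conversion to double on every call, which is the possible slowdown being asked about:

```python
import numpy as np
import timeit

n = 4096
x = (np.arange(n) % 2).astype(np.float64)   # 0-1 valued vector stored as double
xb = x.astype(bool)                         # same data, 1 byte per element
A = np.random.rand(n, n)

t_double = timeit.timeit(lambda: x @ A, number=50)
t_bool = timeit.timeit(lambda: xb @ A, number=50)   # bool is upcast to float each call

print(f"double @ double: {t_double:.4f}s, bool @ double: {t_bool:.4f}s")
```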

Type-safe matrix multiplication

旧巷老猫 submitted on 2019-12-12 07:44:55
Question: After the long-winded discussion at Write this Scala Matrix multiplication in Haskell, I was left wondering... what would a type-safe matrix multiplication look like? So here's your challenge: either link to a Haskell implementation, or implement yourself, the following:

data Matrix ... = ...
matrixMult :: Matrix ... -> Matrix ... -> Matrix ...
matrixMult ... = ...

where matrixMult produces a type error at compile time if you try to multiply two matrices with incompatible dimensions. Brownie

Iteration of matrix-vector multiplication that stores specific index positions

一笑奈何 submitted on 2019-12-12 06:27:08
Question: I need to solve a minimum-distance problem; to see some of the work that has already been tried, take a look at this link: click here. I have four elements: two column vectors, alpha of dim (px1) and beta of dim (qx1). In this case p = q = 50, giving two column vectors of dim (50x1) each. They are defined as follows:

alpha = 0:0.05:2;
beta = 0:0.05:2;

I also have two matrices, L1 and L2. L1 is composed of three column vectors of dimension (kx1) each. L2 is composed of three column-vectors
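The question is cut off, so the following is only a hypothetical NumPy sketch of the pattern the title describes, iterating a matrix-vector style evaluation and storing the index position of the minimum at each step; all names and the distance expression are made up for illustration:

```python
import numpy as np

# Hypothetical setup mirroring the question: alpha, beta on a grid, L with 3 columns.
alpha = np.arange(0, 2 + 0.05, 0.05)     # the 0:0.05:2 grid
beta = np.arange(0, 2 + 0.05, 0.05)
L = np.random.rand(len(alpha), 3)        # stand-in for L1 / L2

best_idx = np.empty(len(beta), dtype=int)
best_val = np.empty(len(beta))
for j, b in enumerate(beta):
    # One matrix-vector-style evaluation per iteration...
    dist = np.abs(L @ np.array([1.0, b, b**2]) - alpha)
    # ...storing the index position of the minimum, as the title asks.
    best_idx[j] = np.argmin(dist)
    best_val[j] = dist[best_idx[j]]
```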

Numpy/Scipy broadcasting to calculate the scalar product for certain elements

谁都会走 submitted on 2019-12-12 05:12:44
Question: I have a sparse matrix A and a dataframe (df) whose rows specify which pairs of rows of A should be used to calculate a scalar product:

Row1  Row2  Value
2     147   scalar product of the vectors at Row1 and Row2 in matrix A

Can I do this in a broadcasting manner, without looping? In my case A is about 1M x 100k and the dataframe has about 10M rows.

Answer 1: Start with a small sparse matrix (CSR is the best for math):

In [167]: A = sparse.csr_matrix([[1, 2, 3],   # Vadim's example
                                 [2, 1, 4],
                                 [0, 2, 2],
                                 [3, 0, 3]])
In [168]: AA = A.A   # dense equivalent
In
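The answer snippet is truncated; below is a hedged sketch of where it appears to be heading, using SciPy's CSR fancy indexing to take both sets of rows at once, multiply elementwise, and sum per row, avoiding a Python loop over the dataframe (the column names Row1/Row2 are taken from the question, the rest is illustrative):

```python
import numpy as np
import pandas as pd
from scipy import sparse

# Vadim's small example matrix from the answer, in CSR form.
A = sparse.csr_matrix([[1, 2, 3],
                       [2, 1, 4],
                       [0, 2, 2],
                       [3, 0, 3]])

# Hypothetical dataframe of row pairs.
df = pd.DataFrame({'Row1': [0, 1, 2], 'Row2': [1, 2, 3]})

# Fancy-index both sets of rows at once, multiply elementwise, sum each row:
# one vectorised pass instead of a Python loop over the dataframe.
prods = A[df['Row1'].to_numpy()].multiply(A[df['Row2'].to_numpy()]).sum(axis=1)
df['Value'] = np.asarray(prods).ravel()
```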

R error message when using t() %*%: "requires numeric/complex matrix/vector arguments"

孤街浪徒 submitted on 2019-12-12 04:46:39
Question: I am working on a social network analysis assignment where I need to create a network from a matrix. I'm trying to create a matrix which shows which students are linked by classes they have in common, or not (a person-person matrix). I have wrangled the original data into the first iteration of a matrix and now want to multiply that matrix. My dataset and current matrix are a bigger version of the below:

names <- c("Tom", "Dick", "And", "Harry")
class <- c("cs1", "cs2", "cs3", "cs1")
count <- c
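The assignment is in R, but the underlying operation, multiplying a person-by-class incidence matrix by its own transpose to get a person-by-person matrix, can be sketched as a Python analogue (the R error in the title typically means the object passed to t() %*% is still a data frame or character matrix rather than a numeric matrix):

```python
import numpy as np
import pandas as pd

# The four students and their classes from the question.
df = pd.DataFrame({'names': ["Tom", "Dick", "And", "Harry"],
                   'class': ["cs1", "cs2", "cs3", "cs1"]})

# Person-by-class incidence matrix (analogue of the first matrix in R).
incidence = pd.crosstab(df['names'], df['class'])

# Person-by-person matrix: M %*% t(M) in R, M @ M.T here. The diagonal counts
# each student's classes; off-diagonal entries count classes shared by a pair.
person_person = incidence.to_numpy() @ incidence.to_numpy().T
```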

Does Eigen have a self-transpose multiply optimization, like H.transpose()*H?

末鹿安然 submitted on 2019-12-12 04:28:09
Question: I have browsed the Eigen tutorial at https://eigen.tuxfamily.org/dox-devel/group__TutorialMatrixArithmetic.html. It says: "Note: for BLAS users worried about performance, expressions such as c.noalias() -= 2 * a.adjoint() * b; are fully optimized and trigger a single gemm-like function call." But what about a computation like H.transpose() * H? Since its result is a symmetric matrix, it should only need half the time of a normal A*B, but in my test H.transpose() * H takes the same time as H
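Eigen specifics aside, the "half the work" the question expects corresponds to a symmetric rank-k update (BLAS syrk) rather than a general gemm; in Eigen this is what selfadjointView<>().rankUpdate() exposes. A hedged Python/SciPy sketch of the same idea, via SciPy's low-level BLAS wrapper, computes only one triangle of H.transpose()*H:

```python
import numpy as np
from scipy.linalg import blas

H = np.random.rand(2000, 300)

# General matrix product: computes all of H.T @ H, even though it is symmetric.
full = H.T @ H

# Symmetric rank-k update: only one triangle of the result is computed/stored.
upper = blas.dsyrk(1.0, H, trans=1)          # upper triangle of H.T @ H
sym = np.triu(upper) + np.triu(upper, 1).T   # rebuild the full symmetric matrix if needed
```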