matrix-multiplication

Matrix Multiplication in pure Python?

霸气de小男生 submitted on 2019-11-26 05:28:01
Question: I'm trying to multiply two matrices together using pure Python. Input (X1 is a 3x3 and Xt is a 3x2):

X1 = [[1.0016, 0.0, -16.0514], [0.0, 10000.0, -40000.0], [-16.0514, -40000.0, 160513.6437]]
Xt = [(1.0, 1.0), (0.0, 0.25), (0.0, 0.0625)]

where Xt is the zip transpose of another matrix. Now here is the code:

```python
def matrixmult(A, B):
    C = [[0 for row in range(len(A))] for col in range(len(B[0]))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] += A[i][k] * B[k][j]
```
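The code in the question builds C with its dimensions swapped: the outer comprehension runs over len(B[0]), so the result ends up with len(B[0]) rows and len(A) columns, and indexing C[i][j] then fails for non-square inputs like the 3x3-times-3x2 product above. A corrected pure-Python sketch (the return statement is added for completeness):

```python
def matrixmult(A, B):
    # Result has len(A) rows and len(B[0]) columns.
    C = [[0 for _ in range(len(B[0]))] for _ in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] += A[i][k] * B[k][j]
    return C

X1 = [[1.0016, 0.0, -16.0514],
      [0.0, 10000.0, -40000.0],
      [-16.0514, -40000.0, 160513.6437]]
Xt = [(1.0, 1.0), (0.0, 0.25), (0.0, 0.0625)]

C = matrixmult(X1, Xt)  # a 3x2 result, as expected
print(C)
```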

How does multiplication differ for NumPy Matrix vs Array classes?

魔方 西西 submitted on 2019-11-26 03:36:45
Question: The numpy docs recommend using array instead of matrix for working with matrices. However, unlike Octave (which I was using till recently), * doesn't perform matrix multiplication; you need to use a function such as np.dot(). I feel this makes the code very unreadable. Does anybody share my views, and has anyone found a solution?

Answer 1: The main reason to avoid using the matrix class is that a) it's inherently 2-dimensional, and b) there's additional overhead compared to a "normal" numpy array. If
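The distinction the answer draws can be shown in a few lines: on ndarray, * is elementwise, and the matrix product is spelled explicitly (with @ on Python 3.5+, or np.dot on older versions):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

elementwise = a * b  # on ndarray, * multiplies element by element
matmul = a @ b       # matrix product; equivalent to np.dot(a, b)
```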

Matrix multiplication: Small difference in matrix size, large difference in timings

半城伤御伤魂 submitted on 2019-11-26 03:06:01
Question: I have a matrix multiply code that looks like this:

```c
for (i = 0; i < dimension; i++)
    for (j = 0; j < dimension; j++)
        for (k = 0; k < dimension; k++)
            C[dimension*i+j] += A[dimension*i+k] * B[dimension*k+j];
```

Here, the size of the matrix is represented by dimension. Now, if the size of the matrices is 2000, it takes 147 seconds to run this piece of code, whereas if the size of the matrices is 2048, it takes 447 seconds. So while the difference in the number of multiplications is (2048*2048*2048)/(2000*2000*2000) ≈ 1.07, the difference in timings is about 3x.
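The usual explanation is cache behavior: with a power-of-two dimension, the column walk over B (stride dimension) makes successive accesses map to the same cache sets, causing conflict misses. The standard first fix is to reorder the loops so B is read row by row. A sketch of that i-k-j ordering, written against the same flat-array layout as the C code in the question:

```python
def matmul_ikj(A, B, C, dimension):
    # Same arithmetic as the i-j-k loop in the question, but with the
    # j and k loops swapped: B is now read row by row (stride 1)
    # instead of jumping `dimension` elements between accesses.
    for i in range(dimension):
        for k in range(dimension):
            a_ik = A[dimension * i + k]
            for j in range(dimension):
                C[dimension * i + j] += a_ik * B[dimension * k + j]

# Tiny flat-array demo: [[1, 2], [3, 4]] times [[5, 6], [7, 8]].
A = [1, 2, 3, 4]
B = [5, 6, 7, 8]
C = [0] * 4
matmul_ikj(A, B, C, 2)
print(C)  # [19, 22, 43, 50]
```

Python itself won't show the cache effect the way C does; the sketch only illustrates the access-pattern change that the answer to this question turns on.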

What is the '@=' symbol for in Python?

为君一笑 submitted on 2019-11-26 02:32:57
Question: I know @ is for decorators, but what is @= for in Python? Is it just a reservation for some future idea? This is just one of my many questions while reading tokenizer.py.

Answer 1: From the documentation: the @ (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator. The @ operator was introduced in Python 3.5. @= is matrix multiplication followed by assignment, as you would expect. They map to __matmul__, __rmatmul__ or __imatmul__, similar to how + and * map to __add__ and __mul__.
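The dunder mapping the answer describes can be demonstrated with a minimal class (the name Mat and its structure are illustrative only, not part of any library):

```python
class Mat:
    """Minimal matrix wrapper, purely to show the @ operator hooks."""

    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # a @ b calls a.__matmul__(b): plain-Python row-by-column product.
        return Mat([[sum(x * y for x, y in zip(row, col))
                     for col in zip(*other.rows)]
                    for row in self.rows])

m = Mat([[1, 2], [3, 4]])
m @= Mat([[5, 6], [7, 8]])  # no __imatmul__ defined, so Python falls back to m = m @ ...
print(m.rows)  # [[19, 22], [43, 50]]
```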

Why is MATLAB so fast in matrix multiplication?

一笑奈何 submitted on 2019-11-25 22:26:29
Question: I am making some benchmarks with CUDA, C++, C#, and Java, and using MATLAB for verification and matrix generation. But when I multiply with MATLAB, 2048x2048 and even bigger matrices are almost instantly multiplied.

                1024x1024    2048x2048    4096x4096
    CUDA C (ms)     43.11       391.05      3407.99
    C++ (ms)      6137.10     64369.29    551390.93
    C# (ms)      10509.00    300684.00   2527250.00
    Java (ms)     9149.90     92562.28    838357.94
    MATLAB (ms)     75.01       423.10      3133.90

Only CUDA is competitive, but I thought that
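The short answer is that MATLAB does not run the triple loop itself: it hands the product to a highly optimized, multithreaded BLAS library (such as Intel MKL). Any environment linked against such a library gets comparable speed. A minimal sketch using NumPy, which delegates the same way (which BLAS is used depends on how NumPy was built):

```python
import numpy as np

n = 512
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# The @ operator (and np.dot) dispatch to the BLAS library NumPy was
# built against (e.g. OpenBLAS or MKL), the same kind of optimized
# native code MATLAB calls for its matrix products.
c = a @ b
```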