dot-product

Vectorized way of calculating the row-wise dot product of two matrices with SciPy

Submitted by ε祈祈猫儿з on 2019-11-27 07:48:40
I want to calculate the row-wise dot product of two matrices of the same dimensions as fast as possible. This is the way I am doing it:

```python
import numpy as np

a = np.array([[1, 2, 3], [3, 4, 5]])
b = np.array([[1, 2, 3], [1, 2, 3]])

result = np.array([])
for row1, row2 in zip(a, b):
    result = np.append(result, np.dot(row1, row2))
print(result)
```

and of course the output is:

```
[14. 26.]
```

Check out numpy.einsum for another method:

```python
In [52]: a
Out[52]:
array([[1, 2, 3],
       [3, 4, 5]])

In [53]: b
Out[53]:
array([[1, 2, 3],
       [1, 2, 3]])

In [54]: np.einsum('ij,ij->i', a, b)
Out[54]: array([14, 26])
```

Looks like einsum is a bit
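As a sketch of a few equivalent vectorized approaches (these go beyond the excerpt above; the variable names are illustrative), the same row-wise dot product can also be written as an elementwise multiply followed by a row sum, or as a batched matrix multiply over an added axis:

```python
import numpy as np

a = np.array([[1, 2, 3], [3, 4, 5]])
b = np.array([[1, 2, 3], [1, 2, 3]])

# einsum: multiply matching entries, sum over j, keep the row index i
r1 = np.einsum('ij,ij->i', a, b)

# elementwise product followed by a sum along each row
r2 = (a * b).sum(axis=1)

# batched matrix multiply: (n,1,k) @ (n,k,1) -> (n,1,1), then flatten
r3 = (a[:, None, :] @ b[:, :, None]).ravel()

print(r1, r2, r3)  # each is [14 26]
```

The `(a * b).sum(axis=1)` form is often the most readable; einsum avoids materializing the intermediate product array on some NumPy versions, which can matter for large inputs.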

Understanding tensordot

Submitted by て烟熏妆下的殇ゞ on 2019-11-26 11:21:01
After I learned how to use einsum, I am now trying to understand how np.tensordot works. However, I am a little bit lost, especially regarding the various possibilities for the parameter axes. To understand it, as I have never practiced tensor calculus, I use the following example:

```python
A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))
```

In this case, what are the different possible np.tensordot calls, and how would you compute each result manually?

The idea with tensordot is pretty simple - we input the arrays and the respective axes along which the sum-reductions are intended.
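As an illustrative sketch (not part of the original answer), one valid contraction for these shapes pairs A's axis 1 against B's axis 0, since both have length 3; tensordot then keeps A's leftover axes followed by B's leftover axes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(2, size=(2, 3, 5))
B = rng.integers(2, size=(3, 2, 4))

# Contract A's axis 1 against B's axis 0 (both have length 3).
out = np.tensordot(A, B, axes=([1], [0]))
print(out.shape)  # (2, 5, 2, 4): A's remaining axes, then B's remaining axes

# The same sum-reduction written with einsum: sum over the shared index j.
ref = np.einsum('ijk,jlm->iklm', A, B)
print(np.array_equal(out, ref))  # True
```

Manually, `out[i, k, l, m]` is `sum(A[i, j, k] * B[j, l, m] for j in range(3))` - the einsum subscripts make that reduction explicit, which is why einsum is often the easier mental model for checking a tensordot call.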
