Vectorization and matrix multiplication by scalars


Question


I am new to Python/NumPy. I need to do the following calculation: for an array of discrete times t, compute $e^{At}$ for a $2\times 2$ matrix $A$.

What I did:

import numpy as np
import scipy.linalg

def calculate(t_, x_0, v_0, omega_0, c):
    # define A (note: ^ is bitwise XOR in Python, so -omega_0**2 is probably what is meant)
    a_11, a_12, a_21, a_22 = 0, 1, -omega_0^2, -c
    A = np.matrix([[a_11, a_12], [a_21, a_22]])
    print(A)
    # use vectorization (n is taken from the enclosing scope, defined below)
    temps = np.array(t_)
    A_ = np.array([A for k in range(1, n+1, 1)])
    temps*A_                        # <-- this is the line that raises the ValueError below
    x_ = scipy.linalg.expm(temps*A)
    v_ = A*scipy.linalg.expm(temps*A)
    return x_, v_

n=10
omega_0=1
c=1
x_0=1
v_0=1
t_ = [float(5*k*np.pi/n)  for k in  range (1,n+1,1)]
x_, v_ = calculate(t_,x_0,v_0,omega_0,c)

However, I get this error when multiplying A_ (an array containing n copies of A) by temps (an array containing the times for which I want to calculate exp(At)):

ValueError: operands could not be broadcast together with shapes (10,) (10,2,2)

As I understand vectorization, each element in A_ would be multiplied by the element at the same index in temps, but I think I don't have it right. Any help/comments would be much appreciated.


Answer 1:


A pure numpy way to compute t_ (it creates an array instead of a list):

In [254]: t = 5*np.arange(1,n+1)*np.pi/n
In [255]: t
Out[255]: 
array([ 1.57079633,  3.14159265,  4.71238898,  6.28318531,  7.85398163,
        9.42477796, 10.99557429, 12.56637061, 14.13716694, 15.70796327])

In [256]: a_11,a_12, a_21, a_22=0,1,-omega_0^2,-c
In [257]: a_11
Out[257]: 0
In [258]: A = np.array([[a_11,a_12], [a_21, a_22]]) 
In [259]: A
Out[259]: 
array([[ 0,  1],
       [-3, -1]])
In [260]: t.shape
Out[260]: (10,)
In [261]: A.shape
Out[261]: (2, 2)
In [262]: A_ = np.array([A  for k in  range (1,n+1,1)])
In [263]: A_.shape
Out[263]: (10, 2, 2)

A_ is an np.ndarray. I made A an np.ndarray as well; yours is an np.matrix, but your A_ will still be an np.ndarray, since np.matrix can only be 2d, whereas A_ is 3d.
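
As a quick check (my addition, not part of the original answer; the matrix values just mirror the session above), stacking copies of an np.matrix with np.array does indeed produce a plain 3-D ndarray:

import numpy as np

n = 10
A_mat = np.matrix([[0, 1], [-3, -1]])       # 2d np.matrix, as in the question
A_ = np.array([A_mat for k in range(n)])    # stacking promotes it to a plain ndarray
print(type(A_), A_.ndim, A_.shape)          # <class 'numpy.ndarray'> 3 (10, 2, 2)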

So t * A_ is an attempt at elementwise array multiplication, hence the broadcasting error with shapes (10,) and (10,2,2).

To do that elementwise multiplication right you need something like

In [264]: result = t[:,None,None]*A[None,:,:]
In [265]: result.shape
Out[265]: (10, 2, 2)

But if you want matrix multiplication of the (10,) with (10,2,2), then einsum does it easily:

In [266]: result1 = np.einsum('i,ijk', t, A_)
In [267]: result1
Out[267]: 
array([[   0.        ,   86.39379797],
       [-259.18139392,  -86.39379797]])

np.dot can't do it because its rule is 'last with 2nd to last'. tensordot can, but I'm more comfortable with einsum.
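
For completeness, here is a hedged sketch of the tensordot version (my addition, not from the original answer), which contracts the length-10 axis of t with the first axis of A_ and gives the same (2, 2) result as the einsum call:

import numpy as np

t = 5*np.arange(1, 11)*np.pi/10
A_ = np.array([[[0, 1], [-3, -1]]]*10)

result2 = np.tensordot(t, A_, axes=(0, 0))  # sum over axis 0 of t and axis 0 of A_
# result2.shape == (2, 2), matching np.einsum('i,ijk', t, A_)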

But that einsum expression makes it obvious (to me) that I can get the same thing from the elementwise *, by summing on the 1st axis:

In [268]: (t[:,None,None]*A[None,:,:]).sum(axis=0)
Out[268]: 
array([[   0.        ,   86.39379797],
       [-259.18139392,  -86.39379797]])

Or (t[:,None,None]*A[None,:,:]).cumsum(axis=0) to get a 2x2 for each time.




Answer 2:


This is what I would do.

import numpy as np
from scipy.linalg import expm
A = np.array([[1, 2], [3, 4]])
for t in np.linspace(0, 5*np.pi, 20):
    print(expm(t*A))

No attempt at vectorization here. The expm function applies to one matrix at a time, and it surely takes the bulk of the computation time, so there is no need to worry about the cost of multiplying A by a scalar inside the loop.
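
If you also want to keep all the results, a minimal sketch (my addition, using the same A and a times array) is to collect the per-time 2x2 matrices with np.stack:

import numpy as np
from scipy.linalg import expm

A = np.array([[1, 2], [3, 4]])
times = np.linspace(0, 5*np.pi, 20)

# apply expm to each t*A and stack the 2x2 results into one (20, 2, 2) array
exp_At = np.stack([expm(t*A) for t in times])
print(exp_At.shape)                         # (20, 2, 2)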



Source: https://stackoverflow.com/questions/48632343/vectorization-and-matrix-multiplication-by-scalars
