Questions regarding the dimension initialization of multiple numpy arrays within a single numpy array

Submitted by 夙愿已清 on 2019-12-24 20:46:47

Question


Suppose we have the 3 Pauli matrices, each of dimension (2x2), as shown below:

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

Now suppose I use each of these individual (2x2) matrices as an entry of another (2x2) matrix, say:

A = np.array([[X, 0], [0, Y]])
B = np.array([[X, X], [Y, Y]])

Weirdly, A has a shape of (2, 2), which is ideally what I want, while B has a shape of (2, 2, 2, 2), whatever that is, as shown below:

A = np.array([[X, 0], [0, Y]])

A.shape
Out[131]: (2, 2)

B = np.array([[X, X], [Y, Y]])

B.shape
Out[133]: (2, 2, 2, 2)
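Checking the dtypes makes the difference visible (a quick check with the arrays defined above; on newer NumPy versions the ragged construction of A may emit a warning or an error):

import numpy as np

A = np.array([[X, 0], [0, Y]])   # mixed entries: 2x2 arrays plus the scalar 0
B = np.array([[X, X], [Y, Y]])   # uniform entries: only 2x2 arrays

print(A.dtype)                   # object     -> A just holds references to X, Y and 0
print(B.dtype)                   # complex128 -> B is a plain 4D numeric tensor
print(np.allclose(B[0, 0], X))   # True       -> B[i, j] recovers the (i, j) block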

On the other hand, let C be a (1x3) matrix and D a (1x2) matrix, e.g.

C = np.array([[X, Y, 0]])
D = np.array([[X, Y]])

Again, if we look at the shapes of the initialized matrices:

C = np.array([[X, Y, 0]])

C.shape
Out[136]: (1, 3)

D = np.array([[X, Y]])

D.shape
Out[138]: (1, 2, 2, 2)

So it seems that whenever I initialize arrays of arrays like these, if the entries are of mixed type, i.e. matrices and integers as in A and C, I get the sensible shape that I want, i.e. (2, 2), with "hidden" (2x2) dimensions for each entry. But as soon as the entries are strictly matrices, as in B and D, I get an unintuitive shape such as (2, 2, 2, 2). So my question is:

How do I initialize an (n, n) numpy array (matrix) whose entries are strictly (2, 2) matrices, while preserving the (n, n) shape, instead of getting some higher-dimensional shape like (w, x, y, z)?

The reason I want this is that I'm doing computations with quantum-mechanical operators, using these Pauli matrices X, Y and Z as quantum gates in quantum computation. So suppose I have some state rho, which is also a (2x2) matrix. Let

rho = np.array([[1, 0],
                [0, 0]])

And let RHO be the (2x2) diagonal matrix whose entries are the (2x2) rho matrices.

RHO = np.array([[rho, 0],
                [0, rho]])

I wish to compute something like np.dot(D, RHO) such that it gives

np.array([[np.dot(X, rho), 0],
          [0, np.dot(Y, rho)]])

And I've checked in Python that when I take the dot product of two (2x2) matrices whose entries are themselves (2x2) matrices, the entry-wise multiplications are all dot products too.
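For concreteness, here is a minimal sketch of the block-wise product I have in mind, written with ordinary 4D numeric tensors rather than matrices-of-matrices (the names OPS, RHO4 and the einsum route are only for illustration, not part of my actual code):

import numpy as np

# Store the block operands as 4D tensors of 2x2 blocks, shape (2, 2, 2, 2).
Z2 = np.zeros((2, 2))
OPS = np.array([[X, Z2], [Z2, Y]])        # block matrix [[X, 0], [0, Y]]
RHO4 = np.array([[rho, Z2], [Z2, rho]])   # block matrix [[rho, 0], [0, rho]]

# Block (i, k) of the product is sum_j OPS[i, j] @ RHO4[j, k].
OUT = np.einsum('ijab,jkbc->ikac', OPS, RHO4)

print(np.allclose(OUT[0, 0], np.dot(X, rho)))   # True
print(np.allclose(OUT[1, 1], np.dot(Y, rho)))   # True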

My motivation for all of the above is that I'm hoping to use these properties as a means to vectorize my algorithm. Currently, a very crude sketch of my algorithm looks something like this:

for i in range(integer_1):
    for j in range(integer_2):
        # do something that involves sums of dot products of many matrices

and vectorize it so that it could potentially become

for i in range(integer_1):
    # do something that involves operations on matrices whose entries are dot products of matrices

This might potentially work, or it might not! But I'm curious to see whether this approach produces a speed-up. I hope I've explained my problem well. Thanks in advance.

Edit(1)

I've added LaTeX-formatted maths so hopefully you can understand what I'm trying to do.

def compute_channel_operation(rho, operators):
    """
    Given a quantum state's density matrix rho, the effect of the
    channel on this state is:
    rho -> sum_{i=1}^n E_i * rho * E_i^dagger
    Args:
        rho (2x2 matrix): Density matrix
        operators (list): List of operators (matrices)
    Returns:
        2x2 matrix: The result of applying the channel to rho
    """
    operation = [E @ rho @ E.conj().T for E in operators]
    return np.sum(operation, axis=0)
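A small usage example of this function, just to make the intended call pattern concrete (the identity gate I2 is defined here only for the example):

import numpy as np

I2 = np.eye(2, dtype=complex)
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # same rho as above, made complex

# Channel built from two operators: I2 @ rho @ I2^dagger + X @ rho @ X^dagger
out = compute_channel_operation(rho, [I2, X])
print(out)   # [[1, 0], [0, 1]] (unnormalized), since X @ rho @ X^dagger = [[0, 0], [0, 1]]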

So I was hoping that, instead of using loops, this direct tensor-multiplication method might be quicker as I scale up my simulation, say having to do this a million times. The thing is, the list of operators K here can have any (1xn) length, i.e. [I], [I, X], [I, X, Y] or [I, X, Y, Z]. I understand that here X = X^{\dagger}, and likewise for Y and Z, but I'll have situations in my simulation where that won't be the case.

I hope I've now explained it clearly.


Answer 1:


(2, 2, 2, 2) is not a weird shape; it is just a 4D tensor of shape 2x2x2x2.

The reason why you are seeing different shapes for A and B is that you are filling A with a scalar 0 instead of a 2x2 zero matrix. Change it to

A = np.array([[X, np.zeros((2, 2))], [np.zeros((2, 2)), Y]])
B = np.array([[X, X], [Y, Y]])

And you will get 2x2x2x2 tensors for both.

Or change it to

C = np.vstack([
    np.hstack([X, np.zeros((2, 2))]),
    np.hstack([np.zeros((2, 2)), Y])
])
D = np.vstack([
    np.hstack([X, X]),
    np.hstack([Y, Y])
])

And you will get 4x4 matrices.

You can also transform from one form to the other using

E = A.transpose(0, 2, 1, 3).reshape(4, 4)
F = B.transpose(0, 2, 1, 3).reshape(4, 4)

np.allclose(C, E)  # True
np.allclose(D, F)  # True

and back

G = E.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3)
H = F.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3)

np.allclose(A, G)  # True
np.allclose(B, H)  # True
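As a quick sanity check on the block layout (just to illustrate which axes end up where after the transpose/reshape):

print(np.allclose(E[:2, :2], X))   # True: top-left 2x2 block of E is X
print(np.allclose(E[2:, 2:], Y))   # True: bottom-right 2x2 block of E is Y
print(np.allclose(F[2:, :2], Y))   # True: bottom-left 2x2 block of F is Y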

EDIT: Regarding your function compute_channel_operation(), you can speed it up considerably if you don't use a list comprehension but instead vectorize the operation, passing in a 3D tensor that holds all of your operators:

rho = np.random.rand(2, 2)
operators = [np.random.rand(2, 2) for _ in range(1000)]
operators_tensor = np.asarray(operators)  # same as np.random.rand(1000, 2, 2)

def compute_channel_operation(rho, operators):
    operation = [E @ rho @ E.conj().T for E in operators]
    return np.sum(operation, axis=0)

def compute_channel_operation2(rho, operators):
    return np.sum(operators @ rho @ operators.transpose(0, 2, 1).conj(), axis=0)

A = compute_channel_operation(rho, operators)
B = compute_channel_operation(rho, operators_tensor)
C = compute_channel_operation2(rho, operators_tensor)

np.allclose(A, B) # True
np.allclose(A, C) # True

%timeit compute_channel_operation(rho, operators)
# 6.95 ms ± 103 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit compute_channel_operation(rho, operators_tensor)
# 7.53 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit compute_channel_operation2(rho, operators_tensor)
# 416 µs ± 12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
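An equivalent einsum formulation (an alternative sketch, not part of the original answer) expresses the same sum as a single contraction; how it compares in speed will depend on the NumPy version and on whether optimize=True is passed:

def compute_channel_operation3(rho, operators):
    # sum_n operators[n] @ rho @ operators[n].conj().T as one contraction
    return np.einsum('nij,jk,nlk->il', operators, rho, operators.conj())

np.allclose(A, compute_channel_operation3(rho, operators_tensor))  # True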



Answer 2:


In [343]: X = np.array([[0, 1], [1, 0]], dtype=complex) 
     ...: Y = np.array([[0, -1j], [1j, 0]], dtype=complex) 
     ...: Z = np.array([[1, 0], [0, -1]], dtype=complex)                                                        
In [344]: X                                                                                                     
Out[344]: 
array([[0.+0.j, 1.+0.j],
       [1.+0.j, 0.+0.j]])
In [345]: Y                                                                                                     
Out[345]: 
array([[ 0.+0.j, -0.-1.j],
       [ 0.+1.j,  0.+0.j]])
In [346]: Z                                                                                                     
Out[346]: 
array([[ 1.+0.j,  0.+0.j],
       [ 0.+0.j, -1.+0.j]])

np.array tries to make a multidimensional numeric array, and failing that makes an object dtype array (or raises an error).

In [347]: np.array([X,0])                                                                                       
Out[347]: 
array([array([[0.+0.j, 1.+0.j],
       [1.+0.j, 0.+0.j]]), 0], dtype=object)

This array is basically the same as the list [X, 0] - it contains pointers or references to the array X and the number 0.

But given 2 (or more) arrays that match in size, it makes a higher-dimensional numeric array, for example a (2, 2, 2) array with complex dtype:

In [348]: np.array([X,Y])                                                                                       
Out[348]: 
array([[[ 0.+0.j,  1.+0.j],
        [ 1.+0.j,  0.+0.j]],

       [[ 0.+0.j, -0.-1.j],
        [ 0.+1.j,  0.+0.j]]])

np.block, or some combination of concatenate/stack, can make a 2D array, for example a (2, 4) complex array:

In [349]: np.block([X,Y])                                                                                       
Out[349]: 
array([[ 0.+0.j,  1.+0.j,  0.+0.j, -0.-1.j],
       [ 1.+0.j,  0.+0.j,  0.+1.j,  0.+0.j]])

To make an object dtype array containing X and Y, I can do:

In [356]: xy = np.empty((2,), object)                                                                           
In [357]: xy[0]= X                                                                                              
In [358]: xy[1]= Y                                                                                              
In [359]: xy                                                                                                    
Out[359]: 
array([array([[0.+0.j, 1.+0.j],
       [1.+0.j, 0.+0.j]]),
       array([[ 0.+0.j, -0.-1.j],
       [ 0.+1.j,  0.+0.j]])], dtype=object)

This np.empty followed by individual assignment is the most robust way of building an object dtype array. It gets around the multidimensional result shown in Out[348].

I don't know whether any of these approaches helps with your calculation goals; I haven't studied your description closely enough. But keep in mind that fast numpy code works with multidimensional numeric (including complex) arrays, such as Out[348]. Math using object dtype arrays is hit-or-miss, and nearly always slower.

@ matrix multiplication works with X, Y, Out[348], rho, etc., but not with Out[347] or RHO. + works with object dtype arrays, provided the elements themselves support addition.
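A small illustration of that hit-or-miss behaviour, using the object array xy from Out[359] (results may differ between NumPy versions):

print((xy + xy)[0])   # works: each stored 2x2 array is added elementwise
print((2 * xy)[1])    # works: each stored 2x2 array is scaled
# xy @ xy fails here for the same reason @ fails with Out[347] or RHO:
# matmul wants a numeric multidimensional array such as Out[348].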



Source: https://stackoverflow.com/questions/56902633/questions-regarding-the-dimension-initialization-of-multiple-numpy-arrays-within
