numpy-ndarray

Is there a way, with numpy, to compute something for all combinations of n rows in an array (the simple case being all pairs, i.e. n=2)?

Question: I'm currently playing in Python with Runge-Kutta methods for numerically integrating systems of differential equations, and the goal (as the title says) is the simulation of planetary orbits. I'm comparing different ways of accelerating the calculation; so far I've tried a C module, which is quite efficient, and I wanted to try numpy as well. In this calculation I need to compute the mutual attraction for each pair of planets. Currently, I'm doing this:

    import numpy as np
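A minimal sketch of one vectorized way to handle the all-pairs part (not the poster's code, which is cut off above): np.triu_indices enumerates every unordered pair of row indices. The helper name pairwise_accelerations and the pos/mass arguments are made up for illustration.

    import numpy as np

    def pairwise_accelerations(pos, mass, G=1.0):
        """Acceleration on each body from every other body (sketch).

        pos  : (N, 3) array of positions
        mass : (N,)   array of masses
        """
        # All unordered pairs (i < j) of body indices.
        i, j = np.triu_indices(len(mass), k=1)
        diff = pos[j] - pos[i]                      # (P, 3) separation vectors
        dist3 = np.linalg.norm(diff, axis=1) ** 3   # |r|^3 for each pair
        force = G * diff / dist3[:, None]           # direction / |r|^2, per unit mass

        acc = np.zeros_like(pos)
        # Accumulate each pair's contribution on both bodies (Newton's third law).
        np.add.at(acc, i,  force * mass[j, None])
        np.add.at(acc, j, -force * mass[i, None])
        return acc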

Numpy array2string just writing "…" in the string? [duplicate]

Question: This question already has answers here: How to print the full NumPy array, without truncation? (16 answers). Closed last year. I have a simple thing to do: read some vectors and write them to a file. The vectors are 1024-dimensional.

    for emb in src:
        print(len(emb[0].detach().cpu().numpy()))  # --> prints 1024!
        f.write(np.array2string(emb[0].detach().cpu().numpy(), separator=', ') + " \n")

My file looks like this:

    [-0.18077464, -0.02889516, 0.33970496, ..., -0.28685367, 0.00343359, -0.00380083]
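For reference, and this is essentially what the linked duplicate points at, the "..." comes from numpy's print threshold, and np.array2string takes a threshold argument directly. A self-contained sketch with a stand-in vector (the real one would come from emb[0].detach().cpu().numpy()):

    import sys
    import numpy as np

    vec = np.random.randn(1024)                 # stand-in for emb[0].detach().cpu().numpy()
    line = np.array2string(vec,
                           separator=', ',
                           threshold=sys.maxsize,     # never summarize with "..."
                           max_line_width=1_000_000)  # keep the whole vector on one line
    with open("vectors.txt", "w") as f:
        f.write(line + "\n")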

What is the fastest way of converting a numpy array to a ctype array?

Question: Here is a snippet of code I have to convert a numpy array to a c_float ctypes array so I can pass it to some functions written in C:

    arr = my_numpy_array
    arr = arr / 255.
    arr = arr.flatten()
    new_arr = (c_float * len(arr))()
    new_arr[:] = arr

But since the last line is effectively a for loop, and we all know how notoriously slow Python for loops are, for a medium-sized image array it takes about 0.2 seconds! So this one line is currently the bottleneck of my whole pipeline. I want to know if …
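One commonly suggested alternative (a sketch, not a benchmark) is to avoid the element-by-element copy altogether and let ctypes look at the numpy buffer directly, e.g. via np.ctypeslib.as_ctypes or a typed pointer; the image shape below is made up:

    import ctypes
    import numpy as np

    arr = np.random.randint(0, 256, size=(512, 512, 3)).astype(np.float32) / 255.0
    flat = np.ascontiguousarray(arr.ravel(), dtype=np.float32)

    # Option 1: a ctypes array that shares memory with the numpy buffer (no copy).
    c_arr = np.ctypeslib.as_ctypes(flat)

    # Option 2: just a float* pointer to the buffer, if that is what the C function expects.
    c_ptr = flat.ctypes.data_as(ctypes.POINTER(ctypes.c_float))

    # Note: keep `flat` alive for as long as the C code uses c_arr / c_ptr.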

How to do numpy matmul broadcasting between two numpy tensors?

Question: I have the Pauli matrices, which are (2x2) and complex:

    II = np.identity(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

and a depolarizing_error function which takes a normally distributed random number param, generated by np.random.normal(noise_mean, noise_sd):

    def depolarizing_error(param):
        XYZ = np.sqrt(param/3) * np.array([X, Y, Z])
        return np.array([np.sqrt(1-param)*II, XYZ[0], …
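As background for the title question: np.matmul (and the @ operator) broadcasts over the leading "stack" dimensions and treats the last two axes as matrices. A small sketch with the Pauli matrices above; the rho state and the conjugation step are illustrative, not the poster's actual depolarizing_error:

    import numpy as np

    II = np.identity(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    ops = np.array([II, X, Y, Z])                 # shape (4, 2, 2): a stack of 2x2 matrices
    rho = np.array([[0.5, 0.0],
                    [0.0, 0.5]], dtype=complex)   # a single 2x2 matrix

    # matmul broadcasts the stack axis: each 2x2 matrix in `ops` multiplies `rho`.
    left = ops @ rho                              # shape (4, 2, 2)

    # Conjugate-transpose each matrix in the stack, then multiply again.
    sandwich = left @ np.transpose(ops, (0, 2, 1)).conj()   # shape (4, 2, 2)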

Why is it that the numpy array column data type does not get updated?

Question:

    nd2values[:, [1]] = nd2values[:, [1]].astype(int)
    nd2values

outputs

    array([['021fd159b55773fba8157e2090fe0fe2', '1', '881f83d2dee3f18c7d1751659406144e',
            '012059d397c0b7e5a30a5bb89c0b075e', 'A'],
           ['021fd159b55773fba8157e2090fe0fe2', '1', 'cec898a1d355dbfbad8c760615fde1af',
            '012059d397c0b7e5a30a5bb89c0b075e', 'A'],
           ['021fd159b55773fba8157e2090fe0fe2', '1', 'a99f44bbff39e352191a870e17f04537',
            '012059d397c0b7e5a30a5bb89c0b075e', 'A'],
           ...,
           ['fdeb2950c4d5209d449ebd2d6afac11e', '4', …
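The short reason, with a sketch on made-up data: a numpy array has a single dtype for all of its elements, so assigning integers back into a string array just casts them straight back to that string dtype; one column cannot hold a different type on its own (a structured array or a pandas DataFrame can).

    import numpy as np

    data = np.array([['a1', '1', 'x'],
                     ['a2', '2', 'y']])      # one fixed-width string dtype, e.g. '<U2'

    data[:, [1]] = data[:, [1]].astype(int)  # the int result is cast right back to strings
    print(data.dtype)                        # still '<U2' -- unchanged

    # To actually work with integers, keep the column as its own array.
    col1 = data[:, 1].astype(int)
    print(col1.dtype)                        # int64 (platform dependent)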

Broadcasting a 1D array to a particular dimension of a varying nD array via .reshape(generator)

Question: I have a large matrix of shape (2, 2, 2, …) with n dimensions, and n often varies. I also receive incoming data which is always a 1D array of shape (2,). I want to multiply my nD matrix by the 1D array via reshape, and I have an index of the particular dimension I want to broadcast along and modify. So I'm doing the following (within a loop):

    matrix_nd *= array_1d.reshape(1 if i != index else dimension
                                  for i, dimension in enumerate(matrix_nd…
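A sketch of the same idea with a tuple instead of a generator, using made-up shapes (the names matrix_nd, array_1d and index mirror the ones above): build a shape that is 1 on every axis except the chosen one, and let broadcasting do the rest.

    import numpy as np

    matrix_nd = np.ones((2, 2, 2, 2))   # example: 4 dimensions of size 2
    array_1d = np.array([0.5, 2.0])     # incoming data, shape (2,)
    index = 2                           # the axis to broadcast along

    # Shape like (1, 1, 2, 1): the full size only on the chosen axis.
    shape = tuple(dim if i == index else 1
                  for i, dim in enumerate(matrix_nd.shape))
    matrix_nd *= array_1d.reshape(shape)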

Performance decreases with increasing nesting of array elements

Question: A short note: this question relates to another one I asked previously, but since asking multiple questions within a single Q&A is considered bad SO style I split it up.

Setup: I have the following two implementations of a matrix calculation. The first implementation uses a matrix of shape (n, m) and the calculation is repeated in a for loop repetition times:

    import numpy as np

    def foo():
        for i in range(1, n):
            for j in range(1, m):
                _deleteA = (
                    matrix[i, j] +
                    # some constants added here
                ) …
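For context, a minimal runnable skeleton of that first implementation; the actual constants are elided in the excerpt, so a placeholder + 1.0 and made-up sizes stand in for them:

    import numpy as np

    n, m, repetition = 100, 100, 10
    matrix = np.random.rand(n, m)

    def foo():
        for i in range(1, n):
            for j in range(1, m):
                # Placeholder for the elided "some constants added here".
                _deleteA = matrix[i, j] + 1.0

    for _ in range(repetition):
        foo()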

Fast and efficient way of serializing and retrieving a large number of numpy arrays from an HDF5 file

Question: I have a huge list of numpy arrays, specifically 113287 of them, where each array has shape 36 x 2048. In terms of memory this amounts to 32 gigabytes. As of now, I have serialized these arrays as one giant HDF5 file. The problem is that retrieving individual arrays from this HDF5 file takes an excruciatingly long time (north of 10 minutes) per access. How can I speed this up? This is very important for my implementation, since I have to index into this list several thousand times for feeding …
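One common remedy (a sketch, assuming the arrays can live in a single dataset; the file name and dataset name are made up): store everything as one chunked 3-D dataset whose chunk shape matches one array, so reading dset[i] touches exactly one chunk on disk.

    import numpy as np
    import h5py

    N = 113287    # number of arrays

    # Write once: a single (N, 36, 2048) dataset, chunked so each array is one chunk.
    with h5py.File("features.h5", "w") as f:
        dset = f.create_dataset("features", shape=(N, 36, 2048),
                                dtype="float32", chunks=(1, 36, 2048))
        for i in range(N):
            dset[i] = np.zeros((36, 2048), dtype="float32")  # stand-in for the real arrays

    # Read back: indexing one row reads a single chunk, which should be fast.
    with h5py.File("features.h5", "r") as f:
        arr = f["features"][42]    # shape (36, 2048)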

Numpy - Find the spatial position of a gridpoint in a 3-d matrix (knowing the index of that gridpoint)

Question: So I think I might be on completely the wrong track here, but basically I have a 3-d meshgrid and I find the distances from a test point to every point on that grid:

    import numpy as np

    # crystal_lattice structure
    x, y, z = np.linspace(-2, 2, 5), np.linspace(-2, 2, 5), np.linspace(-2, 2, 5)
    xx, yy, zz = np.meshgrid(x, y, z)

    # testpoint
    point = np.array([1, 1, 1])
    d = np.sqrt((point[0]-xx)**2 + (point[1]-yy)**2 + (point[2]-zz)**2)
    # np.shape(d) = (5, 5, 5)

Then I am trying to find the coordinates of the …
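For reference, a sketch of going from an index in d back to spatial coordinates: np.unravel_index turns the flat position of, say, the minimum distance into an (i, j, k) tuple that can index the meshgrid arrays.

    import numpy as np

    x = y = z = np.linspace(-2, 2, 5)
    xx, yy, zz = np.meshgrid(x, y, z)

    point = np.array([1, 1, 1])
    d = np.sqrt((point[0] - xx)**2 + (point[1] - yy)**2 + (point[2] - zz)**2)

    # Index of the closest gridpoint, converted from flat to (i, j, k) form.
    idx = np.unravel_index(np.argmin(d), d.shape)

    # The spatial position of that gridpoint.
    print(idx, xx[idx], yy[idx], zz[idx])    # here: (3, 3, 3) and 1.0 1.0 1.0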

Comparing object ids of two numpy arrays

Question: I have been using numpy for quite a while, but I stumbled upon one thing that I don't fully understand:

    a = np.ones(20)
    b = np.zeros(10)
    print(id(a) == id(b))        # prints False
    print(id(a), id(b))          # prints (4591424976, 4590843504)
    print(id(a[0]) == id(b[0]))  # prints True
    print(id(a[0]), id(b[0]))    # prints (4588947064, 4588947064)
    print(id(a[0]))              # 4588947184
    print(id(b[0]))              # 4588947280

Can someone please explain the behavior observed in the last four print statements? Also, I was aware of the fact …
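For what it's worth, the usual explanation of the last four prints is that a[0] builds a new temporary scalar object on every access; inside id(a[0]) == id(b[0]) the first temporary is freed before the second is created, so CPython may hand out the same address twice. Holding references keeps both objects alive (a sketch):

    import numpy as np

    a = np.ones(20)
    b = np.zeros(10)

    # Each indexing call creates a fresh scalar; inside one expression the first
    # temporary is already freed before the second exists, so the addresses can match.
    print(id(a[0]) == id(b[0]))    # often True

    # Keeping references keeps both scalars alive, so their ids differ.
    x, y = a[0], b[0]
    print(id(x) == id(y))          # False
    print(x is a[0])               # False: a new object is created on each access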