numpy

IronPython unable to run script that imports numpy

Submitted by ♀尐吖头ヾ on 2021-02-07 09:36:27
Question: Disclaimer - I'm not familiar with Python. I'm a C# developer who has written an application to execute Python scripts (authored by others) using IronPython. So far these scripts have only needed import math, but one of our users has asked for the application to support NumPy. I have installed NumPy on my PC (using the 'numpy-1.9.2-win32-superpack-python2.7.exe' file), which has created a numpy folder under \Lib\site-packages. I've written a two-line Python script to test …
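A minimal test script along the lines the asker describes might look like the sketch below (run here under CPython; note that NumPy is largely built from CPython C extensions, which stock IronPython cannot load directly, so this import is exactly the step likely to fail there):

```python
# Two-line smoke test: if the import succeeds, numpy is on the path.
import numpy as np
print(np.__version__)
```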

Apply DFT matrix along each axis of 3D array in NumPy?

Submitted by 江枫思渺然 on 2021-02-07 09:24:24
Question: I can first obtain the DFT matrix of a given size, say n, by import numpy as np; n = 64; D = np.fft.fft(np.eye(n)). The FFT is of course just a fast algorithm for applying D to a vector: x = np.random.randn(n); ft1 = np.dot(D, x); print(np.abs(ft1 - np.fft.fft(x)).max()) # prints near double-precision roundoff. The 2D FFT can be obtained by applying D to both the rows and columns of a matrix: x = np.random.randn(n, n); ft2 = np.dot(x, D.T) # apply D to rows; ft2 = np.dot(D, ft2) # apply D to cols. …
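Extending the same pattern to three axes, one way (a sketch; the einsum index names are arbitrary) is to contract D against one axis at a time and compare against np.fft.fftn:

```python
import numpy as np

n = 8
D = np.fft.fft(np.eye(n))          # n-by-n DFT matrix
x = np.random.randn(n, n, n)

# Apply D along each axis in turn via einsum contractions.
ft = np.einsum('ia,abc->ibc', D, x)   # axis 0
ft = np.einsum('jb,ibc->ijc', D, ft)  # axis 1
ft = np.einsum('kc,ijc->ijk', D, ft)  # axis 2

print(np.abs(ft - np.fft.fftn(x)).max())  # near double-precision roundoff
```

np.tensordot or np.apply_along_axis would work too; einsum just makes the axis being transformed explicit at each step.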

Select random non-zero elements from each row of a 2D numpy array

Submitted by 自古美人都是妖i on 2021-02-07 09:23:16
Question: I have a 2d array a = array([[5, 0, 1, 0], [0, 1, 3, 5], [2, 3, 0, 0], [4, 0, 2, 4], [3, 2, 0, 3]]) and a 1d array b = array([1, 2, 1, 2, 2]), where b tells how many non-zero elements to choose from each row of a. For example, b[0] = 1 tells us to choose 1 non-zero element from a[0], b[1] = 2 tells us to choose 2 non-zero elements from a[1], and so on. For a 1d array it can be done using np.random.choice, but I can't find how to do it for a …
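Because each row may have a different number of non-zero entries, a straightforward approach is a per-row np.random.choice over that row's non-zero values, e.g.:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([[5, 0, 1, 0],
              [0, 1, 3, 5],
              [2, 3, 0, 0],
              [4, 0, 2, 4],
              [3, 2, 0, 3]])
b = np.array([1, 2, 1, 2, 2])

# For each row, draw b[i] distinct non-zero entries.
picked = [rng.choice(row[row != 0], size=k, replace=False)
          for row, k in zip(a, b)]
```

The result is a list rather than a 2D array, since rows yield different counts; a fully vectorized version is possible but considerably less readable.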

easy multidimensional numpy ndarray to pandas dataframe method?

Submitted by 二次信任 on 2021-02-07 09:17:12
Question: Given a 4-D numpy.ndarray, e.g. myarr = np.random.rand(10,4,3,2) with dims = {'time': 1..10, 'sub': 1..4, 'cond': ['A','B','C'], 'measure': ['meas1','meas2']}, but possibly with higher dimensions: how can I create a pandas.DataFrame with a MultiIndex, just passing the dimensions as indexes, without further manual adjustments (reshaping the ndarray into 2D shape)? I can't wrap my head around the reshaping, not even really in 3 dimensions yet, so I'm searching for an 'automatic' method if possible. What …
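One generic pattern (a sketch; the dimension names and labels are the asker's, tidied into valid Python) is to build the MultiIndex with pd.MultiIndex.from_product and pair it with the raveled array — from_product iterates the last level fastest, which matches NumPy's default C-order ravel:

```python
import numpy as np
import pandas as pd

myarr = np.random.rand(10, 4, 3, 2)
dims = {
    'time': range(1, 11),
    'sub': range(1, 5),
    'cond': ['A', 'B', 'C'],
    'measure': ['meas1', 'meas2'],
}

# Cartesian product of the labels, in the same order as C-order ravel.
index = pd.MultiIndex.from_product(list(dims.values()), names=list(dims.keys()))
s = pd.Series(myarr.ravel(), index=index)

# Optionally pivot one level out into columns for a 2-D frame.
df = s.unstack('measure')
```

This generalizes to any number of dimensions as long as the dict lists the axes in the array's axis order.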

Creating a dynamic array using numpy in python

Submitted by 核能气质少年 on 2021-02-07 09:10:55
Question: I want to create a dynamic array without specifying a size. In that array, I need to insert elements at any index I require; the remaining values can stay null or undefined until a value is assigned to them. Example: a = np.array([]); np.insert(a, any index, value). So if I use np.insert(a, 5, 1) I should get the result: array([null, null, null, null, null, 1]). Answer 1: In MATLAB/Octave you can create and extend a matrix with indexing: >> a = [] a = [](0x0) >> a(5) = 1 a = 0 0 …
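NumPy arrays are fixed-size and have no built-in "null", so np.insert alone cannot do this. A common workaround for float data is to grow the array with NaN padding when assigning past the end — a sketch, with assign_growing as a hypothetical helper, not a standard API:

```python
import numpy as np

def assign_growing(a, idx, value):
    """Assign a[idx] = value, NaN-padding the array if idx is out of range."""
    if idx >= a.size:
        a = np.concatenate([a, np.full(idx + 1 - a.size, np.nan)])
    a[idx] = value
    return a

a = np.array([], dtype=float)
a = assign_growing(a, 5, 1.0)
print(a)  # [nan nan nan nan nan  1.]
```

If truly dynamic, sparse assignment is the goal, a plain dict or a pandas Series is usually a better fit than repeatedly reallocating an ndarray.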

Convert 4D array of floats from txt (string) file to numpy array of floats

Submitted by 独自空忆成欢 on 2021-02-07 08:54:39
Question: I have a txt (string) file looking like: [[[[0.17262284, 0.95086717, 0.01172171, 0.79262904], [0.52454078, 0.16740103, 0.32694925, 0.78921072], [0.77886716, 0.35550488, 0.89272706, 0.36761104]], [[0.14336841, 0.94488079, 0.83388505, 0.02065268], [0.31804594, 0.22056339, 0.84088501, 0.94994676], [0.57845057, 0.12645735, 0.12628646, 0.05526736]]]]. The shape is (1, 2, 3, 4). How can I easily convert it to a NumPy array that respects the data type and the shape (which may be deduced from the …
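Since the bracketed text is valid Python literal syntax, one way (a sketch with a smaller inline literal standing in for the file contents) is to parse it safely with ast.literal_eval and let np.array infer shape and dtype:

```python
import ast
import numpy as np

# Stand-in for the file contents; in practice: text = open('data.txt').read()
text = """[[[[0.1, 0.2], [0.3, 0.4]],
            [[0.5, 0.6], [0.7, 0.8]]]]"""

# literal_eval parses only literals (no code execution), unlike eval.
arr = np.array(ast.literal_eval(text), dtype=float)
print(arr.shape)  # (1, 2, 2, 2)
```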

NumPy Array Copy-On-Write

Submitted by 谁说胖子不能爱 on 2021-02-07 08:16:35
Question: I have a class that returns large NumPy arrays. These arrays are cached within the class. I would like the returned arrays to be copy-on-write: if the caller only ever reads from the array, no copy is made and no extra memory is used; yet the array should appear modifiable to the caller without modifying the internal cached arrays. My solution at the moment is to make any cached arrays read-only (a.flags.writeable = False). This means that if the caller of the function may …
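The read-only approach described above can be sketched as follows (ArrayCache and its contents are illustrative): the cache hands out arrays with writeable=False, and a caller who needs to mutate makes an explicit copy — in effect, copy-on-write enforced at the call site rather than transparently:

```python
import numpy as np

class ArrayCache:
    def __init__(self):
        self._data = np.arange(10.0)
        self._data.flags.writeable = False  # protect the cached buffer

    def get(self):
        return self._data  # read-only view; callers must copy before writing

cache = ArrayCache()
a = cache.get()
try:
    a[0] = 99.0            # raises ValueError: assignment destination is read-only
except ValueError:
    a = a.copy()           # explicit "copy on write"
    a[0] = 99.0
```

True transparent copy-on-write would require intercepting the first write (e.g. an ndarray subclass), which NumPy does not provide out of the box.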

Fastest way to read a huge MySQL table in Python

Submitted by て烟熏妆下的殇ゞ on 2021-02-07 07:57:51
Question: I was trying to read a huge MySQL table made of several million rows. I used the pandas library and chunks. See the code below: import pandas as pd; import numpy as np; import pymysql.cursors; connection = pymysql.connect(user='xxx', password='xxx', database='xxx', host='xxx'); try: with connection.cursor() as cursor: query = "SELECT * FROM example_table;" chunks = []; for chunk in pd.read_sql(query, connection, chunksize=1000): chunks.append(chunk) # print(len(chunks)) result = pd …
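The chunked-read pattern in the excerpt can be sketched end to end as below, using an in-memory SQLite database as a stand-in for MySQL (table and column names are illustrative; with pymysql only the connection line changes, while the pd.read_sql loop stays the same):

```python
import sqlite3
import numpy as np
import pandas as pd

# Stand-in database with a 5000-row table.
conn = sqlite3.connect(':memory:')
pd.DataFrame({'x': np.arange(5000)}).to_sql('example_table', conn, index=False)

# Read in chunks of 1000 rows, then concatenate once at the end.
chunks = []
for chunk in pd.read_sql('SELECT * FROM example_table', conn, chunksize=1000):
    chunks.append(chunk)
result = pd.concat(chunks, ignore_index=True)
print(len(result))  # 5000
```

Concatenating once after the loop avoids the quadratic cost of growing a DataFrame inside it; for the original speed question, reducing per-row Python overhead (larger chunksize, server-side cursors) matters more than the loop structure.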