numpy

How to find which items in a list of lists are equal to another list

若如初见. Submitted on 2021-01-29 02:19:16
Question: I have a list of lists that looks like this: [[0], [0, 1, 2], [2], [3], [4], [5], [0, 1, 2, 3, 4, 5, 6, 7], [7], [8], [9], [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], [11], [11, 12, 13, 14, 15, 16, 17, 18], [13], [14], [14, 15, 16, 17, 18], [16, 17, 18], [17], [17, 18]]. I am trying to find the smallest number of sublists that, when concatenated, equal the full range of the list. In this case, the full range of the list is: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
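This is an instance of minimum set cover, which is NP-hard in general; a greedy heuristic that repeatedly takes the sublist covering the most still-uncovered elements is a common practical answer. A minimal sketch, assuming the target range is the union of all sublists:

```python
# Greedy set-cover heuristic: repeatedly pick the sublist that covers
# the largest number of still-uncovered elements.
def min_cover(lists):
    universe = set().union(*(set(l) for l in lists))
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(lists, key=lambda l: len(uncovered & set(l)))
        chosen.append(best)
        uncovered -= set(best)
    return chosen

lists = [[0], [0, 1, 2], [2], [3], [4], [5], [0, 1, 2, 3, 4, 5, 6, 7],
         [7], [8], [9], [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
         [11], [11, 12, 13, 14, 15, 16, 17, 18], [13], [14],
         [14, 15, 16, 17, 18], [16, 17, 18], [17], [17, 18]]
cover = min_cover(lists)
```

The greedy choice is not guaranteed optimal for every input, but on this data it finds a 2-sublist cover ([8..18] plus [0..7]).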

Passing numpy arrays in Cython to a C function that requires dynamically allocated arrays

冷暖自知 Submitted on 2021-01-29 02:16:34
Question: I have some C code with the following declaration: int myfunc(int m, int n, const double **a, double **b, double *c); So a is a constant 2D array, b is a 2D array, and c is a 1D array, all dynamically allocated. b and c do not need to hold anything in particular before they are passed to myfunc, and should be understood as output arguments. For the purposes of this question, I'm not allowed to change the declaration of myfunc. Question 1: How do I convert a given numpy array a_np into an
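A double ** is an array of per-row pointers, not a flat buffer, so the usual trick is to build a small pointer table whose entries point into the numpy array's rows. A hedged sketch of that layout using ctypes rather than Cython (in Cython the equivalent would be a malloc'd double *[m] filled with &a_np[i, 0]; the names a_np, m, n follow the question):

```python
import ctypes
import numpy as np

# A C-contiguous float64 array standing in for the question's a_np.
a_np = np.ascontiguousarray(np.arange(6, dtype=np.float64).reshape(2, 3))
m, n = a_np.shape

# Build the double** : an array of m row pointers into a_np's buffer.
row_ptr = ctypes.POINTER(ctypes.c_double)
rows = (row_ptr * m)()
for i in range(m):
    rows[i] = a_np[i].ctypes.data_as(row_ptr)

# `rows` can now be handed to a ctypes-loaded function expecting
# `const double **a`; a_np must stay alive while C code uses it.
```

Indexing the pointer table (rows[i][j]) reads straight out of the numpy buffer, which is a quick way to sanity-check the layout.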

Convert array of lists to array of tuples/triple

跟風遠走 Submitted on 2021-01-29 01:55:37
Question: I have a 2D Numpy array with 3 columns. It looks something like this: array([[0, 20, 1], [1, 2, 1], ........, [20, 1, 1]]). It is essentially an array of lists. How can I convert this matrix into array([(0,20,1), (1,2,1), ........., (20,1,1)])? I want the output to be an array of triples. I have been trying to use the tuple and map functions described in Convert numpy array to tuple, R = mydata #my data is sparse matrix of 1's and 0's #First row #R[0] = array([0,0,1,1]) #Just a sample (rows, cols)
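A minimal sketch of the usual map/tuple answer: mapping tuple over the rows yields a plain list of 3-tuples (note that passing that list back to np.array would rebuild a 2D array, so keep it as a list, or use dtype=object if a numpy container of tuples is really required):

```python
import numpy as np

arr = np.array([[0, 20, 1], [1, 2, 1], [20, 1, 1]])

# tuple() applied to each row turns the (N, 3) array into N triples.
triples = list(map(tuple, arr))
```

Each element is now a 3-tuple, hashable and usable as a dict key or set member, which is often the reason for wanting tuples in the first place.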

ValueError: setting an array element with a sequence when array is not a sequence

混江龙づ霸主 Submitted on 2021-01-29 01:30:19
Question: Hello, this code is intended to store the coordinates of rectangles drawn with OpenCV and compile the results into a single image. import numpy as np import cv2 im = cv2.imread('1.jpg') im3 = im.copy() gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray,(5,5),0) thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2) contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE) squares = [] for cnt in contours: if cv2.contourArea(cnt)>50: [x,y,w,h] = cv2
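This ValueError typically means numpy was asked to build a rectangular array from rows of unequal length. A small sketch reproducing the failure and the fix, independent of the OpenCV part (the rectangle values here are made up for illustration):

```python
import numpy as np

# Ragged rows: converting them with a fixed dtype raises
# "setting an array element with a sequence".
ragged = [[0, 0, 10, 10], [5, 5, 20]]   # second row is missing a value
failed = False
try:
    np.array(ragged, dtype=np.int32)
except ValueError:
    failed = True

# Rows of equal length convert cleanly into an (N, 4) array.
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20]], dtype=np.int32)
```

In the contour loop above, the practical fix is to append a fixed-length [x, y, w, h] for every accepted contour and only call np.array once the list is complete and uniform.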

Vectorized sampling of multiple binomial random variables

大城市里の小女人 Submitted on 2021-01-28 23:36:06
Question: I would like to sample a few hundred binomially distributed random variables, each with a different n and p (using the argument names as defined in the numpy.random.binomial docs). I'll be doing this repeatedly, so I'd like to vectorize the code if possible. Here's an example: import numpy as np # Made up parameters N_random_variables = 500 n_vals = np.random.random_integers(100, 200, N_random_variables) p_vals = np.random.random_sample(N_random_variables) # Can this portion be vectorized?
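numpy's binomial sampler already broadcasts over array-valued n and p, so the per-variable loop can usually be removed entirely. A sketch using the newer Generator API (the legacy np.random.binomial accepts arrays the same way; random_integers from the question is deprecated, so integers is used here):

```python
import numpy as np

rng = np.random.default_rng(0)
N_random_variables = 500
n_vals = rng.integers(100, 201, size=N_random_variables)  # 100..200 inclusive
p_vals = rng.random(N_random_variables)

# One vectorized call: element i is drawn from Binomial(n_vals[i], p_vals[i]).
samples = rng.binomial(n_vals, p_vals)
```

Repeated draws just repeat the one call; for many repetitions, passing size=(reps, N_random_variables) with broadcasting-compatible parameter arrays avoids even the outer loop.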

How to vectorize a calculation between a 1D and a 2D numpy array with if-conditions

和自甴很熟 Submitted on 2021-01-28 22:47:58
Question: I have a calculation using a 1D and a 2D numpy array. It has two levels of if-conditions. I was able to use np.where to avoid one if-statement, and I then used a slow list comprehension to iterate through each row. Ideally, I would like to vectorize the whole calculation. Is it possible? Here is my code: import numpy as np r_base = np.linspace(0, 4, 5) np.random.seed(0) r_mat = np.array([r_base * np.random.uniform(0.9, 1.1, 5), r_base * np.random.uniform(0.9, 1.1, 5), r_base * np
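Two levels of if-conditions can often be folded into nested np.where calls, letting the 1D array broadcast across the rows of the 2D array. A hedged sketch with made-up thresholds, since the actual conditions are truncated in the excerpt:

```python
import numpy as np

r_base = np.linspace(0, 4, 5)
rng = np.random.default_rng(0)
r_mat = r_base * rng.uniform(0.9, 1.1, size=(4, 5))

# Outer condition tests the 2D array; inner condition tests the 1D array.
# r_base (shape (5,)) broadcasts across every row of r_mat (shape (4, 5)),
# so no explicit row loop or list comprehension is needed.
out = np.where(r_mat > 2.0,
               np.where(r_base > 1.0, r_mat * 2.0, r_mat),
               0.0)
```

Each np.where evaluates elementwise, so the nesting mirrors the original if/else structure exactly, one level per call.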

Groupby and reverse calculation of one column based on pct_change in Python

梦想与她 Submitted on 2021-01-28 22:00:34
Question: I have a data frame like df1, which has four columns. Suppose all the cities have a date range from 2019-01-01 to 2019-07-01; I would like to group by city and calculate price backwards from its value on 2019-07-01 and pct_change: city date price pct_change 0 bj 2019-01-01 NaN NaN 1 bj 2019-02-01 NaN -0.03 2 bj 2019-03-01 NaN 0.16 3 bj 2019-04-01 NaN 0.07 4 bj 2019-05-01 NaN 0.19 5 bj 2019-06-01 NaN -0.05 6 bj 2019-07-01 6.0 -0.02 7 gz 2019-01-01 NaN NaN 8 gz 2019-02-01 NaN 0.03 9 gz 2019-03-01
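Since pct_change[t] = price[t] / price[t-1] - 1, the earlier prices can be backed out per group as price[t-1] = price[t] / (1 + pct_change[t]), walking backwards from the known last value. A sketch on a shortened, made-up version of df1 (only a few dates per city, to keep it small):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({
    'city': ['bj'] * 3 + ['gz'] * 2,
    'date': pd.to_datetime(['2019-05-01', '2019-06-01', '2019-07-01',
                            '2019-06-01', '2019-07-01']),
    'price': [np.nan, np.nan, 6.0, np.nan, 7.0],
    'pct_change': [0.19, -0.05, -0.02, 0.03, 0.01],
})

def back_out_prices(g):
    # price[i] = price[i+1] / (1 + pct_change[i+1]),
    # walking backwards from the known final price.
    p = g['price'].to_numpy(copy=True)
    pc = g['pct_change'].to_numpy()
    for i in range(len(p) - 2, -1, -1):
        p[i] = p[i + 1] / (1.0 + pc[i + 1])
    return pd.Series(p, index=g.index)

df1 = df1.sort_values(['city', 'date'])
parts = [back_out_prices(g) for _, g in df1.groupby('city')]
df1['price'] = pd.concat(parts)
```

For bj this yields 6/0.98 on 2019-06-01 and (6/0.98)/0.95 on 2019-05-01; the explicit backwards loop could also be replaced by dividing the final price by a reversed cumulative product of (1 + pct_change).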

Multiprocessing an array in chunks

半城伤御伤魂 Submitted on 2021-01-28 21:53:20
Question: I have broken my standard deviation function into small chunks of std_1, std_2, std_3, etc. to optimize my code and make it run faster, since I have over 2 million arrays in my main numpy array PC_list. I have used numba, numpy arrays, and multiprocessing to make the code run faster; however, I do not see any performance difference even though the code is broken into pieces from the main function. It takes about 57 seconds for the main function to process and the divided function to

Split a list into n randomly sized chunks

不羁岁月 Submitted on 2021-01-28 21:46:03
Question: I am trying to split a list into n sublists where the size of each sublist is random (with at least one entry; assume P>I). I used the numpy.split function, which works fine but does not satisfy my randomness condition. You may ask which distribution the randomness should follow; I think it should not matter. I checked several posts, but they were not equivalent to mine, as they were trying to split into almost equally sized chunks. If this is a duplicate, let me know. Here is my approach: import numpy as
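One common answer is to keep np.split but feed it random cut points: drawing I-1 distinct interior indices guarantees every chunk is non-empty while making the chunk sizes random. A minimal sketch (P and I values are made up; the question's own variable names are reused):

```python
import numpy as np

rng = np.random.default_rng(0)
P, I = 20, 5                       # P items split into I chunks, P > I
data = np.arange(P)

# I-1 distinct cut points strictly inside (0, P) guarantee that no
# chunk is empty; sorting them makes them valid split indices.
cuts = np.sort(rng.choice(np.arange(1, P), size=I - 1, replace=False))
chunks = np.split(data, cuts)
```

Concatenating the chunks reproduces the original data in order; the induced size distribution is that of uniformly random compositions of P into I positive parts, which should satisfy the "it should not matter" condition.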