numpy

Python - Quick Upscaling of Array with Numpy, No Image Library Allowed [duplicate]

自闭症网瘾萝莉.ら submitted on 2021-02-04 20:46:09
Question: This question already has answers here: How to repeat elements of an array along two axes? (5 answers). Closed 2 years ago. Note on the duplicate message: similar themes, but not exactly a duplicate, especially since the loop is still the fastest method. Thanks. Goal: upscale an array from [small, small] to [big, big] by a factor, quickly, without using an image library. Very simple scaling: one small value becomes several big values, after it is normalized for the several big values it becomes. In other …
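A vectorized sketch of this kind of upscaling (my own illustration, not the asker's code): `np.repeat` along both axes blows each small cell up into a `factor × factor` block, and dividing by `factor**2` normalizes each small value across the big cells it fills.

```python
import numpy as np

def upscale(a, factor):
    """Repeat each element `factor` times along both axes,
    then divide so the total sum of the array is preserved."""
    big = np.repeat(np.repeat(a, factor, axis=0), factor, axis=1)
    return big / factor**2

small = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
big = upscale(small, 2)
print(big.shape)  # (4, 4)
```

`np.kron(a, np.ones((factor, factor))) / factor**2` is an equivalent one-liner, though `np.repeat` is usually faster for plain block replication.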

Check common elements of two 2D numpy arrays, either row or column wise

我是研究僧i submitted on 2021-02-04 19:55:07
Question: Given two numpy arrays of shape nx3 and mx3, what is an efficient way to determine the row indices (counter) at which the rows are common to the two arrays? For instance, I have the following solution, which is significantly slow even for not much larger arrays: def arrangment(arr1,arr2): hits = [] for i in range(arr2.shape[0]): current_row = np.repeat(arr2[i,:][None,:],arr1.shape[0],axis=0) x = current_row - arr1 for j in range(arr1.shape[0]): if np.isclose(x[j,0],0.0) and np.isclose(x[j,1],0.0) and …
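One way to drop both loops (a sketch of the broadcasting approach, not taken from the truncated answer): pad each array with `None`/`np.newaxis` on a different axis so `np.isclose` compares every row pair at once, then reduce over the coordinate axis.

```python
import numpy as np

def common_row_indices(arr1, arr2, atol=1e-8):
    """Return (i, j) index pairs where arr1[i] matches arr2[j] within atol."""
    # Shapes (n,1,3) and (1,m,3) broadcast to (n,m,3);
    # .all(axis=2) requires all three coordinates to match.
    matches = np.isclose(arr1[:, None, :], arr2[None, :, :], atol=atol).all(axis=2)
    return np.argwhere(matches)

a = np.array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]])
b = np.array([[3., 4., 5.], [9., 9., 9.]])
print(common_row_indices(a, b))  # [[1 0]]
```

Note this builds an (n, m, 3) temporary, so for very large arrays a sorted or hashed approach may use less memory.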

Extract the first letter from each string in a numpy array

痴心易碎 submitted on 2021-02-04 19:27:27
Question: I have a huge numpy array whose elements are strings. I'd like to replace each string with its first letter. For example, if C[0] = 'A90CD' I want to replace it with C[0] = 'A'. In a nutshell, I was thinking of applying a regex in a loop, with a dictionary of regex expressions like '^A.+$' => 'A', '^B.+$' => 'B', etc. How can I apply this regex over the numpy array? Or is there a better method to achieve the same? Answer 1: There's no need for regex here. Just convert your array to …
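The conversion the answer starts to describe can be done with a dtype cast: re-casting to a length-1 unicode dtype truncates every string to its first character in one vectorized operation.

```python
import numpy as np

C = np.array(['A90CD', 'B77', 'Zebra'])

# Casting to '<U1' keeps only the first character of each element --
# no loop, no regex.
first = C.astype('<U1')
print(first)  # ['A' 'B' 'Z']
```

To overwrite in place conceptually, just rebind: `C = C.astype('<U1')`.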

How to replace every n-th value of an array in Python most efficiently?

旧时模样 submitted on 2021-02-04 18:08:26
Question: I was wondering whether there is a more pythonic (and efficient) way of doing the following: MAX_SIZE = 100 nbr_elements = 10000 y = np.random.randint(1, MAX_SIZE, nbr_elements) REPLACE_EVERY_Nth = 100 REPLACE_WITH = 120 c = 0 for index, item in enumerate(y): c += 1 if (c % REPLACE_EVERY_Nth == 0): y[index] = REPLACE_WITH So basically I generate a bunch of numbers from 1 to MAX_SIZE-1, and then I want to replace every REPLACE_EVERY_Nth element with REPLACE_WITH. This works fine, but I guess …
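The loop above can be replaced by a single strided slice assignment (a sketch reproducing the question's variables): since the counter hits zero mod N at 0-based indices N-1, 2N-1, …, the slice `[N-1::N]` selects exactly those positions.

```python
import numpy as np

MAX_SIZE = 100
nbr_elements = 10000
REPLACE_EVERY_Nth = 100
REPLACE_WITH = 120

y = np.random.randint(1, MAX_SIZE, nbr_elements)

# Assign to every N-th element in one vectorized step,
# matching the original loop's behavior (indices 99, 199, ...).
y[REPLACE_EVERY_Nth - 1::REPLACE_EVERY_Nth] = REPLACE_WITH
```

Slice assignment is O(n/N) writes with no Python-level iteration, so it scales far better than `enumerate`.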

Concatenate several np arrays in python

本小妞迷上赌 submitted on 2021-02-04 17:59:08
Question: I have several numpy arrays and I want to concatenate them. I am using np.concatenate((array1,array2),axis=1). My problem now is that I want to make the number of arrays parametrizable, so I wrote this function: x1=np.array([1,0,1]) x2=np.array([0,0,1]) x3=np.array([1,1,1]) def conc_func(*args): xt=[] for a in args: xt=np.concatenate(a,axis=1) print xt return xt xt=conc_func(x1,x2,x3) This function returns ([1,1,1]); I want it to return ([1,0,1,0,0,1,1,1,1]). I tried to add the for loop inside …
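The fix (my sketch of the idiomatic version): `np.concatenate` already accepts a sequence of arrays, so pass `args` as a whole instead of looping; and for 1-D arrays the join axis is 0, since axis=1 does not exist for 1-D inputs.

```python
import numpy as np

def conc_func(*args):
    # One call joins all arrays end to end; axis=0 is the only
    # valid axis for 1-D inputs.
    return np.concatenate(args, axis=0)

x1 = np.array([1, 0, 1])
x2 = np.array([0, 0, 1])
x3 = np.array([1, 1, 1])
print(conc_func(x1, x2, x3))  # [1 0 1 0 0 1 1 1 1]
```

The original loop rebound `xt` on each iteration to the concatenation of a single array's elements, which is why only the last array survived.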

What are the under-the-hood differences between round() and numpy.round()?

主宰稳场 submitted on 2021-02-04 17:38:05
Question: Let's look at the ever-shocking round statement: >>> round(2.675, 2) 2.67 I know why round "fails"; it's because of the binary representation of 2.675: >>> import decimal >>> decimal.Decimal(2.675) Decimal('2.67499999999999982236431605997495353221893310546875') What I do not understand is: why does NumPy not fail? >>> import numpy >>> numpy.round(2.675, 2) 2.6800000000000002 Thinking: Do not mind the extra digits; they are an artefact of Python's internal print rounding. If we look at the …
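A sketch of the under-the-hood difference, under the assumption (consistent with NumPy's documented strategy for `np.around`) that NumPy scales by 10**decimals, rounds to the nearest integer, and scales back: the scaling step `2.675 * 100` happens to round *up* to exactly 267.5 in float64 arithmetic, and `np.rint` then rounds that half-way case to the even integer 268. Python's `round`, by contrast, rounds the exact stored double 2.67499999999999982… directly, giving 2.67.

```python
import numpy as np

# Python rounds the true stored double, which is just below 2.675:
assert round(2.675, 2) == 2.67

# NumPy's scale-round-unscale strategy (sketch):
scaled = 2.675 * 100       # the float product is exactly 267.5
print(scaled == 267.5)     # True
print(np.rint(scaled))     # 268.0  (half rounds to even)
print(np.rint(scaled) / 100)
```

So NumPy does not "succeed" by design here; an intermediate floating-point rounding happens to push the value over the halfway mark.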

Save skipped rows in pandas read_csv

淺唱寂寞╮ submitted on 2021-02-04 16:44:27
Question: I have a list of rows to skip (say [1,5,10] → row numbers), and when I pass it to pandas read_csv, it ignores those rows. But I need to save these skipped rows in a different text file. I went through the pandas read_csv documentation and a few other articles, but have no idea how to save them into a text file. Example input file: a,b,c # Some Junk to Skip 1 4,5,6 # Some junk to skip 2 9,20,9 2,3,4 5,6,7 Code: skiprows = [1,3] df = pandas.read_csv(file, skiprows=skiprows) Now output.txt …
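Since `read_csv` simply discards skipped lines, one straightforward approach (my sketch, assuming pandas is available; the filename `skipped.txt` is illustrative) is to pull those lines out yourself before parsing:

```python
import io
import pandas as pd

data = """a,b,c
# Some Junk to Skip 1
4,5,6
# Some junk to skip 2
9,20,9
2,3,4
5,6,7
"""

skip = [1, 3]  # 0-based line numbers, as read_csv's skiprows expects

# Save the skipped lines to a separate text file first...
lines = data.splitlines(keepends=True)
skipped = [lines[i] for i in skip]
with open("skipped.txt", "w") as f:
    f.writelines(skipped)

# ...then let pandas parse the rest as usual.
df = pd.read_csv(io.StringIO(data), skiprows=skip)
print(df.shape)  # (4, 3)
```

For a file on disk, open it once for the line extraction and pass the path to `read_csv` as before.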

How to stream a numpy array into a pyaudio stream?

喜你入骨 submitted on 2021-02-04 16:06:24
Question: I'm writing code that is supposed to give some audio output to the user based on his action, and I want to generate the sound rather than having a fixed number of wav files to play. Right now, I generate the signal in numpy format, store the data in a wav file, and then read that same file into pyaudio. I think this is redundant; however, I couldn't find a way around it. My question is: can I stream a numpy array (or a regular list) directly into pyaudio to play? Answer 1: If …
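Yes, the wav file can be skipped: `Stream.write` accepts raw bytes, so converting the numpy signal to int16 PCM and writing the buffer directly works. A sketch (the playback portion is commented out since it needs a pyaudio install and an audio device):

```python
import numpy as np

RATE = 44100
t = np.arange(int(RATE * 0.5)) / RATE        # half a second of samples
tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # 440 Hz sine in [-0.5, 0.5]

# pyaudio consumes raw bytes: scale to int16 range and serialize.
pcm = (tone * 32767).astype(np.int16).tobytes()

# Playback sketch (requires pyaudio):
# import pyaudio
# pa = pyaudio.PyAudio()
# stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE, output=True)
# stream.write(pcm)
# stream.stop_stream(); stream.close(); pa.terminate()

print(len(pcm))  # 22050 samples * 2 bytes each
```

A plain Python list works the same way after `np.asarray(lst)`; the key step is matching the dtype to the stream `format` before calling `tobytes()`.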

A broadcasting issue involving where to put the padding

时光毁灭记忆、已成空白 submitted on 2021-02-04 15:41:09
Question: Introduction: I have a function func which is vectorizable, and I vectorize it using np.frompyfunc. Rather than using a nested for loop, I want to call it only once, and thus I'll need to pad the inputs with np.newaxis's. My goal is to get rid of the two nested for loops and use the numpy.array broadcasting feature instead. Here is the MWE with for loops (I want to get rid of the for loops and instead pad the variables c_0, c_1, rn_1, rn_2, and factor when calling myfunc). MWE of the problem …
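The padding pattern the question describes can be illustrated with a toy two-argument function (my hypothetical stand-in for `myfunc`; the question's own MWE is truncated): placing `np.newaxis` on a different axis of each input makes the single call cover every combination that the two nested loops would have produced.

```python
import numpy as np

def myfunc(x, y):
    # Hypothetical vectorizable function standing in for the real one.
    return x * y + 1.0

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0])

# a[:, None] has shape (3, 1); b[None, :] has shape (1, 2).
# Broadcasting produces the full (3, 2) grid with one call, no loops.
grid = myfunc(a[:, None], b[None, :])
print(grid.shape)  # (3, 2)
```

With more inputs (the question's c_0, c_1, rn_1, rn_2, factor), each array gets `None` in every padded axis except its own, so all of them broadcast against each other in the one call.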

Fast Fourier Transform in Python

强颜欢笑 submitted on 2021-02-04 15:17:51
Question: I am new to Fourier theory, and I've seen very good tutorials on how to apply an FFT to a signal and plot it in order to see the frequencies it contains. Somehow, all of them create a mix of sines as their data, and I am having trouble adapting this to my real problem. I have 242 hourly observations with a daily periodicity, meaning that my period is 24. So I expect to see a peak around 24 on my FFT plot. A sample of my data.csv is here: https://pastebin.com/1srKFpJQ Data plotted: My code: data …
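The expected peak can be demonstrated on synthetic hourly data (a sketch; the real series lives at the pastebin link above): with a 1-hour sampling interval, `np.fft.rfftfreq` returns frequencies in cycles per hour, so the spectrum's peak bin converts back to a period of 24 hours.

```python
import numpy as np

# Synthetic stand-in: 240 hourly observations with a daily cycle.
N = 240
t = np.arange(N)
signal = np.sin(2 * np.pi * t / 24) + 0.1   # period 24, plus a DC offset

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N, d=1.0)           # d=1.0 hour between samples

# Skip the DC bin at index 0, then turn the peak frequency into a period.
peak = np.argmax(spectrum[1:]) + 1
print(1.0 / freqs[peak])                    # ~24 hours
```

On the real 242-point series the peak lands near, not exactly at, the 1/24 cycles-per-hour bin, because 242 is not a multiple of 24; detrending (removing the mean) before the FFT keeps the DC bin from dominating the plot.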