numpy

Deploy function in AWS Lambda (package size exceeds limit)

╄→尐↘猪︶ㄣ submitted on 2021-01-29 09:28:32
Question: I am trying to deploy my function on AWS Lambda. I need the following packages for my code to work: keras-tensorflow, Pillow, scipy, numpy, pandas. I tried installing them with Docker and uploading the zip file, but it exceeds the size limit. Is there a way around this? How can I use these packages in my Lambda function?

Answer 1: Publish your packages as an AWS Lambda layer instead, and reference the layer from your code. The packages published in the Lambda layer will be there all the time and will not …
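A minimal sketch of that approach using boto3 (the layer name, zip path, and function name below are hypothetical placeholders, and the calls assume suitable IAM permissions):

```python
import boto3

lam = boto3.client("lambda")

# Publish a zip of dependencies (built, e.g., in a Docker image that
# matches the Lambda runtime) as a reusable layer.
with open("python-deps.zip", "rb") as f:       # hypothetical path
    layer = lam.publish_layer_version(
        LayerName="scientific-python-deps",    # hypothetical name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.8"],
    )

# Attach the layer to the function so its packages become importable.
lam.update_function_configuration(
    FunctionName="my-function",                # hypothetical name
    Layers=[layer["LayerVersionArn"]],
)
```

Note that a function plus all of its layers still shares the 250 MB unzipped size limit, so a stack as heavy as TensorFlow may additionally need trimming or a container-image deployment.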

How to construct a rank array with numpy? (What is a rank array?)

泄露秘密 submitted on 2021-01-29 09:20:31
Question: I hope all of you are having a great day. In my Python class we are learning how to use NumPy, and we got an assignment about it. My question is this: what is a rank array, and how can I construct one using Python? My instructor tried to explain it with these lines, but I did not understand them:

rank_calculator(A) - 5 pts
Given a numpy ndarray A, return its rank array.
Input: [[ 9 4 15 0 18] [16 19 8 10 1]]
Return value: [[4 2 6 0 8] [7 9 3 5 1]]
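A rank array replaces each element with its 0-based position in the sorted order of all the array's elements, which is exactly the rule the instructor's example follows. A minimal sketch using the double-argsort idiom:

```python
import numpy as np

def rank_calculator(A):
    # argsort of argsort yields, for each element, its 0-based rank
    # among all elements of the flattened array.
    flat_ranks = np.argsort(np.argsort(A, axis=None))
    return flat_ranks.reshape(A.shape)

A = np.array([[9, 4, 15, 0, 18],
              [16, 19, 8, 10, 1]])
print(rank_calculator(A))
# [[4 2 6 0 8]
#  [7 9 3 5 1]]
```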

Changing dataframe column names using columns from another dataframe in Python

五迷三道 submitted on 2021-01-29 09:03:31
Question: I have a dataframe valence_data with columns word1, word, word3, word4, ... and a second dataframe word_data with columns 1, 2, 3, 4, ... How can I replace the column names in word_data with the names from valence_data, so that word_data ends up with columns word1, word, word3, word4, ...? I am using pandas to process my data. Thanks.

Answer 1: You need to use DataFrame.rename:

original_names = ["1", "2", ...]
new_names = ["word1", "word2", ...]
new_columns = dict(zip(original_names, new_names))
df.rename(columns=new_columns, inplace=True)
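Since the two frames have the same number of columns, a shorter alternative (a sketch assuming the columns correspond positionally) is to assign one frame's column index to the other:

```python
import pandas as pd

valence_data = pd.DataFrame([[0.1, 0.2, 0.3]], columns=["word1", "word", "word3"])
word_data = pd.DataFrame([[5, 6, 7]], columns=["1", "2", "3"])

# Copy the column labels across by position.
word_data.columns = valence_data.columns
print(word_data.columns.tolist())  # ['word1', 'word', 'word3']
```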

Append numpy matrix to binary file without numpy header

你离开我真会死。 submitted on 2021-01-29 09:01:09
Question: I continuously receive new data as numpy matrices which I need to append to an existing file. The structure and data type of that file are fixed, so I need Python to do the conversion for me. For a single matrix, this works:

myArr = np.reshape(np.arange(15), (3, 5))
myArr.tofile('bin_file.dat')

But suppose I want to keep appending more and more arrays to the existing file: numpy.tofile will overwrite any content it finds in the file instead of appending. I found that I could also keep …
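A minimal sketch of one common workaround: ndarray.tofile accepts an already-open file object, so opening the file in append-binary mode adds the raw bytes (still with no numpy header) after the existing content:

```python
import numpy as np

myArr = np.reshape(np.arange(15), (3, 5))
myArr.tofile('bin_file.dat')           # first write creates the file

newArr = np.reshape(np.arange(15, 30), (3, 5))
with open('bin_file.dat', 'ab') as f:  # 'ab' = append raw bytes
    newArr.tofile(f)

# Read everything back as a flat array of the original dtype.
data = np.fromfile('bin_file.dat', dtype=myArr.dtype)
print(data.size)  # 30
```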

Purging numpy.memmap

ぃ、小莉子 submitted on 2021-01-29 08:49:52
Question: Given a numpy.memmap object created with mode='r' (i.e. read-only), is there a way to force it to purge all loaded pages out of physical RAM without deleting the object itself? In other words, I'd like the reference to the memmap instance to remain valid, but all physical memory that is being used to cache the on-disk data to be uncommitted. Any views onto the memmap array must also remain valid. I am hoping to use this as a diagnostic tool, to help separate "real" memory requirements of a …
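One possible approach on Linux with Python 3.8+ (a sketch, not a confirmed answer to the post): the mmap object backing the array is reachable through numpy's private _mmap attribute, and madvise(MADV_DONTNEED) asks the kernel to drop the cached pages while keeping the mapping, and therefore the array and its views, valid:

```python
import mmap
import numpy as np

arr = np.memmap('data.bin', dtype=np.float64, mode='r')  # hypothetical file

_ = arr[:1000].sum()  # touching pages faults them into physical RAM

# Advise the kernel to evict the cached pages; the mapping stays intact,
# so later reads simply fault the data back in from disk.
# _mmap is a private numpy attribute; madvise needs Python >= 3.8 on Linux.
arr._mmap.madvise(mmap.MADV_DONTNEED)
```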

Comparing two numpy 2D arrays for similarity

白昼怎懂夜的黑 submitted on 2021-01-29 08:48:29
Question: I have a 2D numpy array1 that contains only 0 and 255 values:

[[255, 0, 255, 0, 0],
 [ 0, 255, 0, 0, 0],
 [ 0, 0, 255, 0, 255],
 [ 0, 255, 255, 255, 255],
 [255, 0, 255, 0, 255]]

and an array2 that is identical in size and shape to array1 and also contains only 0 and 255 values:

[[255, 0, 255, 0, 255],
 [ 0, 255, 0, 0, 0],
 [255, 0, 0, 0, 255],
 [ 0, 0, 255, 255, 255],
 [255, 0, 255, 0, 0]]

How can I compare array1 to array2 to determine a similarity percentage?

Answer 1: As you only have two possible …
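The truncated answer presumably continues along these lines: with only two possible values, element-wise equality already gives the match fraction. A minimal sketch:

```python
import numpy as np

array1 = np.array([[255, 0, 255, 0, 0],
                   [0, 255, 0, 0, 0],
                   [0, 0, 255, 0, 255],
                   [0, 255, 255, 255, 255],
                   [255, 0, 255, 0, 255]])
array2 = np.array([[255, 0, 255, 0, 255],
                   [0, 255, 0, 0, 0],
                   [255, 0, 0, 0, 255],
                   [0, 0, 255, 255, 255],
                   [255, 0, 255, 0, 0]])

# Fraction of positions where the arrays agree, as a percentage.
similarity = np.mean(array1 == array2) * 100
print(f"{similarity:.1f}% similar")  # 80.0% similar
```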

Why does numpy.save produce 100MB file for sys.getsizeof 0.33MB data?

梦想与她 submitted on 2021-01-29 08:40:33
Question: I have a numpy array arr (produced from multiple nested lists of mismatching lengths) which apparently takes only sys.getsizeof(arr)/(1000*1000) = 0.33848 MB of space in memory. However, when I attempt to save this data to disk with

myf = open('.\\test.npy', 'wb')
np.save(myf, arr)
myf.close()

the produced file test.npy turns out to be over 100 MB large. Why is that? Did I make some mistake when measuring the actual data size in Python memory? Or, if not, is there some way to save the data more …
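A likely explanation (an inference; the scraped answer is cut off): nested lists of mismatching lengths force numpy to build a dtype=object array that stores only pointers, sys.getsizeof counts just that pointer buffer rather than the nested lists it points to, and np.save must pickle every nested list, which is what the 100 MB reflects. A sketch that makes the discrepancy visible:

```python
import sys
import numpy as np

# Mismatching lengths force an object array of list references.
ragged = [list(range(n)) for n in range(1, 1000)]
arr = np.array(ragged, dtype=object)

print(arr.dtype)                            # object
print(sys.getsizeof(arr))                   # the pointer buffer only
print(sum(sys.getsizeof(x) for x in arr))   # the lists are far larger

# Object arrays are pickled into the .npy file, so the file reflects
# the full nested data, not the buffer that getsizeof measured.
np.save('test.npy', arr, allow_pickle=True)
```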

Why can't cv2.line draw on a 1-channel numpy array slice in place?

只愿长相守 submitted on 2021-01-29 08:37:36
Question: Why can't cv2.line draw on a 1-channel numpy array slice in place?

import cv2
import numpy as np

print('cv2.__version__', cv2.__version__)

# V1
print('-' * 60)
a = np.zeros((20, 20, 4), np.uint8)
cv2.line(a[:, :, 1], (4, 4), (10, 10), color=255, thickness=1)
print('a[:,:,1].shape', a[:, :, 1].shape)
print('np.min(a), np.max(a)', np.min(a), np.max(a))

# V2
print('-' * 60)
b = np.zeros((20, 20), np.uint8)
cv2.line(b, (4, 4), (10, 10), color=255, thickness=1)
print('b.shape', b.shape)
print('np.min(b), np.max(b)', np.min(b), np.max(b))
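A plausible explanation (hedged, since the scraped answer is missing): a[:, :, 1] is a non-contiguous view, so OpenCV cannot wrap it as a Mat header over the original memory and effectively draws into a temporary copy that is then discarded, whereas the contiguous array b is modified in place. Two common workarounds, sketched below:

```python
import cv2
import numpy as np

a = np.zeros((20, 20, 4), np.uint8)

# Workaround 1: draw on a contiguous copy of the channel, then write it back.
ch = np.ascontiguousarray(a[:, :, 1])
cv2.line(ch, (4, 4), (10, 10), color=255, thickness=1)
a[:, :, 1] = ch

# Workaround 2: draw on the whole image with a color that only
# touches the desired channel (channel order B, G, R, A here).
cv2.line(a, (4, 4), (10, 10), color=(0, 255, 0, 0), thickness=1)

print(a[:, :, 1].max())  # 255 - the channel really was modified
```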

TypeError: 'float' object cannot be interpreted as an integer with solve_qp in python

ε祈祈猫儿з submitted on 2021-01-29 08:28:14
Question: I am new to optimization in Python, and I have a problem using the predefined function solve_qp from qpsolvers to find the optimal solution to my problem. Here is my code:

import numpy as np
X = np.array([1., 2., 4., 6., 9.]).reshape(5, 1)
y = 0.5 + (0.3 * X) + np.random.randn(5, 1)

from qpsolvers import solve_qp
P = np.dot(X.T, X)
q = np.transpose(-2 * np.dot(X.T, y))[0]
G = None
h = None
A = None
b = None
sol = solve_qp(P, q, G, h, A, b)

I got an error from the predefined function: TypeError: 'float' object cannot be interpreted as an integer …
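The truncated post does not show a fix, but one frequent source of this TypeError in QP wrappers is a backend receiving Python floats where it expects integers, often via untidy shapes or dtypes. A hedged sketch of the same unconstrained problem with explicit float64 dtypes, clean shapes, and an explicitly named backend (whether this resolves the asker's exact error is not confirmed by the post):

```python
import numpy as np
from qpsolvers import solve_qp

X = np.array([1., 2., 4., 6., 9.]).reshape(5, 1)
y = 0.5 + 0.3 * X + np.random.randn(5, 1)

# minimize (1/2) x^T P x + q^T x, with no constraints.
P = (X.T @ X).astype(np.float64)                   # shape (1, 1)
q = (-2.0 * (X.T @ y)).ravel().astype(np.float64)  # shape (1,)

sol = solve_qp(P, q, solver="quadprog")  # assumes quadprog is installed
print(sol)
```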