indexing

Date is not working even when date column is set to index

有些话、适合烂在心里 submitted on 2021-01-28 06:09:12
Question: I have a dictionary of multiple dataframes in which the index is set to 'Date', but I am having trouble capturing a specific day in a search. The dictionary was created as per the link: Call a report from a dictionary of dataframes. I then tried to add the following column to hold the specific day for each row:

    df_dict[k]['Day'] = pd.DatetimeIndex(df['Date']).day

It's not working. The idea is to separate the day of the month only (from 1 to 31) for each row. When I call the report, it will give me the day of …
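A likely cause, sketched below with hypothetical sample data: after set_index('Date'), the 'Date' column no longer exists (it is the index), so pd.DatetimeIndex(df['Date']) raises a KeyError. The day of the month can instead be read straight from the index:

```python
import pandas as pd

# Hypothetical sample data; after set_index('Date'), 'Date' is no longer a column.
df = pd.DataFrame({'Date': pd.to_datetime(['2021-01-05', '2021-01-28']),
                   'value': [10, 20]}).set_index('Date')

# Read the day of the month (1-31) directly from the DatetimeIndex.
df['Day'] = df.index.day
```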

combining real and imag columns in dataframe into complex number to obtain magnitude using np.abs

眉间皱痕 submitted on 2021-01-28 06:04:21
Question: I have a data frame that has complex numbers split into a real and an imaginary column. I want to add a column (two, actually, one for each channel) to the dataframe that computes the log magnitude:

         ch1_real  ch1_imag  ch2_real  ch2_imag  ch1_phase  ch2_phase  distance
    79   0.011960 -0.003418  0.005127 -0.019530     -15.95    -75.290       0.0
    78  -0.009766 -0.005371 -0.015870  0.010010    -151.20    147.800       1.0
    343  0.002197  0.010990  0.003662 -0.013180      78.69    -74.480       2.0
    80  -0.002686  0.010740  0.011960  0.013430     104.00     48…
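One common way to do this, sketched with the first two rows of channel 1 from the excerpt (the 20·log10 dB convention is an assumption; channel 2 works identically):

```python
import numpy as np
import pandas as pd

# First two rows of channel 1 from the excerpt.
df = pd.DataFrame({'ch1_real': [0.011960, -0.009766],
                   'ch1_imag': [-0.003418, -0.005371]})

# Recombine the split columns into complex numbers, then take the magnitude.
ch1 = df['ch1_real'] + 1j * df['ch1_imag']
df['ch1_logmag'] = 20 * np.log10(np.abs(ch1))  # log magnitude in dB (assumed convention)
```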

correct accessing of slices with duplicate index-values present

左心房为你撑大大i submitted on 2021-01-28 05:16:35
Question: I have a dataframe with an index that sometimes contains rows with the same index value. Now I want to slice that dataframe and set values based on row indices. Consider the following example:

    import pandas as pd
    df = pd.DataFrame({'index': [1, 2, 2, 3], 'values': [10, 20, 30, 40]})
    df.set_index(['index'], inplace=True)
    df1 = df.copy()
    df2 = df.copy()
    # copy warning
    df1.iloc[0:2]['values'] = 99
    print(df1)
    df2.loc[df.index[0:2], 'values'] = 99
    print(df2)

df1 is the expected result, but gives me a …
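A sketch of one way to avoid both the chained-assignment warning and the duplicate-label problem: assign through .iloc with the column's integer position, so only row positions (never the duplicated index labels) are involved:

```python
import pandas as pd

df = pd.DataFrame({'index': [1, 2, 2, 3],
                   'values': [10, 20, 30, 40]}).set_index('index')

# Single, purely positional assignment: rows 0-1, column 'values' by position.
# Unlike .loc with label 2, this cannot touch the second row labelled 2.
df2 = df.copy()
df2.iloc[0:2, df2.columns.get_loc('values')] = 99
```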

What indexes to improve performance of JOIN and GROUP BY

大城市里の小女人 submitted on 2021-01-28 01:18:31
Question: I have set up some tables and run a query. However, the EXPLAIN output shows that the SQL generates a temporary table (I assume this is because of the GROUP BY). I have added some indexes to speed up the query, but is there a way to avoid the temporary table, and is there any other way I can speed the query up using indexes? CartData:

    CREATE TABLE `cartdata` (
        `IDCartData` INT(11) NOT NULL AUTO_INCREMENT,
        `CartOrderref` VARCHAR(25) NOT NULL DEFAULT '', …

How to create a sub-matrix in numpy

久未见 submitted on 2021-01-28 01:12:43
Question: I have a two-dimensional N×M numpy array: a = np.ndarray((N, M), dtype=np.float32). I would like to make a sub-matrix from a selected set of rows and columns. For each dimension I have as input either a boolean vector or a vector of indices. How can I do this most efficiently? Examples:

    a = array([[ 0,  1,  2,  3],
               [ 4,  5,  6,  7],
               [ 8,  9, 10, 11]])
    cols = [True, False, True]
    rows = [False, False, True, True]
    cols_i = [0, 2]
    rows_i = [2, 3]
    result = wanted_function(a, cols, rows) or wanted…
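A sketch of such a wanted_function using np.ix_, which builds an open mesh so the two axes are selected independently; it accepts boolean masks and index vectors alike (selector names below are illustrative, not the excerpt's):

```python
import numpy as np

a = np.array([[0, 1,  2,  3],
              [4, 5,  6,  7],
              [8, 9, 10, 11]])

# np.ix_ turns per-axis selectors into an open mesh, so the result is the
# sub-matrix at the intersection of the chosen rows and columns.
rows_i = [2]                              # index-vector form
cols_i = [0, 2]
sub = a[np.ix_(rows_i, cols_i)]

rows_b = np.array([False, False, True])   # boolean-mask form, same selection
cols_b = np.array([True, False, True, False])
sub_b = a[np.ix_(rows_b, cols_b)]
```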

Python using a loop to search for N number and return index

烈酒焚心 submitted on 2021-01-27 20:24:09
Question: I have been programming for a total of three weeks. I am stuck on this problem right now: We will pass you 2 inputs: a list of numbers, and a number, N, to look for. Your job is to loop through the list and find the number specified in the second input. Output the list element index where you find the number. If N is not found in the list, output -1. This is what I have so far:

    # import and N were provided
    import sys
    N = int(sys.argv[2])  # this is also provided
    numbers = []
    for i in sys.argv[1].split(","…
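The loop itself can be sketched as a standalone function (the sys.argv parsing from the excerpt is replaced by plain arguments so the logic is testable on its own). enumerate pairs each element with its index, and falling off the end of the loop returns -1:

```python
def find_index(numbers, n):
    """Return the index of the first occurrence of n in numbers, or -1."""
    for i, value in enumerate(numbers):
        if value == n:
            return i
    return -1
```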

Can't create index due to TypeError: not enough arguments for format string

偶尔善良 submitted on 2021-01-27 17:08:00
Question: I am trying to create indexes with pymongo, but it fails with the error:

    File "D:/Users/Dims/Design/EnergentGroup/Python GIS Developer/worker/Approach03\sentinel\mongo.py", line 46, in get_results_collection
      results_collection.create_index(["uwi", "date_part"], name=index_name, unique=True)
    File "C:\Anaconda3\lib\site-packages\pymongo\collection.py", line 1386, in create_index
      name = kwargs.setdefault("name", helpers._gen_index_name(keys))
    File "C:\Anaconda3\lib\site-packages\pymongo\helpers.py", …
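The failing frame is pymongo's index-name helper, which formats each key as a (field, direction) pair; passing bare strings like ["uwi", "date_part"] trips its "%s_%s" format string and raises this TypeError. A sketch of the corrected key specification (the create_index call itself is commented out because it needs a live MongoDB connection; results_collection and index_name are the asker's objects):

```python
# Each key must be a (field, direction) tuple: 1 for ascending
# (pymongo.ASCENDING), -1 for descending.
index_spec = [("uwi", 1), ("date_part", 1)]

# results_collection.create_index(index_spec, name=index_name, unique=True)
```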

.loc indexing changes type

谁都会走 submitted on 2021-01-27 17:01:27
Question: If I have a pandas.DataFrame with columns of different types (e.g. int64 and float64), getting a single element from the int column with .loc indexing converts the output to float:

    import pandas as pd
    df_test = pd.DataFrame({'ints': [1, 2, 3], 'floats': [4.5, 5.5, 6.5]})
    df_test['ints'].dtype
    >>> dtype('int64')
    df_test.loc[0, 'ints']
    >>> 1.0
    type(df_test.loc[0, 'ints'])
    >>> numpy.float64

If I use .at for indexing, it doesn't happen:

    type(df_test.at[0, 'ints'])
    >>> numpy.int64

It also doesn't happen …
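A short sketch of the workaround the question itself points at: .at fetches the scalar directly, whereas .loc with a row label can go through an intermediate mixed-dtype row that upcasts ints to float64 (the upcast is the behaviour shown in the excerpt; newer pandas versions may preserve the dtype in .loc as well):

```python
import pandas as pd

df_test = pd.DataFrame({'ints': [1, 2, 3], 'floats': [4.5, 5.5, 6.5]})

# .at retrieves a single scalar without materialising a mixed-dtype row,
# so the 'ints' column's integer dtype is preserved.
v = df_test.at[0, 'ints']
```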

How to get Firestore Index Merge to work?

六眼飞鱼酱① submitted on 2021-01-27 15:10:37
Question: I am having trouble using Firestore index merging to reduce the number of required indexes. Consider this example situation: Firestore collection test/somedoc:

    { a: '1', b: '1', c: '1', d: '1' }

This will cause Firestore to create 4 automatic single-field indexes on test for fields a to d. Querying this collection with a few equality conditions and one unrelated sort:

    await db.collection('test')
      .where('a', '==', '1')
      .where('b', '==', '1')
      .where('c', '==', '1')
      .orderBy('d')
      .get(); …