pandas

Exporting superscript notation in pandas dataframe to csv or excel

て烟熏妆下的殇ゞ submitted on 2021-02-18 19:49:15

Question: I would like to write the following to a csv file:

df.loc[0] = ['Total (2000)', numpy.nan, numpy.nan, numpy.nan, 2.0, 1.6, '10^6 km^2']

Is there a way to do that while writing '10^6 km^2' in a format such that the 6 is a superscript of 10 and the 2 is a superscript of km? If that is not possible in csv, can I export to excel?

Answer 1: One possible way is to change the actual contents of the dataframe before writing it to a csv (but you can automate this somewhat). As a proof of concept, using '\u2076' as the
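The answer's approach of rewriting the cell contents before export can be sketched as follows. This is a minimal proof of concept, not the answer's full code; the helper name to_superscript and the regex are mine:

```python
import re

# Map ASCII digits to their Unicode superscript code points.
SUPERSCRIPTS = str.maketrans(
    '0123456789',
    '\u2070\u00b9\u00b2\u00b3\u2074\u2075\u2076\u2077\u2078\u2079')

def to_superscript(text):
    """Replace every '^<digits>' with the digits in superscript form."""
    return re.sub(r'\^(\d+)',
                  lambda m: m.group(1).translate(SUPERSCRIPTS),
                  text)

print(to_superscript('10^6 km^2'))  # → 10⁶ km²
```

Applied to the unit column (e.g. with df['unit'].map(to_superscript)) before df.to_csv(..., encoding='utf-8'), the superscripts survive in the csv text itself; to_excel works the same way since the characters live in the data, not the formatting.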

Rpy2: pandas dataframe can't fit in R

杀马特。学长 韩版系。学妹 submitted on 2021-02-18 19:06:14

Question: I need to read a csv file with python (into a pandas dataframe), work on it in R, and return to python. To pass the pandas dataframe to an R dataframe I use rpy2, and that works (code below).

from pandas import read_csv, DataFrame
import pandas.rpy.common as com
import rpy2.robjects as robjects

r = robjects.r
r.library("fitdistrplus")
df = read_csv('./datos.csv')
r_df = com.convert_to_r_dataframe(df)
print(type(r_df))

And this output is:

<class 'rpy2.robjects.vectors.FloatVector'>

But then, I try to

Pandas dataframe : Multiple Time/Date columns to single Date index

♀尐吖头ヾ submitted on 2021-02-18 18:56:35

Question: I have a dataframe with a product as the first column, and then 12 months of sales (one column per month). I'd like to 'pivot' the dataframe to end up with a single date index. Example data:

import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(10, 1000, size=(2, 12)),
                  index=['PrinterBlue', 'PrinterBetter'],
                  columns=pd.date_range('1-1', periods=12, freq='M'))

yielding:

>>> df
               2014-01-31  2014-02-28  2014-03-31  2014-04-30  2014-05-31 \
PrinterBlue           176          77          89         279          81
PrinterBetter
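The reshape being asked for can be sketched in two ways: transposing so the dates become the index, or stacking into long format. This is an illustrative sketch, not the thread's accepted answer; pd.period_range stands in for date_range (whose 'M' offset alias is deprecated in recent pandas), and the variable names are mine:

```python
import pandas as pd
import numpy as np

np.random.seed(0)
df = pd.DataFrame(np.random.randint(10, 1000, size=(2, 12)),
                  index=['PrinterBlue', 'PrinterBetter'],
                  columns=pd.period_range('2014-01', periods=12, freq='M'))

# Option 1: transpose -- months become the index, one column per product.
wide = df.T

# Option 2: stack into long format -- one row per (product, month) pair.
long = df.stack().rename('sales').reset_index()
long.columns = ['product', 'date', 'sales']
```

Option 1 keeps the data rectangular with a single date index; option 2 is the tidy layout that groupby and plotting libraries generally prefer.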

applying pandas cut within a groupby

谁说我不能喝 submitted on 2021-02-18 18:15:15

Question: I'm trying to create bins (A_bin) within a DataFrame based on one column (A), and then create unique bins (B_bin) based on another column (B) within each of the original bins.

df = pd.DataFrame({'A': [4.5, 5.1, 5.9, 6.3, 6.7, 7.5, 7.9, 8.5, 8.9,
                         9.3, 9.9, 10.3, 10.9, 11.1, 11.3, 11.9],
                   'B': [3.2, 2.7, 2.2, 3.3, 2.1, 1.8, 1.4, 1.0, 1.8,
                         2.4, 1.2, 0.8, 1.4, 0.6, 0, -0.4]})
df['A_bin'] = pd.cut(df['A'], bins=3)
df['B_bin'] = df.groupby('A_bin')['B'].transform(lambda x: pd.cut(x, bins=2))

This

How to parse deeply nested JSON to pandas dataframe?

◇◆丶佛笑我妖孽 submitted on 2021-02-18 18:14:49

Question: Below is the code that parses the following nested jsons into the corresponding pandas dataframe:

import pandas as pd

def flatten_json(nested_json):
    """
    Flatten json object with nested keys into a single level.
    Args:
        nested_json: A nested json object.
    Returns:
        The flattened json object if successful, None otherwise.
    """
    out = {}

    def flatten(x, name=''):
        if type(x) is dict:
            for a in x:
                flatten(x[a], name + a + '_')
        elif type(x) is list:
            i = 0
            for a in x:
                flatten(a, name + str(i) + '_')
                i += 1
        else:
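The excerpt is cut off at the final else branch. One plausible completion, inferred from the docstring and the key-building logic (the sample input here is mine, not the question's):

```python
import pandas as pd

def flatten_json(nested_json):
    """Flatten a nested json object into a single-level dict."""
    out = {}

    def flatten(x, name=''):
        if isinstance(x, dict):
            for a in x:
                flatten(x[a], name + a + '_')
        elif isinstance(x, list):
            for i, a in enumerate(x):
                flatten(a, name + str(i) + '_')
        else:
            # Leaf value: store it under the accumulated key,
            # dropping the trailing '_'.
            out[name[:-1]] = x

    flatten(nested_json)
    return out

row = flatten_json({'id': 1, 'tags': ['a', 'b'], 'meta': {'x': 9}})
df = pd.DataFrame([row])
print(df.columns.tolist())  # → ['id', 'tags_0', 'tags_1', 'meta_x']
```

Each flattened dict becomes one row, so a list of nested objects maps to pd.DataFrame([flatten_json(o) for o in objects]); pandas.json_normalize covers many of the same cases out of the box.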

Sorting Excel column with Python

对着背影说爱祢 submitted on 2021-02-18 14:00:28

Question: Let's say I have a list like this:

time  type  value
80    1A    10
100   1A    20
60    1B    56
80    1B    7
80    2A    10
100   2A    10
80    2B    10
100   2B    20

and I need to change it to be like this:

time  60  80  100
type
1A        10  20
1B    56  7
2A        10  10
2B        10  20

So far what I did is just basic sorting of the column:

target_column = 0
book = open_workbook('result.xls')
sheet = book.sheets()[0]
data = [sheet.row_values(i) for i in range(sheet.nrows)]
labels = data[0]
data = data[1:]
data.sort(key=lambda x: x[target_column])
bk = xlwt
