Question
I have a dataframe with 3.8 million rows (a single column), and I'm trying to group them by index, but it takes forever to finish the computation. Are there any alternative ways to deal with a very large data set? Thanks in advance!
I'm writing in Python.
The index is the customer ID, and I want to group the qty_liter column by the index.
df = df.groupby(df.index).sum()
But this line of code takes too much time.
The info about this df is below:
df.info()
<class 'pandas.core.frame.DataFrame'>
Index: 3842595 entries, -2147153165 to \N
Data columns (total 1 columns):
qty_liter object
dtypes: object(1)
memory usage: 58.6+ MB
Answer 1:
The problem is that your data are not numeric. Processing strings takes a lot longer than processing numbers. Try this first:
df.index = df.index.astype(int)
df.qty_liter = df.qty_liter.astype(float)
Then do groupby() again. It should be much faster. If it is, see if you can modify your data loading step to have the proper dtypes from the beginning.
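Note that the df.info() output above shows a non-numeric marker (\N) in the index, so a plain astype(int) would raise on that value. Below is a minimal sketch of a more defensive cleanup, assuming \N should be treated as missing; the file name data.csv and the column name customer_id are hypothetical:

import pandas as pd

# Coerce the object-dtype index and column to numbers; values like
# "\N" become NaN instead of raising a ValueError.
df.index = pd.to_numeric(df.index, errors="coerce")
df["qty_liter"] = pd.to_numeric(df["qty_liter"], errors="coerce")

# Grouping on numeric keys is much faster than grouping on strings.
result = df.groupby(df.index).sum()

# Better still, set the dtypes at load time so no conversion pass
# is needed afterwards (hypothetical file and column names):
df = pd.read_csv(
    "data.csv",
    index_col="customer_id",
    na_values=[r"\N"],               # treat the \N marker as missing
    dtype={"qty_liter": "float64"},
)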
Source: https://stackoverflow.com/questions/44704465/pandas-df-groupby-is-too-slow-for-big-data-set-any-alternatives-methods