Question:
I have the following data frame:
id    <- c(1, 1, 1, 1, 1, 3, 3, 3, 3)
spent <- c(10, 20, 30, 40, 50, 60, 70, 80, 90)
date  <- c("11-11-07", "11-11-07", "23-11-07", "12-12-08", "17-12-08",
           "11-11-07", "23-11-07", "23-11-07", "16-01-08")
df <- data.frame(id, date, spent)
df$date2 <- as.Date(as.character(df$date), format = "%d-%m-%y")
id date spent date2
1 1 11-11-07 10 2007-11-11
2 1 11-11-07 20 2007-11-11
3 1 23-11-07 30 2007-11-23
4 1 12-12-08 40 2008-12-12
5 1 17-12-08 50 2008-12-17
6 3 11-11-07 60 2007-11-11
7 3 23-11-07 70 2007-11-23
8 3 23-11-07 80 2007-11-23
9 3 16-01-08 90 2008-01-16
I need to calculate the running sum of spent by each id per day and include it in the data frame as follows:
id date spent date2 sum.spent
1 1 11-11-07 10 2007-11-11 10
2 1 11-11-07 20 2007-11-11 30
3 1 23-11-07 30 2007-11-23 30
4 1 12-12-08 40 2008-12-12 40
5 1 17-12-08 50 2008-12-17 50
6 3 11-11-07 60 2007-11-11 60
7 3 23-11-07 70 2007-11-23 70
8 3 23-11-07 80 2007-11-23 150
9 3 16-01-08 90 2008-01-16 90
The following script works well (except for the first row, which it leaves as NA; not a big deal):
df$spent2 <- NA
for (a in 2:9) {
  if (df[a, 1] == df[a - 1, 1] && df[a, 4] == df[a - 1, 4]) {
    df[a, 5] <- df[a, 3] + df[a - 1, 3]
  } else {
    df[a, 5] <- df[a, 3]
  }
}
However, since my actual dataset has around 1.5 million rows, the script above takes around 5 days to execute. Can you suggest a more efficient way to write this code and achieve the same objective?
Answer 1:
data.table is pretty fast, especially for large datasets. This should run quickly even for 1.5 million records.
library(data.table)
df <- data.table(df)
df[, sum.spent := cumsum(spent), by = list(id, date2)]
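For reference, here is the same approach as a self-contained run on the sample data (`:=` adds the column by reference, so no reassignment is needed):

```r
library(data.table)

# Rebuild the sample data as a data.table
dt <- data.table(
  id    = c(1, 1, 1, 1, 1, 3, 3, 3, 3),
  date  = c("11-11-07", "11-11-07", "23-11-07", "12-12-08", "17-12-08",
            "11-11-07", "23-11-07", "23-11-07", "16-01-08"),
  spent = c(10, 20, 30, 40, 50, 60, 70, 80, 90)
)
dt[, date2 := as.Date(date, format = "%d-%m-%y")]

# Running (cumulative) sum of spent within each id/date2 group
dt[, sum.spent := cumsum(spent), by = list(id, date2)]

dt$sum.spent
# 10 30 30 40 50 60 70 150 90
```

This is a single grouped vectorized operation, so it scales to millions of rows without the per-row overhead of the original loop.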
Answer 2:
Here is a base R solution:
df$sum.spent <- ave(df$spent, df$id, df$date2, FUN = cumsum)
Note that this can give a different result than your loop (your loop leaves the first row as NA and only adds the immediately preceding row, while this is a true cumulative sum over the whole group), but I think the cumulative sum is what you actually want.
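Applied to the sample data, this looks as follows (ave() splits spent by the id/date2 grouping, applies cumsum within each group, and returns a vector aligned with the original rows):

```r
# Rebuild the sample data frame
id    <- c(1, 1, 1, 1, 1, 3, 3, 3, 3)
spent <- c(10, 20, 30, 40, 50, 60, 70, 80, 90)
date  <- c("11-11-07", "11-11-07", "23-11-07", "12-12-08", "17-12-08",
           "11-11-07", "23-11-07", "23-11-07", "16-01-08")
df <- data.frame(id, date, spent)
df$date2 <- as.Date(df$date, format = "%d-%m-%y")

# Cumulative sum of spent within each id/date2 group, in row order
df$sum.spent <- ave(df$spent, df$id, df$date2, FUN = cumsum)

df$sum.spent
# 10 30 30 40 50 60 70 150 90
```

Like the data.table version, this replaces the row-by-row loop with one grouped operation, so it needs no extra packages.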
Source: https://stackoverflow.com/questions/13081821/how-to-speed-up-cummulative-sum-within-group