Rowwise sum per group and add total as a new row in dataframe in Pyspark

Submitted by 自古美人都是妖i on 2021-01-29 14:50:09

Question


I have a dataframe like this sample

df = spark.createDataFrame(
    [(2, "A", "A2", 2500),
     (2, "A", "A11", 3500),
     (2, "A", "A12", 5500),
     (4, "B", "B25", 7600),
     (4, "B", "B26", 5600),
     (5, "C", "c25", 2658),
     (5, "C", "c27", 1100),
     (5, "C", "c28", 1200)],
    ['parent', 'group', 'brand', 'usage'])


Output:
+------+-----+-----+-----+
|parent|group|brand|usage|
+------+-----+-----+-----+
|     2|    A|   A2| 2500|
|     2|    A|  A11| 3500|
|     2|    A|  A12| 5500|
|     4|    B|  B25| 7600|
|     4|    B|  B26| 5600|
|     5|    C|  c25| 2658|
|     5|    C|  c27| 1100|
|     5|    C|  c28| 1200|
+------+-----+-----+-----+

What I would like to do is compute the total usage for each group and append it as a new row, with "Total" as the brand value. How can I do this in PySpark?

Expected result:

+------+-----+-----+-----+
|parent|group|brand|usage|
+------+-----+-----+-----+
|     2|    A|   A2| 2500|
|     2|    A|  A11| 3500|
|     2|    A|  A12| 5500|
|     2|    A|Total|11500|
|     4|    B|  B25| 7600|
|     4|    B|  B26| 5600|
|     4|    B|Total|13200|
|     5|    C|  c25| 2658|
|     5|    C|  c27| 1100|
|     5|    C|  c28| 1200|
|     5|    C|Total| 4958|
+------+-----+-----+-----+

Answer 1:


import pyspark.sql.functions as F

df = spark.createDataFrame(
    [(2, "A", "A2", 2500),
     (2, "A", "A11", 3500),
     (2, "A", "A12", 5500),
     (4, "B", "B25", 7600),
     (4, "B", "B26", 5600),
     (5, "C", "c25", 2658),
     (5, "C", "c27", 1100),
     (5, "C", "c28", 1200)],
    ['parent', 'group', 'brand', 'usage'])

df.show()
+------+-----+-----+-----+
|parent|group|brand|usage|
+------+-----+-----+-----+
|     2|    A|   A2| 2500|
|     2|    A|  A11| 3500|
|     2|    A|  A12| 5500|
|     4|    B|  B25| 7600|
|     4|    B|  B26| 5600|
|     5|    C|  c25| 2658|
|     5|    C|  c27| 1100|
|     5|    C|  c28| 1200|
+------+-----+-----+-----+

# Group by and sum to get the per-group totals, labelled 'Total' in the brand column
totals = df.groupBy(['group', 'parent']).agg(F.sum('usage').alias('usage')).withColumn('brand', F.lit('Total'))

# Create a temporary sort key so the Total row lands last in each group
totals = totals.withColumn('sort_id', F.lit(2))
df = df.withColumn('sort_id', F.lit(1))

# Union the dataframes, sort, drop the temporary key, and show
df.unionByName(totals).sort(['group', 'sort_id']).drop('sort_id').show()

+------+-----+-----+-----+
|parent|group|brand|usage|
+------+-----+-----+-----+
|     2|    A|  A12| 5500|
|     2|    A|  A11| 3500|
|     2|    A|   A2| 2500|
|     2|    A|Total|11500|
|     4|    B|  B25| 7600|
|     4|    B|  B26| 5600|
|     4|    B|Total|13200|
|     5|    C|  c25| 2658|
|     5|    C|  c28| 1200|
|     5|    C|  c27| 1100|
|     5|    C|Total| 4958|
+------+-----+-----+-----+


Source: https://stackoverflow.com/questions/63848233/rowwise-sum-per-group-and-add-total-as-a-new-row-in-dataframe-in-pyspark
