Spark SQL: apply aggregate functions to a list of columns

抹茶落季 2020-11-22 10:40

Is there a way to apply an aggregate function to all (or a list of) columns of a dataframe when doing a groupBy? In other words, is there a way to avoid writing the aggregation out one column at a time, as in the sketch below?
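For concreteness, a sketch of the repetitive form being avoided (`df` and the column names are assumptions for illustration):

    import org.apache.spark.sql.functions.sum

    // Hypothetical: one hand-written aggregate call per column.
    df.groupBy("col1").agg(sum("col2").alias("col2"), sum("col3").alias("col3"), sum("col4").alias("col4"))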

3 Answers
  •  萌比男神i
    2020-11-22 11:14

    Another example of the same concept: say you have two different columns, and you want to apply a different aggregate function to each of them, i.e.

    df.groupBy("col1").agg(sum("col2").alias("col2"), avg("col3").alias("col3"), ...)
    

    Here is a way to achieve it, though I do not yet know how to add the alias in this case (see the sketch after the example for one possible workaround).

    See the example below, using Maps:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types._

    // Run in spark-shell, where sc and sqlContext are predefined.
    val Claim1 = StructType(Seq(StructField("pid", StringType, true), StructField("diag1", StringType, true), StructField("diag2", StringType, true), StructField("allowed", IntegerType, true), StructField("allowed1", IntegerType, true)))
    val claimsData1 = Seq(("PID1", "diag1", "diag2", 100, 200), ("PID1", "diag2", "diag3", 300, 600), ("PID1", "diag1", "diag5", 340, 680), ("PID2", "diag3", "diag4", 245, 490), ("PID2", "diag2", "diag1", 124, 248))

    val claimRDD1 = sc.parallelize(claimsData1)
    val claimRDDRow1 = claimRDD1.map(p => Row(p._1, p._2, p._3, p._4, p._5))
    val claimRDD2DF1 = sqlContext.createDataFrame(claimRDDRow1, Claim1)

    // Same aggregate for every column in the list: Map(columnName -> functionName).
    val l = List("allowed", "allowed1")
    val sumExprs = l.map(_ -> "sum").toMap
    claimRDD2DF1.groupBy("pid").agg(sumExprs).show(false)

    // A different aggregate per column; output columns are named sum(allowed), avg(allowed1).
    val mixedExprs = Map("allowed" -> "sum", "allowed1" -> "avg")
    claimRDD2DF1.groupBy("pid").agg(mixedExprs).show(false)
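    As for the missing aliases: one possible workaround (a sketch, not part of the original answer) is to skip the Map overload and instead build a list of aliased Column expressions, then splat it into the Column-based agg overload, which keeps the original column names:

    import org.apache.spark.sql.functions.expr

    // Sketch: turn each (column -> function) pair into an aliased Column,
    // so the output keeps the name "allowed" instead of "sum(allowed)".
    val aggSpec = Map("allowed" -> "sum", "allowed1" -> "avg")
    val aggCols = aggSpec.map { case (c, f) => expr(s"$f($c)").alias(c) }.toList
    claimRDD2DF1.groupBy("pid").agg(aggCols.head, aggCols.tail: _*).show(false)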
    
