Combine pivoted and aggregated column in PySpark Dataframe

Submitted by 有些话、适合烂在心里 on 2021-01-29 17:48:26

Question


My question is related to this one. I have a PySpark DataFrame, named df, as shown below.

 date      | recipe | percent | volume
----------------------------------------
2019-01-01 |   A    |  0.03   |  53
2019-01-01 |   A    |  0.02   |  55
2019-01-01 |   B    |  0.05   |  60
2019-01-02 |   A    |  0.11   |  75
2019-01-02 |   B    |  0.06   |  64
2019-01-02 |   B    |  0.08   |  66

If I pivot it on recipe and aggregate both percent and volume, I get column names that concatenate recipe and the aggregated variable. I can use alias to clean things up. For example:

from pyspark.sql.functions import avg

df.groupBy('date').pivot('recipe').agg(avg('percent').alias('percent'), avg('volume').alias('volume')).show()

 date      | A_percent | A_volume | B_percent | B_volume
--------------------------------------------------------
2019-01-01 |   0.025   |  54      |  0.05     |  60
2019-01-02 |   0.11    |  75      |  0.07     |  65

However, if I aggregate just one variable, say percent, the column names don't include the aggregated variable:

df.groupBy('date').pivot('recipe').agg(avg('percent').alias('percent')).show()

 date      |   A   |  B
-------------------------
2019-01-01 | 0.025 | 0.05
2019-01-02 | 0.11  | 0.07

How can I set it to include the concatenated name when there is only one variable in the agg function?


Answer 1:


Spark's source code has a special branch for pivoting with a single aggregation.

    val singleAgg = aggregates.size == 1

    def outputName(value: Expression, aggregate: Expression): String = {
      val stringValue = value.name

      if (singleAgg) {
        stringValue  // <-- single aggregation: only the pivot value is used
      } else {
        val suffix = {...}
        stringValue + "_" + suffix
      }
    }
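
In Python terms, the naming rule in that branch can be mimicked like this (a hypothetical sketch with names of my own choosing, not Spark's actual code):

```python
def pivot_column_names(pivot_values, agg_aliases):
    """Mimic Spark's pivot output-column naming rule.

    With a single aggregation, the bare pivot values are used as-is;
    with several, each pivot value is suffixed with the aggregate alias.
    """
    if len(agg_aliases) == 1:
        return list(pivot_values)  # e.g. ['A', 'B']
    return [f"{v}_{a}" for v in pivot_values for a in agg_aliases]
```

For example, `pivot_column_names(['A', 'B'], ['percent'])` gives `['A', 'B']`, while `pivot_column_names(['A', 'B'], ['percent', 'volume'])` gives `['A_percent', 'A_volume', 'B_percent', 'B_volume']`, matching the two outputs shown in the question.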

I don't know the reason behind this, but the only remaining option is to rename the columns afterwards.

Here is a simplified renaming helper in Scala:

  def rename(identity: Set[String], suffix: String)(df: DataFrame): DataFrame = {
    // Keep columns in `identity` unchanged; append the suffix to all others.
    val renamed = df.schema.fields.map { field =>
      if (identity.contains(field.name)) field.name
      else field.name + suffix
    }
    df.toDF(renamed: _*)
  }

Usage:

rename(Set("date"), "_percent")(pivoted).show()

+----------+---------+---------+
|      date|A_percent|B_percent|
+----------+---------+---------+
|2019-01-01|    0.025|     0.05|
|2019-01-02|     0.11|     0.07|
+----------+---------+---------+
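
Since the question is about PySpark, the same idea can be sketched in Python. The column-list logic is plain Python and separable from Spark; the commented line shows how it would plug into a DataFrame (assuming `pivoted` is the single-aggregation pivot result from the question):

```python
def renamed_columns(columns, identity, suffix):
    """Keep columns listed in `identity` unchanged; suffix all others."""
    return [c if c in identity else c + suffix for c in columns]

# PySpark usage (assumes `pivoted` is the single-aggregation pivot result):
# pivoted.toDF(*renamed_columns(pivoted.columns, {"date"}, "_percent")).show()
```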


Source: https://stackoverflow.com/questions/57191369/combine-pivoted-and-aggregated-column-in-pyspark-dataframe
