Spark DataFrame columns transform to Map type and List of Map Type [duplicate]


Question


I have a DataFrame as shown below, and I'd appreciate it if someone could help me get the output in the two different formats shown afterwards.

Input:

    +----------+-----------+---------+
    |customerId|transHeader|transLine|
    +----------+-----------+---------+
    |1001      |1001aa     |1001aa1  |
    |1001      |1001aa     |1001aa2  |
    |1001      |1001aa     |1001aa3  |
    |1001      |1001aa     |1001aa4  |
    |1002      |1002bb     |1002bb1  |
    |1002      |1002bb     |1002bb2  |
    |1002      |1002bb     |1002bb3  |
    |1002      |1002bb     |1002bb4  |
    |1003      |1003cc     |1003cc1  |
    |1003      |1003cc     |1003cc2  |
    |1003      |1003cc     |1003cc3  |
    +----------+-----------+---------+

Expected OutputSet 1:

    customerId  headerLineMapGroup
    1001        Map(1001aa -> (1001aa1, 1001aa2, 1001aa3, 1001aa4))
    1002        Map(1002bb -> (1002bb1, 1002bb2, 1002bb3, 1002bb4))
    1003        Map(1003cc -> (1003cc1, 1003cc2, 1003cc3))

Expected OutputSet 2:

    customerId  headerLineListOfMapGroup
    1001        List[ Map(1001aa -> 1001aa1), Map(1001aa -> 1001aa2), Map(1001aa -> 1001aa3), Map(1001aa -> 1001aa4) ]
    1002        List[ Map(1002bb -> 1002bb1), Map(1002bb -> 1002bb2), Map(1002bb -> 1002bb3), Map(1002bb -> 1002bb4) ]
    1003        List[ Map(1003cc -> 1003cc1), Map(1003cc -> 1003cc2), Map(1003cc -> 1003cc3) ]

Answer 1:


Here is a solution using UDFs: first collect the lines per (customerId, transHeader) group, then wrap the result into a map or a list of single-entry maps.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession
      .builder()
      .master("local")
      .appName("ParquetAppendMode")
      .getOrCreate()

    import spark.implicits._

    val data = spark.sparkContext.parallelize(Seq(
      (1001, "1001aa", "1001aa1"),
      (1001, "1001aa", "1001aa2"),
      (1001, "1001aa", "1001aa3")
    )).toDF("customerId", "transHeader", "transLine")

    // Wrap the collected lines in a single map keyed by the header
    val toMap = udf((header: String, lines: Seq[String]) => {
      Map(header -> lines)
    })

    // Build one single-entry map per line and collect them into a list
    val toMapList = udf((header: String, lines: Seq[String]) => {
      lines.map(l => Map(header -> l)).toList
    })

    // Collect all transLine values per (customerId, transHeader)
    val grouped = data.groupBy("customerId", "transHeader")
      .agg(collect_list("transLine").alias("transLine"))

    // Output set 1: Map(header -> collected lines)
    grouped.withColumn("headerLineMapGroup", toMap($"transHeader", $"transLine"))
      .drop("transHeader", "transLine")
      .show(false)

    // Output set 2: List of single-entry maps
    grouped.withColumn("headerLineMapGroupList", toMapList($"transHeader", $"transLine"))
      .drop("transHeader", "transLine")
      .show(false)
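If you'd rather avoid UDFs, here is a minimal sketch of the same two outputs using only Spark's built-in map and collect_list functions. This is not from the original answer; it assumes Spark 2.0+ and the same data DataFrame defined above, and the result column names (mapGrouped, mapListGrouped, transLines, headerLineMap) are illustrative.

    // Alternative sketch without UDFs, using built-in functions only
    import org.apache.spark.sql.functions.{collect_list, map}

    // Output set 1: collect the lines, then build map(header -> collected lines)
    val mapGrouped = data
      .groupBy("customerId", "transHeader")
      .agg(collect_list("transLine").alias("transLines"))
      .select($"customerId", map($"transHeader", $"transLines").alias("headerLineMapGroup"))

    // Output set 2: build a single-entry map per row, then collect those maps per customer
    val mapListGrouped = data
      .select($"customerId", map($"transHeader", $"transLine").alias("headerLineMap"))
      .groupBy("customerId")
      .agg(collect_list("headerLineMap").alias("headerLineListOfMapGroup"))

    mapGrouped.show(false)
    mapListGrouped.show(false)

The resulting column types are map<string,array<string>> for the first output and array<map<string,string>> for the second, which match the two expected shapes.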

Hope this helps!



Source: https://stackoverflow.com/questions/44225514/spark-dataframe-columns-transform-to-map-type-and-list-of-map-type
