How to do custom partition in spark dataframe with saveAsTextFile


This can be done by using concat_ws; this function works similarly to mkString but can be applied directly on a DataFrame. That makes the conversion step to an RDD redundant, and the df.write.partitionBy() method can be used instead of saveAsTextFile. A small example that concatenates all available columns:

import org.apache.spark.sql.functions._
import spark.implicits._

// Column names chosen to match the output shown below; "DataPartiotion" is
// spelled as in the question's data, and the third column name is arbitrary.
val df = Seq(("01", "20000", "45.30"), ("01", "30000", "45.30"))
  .toDF("DataPartiotion", "StatementTypeCode", "decimalValue")

val df2 = df.select($"DataPartiotion", $"StatementTypeCode",
  concat_ws("|^|", df.schema.fieldNames.map(c => col(c)): _*).as("concatenated"))

This will give you a resulting DataFrame like this:

+--------------+-----------------+------------------+
|DataPartiotion|StatementTypeCode|      concatenated|
+--------------+-----------------+------------------+
|            01|            20000|01|^|20000|^|45.30|
|            01|            30000|01|^|30000|^|45.30|
+--------------+-----------------+------------------+
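From here the partitionBy() step mentioned above completes the write. A minimal sketch, assuming a placeholder output path of /tmp/out: partitioning on the two key columns leaves only the concatenated string column in the data, which is exactly what the text writer expects.

// Creates one directory per key combination, e.g.
// /tmp/out/DataPartiotion=01/StatementTypeCode=20000/part-*.txt
// "/tmp/out" is a placeholder; substitute your own output path.
df2.write
  .partitionBy("DataPartiotion", "StatementTypeCode")
  .text("/tmp/out")

Each output file then contains just the concatenated lines, much like the old RDD saveAsTextFile output, with the key values encoded in the directory names rather than repeated in the data.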