Spark: can you include partition columns in output files?

Submitted by 瘦欲 on 2019-12-20 02:13:19

Question


I am using Spark to write out data into partitions. Given a dataset with two columns (foo, bar), if I do df.write.mode("overwrite").format("csv").partitionBy("foo").save("/tmp/output"), I get an output of

/tmp/output/foo=1/X.csv
/tmp/output/foo=2/Y.csv
...

However, the output CSV files only contain the value for bar, not foo. I know the value of foo is already captured in the directory name foo=N, but is it possible to also include the value of foo in the CSV file?
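For context, a minimal sketch reproducing the write described above (the SparkSession setup and sample rows are assumptions, not from the original question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Sample data: foo is the partition key, bar is the payload.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["foo", "bar"])

# Writing partitioned by foo creates /tmp/output/foo=1/ and /tmp/output/foo=2/,
# but the CSV files inside contain only the bar values.
df.write.mode("overwrite").format("csv").partitionBy("foo").save("/tmp/output")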


Answer 1:


Only if you make a copy of the column under a different name:

from pyspark.sql.functions import col

(df
    # duplicate foo so the original column stays in the data files
    .withColumn("foo_", col("foo"))
    .write.mode("overwrite")
    .format("csv").partitionBy("foo_").save("/tmp/output"))
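Worth noting: the duplication only matters for consumers outside Spark. If the directory is read back with Spark itself, partition discovery reconstructs the partition column from the foo=N directory names, so no copy is needed in that case. A minimal sketch of that read-back, assuming the question's original /tmp/output layout:

restored = spark.read.format("csv").load("/tmp/output")
restored.printSchema()  # schema includes foo, inferred from the directory names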


Source: https://stackoverflow.com/questions/48190107/spark-can-you-include-partition-columns-in-output-files
