How to control the number of output part files created by a Spark job when writing?

zweiterlinde

You may want to try the DataFrame.coalesce method to decrease the number of partitions; it returns a new DataFrame with the specified number of partitions, each of which becomes one part file on write.
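
For instance, a minimal Scala sketch (assuming Spark 2.x with a SparkSession; the app name and input/output paths are illustrative placeholders, not from the original answer):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("coalesce-example").getOrCreate()
val df = spark.read.parquet("/path/to/input")   // hypothetical input path

// coalesce(1) merges the existing partitions without a full shuffle,
// so the write below produces a single part file.
df.coalesce(1).write.parquet("/path/to/output") // hypothetical output path
```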

To increase or decrease the number of partitions you can use the DataFrame.repartition method. The difference is that coalesce does not cause a shuffle, while repartition does.
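
A short sketch contrasting the two, reusing `spark` and `df` from the example above; the target partition counts and output paths are illustrative:

```scala
// repartition can both increase and decrease the partition count,
// but it always triggers a full shuffle of the data.
df.repartition(400).write.parquet("/path/to/more-files")

// coalesce can only decrease the count (requesting more than the current
// number is a no-op) and avoids the shuffle by merging existing partitions.
df.coalesce(10).write.parquet("/path/to/fewer-files")
```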

Lior Chaga

Since Spark 1.6 you can use repartition on a DataFrame with partition expressions, which means you'll get one file per Hive partition. Beware of large shuffles, though; it is best to have your DataFrame partitioned properly from the start if possible. See https://stackoverflow.com/a/32920122/2204206
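
A hedged sketch of that approach, reusing `df` from above: repartitioning by the partition column before a partitioned write puts all rows for a given value into one task, so each partition directory receives a single file. The column name "date" is a hypothetical stand-in for your Hive partition key:

```scala
import org.apache.spark.sql.functions.col

df.repartition(col("date"))      // rows with the same date land in one partition
  .write
  .partitionBy("date")           // one output directory per distinct date
  .parquet("/path/to/partitioned-output")
// Result: a single part file inside each date=... output directory.
```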
