How to control the number of output part files created by Spark job upon writing?


Question


Hi, I have a couple of Spark jobs that process thousands of files every day. File sizes vary from MBs to GBs. After the job finishes, I usually save the output using the following code:

finalJavaRDD.saveAsParquetFile("/path/in/hdfs"); // or
dataFrame.write.format("orc").save("/path/in/hdfs") // storing as an ORC file as of Spark 1.4

The Spark job creates plenty of small part files in the final output directory. As far as I understand, Spark creates one part file per partition/task; please correct me if I am wrong. How do we control the number of part files Spark creates? Finally, I would like to create a Hive table over these Parquet/ORC directories, and I have heard Hive is slow when there is a large number of small files. Please guide me, I am new to Spark. Thanks in advance.


Answer 1:


You may want to try using the DataFrame.coalesce method to decrease the number of partitions; it returns a DataFrame with the specified number of partitions (each of which becomes a file on insertion).

To increase or decrease the number of partitions you can use the DataFrame.repartition function. The difference is that coalesce does not cause a shuffle, while repartition does.
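A minimal sketch of both approaches, assuming Spark 1.4+ with a DataFrame loaded via sqlContext; the input/output paths and the target count of 10 partitions are placeholders:

// load the data to be written out (path is a placeholder)
val df = sqlContext.read.format("orc").load("/path/in/hdfs/input")

// coalesce(10) merges existing partitions without a full shuffle,
// so at most 10 part files are written
df.coalesce(10).write.format("orc").save("/path/in/hdfs/output-coalesced")

// repartition(10) also yields 10 part files, but triggers a shuffle
// that redistributes rows evenly across the 10 partitions
df.repartition(10).write.format("orc").save("/path/in/hdfs/output-repartitioned")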




Answer 2:


Since Spark 1.6 you can call repartition on a DataFrame with a partition column expression, which means you'll get one file per Hive partition. Beware of large shuffles though; it is best to have your DataFrame partitioned properly from the start if possible. See https://stackoverflow.com/a/32920122/2204206
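A sketch of that approach, assuming Spark 1.6+, a DataFrame df, and a Hive partition column named "dt" (the column name and output path are placeholders):

// repartition by the partition column so every row for a given "dt"
// value lands in the same Spark partition before the write
df.repartition(df("dt"))
  .write
  .partitionBy("dt")   // one Hive-style sub-directory per "dt" value
  .format("orc")
  .save("/path/in/hdfs/output")

Because all rows for a given "dt" value end up in a single task, each partition directory is written as a single part file.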



Source: https://stackoverflow.com/questions/31249265/how-to-control-the-number-of-output-part-files-created-by-spark-job-upon-writing
