How to split parquet files into many partitions in Spark?

萌比男神i 2020-12-06 05:10

So I have just 1 parquet file I'm reading with Spark (using the SQL stuff) and I'd like it to be processed with 100 partitions. I've tried setting spark.default.parallelism …
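For context, a minimal sketch of the situation being described, assuming a hypothetical file path and a SparkSession-based setup (the original question may well use the older SQLContext API):

```scala
import org.apache.spark.sql.SparkSession

object ReadSingleParquet {
  def main(args: Array[String]): Unit = {
    // spark.default.parallelism affects shuffles and parallelized collections,
    // not how a single parquet file is split up when it is first read.
    val spark = SparkSession.builder()
      .appName("ReadSingleParquet")
      .config("spark.default.parallelism", "100")
      .getOrCreate()

    // "/data/one-file.parquet" is a placeholder path for the single file.
    val df = spark.read.parquet("/data/one-file.parquet")

    // Typically reports far fewer than 100 partitions for one small file.
    println(s"partitions after read: ${df.rdd.getNumPartitions}")

    spark.stop()
  }
}
```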

5 Answers
  •  悲哀的现实
    2020-12-06 05:57

    To achieve that, set the Hadoop configuration property mapreduce.input.fileinputformat.split.maxsize through the SparkContext (sc.hadoopConfiguration).

    If you set this property to a value lower than the HDFS block size, you will get as many partitions as there are input splits.

    For example:
    when the HDFS block size is 134217728 (128 MB),
    the file being read contains exactly one full block,
    and mapreduce.input.fileinputformat.split.maxsize = 67108864 (64 MB),

    then the file is divided into two input splits, which are read into two partitions.
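
    A minimal sketch of that suggestion, assuming a placeholder input path. Note that the Hadoop split size mainly applies to InputFormat-based reads; for completeness, the sketch also sets spark.sql.files.maxPartitionBytes, which is the knob the built-in parquet source on Spark 2.x+ uses to size its read partitions.

```scala
import org.apache.spark.sql.SparkSession

object SplitParquetIntoMorePartitions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SplitParquetIntoMorePartitions")
      .getOrCreate()

    // Cap each input split at 64 MB so a single 128 MB block yields two splits,
    // as described in the answer above (InputFormat-based read path).
    spark.sparkContext.hadoopConfiguration
      .set("mapreduce.input.fileinputformat.split.maxsize", "67108864")

    // Equivalent setting used by the Spark SQL file source on newer versions.
    spark.conf.set("spark.sql.files.maxPartitionBytes", "67108864")

    // "/data/one-file.parquet" is a placeholder path.
    val df = spark.read.parquet("/data/one-file.parquet")

    // With a 128 MB file and a 64 MB maximum split size, expect about 2 partitions.
    println(s"partitions: ${df.rdd.getNumPartitions}")

    spark.stop()
  }
}
```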
