Why does Apache Spark partition a CSV read based on the file size, and how do I change the number of partitions?

Backend · unanswered · 936 views
小鲜肉 · 2020-12-09 19:47

Here is my pyspark code:

csv_file = "/FileStore/tables/mnt/training/departuredelays02.csv"
schema   = "`date` STRING, `delay` INT, `distance` INT"