Caching dataframes while keeping partitions

Backend · Open · 1 answer · 418 views

攒了一身酷 · asked 2020-12-06 13:32

I'm on Spark 2.2.0, running on EMR.

I have a big dataframe df (40G or so in compressed snappy files) which is partitioned by keys k1 and …

1 Answer
  • answered 2020-12-06 14:14

    This is to be expected. Spark's internal columnar format used for caching is input-format agnostic: once the data has been loaded into the cache, any connection to the original input format is gone.

    The exception here is the new Data Source V2 API ([SPARK-22389][SQL] Data source v2 partitioning reporting interface), which lets a source report its partitioning so it can be preserved, but it was only introduced in 2.3 and is still experimental.
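    Since the cache does not remember the on-disk partitioning of the source files, a common workaround is to re-establish a useful in-memory layout yourself with `repartition()` on the key before caching. The sketch below is a minimal, hypothetical stand-in for the 40G dataframe (the column name `k1` and the local session are assumptions, not the asker's actual setup):

    ```python
    from pyspark.sql import SparkSession

    # Local session just for the sketch; on EMR you would already have one.
    spark = (SparkSession.builder
             .master("local[2]")
             .appName("cache-partitioning-sketch")
             .config("spark.sql.shuffle.partitions", "8")
             .getOrCreate())

    # Small stand-in for the large dataframe; k1 plays the role of the
    # original partition key.
    df = spark.range(0, 1000).selectExpr("id", "id % 10 AS k1")

    # Hash-partition by k1 in memory, then cache. Downstream operations
    # that group or join on k1 can reuse this distribution instead of
    # relying on the (lost) file-level partitioning.
    cached = df.repartition("k1").cache()
    cached.count()  # an action materializes the cache

    # repartition("k1") uses spark.sql.shuffle.partitions partitions.
    print(cached.rdd.getNumPartitions())  # 8
    print(cached.count())                 # 1000

    spark.stop()
    ```

    This does not bring back partition pruning on the source files, but it does give the cached data a deterministic distribution by `k1` that the optimizer can take advantage of.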
