I'm on Spark 2.2.0, running on EMR.
I have a large DataFrame df (roughly 40 GB of snappy-compressed files) which is partitioned by keys k1 and <
This is to be expected. Spark's internal columnar format used for caching is input-format agnostic. Once you have loaded the data into the cache, the connection to the original input is gone.
The exception here is the new data source API ([SPARK-22389][SQL] Data source v2 partitioning reporting interface), which allows a source to report its partitioning information to Spark, but it was introduced in 2.3 and is still experimental.