What is the difference between Spark checkpoint and persist to a disk?

北海茫月 2020-11-29 16:29

What is the difference between Spark checkpoint and persist to a disk? Are both of these stored on the local disk?

4 Answers
  •  一整个雨季
    2020-11-29 16:48

    There are a few important differences, but the fundamental one is what happens with lineage. Persist / cache keeps the lineage intact, while checkpoint breaks it. Let's consider the following examples:

    import org.apache.spark.storage.StorageLevel
    
    val rdd = sc.parallelize(1 to 10).map(x => (x % 3, 1)).reduceByKey(_ + _)
    
    • cache / persist:

      val indCache  = rdd.mapValues(_ > 4)
      indCache.persist(StorageLevel.DISK_ONLY)
      
      indCache.toDebugString
      // (8) MapPartitionsRDD[13] at mapValues at :24 [Disk Serialized 1x Replicated]
      //  |  ShuffledRDD[3] at reduceByKey at :21 [Disk Serialized 1x Replicated]
      //  +-(8) MapPartitionsRDD[2] at map at :21 [Disk Serialized 1x Replicated]
      //     |  ParallelCollectionRDD[1] at parallelize at :21 [Disk Serialized 1x Replicated]
      
      indCache.count
      // 3
      
      indCache.toDebugString
      // (8) MapPartitionsRDD[13] at mapValues at :24 [Disk Serialized 1x Replicated]
      //  |       CachedPartitions: 8; MemorySize: 0.0 B; ExternalBlockStoreSize: 0.0 B; DiskSize: 587.0 B
      //  |  ShuffledRDD[3] at reduceByKey at :21 [Disk Serialized 1x Replicated]
      //  +-(8) MapPartitionsRDD[2] at map at :21 [Disk Serialized 1x Replicated]
      //     |  ParallelCollectionRDD[1] at parallelize at :21 [Disk Serialized 1x Replicated]
      
    • checkpoint:

      val indChk  = rdd.mapValues(_ > 4)
      indChk.checkpoint
      
      indChk.toDebugString
      // (8) MapPartitionsRDD[11] at mapValues at :24 []
      //  |  ShuffledRDD[3] at reduceByKey at :21 []
      //  +-(8) MapPartitionsRDD[2] at map at :21 []
      //     |  ParallelCollectionRDD[1] at parallelize at :21 []
      
      indChk.count
      // 3
      
      indChk.toDebugString
      // (8) MapPartitionsRDD[11] at mapValues at :24 []
      //  |  ReliableCheckpointRDD[12] at count at :27 []
      

    As you can see, in the first case the lineage is preserved even when data is fetched from the cache. This means the data can be recomputed from scratch if some partitions of indCache are lost. In the second case the lineage is completely lost after the checkpoint, and indChk no longer carries the information required to rebuild it.

    checkpoint, unlike cache / persist, is computed as a separate job after the action that materializes the RDD. That's why an RDD marked for checkpointing should be cached, as the documentation notes:

    It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
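    The recommended pattern can be sketched as follows, reusing the `rdd` defined above. This is a minimal sketch, not output from the original answer; it assumes an existing SparkContext `sc`:

    ```scala
    import org.apache.spark.storage.StorageLevel

    val indChk2 = rdd.mapValues(_ > 4)
    // Persist first, so the separate checkpoint job reads cached partitions
    // instead of recomputing the whole lineage a second time.
    indChk2.persist(StorageLevel.MEMORY_ONLY)
    indChk2.checkpoint()
    // The first action triggers both the normal job and the checkpoint job.
    indChk2.count()
    ```
    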

    Finally checkpointed data is persistent and not removed after SparkContext is destroyed.

    Regarding data storage: SparkContext.setCheckpointDir, which is used by RDD.checkpoint, requires a DFS path when running in non-local mode; otherwise it can be a local file system as well. localCheckpoint, and persist without replication, use local storage on the executors.
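    The two variants can be sketched side by side. This is a hedged example (the checkpoint paths are hypothetical placeholders), assuming an existing SparkContext `sc`:

    ```scala
    // Reliable checkpoint: the directory should be on a DFS (e.g. HDFS)
    // in non-local mode, so all executors and the driver can reach it.
    sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")  // hypothetical path
    val reliable = sc.parallelize(1 to 10).map(_ * 2)
    reliable.checkpoint()

    // Local checkpoint: written to executor-local storage. Faster, truncates
    // lineage just the same, but is NOT fault-tolerant — lost executors mean
    // lost data.
    val local = sc.parallelize(1 to 10).map(_ * 2)
    local.localCheckpoint()
    ```
    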

    Important Note:

    RDD checkpointing is a different concept from checkpointing in Spark Streaming. The former is designed to address the lineage issue; the latter is all about streaming reliability and failure recovery.
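    To illustrate the contrast, streaming checkpointing is configured on the StreamingContext and is used for driver recovery. A minimal sketch, assuming an existing SparkContext `sc` and a hypothetical DFS path:

    ```scala
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    def createContext(): StreamingContext = {
      val ssc = new StreamingContext(sc, Seconds(10))
      // Persists metadata and generated RDDs so the driver can recover.
      ssc.checkpoint("hdfs:///tmp/streaming-checkpoint")  // hypothetical path
      // ... define DStream operations here ...
      ssc
    }

    // Rebuild the context from the checkpoint after a driver failure,
    // or create it fresh on the first run.
    val ssc = StreamingContext.getOrCreate(
      "hdfs:///tmp/streaming-checkpoint", createContext _)
    ```
    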
