Apache Spark (Structured Streaming): S3 Checkpoint support

温柔的废话 2020-12-09 00:00

From the Spark Structured Streaming documentation: "This checkpoint location has to be a path in an HDFS compatible file system, and can be set as an option in the DataStreamWriter when starting a query." Does this mean that S3 is not supported as a checkpoint location?
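
For reference, a query with that option set looks roughly like this; the rate source and both paths are just placeholders:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("checkpoint-demo").getOrCreate()

// A trivial built-in streaming source, just to have something to checkpoint.
val df = spark.readStream.format("rate").load()

// The checkpointLocation option on the DataStreamWriter is where the
// "HDFS compatible" path from the documentation goes.
val query = df.writeStream
  .format("parquet")
  .option("path", "hdfs://namenode:8020/output/rate")
  .option("checkpointLocation", "hdfs://namenode:8020/checkpoints/rate-query")
  .trigger(Trigger.ProcessingTime("1 minute"))
  .start()

query.awaitTermination()
```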

4 Answers
  •  北海茫月
    2020-12-09 00:47

    What makes an FS HDFS "compliant"? It's a file system with the behaviours specified in the Hadoop FS specification. The difference between an object store and an FS is covered there, with the key point being that "eventually consistent object stores without append or O(1) atomic renames are not compliant".

    For S3 in particular:

    1. It's not consistent: after a new blob is created, a list command often doesn't show it. Same for deletions.
    2. When a blob is overwritten or deleted, it can take a while to go away
    3. rename() is implemented as a copy followed by a delete (see the sketch below).
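
    As a minimal sketch of point 3, here is the Hadoop FileSystem call that is a single atomic metadata operation on HDFS; the s3a bucket and paths are made up for illustration:

    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val src = new Path("s3a://my-bucket/checkpoint-tmp/state")
    val dst = new Path("s3a://my-bucket/checkpoint/state")
    val fs: FileSystem = src.getFileSystem(new Configuration())

    // On HDFS this is one O(1) metadata operation. On s3a the same call
    // copies every object under src to dst and then deletes src, so its
    // cost is proportional to the amount of data being "renamed".
    fs.rename(src, dst)
    ```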

    Spark streaming checkpoints by saving everything to a temporary location and then renaming it into the checkpoint directory. That makes the time to checkpoint proportional to the time it takes to copy the data within S3, which is roughly 6-10 MB/s; a 1 GB checkpoint, for example, would take on the order of 100-170 seconds just to rename.

    The current streaming checkpoint code simply isn't suited to S3.

    For now, do one of the following:

    • checkpoint to HDFS and then copy over the results
    • checkpoint to a bit of EBS allocated and attached to your cluster
    • checkpoint to S3, but leave a long gap between checkpoints so that the time to checkpoint doesn't bring your streaming app down (see the sketch after this list).
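
    As a rough sketch of the last option: in micro-batch mode, state is checkpointed once per trigger, so stretching the trigger interval stretches the gap between checkpoints. The 10-minute interval and the bucket name below are assumptions, not recommendations:

    ```scala
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    val spark = SparkSession.builder().appName("s3-checkpoint-gap").getOrCreate()

    // Trivial source just to have a running query.
    val df = spark.readStream.format("rate").load()

    // Triggering every 10 minutes means the slow, copy-based checkpoint
    // rename to S3 happens rarely enough not to stall the query.
    val query = df.writeStream
      .format("console")
      .option("checkpointLocation", "s3a://my-bucket/checkpoints/rate-query")
      .trigger(Trigger.ProcessingTime("10 minutes"))
      .start()

    query.awaitTermination()
    ```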

    If you are using EMR, you can pay a premium for the consistent, DynamoDB-backed view of S3, which gives you better consistency. But the copy time is unchanged, so checkpointing will be just as slow.
