Spark throws java.io.IOException: Failed to rename when saving part-xxxxx.gz

Submitted by 喜欢而已 on 2019-12-04 12:51:53

It's not safe to use S3 as a direct destination of work without a "consistency layer" (Consistent EMR, or, from the Apache Hadoop project itself, S3Guard), or a special output committer designed explicitly for work with S3 (Hadoop 3.1+, "the S3A committers"). Rename is where things fail, as listing inconsistency means the scan for files to copy may miss data, or find deleted files which it can't rename. Your stack trace looks exactly how I'd expect this to surface: job commits failing, apparently at random.
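For reference, a minimal sketch of what enabling one of the S3A committers can look like from the Spark side, assuming Hadoop 3.1+ and the spark-hadoop-cloud integration module on the classpath; the exact property and class names should be checked against the cloud-integration docs for your Spark/Hadoop versions:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: assumes Hadoop 3.1+ with the S3A committers available
// and the spark-hadoop-cloud module on the classpath.
val spark = SparkSession.builder()
  .appName("s3a-committer-example")
  // pick one of the S3A committers: "directory", "partitioned" or "magic"
  .config("spark.hadoop.fs.s3a.committer.name", "directory")
  // route Spark's commit protocol through the Hadoop path output committer
  .config("spark.sql.sources.commitProtocolClass",
          "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
  .config("spark.sql.parquet.output.committer.class",
          "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
  .getOrCreate()
```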

Rather than go into the details, here's a video of Ryan Blue on the topic.

Workaround: write to your local cluster FS then use distcp to upload to S3.
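In code, the workaround is just targeting the cluster filesystem and copying the finished output afterwards; a hedged sketch, with placeholder paths and bucket names:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: write to the cluster's HDFS first, then copy the completed
// output to S3 in a separate step. Paths and bucket are placeholders.
val spark = SparkSession.builder().appName("write-then-distcp").getOrCreate()
val df = spark.read.json("hdfs:///data/input")

// The commit (rename) happens on HDFS, where it is atomic and consistent.
df.write.mode("overwrite").parquet("hdfs:///data/output")

// Then, outside Spark, upload the completed output, e.g.:
//   hadoop distcp hdfs:///data/output s3a://my-bucket/data/output
```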

PS: for Hadoop 2.7+, switch to the s3a:// connector. It has exactly the same consistency problem without S3Guard enabled, but better performance.
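Switching to s3a:// is mostly a matter of using the s3a scheme in paths and, if credentials aren't supplied by the environment, setting the fs.s3a credential properties; a sketch with placeholder values:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: using the s3a:// connector on Hadoop 2.7+.
// Credentials are usually better supplied via IAM roles or environment
// variables; the values below are placeholders for illustration only.
val spark = SparkSession.builder().appName("s3a-read").getOrCreate()
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "<ACCESS_KEY>")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "<SECRET_KEY>")

val df = spark.read.parquet("s3a://my-bucket/input")
df.show()
```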

The solutions in @Steve Loughran's post are great. Just to add a little info to help explain the issue.

Hadoop 2.7 uses the Hadoop commit protocol for committing. When Spark saves a result to S3, it actually saves a temporary result to S3 first and makes it visible by renaming it when the job succeeds (the reason and details can be found in this great doc). However, S3 is an object store and does not have a real "rename"; it copies the data to the target object, then deletes the original object.
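To make the "no real rename" point concrete, here is roughly what a rename turns into against the S3 API, sketched with the AWS SDK for Java v1; the bucket and key names are illustrative, and the real S3A code is considerably more involved:

```scala
import com.amazonaws.services.s3.AmazonS3ClientBuilder

// Sketch: on an object store, "rename" is emulated as copy-then-delete.
// Between the copy and the delete (and until listings catch up), the
// store may show neither, either, or both objects.
val s3 = AmazonS3ClientBuilder.defaultClient()

val bucket  = "my-bucket"                                       // placeholder
val srcKey  = "output/_temporary/0/task_0000/part-00000.gz"     // placeholder
val destKey = "output/part-00000.gz"                            // placeholder

s3.copyObject(bucket, srcKey, bucket, destKey)  // copy bytes to the new key
s3.deleteObject(bucket, srcKey)                 // then delete the original
```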

S3 is "eventually consistent", which means the delete operation could happen before copy is fully synced. When this happens, the rename would fail.

In my case, this was only triggered in some chained jobs; I haven't seen it in a simple save job.
