Amazon s3a returns 400 Bad Request with Spark

一整个雨季 2020-12-02 01:33

For checkpointing purposes I am trying to set up an Amazon S3 bucket as the checkpoint location.

val checkpointDir = "s3a://bucket-name/checkpoint.txt"
val sc = new SparkContext(conf)   // conf: your SparkConf
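
A minimal, self-contained sketch of what the full setup might look like (the app name, credentials, and the Frankfurt endpoint below are placeholders and assumptions, not part of the original snippet; on a Signature V4-only region the regional endpoint has to be set explicitly):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("s3a-checkpoint-example")
val sc = new SparkContext(conf)

// S3A credentials and regional endpoint (placeholder values)
sc.hadoopConfiguration.set("fs.s3a.access.key", "<ACCESS_KEY>")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "<SECRET_KEY>")
sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

// Point checkpointing at the bucket
sc.setCheckpointDir("s3a://bucket-name/checkpoint")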


        
3 Answers
  •  甜味超标
    2020-12-02 02:15

    If you want to use a region that only supports Signature Version 4 with Spark, you can pass the flag -Dcom.amazonaws.services.s3.enableV4 to the driver and executor JVM options at runtime. For example:

    spark-submit --conf spark.driver.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
        --conf spark.executor.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
        ... (other spark options)
    

    With these settings, Spark is able to write to Frankfurt (and other V4-only regions) even with a fairly old AWS SDK version (com.amazonaws:aws-java-sdk:1.7.4 in my case).
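
    If the flags cannot be added to the spark-submit command, a rough alternative (a sketch, not part of the original answer) is to set the property from the application itself before the S3A filesystem is first used, and forward the same flag to the executors:

    // Enable Signature V4 in the driver JVM; must run before any S3 client is created.
    System.setProperty("com.amazonaws.services.s3.enableV4", "true")

    // Executors are separate JVMs, so pass the flag through the Spark config.
    // spark.driver.extraJavaOptions only takes effect at submit time, which is
    // why the driver-side property is set directly above instead.
    val conf = new org.apache.spark.SparkConf()
        .setAppName("s3a-v4-example")
        .set("spark.executor.extraJavaOptions", "-Dcom.amazonaws.services.s3.enableV4")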
