How to use S3 with Apache Spark 2.2 in the Spark shell

深忆病人 2020-12-08 01:38

I'm trying to load data from an Amazon AWS S3 bucket while in the Spark shell.

I have consulted the following resources:

Parsing files from Amazon S3 with

1 Answer
  • 2020-12-08 02:04

    If you are using Apache Spark 2.2.0, then you should use hadoop-aws-2.7.3.jar and aws-java-sdk-1.7.4.jar.

    $ spark-shell --jars jars/hadoop-aws-2.7.3.jar,jars/aws-java-sdk-1.7.4.jar
    

    After that, you will be able to load data from an S3 bucket in the shell.
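    Once the shell starts with those jars on the classpath, a session might look roughly like this. The bucket name, paths, and credentials below are placeholders, not values from the question; this is a sketch assuming the `s3a://` scheme provided by hadoop-aws 2.7.x:

    ```scala
    // Inside spark-shell: point the s3a connector at your AWS credentials
    // (placeholder values -- substitute your own, or rely on the default
    // credential provider chain instead of setting keys in code)
    sc.hadoopConfiguration.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
    sc.hadoopConfiguration.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

    // Read a text file from the bucket as an RDD (hypothetical bucket/path)
    val lines = sc.textFile("s3a://your-bucket/path/to/file.txt")
    lines.take(5).foreach(println)

    // Or read structured data as a DataFrame
    val df = spark.read.option("header", "true").csv("s3a://your-bucket/path/data.csv")
    df.show(5)
    ```

    Note that with the Hadoop 2.7 line you generally want `s3a://` URLs rather than the older `s3://` or `s3n://` schemes.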
