Spark + S3 - error - java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found

抹茶落季 2020-12-20 14:35

I have a spark ec2 cluster where I am submitting a pyspark program from a Zeppelin notebook. I have loaded the hadoop-aws-2.7.3.jar and aws-java-sdk-1.11.179.jar and place
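For context, registering extra jars with a Spark session is typically done along these lines; the paths below are hypothetical placeholders, not the asker's actual setup:

    from pyspark.sql import SparkSession

    # Hypothetical paths -- replace with wherever the jars were copied on the cluster.
    jars = "/opt/jars/hadoop-aws-2.7.3.jar,/opt/jars/aws-java-sdk-1.11.179.jar"

    spark = (
        SparkSession.builder
        .appName("s3a-setup")
        .config("spark.jars", jars)  # must be set before the session starts
        .getOrCreate()
    )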

4 Answers
  •  独厮守ぢ
    2020-12-20 15:04

    The following worked for me.

    My system config:

    Ubuntu 16.04.6 LTS, Python 3.7.7, OpenJDK 1.8.0_252, spark-2.4.5-bin-hadoop2.7

    1. Configure the PYSPARK_PYTHON path: add the following line to $SPARK_HOME/conf/spark-env.sh

      export PYSPARK_PYTHON=python_env_path/bin/python
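
    To confirm the setting took effect, a quick check from inside the pyspark shell (sys.executable is part of the Python standard library):

      import sys
      print(sys.executable)  # should point at python_env_path/bin/python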

    2. Start pyspark

      pyspark --packages com.amazonaws:aws-java-sdk-pom:1.11.760,org.apache.hadoop:hadoop-aws:2.7.0 --conf spark.hadoop.fs.s3a.endpoint=s3.us-west-2.amazonaws.com

      com.amazonaws:aws-java-sdk-pom:1.11.760 - pick the version that matches your JDK
      org.apache.hadoop:hadoop-aws:2.7.0 - must match your Hadoop version
      s3.us-west-2.amazonaws.com - depends on the region of your S3 bucket
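
    If you would rather configure this from code (e.g. in a notebook) than from the pyspark command line, a rough equivalent is sketched below; note that spark.jars.packages is only honored when it is set before the session (and its JVM) is created:

      from pyspark.sql import SparkSession

      spark = (
          SparkSession.builder
          # same coordinates as the --packages flag above
          .config("spark.jars.packages",
                  "com.amazonaws:aws-java-sdk-pom:1.11.760,"
                  "org.apache.hadoop:hadoop-aws:2.7.0")
          # same endpoint as the --conf flag above
          .config("spark.hadoop.fs.s3a.endpoint", "s3.us-west-2.amazonaws.com")
          .getOrCreate()
      )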

    3. Read data from S3

    df2 = spark.read.parquet("s3a://s3location_file_path")
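
    If the ClassNotFoundException is gone but the read now fails with an access error, credentials can be supplied through the standard S3A keys. A sketch with placeholder values, using Spark's internal _jsc handle (in practice, prefer IAM roles or environment variables):

      # Placeholder credentials -- do not hard-code real keys in notebooks.
      hconf = spark.sparkContext._jsc.hadoopConfiguration()
      hconf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
      hconf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")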
    

    Credits
