I have a spark ec2 cluster where I am submitting a pyspark program from a Zeppelin notebook. I have loaded the hadoop-aws-2.7.3.jar and aws-java-sdk-1.11.179.jar and place
Following worked for me
My system config:

- Ubuntu 16.04.6 LTS
- Python 3.7.7
- OpenJDK 1.8.0_252
- spark-2.4.5-bin-hadoop2.7
1. Configure the PYSPARK_PYTHON path: add the following line to $SPARK_HOME/conf/spark-env.sh (no space after the `=`):

export PYSPARK_PYTHON=python_env_path/bin/python
2. Start pyspark with the required packages:
pyspark --packages com.amazonaws:aws-java-sdk-pom:1.11.760,org.apache.hadoop:hadoop-aws:2.7.0 --conf spark.hadoop.fs.s3a.endpoint=s3.us-west-2.amazonaws.com
- com.amazonaws:aws-java-sdk-pom:1.11.760 — pick the version that matches your JDK version
- org.apache.hadoop:hadoop-aws:2.7.0 — pick the version that matches your Hadoop version
- s3.us-west-2.amazonaws.com — pick the endpoint that matches your S3 bucket's region
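If you prefer to set the same options in code rather than on the pyspark command line (for example when building the session from a script), they can be supplied through the SparkSession builder. This is a sketch, not the answer's exact method: the app name is arbitrary, and the package versions and endpoint are the same example values as above — match them to your JDK, Hadoop build, and bucket region.

```python
from pyspark.sql import SparkSession

# Sketch: equivalent of the pyspark command-line flags, set in code.
# Versions and region below are examples; adjust to your environment.
spark = (
    SparkSession.builder
    .appName("s3a-example")  # arbitrary name for illustration
    .config(
        "spark.jars.packages",
        "com.amazonaws:aws-java-sdk-pom:1.11.760,"
        "org.apache.hadoop:hadoop-aws:2.7.0",
    )
    .config("spark.hadoop.fs.s3a.endpoint", "s3.us-west-2.amazonaws.com")
    .getOrCreate()
)
```

Note that `spark.jars.packages` only takes effect when the JVM starts, so this must run before any other SparkSession has been created in the process.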
3. Read data from S3:
df2=spark.read.parquet("s3a://s3location_file_path")
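If the read fails with an access error, your cluster may not be picking up credentials from the environment or an EC2 instance profile. As a sketch (assuming `spark` is an active SparkSession; the property names are the standard s3a keys, and the key values are placeholders), you can set them explicitly on the Hadoop configuration:

```python
# Sketch: supply AWS credentials explicitly when they are not
# available from the environment or an EC2 instance profile.
# "spark" is assumed to be an existing SparkSession.
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")  # placeholder
hconf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")  # placeholder

df2 = spark.read.parquet("s3a://s3location_file_path")
df2.printSchema()  # quick check that the read succeeded
```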
Credits