Spark can use the Hadoop S3A file system (org.apache.hadoop.fs.s3a.S3AFileSystem). By adding the following to conf/spark-defaults.conf, I can get Spark to write its event logs to an S3 bucket:
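Something like the standard event-log settings (the bucket name here is the same one used for the History Server below; adjust to yours):

spark.eventLog.enabled true
spark.eventLog.dir s3a://spark-logs-test/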
Did some more digging and figured it out. Here's what was wrong:
The JARs necessary for S3A can be added to $SPARK_HOME/jars (as described in SPARK-15965).
The line
spark.history.provider org.apache.hadoop.fs.s3a.S3AFileSystem
in $SPARK_HOME/conf/spark-defaults.conf will cause the following exception:
Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.fs.s3a.S3AFileSystem.<init>(org.apache.spark.SparkConf)
That line can be safely removed, as suggested in this answer: spark.history.provider expects a history provider class, not a file system, and the default (org.apache.spark.deploy.history.FsHistoryProvider) already reads from any Hadoop-supported file system, including S3A.
To summarize:
I added the following JARs to $SPARK_HOME/jars:
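For a Hadoop 2.7.x build, these are hadoop-aws and the AWS SDK version it was compiled against, both shipped under share/hadoop/tools/lib in the Hadoop distribution (the versions below assume Hadoop 2.7.3; adjust to match your build):

hadoop-aws-2.7.3.jar
aws-java-sdk-1.7.4.jar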
and added this line to $SPARK_HOME/conf/spark-defaults.conf:
spark.history.fs.logDirectory s3a://spark-logs-test/
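The History Server also has to authenticate to S3. Assuming plain access keys (rather than instance profiles or the AWS_* environment variables), S3A options can be passed through the same file with the spark.hadoop. prefix:

spark.hadoop.fs.s3a.access.key <your-access-key>
spark.hadoop.fs.s3a.secret.key <your-secret-key>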
You'll need event logging enabled on the applications themselves (spark.eventLog.enabled and spark.eventLog.dir) to get logs into the bucket in the first place, but once the S3 bucket has the logs, this is the only configuration the History Server itself needs.
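Then start the History Server with the stock launch script, which picks up spark-defaults.conf automatically, and browse to port 18080 (the default):

$SPARK_HOME/sbin/start-history-server.sh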