I just set up an AWS EMR cluster (EMR version 5.18 with Spark 2.3.2). I SSH into the master machine and run spark-shell or pyspark and get the following error:
We have also run into this issue and hope some AWS or Spark engineers are reading this. I've narrowed this down to the /etc/spark/conf/log4j.properties file and how the loggers are configured using the ${spark.yarn.app.container.log.dir} system property. That property evaluates to null, so the log file paths now resolve to /stdout and /stderr instead of the desired /mnt/var/log/hadoop-yarn/containers/ location, which is how it worked in EMR < 5.18.0.
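For reference, the problematic configuration looks roughly like the following. This is only a sketch of the pattern described above, not a verbatim copy of the EMR 5.18.0 file; the appender names come from Workaround #2 below.

```
# Sketch only: a DailyRollingFileAppender whose file path is built from a
# property that is only set inside YARN containers. In spark-shell/pyspark
# on the master node the property is null, so the path collapses to /stderr.
log4j.appender.DRFA-stderr=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA-stderr.File=${spark.yarn.app.container.log.dir}/stderr
log4j.appender.DRFA-stderr.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA-stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA-stderr.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c: %m%n
```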
Workaround #1 (not ideal): if you set that property to a static path that the hadoop user can write to, such as /var/log/hadoop-yarn/stderr, things work fine. This probably breaks the history server and an unknown number of other things, but spark-shell and pyspark can start without errors.
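A minimal sketch of that change in /etc/spark/conf/log4j.properties, again assuming the DRFA-* appender names mentioned in the update below:

```
# Workaround #1 sketch: hard-code a writable path instead of the unresolved
# ${spark.yarn.app.container.log.dir} property.
log4j.appender.DRFA-stderr.File=/var/log/hadoop-yarn/stderr
log4j.appender.DRFA-stdout.File=/var/log/hadoop-yarn/stdout
```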
UPDATE Workaround #2 (revert): not sure why I didn't do this earlier, but comparing this to a 5.13 cluster, the DRFA-stderr and DRFA-stdout appenders don't exist there at all. If you comment those sections out, delete them, or simply copy the log4j.properties file from the template, this problem also goes away (again, with unknown impact on the rest of the services). I'm not sure where that section originated; the configs in the upstream master repo don't have those appenders, so it appears to be specific to the AWS distribution.
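In other words, the revert amounts to neutralizing the DRFA-* blocks in /etc/spark/conf/log4j.properties, roughly like this (sketch only; the surrounding lines in the real file will differ):

```
# Workaround #2 sketch: comment out (or delete) the entire DRFA-stderr and
# DRFA-stdout appender sections so the config matches the pre-5.18 layout.
#log4j.appender.DRFA-stderr=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.DRFA-stderr.File=${spark.yarn.app.container.log.dir}/stderr
#log4j.appender.DRFA-stdout=org.apache.log4j.DailyRollingFileAppender
#log4j.appender.DRFA-stdout.File=${spark.yarn.app.container.log.dir}/stdout
```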