pyspark.sql.utils.AnalysisException: u'Path does not exist
Question: I am running a Spark job on Amazon EMR using standard HDFS, not S3, to store my files. I have a Hive table in hdfs://user/hive/warehouse/, but it cannot be found when my Spark job runs. I configured the Spark property spark.sql.warehouse.dir to reflect that HDFS directory, and while the YARN logs do say:

17/03/28 19:54:05 INFO SharedState: Warehouse path is 'hdfs://user/hive/warehouse/'.

later in the logs it says (full log at end of page):

LogType:stdout Log Upload Time:Tue Mar
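One detail worth noticing in the logged warehouse path: in a standard URI, the text between `//` and the next `/` is parsed as the authority (the host), not as part of the path. So in `hdfs://user/hive/warehouse/`, `user` may be treated as a NameNode hostname rather than a directory, whereas `hdfs:///user/hive/warehouse/` keeps the full path. This is only an observation about URI parsing, not necessarily the root cause here; a quick stdlib check illustrates it:

```python
from urllib.parse import urlparse

# Two slashes: "user" becomes the authority (host), not a directory.
uri = urlparse("hdfs://user/hive/warehouse/")
print(uri.netloc)  # 'user'
print(uri.path)    # '/hive/warehouse/'

# Three slashes: empty authority, the whole path is preserved.
uri2 = urlparse("hdfs:///user/hive/warehouse/")
print(uri2.netloc)  # ''
print(uri2.path)    # '/user/hive/warehouse/'
```

If the table really lives under /user/hive/warehouse on the default filesystem, the triple-slash form (or a fully qualified `hdfs://<namenode>:<port>/user/hive/warehouse/`, with the host filled in for your cluster) avoids the ambiguity.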