Spark SQL fails if there is no specified partition path available


Question


I am using the Hive Metastore in EMR. I can query the table manually through HiveQL, but when I use the same table in a Spark job, it fails with Input path does not exist: s3://

Caused by: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: s3://....

I have deleted the partition path above in s3://.., but the table still works in Hive without dropping the partition at the table level. It does not work in PySpark, though.

Here is my full code

from pyspark import SparkContext
from pyspark.sql import SQLContext, HiveContext
from pyspark.sql import SparkSession

sc = SparkContext(appName = "test")
sqlContext = SQLContext(sparkContext=sc)
sqlContext.sql("select count(*) from logan_test.salary_csv").show()
print("done..")

I submitted my job as below to use the Hive catalog tables.

spark-submit test.py --files /usr/lib/hive/conf/hive-site.xml


Answer 1:


I have had a similar error with HDFS, where the Metastore kept a partition for the table but the directory was missing.

Check s3... If the path is missing, or you deleted it, you need to run MSCK REPAIR TABLE from Hive. Sometimes this doesn't work, and you actually do need a DROP PARTITION; both are sketched below.
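A minimal sketch of both repair options, issued through spark.sql() on a Hive-enabled session (they can equally be run from the Hive CLI, as suggested above). The partition spec (year='2017') is only a hypothetical example, since the real partition column isn't shown in the question:

from pyspark.sql import SparkSession

# Hive support is needed so the statements run against the Hive Metastore.
spark = SparkSession.builder \
    .appName("repair_salary_csv") \
    .enableHiveSupport() \
    .getOrCreate()

# Ask the Metastore to re-scan the table location and register partitions found in S3.
spark.sql("MSCK REPAIR TABLE logan_test.salary_csv")

# If the stale partition is still registered, drop it explicitly.
# The partition column and value here are hypothetical -- replace with your own.
spark.sql("ALTER TABLE logan_test.salary_csv DROP IF EXISTS PARTITION (year='2017')")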

That property, spark.sql.hive.verifyPartitionPath, is false by default; when set to true, Spark checks each partition path before reading it and skips paths that no longer exist. You set configuration properties by passing a SparkConf object to SparkContext:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("test").set("spark.sql.hive.verifyPartitionPath", "true")
sc = SparkContext(conf=conf)

Or, the Spark 2 way is to use a SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("test") \
    .config("spark.sql.hive.verifyPartitionPath", "true") \
    .enableHiveSupport() \
    .getOrCreate()
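With a session built this way (a sketch, assuming the Metastore is still reachable via the hive-site.xml shipped with the job), the count query from the question can be re-run unchanged:

# Same query as in the question; with verifyPartitionPath enabled,
# partitions whose S3 paths were deleted are skipped instead of failing the job.
spark.sql("select count(*) from logan_test.salary_csv").show()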


Source: https://stackoverflow.com/questions/47933705/spark-sql-fails-if-there-is-no-specified-partition-path-available
