PySpark job fails when loading multiple files and one is missing [duplicate]


Question


When using PySpark to load multiple JSON files from S3, the Spark job fails with the following error if any of the files is missing:

Caused by: org.apache.hadoop.mapred.InvalidInputException: Input Pattern s3n://example/example/2017-02-18/*.json matches 0 files

This is how I add the paths for the last 5 days to my job in PySpark:

from datetime import date, timedelta

days = 5
x = 0
files = []

# Build one wildcard path per day for each of the last 5 days.
while x < days:
    filedate = (date.today() - timedelta(x)).isoformat()
    path = "s3n://example/example/" + filedate + "/*.json"
    files.append(path)
    x += 1

# textFile() accepts a comma-separated list of paths.
rdd = sc.textFile(",".join(files))
df = sql_context.read.json(rdd, schema)

How can I get PySpark to ignore the missing files and continue with the job?


Answer 1:


Use a function that tries to load each path; if the path matches no files, the load fails and the function returns False.

from py4j.protocol import Py4JJavaError

def path_exist(sc, path):
    """Return True if the path pattern matches at least one readable file."""
    try:
        rdd = sc.textFile(path)
        rdd.take(1)  # Forces evaluation; raises if the pattern matches 0 files.
        return True
    except Py4JJavaError:
        return False

This lets you check whether files are available before adding their paths to your list, without having to shell out to the AWS CLI or S3 commands.

days = 5
x = 0
files = []

while x < days:
    filedate = (date.today() - timedelta(x)).isoformat()
    path = "s3n://example/example/" + filedate + "/*.json"
    # Only keep paths that actually match files.
    if path_exist(sc, path):
        files.append(path)
    else:
        print('Path does not exist, skipping: ' + path)
    x += 1

rdd = sc.textFile(",".join(files))
df = sql_context.read.json(rdd, schema)
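
Note that path_exist() launches a small Spark job (the take(1)) for every path it checks. A lighter-weight alternative, not part of the original answer, is to ask the Hadoop FileSystem API directly whether the glob matches anything. This is a minimal sketch that relies on PySpark's internal sc._jsc and sc._jvm gateway objects, which are implementation details and may change between Spark versions:

def path_exist_fs(sc, path):
    """Sketch: check a glob via the Hadoop FileSystem API (no Spark job).

    Relies on PySpark internals (sc._jsc, sc._jvm), which are not a
    stable public API.
    """
    jvm = sc._jvm
    conf = sc._jsc.hadoopConfiguration()
    fs = jvm.org.apache.hadoop.fs.FileSystem.get(jvm.java.net.URI(path), conf)
    # globStatus() expands wildcards such as /*.json; it returns None
    # (or an empty array) when nothing matches.
    matches = fs.globStatus(jvm.org.apache.hadoop.fs.Path(path))
    return matches is not None and len(matches) > 0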

I found this solution at http://www.learn4master.com/big-data/pyspark/pyspark-check-if-file-exists
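
As a side note, on Spark 2.0 and later DataFrameReader.json accepts a list of paths, so the intermediate RDD is not needed. A minimal sketch, assuming a SparkSession named spark and the same filtered files list:

# Sketch, assuming Spark 2.0+ and a SparkSession named `spark`.
df = spark.read.json(files, schema=schema)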



Source: https://stackoverflow.com/questions/42340407/pyspark-job-fails-when-loading-multiple-files-and-one-is-missing
