Can I read multiple files into a Spark Dataframe from S3, passing over nonexistent ones?

忘掉有多难 2020-12-08 12:46

I would like to read multiple parquet files into a dataframe from S3. Currently, I'm using the following method to do this:

files = ['s3a://dev/2017/01/03/
3 Answers
  •  心在旅途
    2020-12-08 13:14

    Yes, it's possible if you change the method of specifying the input to a Hadoop glob pattern, for example:

    files = 's3a://dev/2017/01/{02,03}/data.parquet'
    df = session.read.parquet(files)
    

    You can read more about glob patterns in the Hadoop javadoc.
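
    If you need to build that glob for an arbitrary range of days, a small helper works. The sketch below is my own addition, not part of the original answer; the base path and date range are hypothetical:

    from datetime import date, timedelta

    # Hypothetical date range; adjust to your own.
    start, end = date(2017, 1, 2), date(2017, 1, 3)
    days = [start + timedelta(days=i) for i in range((end - start).days + 1)]

    # Builds e.g. 's3a://dev/2017/01/{02,03}/data.parquet' for days within one month;
    # a range spanning several months would need one glob per month.
    glob = 's3a://dev/{y:04d}/{m:02d}/{{{d}}}/data.parquet'.format(
        y=start.year, m=start.month,
        d=','.join('{:02d}'.format(day.day) for day in days))

    df = session.read.parquet(glob)

    Days that do not exist under the prefix simply fail to match the glob, so they are skipped rather than raising an error.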

    But in my opinion this isn't an elegant way of working with data partitioned by time (by day, in your case). If you are able to rename the directories like this:

    • s3a://dev/2017/01/03/data.parquet --> s3a://dev/day=2017-01-03/data.parquet
    • s3a://dev/2017/01/02/data.parquet --> s3a://dev/day=2017-01-02/data.parquet

    then you can take advantage of Spark's partition discovery and read the data with:

    from pyspark.sql.functions import col

    session.read.parquet('s3a://dev/') \
        .where(col('day').between('2017-01-02', '2017-01-03'))
    

    This way empty/non-existing directories are omitted as well. An additional column day will appear in your dataframe (a string in Spark < 2.1.0 and a datetime in Spark >= 2.1.0), so you will know which directory each record came from.
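
    If you cannot rename the directories and still want to pass an explicit list of files while skipping the ones that don't exist, one option is to check each path through the Hadoop FileSystem API before reading. The sketch below is my own suggestion, not from the answer; it relies on Spark's non-public _jsc/_jvm handles (a common but unofficial workaround), and the paths are hypothetical:

    from pyspark.sql import SparkSession

    session = SparkSession.builder.getOrCreate()
    sc = session.sparkContext

    candidates = [
        's3a://dev/2017/01/02/data.parquet',  # hypothetical paths
        's3a://dev/2017/01/03/data.parquet',
    ]

    hadoop_conf = sc._jsc.hadoopConfiguration()
    Path = sc._jvm.org.apache.hadoop.fs.Path

    def path_exists(p):
        # Each path resolves its own FileSystem (s3a here) and is checked for existence.
        return Path(p).getFileSystem(hadoop_conf).exists(Path(p))

    # Keep only the paths that actually exist, then read them all at once.
    existing = [p for p in candidates if path_exists(p)]
    df = session.read.parquet(*existing)  # read.parquet accepts multiple paths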
