How to read a list of parquet files from S3 as a pandas dataframe using pyarrow?

小蘑菇 2020-12-04 09:15

I have a hacky way of achieving this using boto3 (1.4.4), pyarrow (0.4.1) and pandas (0.20.3).

First, I can read a single parquet file…
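The truncated snippet presumably read one file at a time; a minimal sketch of such a single-file read with pyarrow's parquet module (the file name example.parquet is a placeholder, not from the original post):

    import pyarrow.parquet as pq
    
    # Read one parquet file into an Arrow table, then convert to pandas
    table = pq.read_table('example.parquet')  # placeholder local path
    df = table.to_pandas()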

7 Answers
  •  情深已故
    2020-12-04 09:28

    Thanks! Your question actually tells me a lot. This is how I do it now with pandas (0.21.1), which calls pyarrow under the hood, and boto3 (1.3.1).

    import boto3
    import io
    import pandas as pd
    
    # Read a single parquet file from S3 into a pandas DataFrame
    def pd_read_s3_parquet(key, bucket, s3_client=None, **kwargs):
        if s3_client is None:
            s3_client = boto3.client('s3')
        obj = s3_client.get_object(Bucket=bucket, Key=key)
        return pd.read_parquet(io.BytesIO(obj['Body'].read()), **kwargs)
    
    # Read multiple parquet files from an S3 folder (e.g. one written by
    # Spark) and concatenate them into a single DataFrame
    def pd_read_s3_multiple_parquets(filepath, bucket, s3=None,
                                     s3_client=None, verbose=False, **kwargs):
        if not filepath.endswith('/'):
            filepath = filepath + '/'  # ensure the prefix ends with '/'
        if s3_client is None:
            s3_client = boto3.client('s3')
        if s3 is None:
            s3 = boto3.resource('s3')
        s3_keys = [item.key for item in s3.Bucket(bucket).objects.filter(Prefix=filepath)
                   if item.key.endswith('.parquet')]
        if not s3_keys:
            print('No parquet found in', bucket, filepath)
            return pd.DataFrame()  # pd.concat() would fail on an empty list
        if verbose:
            print('Load parquets:')
            for p in s3_keys:
                print(p)
        dfs = [pd_read_s3_parquet(key, bucket=bucket, s3_client=s3_client, **kwargs)
               for key in s3_keys]
        return pd.concat(dfs, ignore_index=True)
    

    Then you can read all the parquet files under an S3 folder with

    df = pd_read_s3_multiple_parquets('path/to/folder', 'my_bucket')
    
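    Extra keyword arguments are forwarded through **kwargs to pd.read_parquet, so you can, for instance, load only some columns (the column name col_a below is hypothetical):

    df = pd_read_s3_multiple_parquets('path/to/folder', 'my_bucket',
                                      columns=['col_a'], verbose=True)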

    (One can probably simplify this code a lot.)
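    For instance, on newer pandas/pyarrow versions with the s3fs package installed, a single pd.read_parquet call can read the whole folder; a minimal sketch, assuming s3fs is available and using a placeholder bucket/path:

    import pandas as pd
    
    # pandas hands the 's3://' URL to s3fs and lets pyarrow read the
    # folder as a dataset, returning one concatenated DataFrame
    df = pd.read_parquet('s3://my_bucket/path/to/folder/')  # placeholder path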
