How to read a list of parquet files from S3 as a pandas dataframe using pyarrow?

小蘑菇 2020-12-04 09:15

I have a hacky way of achieving this using boto3 (1.4.4), pyarrow (0.4.1) and pandas (0.20.3).

First, I can read a single parquet file…
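
A minimal sketch of what such a boto3 + pyarrow approach to a single file can look like (the bucket, key, and variable names below are illustrative, not taken from the original post):

    import io
    
    import boto3
    import pandas as pd
    import pyarrow.parquet as pq
    
    s3 = boto3.client('s3')
    # 'my-bucket' and the key are placeholders for illustration.
    obj = s3.get_object(Bucket='my-bucket', Key='path/to/file.parquet')
    # Read the object bytes into an Arrow table, then convert to pandas.
    table = pq.read_table(io.BytesIO(obj['Body'].read()))
    df = table.to_pandas()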

7 Answers
  • 2020-12-04 09:28

    Thanks! Your question actually tells me a lot. This is how I do it now with pandas (0.21.1), which calls pyarrow under the hood, and boto3 (1.3.1).

    import boto3
    import io
    import pandas as pd
    
    # Read single parquet file from S3
    def pd_read_s3_parquet(key, bucket, s3_client=None, **args):
        if s3_client is None:
            s3_client = boto3.client('s3')
        obj = s3_client.get_object(Bucket=bucket, Key=key)
        return pd.read_parquet(io.BytesIO(obj['Body'].read()), **args)
    
    # Read multiple parquets from a folder on S3 generated by spark
    def pd_read_s3_multiple_parquets(filepath, bucket, s3=None, 
                                     s3_client=None, verbose=False, **args):
        if not filepath.endswith('/'):
            filepath = filepath + '/'  # Add '/' to the end
        if s3_client is None:
            s3_client = boto3.client('s3')
        if s3 is None:
            s3 = boto3.resource('s3')
        s3_keys = [item.key for item in s3.Bucket(bucket).objects.filter(Prefix=filepath)
                   if item.key.endswith('.parquet')]
        if not s3_keys:
            print('No parquet found in', bucket, filepath)
        elif verbose:
            print('Load parquets:')
            for p in s3_keys: 
                print(p)
        dfs = [pd_read_s3_parquet(key, bucket=bucket, s3_client=s3_client, **args) 
               for key in s3_keys]
        return pd.concat(dfs, ignore_index=True)
    

    Then you can read multiple parquet files under a folder on S3 with:

    df = pd_read_s3_multiple_parquets('path/to/folder', 'my_bucket')
    

    (This code could probably be simplified quite a bit.)
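
    Because any extra keyword arguments are passed straight through to pd.read_parquet, you can, for example, load only a subset of columns; a quick sketch (the column names here are made up for illustration):

    # 'user_id' and 'event_time' are placeholder column names.
    df = pd_read_s3_multiple_parquets('path/to/folder', 'my_bucket',
                                      columns=['user_id', 'event_time'])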

  • 2020-12-04 09:31

    You should use the s3fs module as proposed by yjk21. However, the result of calling ParquetDataset is a pyarrow.parquet.ParquetDataset object rather than a DataFrame. To get a pandas DataFrame you will want to apply .read_pandas().to_pandas() to it:

    import pyarrow.parquet as pq
    import s3fs
    s3 = s3fs.S3FileSystem()
    
    pandas_dataframe = pq.ParquetDataset('s3://your-bucket/', filesystem=s3).read_pandas().to_pandas()
    
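    If you only need part of the data, you can push a column selection down to pyarrow so that only those columns are fetched from S3; a small sketch (the bucket path and column names are placeholders):

    import pyarrow.parquet as pq
    import s3fs
    
    s3 = s3fs.S3FileSystem()
    dataset = pq.ParquetDataset('s3://your-bucket/path/to/folder/', filesystem=s3)
    # Only the listed columns are read; names are illustrative.
    df = dataset.read_pandas(columns=['user_id', 'event_time']).to_pandas()
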
  • 2020-12-04 09:34

    If you are open to also using AWS Data Wrangler:

    import awswrangler as wr
    
    df = wr.s3.read_parquet(path="s3://...")
    
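    To read a whole folder of parquet files (e.g. a directory written by Spark) as a single DataFrame, awswrangler also accepts a prefix; a short sketch with a placeholder path:

    import awswrangler as wr
    
    # dataset=True treats the prefix as a multi-file dataset and
    # concatenates every parquet file found under it.
    df = wr.s3.read_parquet(path="s3://my-bucket/path/to/folder/", dataset=True)
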
  • 2020-12-04 09:38

    Probably the easiest way to read parquet data in the cloud into dataframes is to use dask.dataframe, like this:

    import dask.dataframe as dd
    df = dd.read_parquet('s3://bucket/path/to/data-*.parq')
    

    dask.dataframe can read from Google Cloud Storage, Amazon S3, Hadoop file system and more!
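
    Note that dd.read_parquet returns a lazy dask DataFrame; if you want an ordinary in-memory pandas DataFrame, call .compute() on it (the path below is illustrative):

    import dask.dataframe as dd
    
    ddf = dd.read_parquet('s3://bucket/path/to/data-*.parq')
    df = ddf.compute()  # materialize the lazy dask DataFrame as a pandas DataFrame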

  • 2020-12-04 09:38

    You can use s3fs from the dask project, which implements a filesystem interface for S3. Then you can pass it via the filesystem argument of ParquetDataset like so:

    import pyarrow.parquet as pq
    import s3fs
    
    s3 = s3fs.S3FileSystem()
    dataset = pq.ParquetDataset('s3://dsn/to/my/bucket', filesystem=s3)
    
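    Continuing from the snippet above, the ParquetDataset is not itself a DataFrame; a minimal follow-up to materialize it as pandas would be:

    df = dataset.read().to_pandas()  # read all row groups into an Arrow table, then convert
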
  • 2020-12-04 09:40

    Provided you have the right packages installed

    $ pip install pandas==1.1.0 pyarrow==1.0.0 s3fs==0.4.2
    

    and your AWS shared config and credentials files configured appropriately

    you can use pandas right away:

    import pandas as pd
    
    df = pd.read_parquet("s3://bucket/key.parquet")
    

    If you have multiple AWS profiles, you may also need to set

    $ export AWS_DEFAULT_PROFILE=profile_under_which_the_bucket_is_accessible
    

    so you can access your bucket.
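
    Since the question is about a list of parquet files, note that with a reasonably recent pandas/pyarrow/s3fs stack you can also point read_parquet at a whole prefix (for example a folder written by Spark) and get a single concatenated DataFrame; a sketch with a placeholder path:

    import pandas as pd
    
    # Reads and concatenates every parquet file under the prefix
    # (requires pyarrow and s3fs); the path is a placeholder.
    df = pd.read_parquet("s3://bucket/path/to/folder/")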
