PyArrow read/write from S3

Question


Is it possible to read Parquet files from one folder in S3 and write them to another folder, using pyarrow and without converting to pandas?

Here is my code:

import pyarrow.parquet as pq
import pyarrow as pa
import s3fs

s3 = s3fs.S3FileSystem()

bucket = 'demo-s3'

# Read the old dataset, round-trip it through pandas, and write it back out
# as a new dataset under s3://demo-s3/new.
df = pq.ParquetDataset('s3://{0}/old'.format(bucket), filesystem=s3).read(nthreads=4).to_pandas()
table = pa.Table.from_pandas(df)
pq.write_to_dataset(table, 's3://{0}/new'.format(bucket), filesystem=s3, use_dictionary=True, compression='snappy')

Answer 1:


If you do not wish to copy the files directly, you can indeed avoid the round trip through pandas:

table = pq.ParquetDataset('s3://{0}/old'.format(bucket),
    filesystem=s3).read(nthreads=4)
pq.write_to_dataset(table, 's3://{0}/new'.format(bucket), 
    filesystem=s3, use_dictionary=True, compression='snappy')
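
A note on newer pyarrow releases (roughly 1.0 and later): ParquetDataset.read() no longer accepts nthreads; the equivalent keyword is use_threads. A minimal sketch of the same pandas-free copy with that signature, reusing the bucket and prefixes assumed in the question:

import pyarrow.parquet as pq
import s3fs

s3 = s3fs.S3FileSystem()
bucket = 'demo-s3'

# Read the source dataset into an Arrow Table (multi-threaded by default) ...
table = pq.ParquetDataset('s3://{0}/old'.format(bucket),
    filesystem=s3).read(use_threads=True)
# ... and write it straight back out as a new dataset, never touching pandas.
pq.write_to_dataset(table, 's3://{0}/new'.format(bucket),
    filesystem=s3, use_dictionary=True, compression='snappy')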



Answer 2:


Why not just copy directly (S3 -> S3) and save memory and I/O?

import awswrangler as wr

SOURCE_PATH = "s3://..."
TARGET_PATH = "s3://..."

wr.s3.copy_objects(
    source_path=SOURCE_PATH,
    target_path=TARGET_PATH
)
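
Note that copy_objects issues S3-side copy requests, so the Parquet data should not need to be downloaded or re-encoded locally; it copies every object under the source prefix, Parquet or otherwise.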

Reference



Source: https://stackoverflow.com/questions/49513152/pyarrow-read-write-from-s3
