Read a parquet file from HDFS using PyArrow

Submitted by 情到浓时终转凉 on 2019-12-09 20:42:51

Question


I know I can connect to an HDFS cluster via pyarrow using pyarrow.hdfs.connect()

I also know I can read a parquet file using pyarrow.parquet's read_table()

However, read_table() accepts a filepath, whereas hdfs.connect() gives me a HadoopFileSystem instance.

Is it somehow possible to use just pyarrow (with libhdfs3 installed) to get hold of a parquet file/folder residing in an HDFS cluster? Ultimately I want to call to_pydict() on the result, so I can pass the data along.


Answer 1:


Try

fs = pa.hdfs.connect(...)
fs.read_parquet('/path/to/hdfs-file', **other_options)

or

import pyarrow.parquet as pq
with fs.open(path) as f:
    pq.read_table(f, **read_options)

I opened https://issues.apache.org/jira/browse/ARROW-1848 about adding more explicit documentation for this.




Answer 2:


I tried the same thing via the Pydoop library with engine='pyarrow', and it worked perfectly for me. Here is the generalized method.

!pip install pydoop pyarrow
import logging

import pandas as pd
import pydoop.hdfs as hd

logger = logging.getLogger(__name__)

# Read a parquet file via Pydoop and return a DataFrame

def readParquetFilesPydoop(path):
    with hd.open(path) as f:
        df = pd.read_parquet(f, engine='pyarrow')
        logger.info('file: ' + path + ' : ' + str(df.shape))
        return df


Source: https://stackoverflow.com/questions/47443151/read-a-parquet-files-from-hdfs-using-pyarrow
