Question
I know I can connect to an HDFS cluster via pyarrow using pyarrow.hdfs.connect(). I also know I can read a parquet file using pyarrow.parquet's read_table(). However, read_table() accepts a filepath, whereas hdfs.connect() gives me a HadoopFileSystem instance.
Is it somehow possible to use just pyarrow (with libhdfs3 installed) to get hold of a parquet file/folder residing in an HDFS cluster? What I want to get to is the to_pydict() function, so that I can pass the data along.
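To make the mismatch concrete, here is roughly what I have (the host, port, and path are placeholders):
import pyarrow as pa
import pyarrow.parquet as pq

fs = pa.hdfs.connect('namenode', 8020)   # a HadoopFileSystem instance, not a path
# pq.read_table('/path/on/hdfs/file.parquet')  # expects a local path or
#                                              # file-like object, so how do
#                                              # the two connect?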
Answer 1:
Try
import pyarrow as pa

fs = pa.hdfs.connect(...)                                # connect to the cluster
fs.read_parquet('/path/to/hdfs-file', **other_options)   # returns a pyarrow.Table
or
import pyarrow.parquet as pq

with fs.open(path) as f:
    pq.read_table(f, **read_options)   # read from the open HDFS file handle
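Either way the result is a pyarrow.Table, which answers the to_pydict() part of the question directly. A minimal end-to-end sketch, assuming a hypothetical host, port, and file path:
import pyarrow as pa

fs = pa.hdfs.connect('namenode', 8020)            # hypothetical host/port
table = fs.read_parquet('/data/example.parquet')  # hypothetical path; returns a pyarrow.Table
data = table.to_pydict()                          # dict of column name -> list of values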
I opened https://issues.apache.org/jira/browse/ARROW-1848 about adding more explicit documentation on this.
Answer 2:
I tried the same via the Pydoop library with engine='pyarrow', and it worked perfectly for me. Here is the generalized method.
!pip install pydoop pyarrow

import logging
import pandas as pd
import pydoop.hdfs as hd

logger = logging.getLogger(__name__)

# Read a parquet file from HDFS via Pydoop and return a DataFrame
def readParquetFilesPydoop(path):
    with hd.open(path) as f:
        df = pd.read_parquet(f, engine='pyarrow')
    logger.info('file: ' + path + ' : ' + str(df.shape))
    return df
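A hypothetical call, assuming a path such as /data/example.parquet; if the to_pydict() output from the question is still the goal, the DataFrame can be converted back through pyarrow:
import pyarrow as pa

df = readParquetFilesPydoop('/data/example.parquet')   # hypothetical path
data = pa.Table.from_pandas(df).to_pydict()            # dict of column name -> list of values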
Source: https://stackoverflow.com/questions/47443151/read-a-parquet-files-from-hdfs-using-pyarrow