Python: save pandas data frame to parquet file

Submitted by 。_饼干妹妹 on 2020-03-13 05:55:08

Question


Is it possible to save a pandas data frame directly to a parquet file? If not, what would be the suggested process?

The aim is to be able to send the parquet file to another team, so that they can use Scala code to read/open it. Thanks!


Answer 1:


Pandas has a core function to_parquet(). Just write the dataframe to parquet format like this:

df.to_parquet('myfile.parquet')

You still need to install a parquet library such as fastparquet. If you have more than one parquet library installed, you also need to specify which engine you want pandas to use, otherwise it will use the first one it finds (as described in the documentation). For example:

df.to_parquet('myfile.parquet', engine='fastparquet')
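To verify the file, it can be read back with pandas' read_parquet. A minimal round-trip sketch (the sample data here is just an illustration):

import pandas as pd

# Write a small DataFrame to Parquet, then read it back.
df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})
df.to_parquet('myfile.parquet', engine='fastparquet')

df_back = pd.read_parquet('myfile.parquet', engine='fastparquet')
print(df_back.equals(df))  # True if the round trip preserved the data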



Answer 2:


There is a relatively early implementation of a package called fastparquet - it could be a good fit for what you need.

https://github.com/dask/fastparquet

conda install -c conda-forge fastparquet

or

pip install fastparquet

from fastparquet import write 
write('outfile.parq', df)

or, if you want to use file options such as row grouping or compression:

write('outfile2.parq', df, row_group_offsets=[0, 10000, 20000], compression='GZIP', file_scheme='hive')
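Reading the file back with fastparquet goes through its ParquetFile class; a minimal sketch:

from fastparquet import ParquetFile

# Open the Parquet file and materialize it as a pandas DataFrame.
pf = ParquetFile('outfile.parq')
df = pf.to_pandas()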



Answer 3:


pyarrow has support for storing pandas dataframes:

import pyarrow as pa
import pyarrow.parquet as pq

# Convert the DataFrame to an Arrow table, then write it to Parquet.
table = pa.Table.from_pandas(dataset)
pq.write_table(table, 'dataset.parquet')
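The reverse direction is pyarrow.parquet.read_table; a short sketch using the file written above:

# Read the Parquet file back into an Arrow table, then into pandas.
df = pq.read_table('dataset.parquet').to_pandas()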



Answer 4:


Yes, it is possible. Here is example code:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})
table = pa.Table.from_pandas(df, preserve_index=True)
pq.write_table(table, 'output.parquet')
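preserve_index=True serializes the DataFrame's index into the file. If the index carries no information, it can be dropped instead; a minimal variation (the output filename is just illustrative):

# Drop the pandas index rather than serializing it.
table = pa.Table.from_pandas(df, preserve_index=False)
pq.write_table(table, 'output_noindex.parquet')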



Answer 5:


This is the approach that worked for me, similar to the ones above, but I also chose to specify the compression type:

import pandas as pd 

Set up a test dataframe:

df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})

Import the required parquet library (make sure it has been installed; I used $ conda install fastparquet):

import fastparquet

Convert the data frame to parquet and save it to the current directory:

df.to_parquet('df.parquet.gzip', compression='gzip')

Read the parquet file in the current directory back into a pandas data frame:

pd.read_parquet('df.parquet.gzip')

Output:

   col1  col2
0     1     3
1     2     4



Answer 6:


Yes, pandas supports saving the dataframe in parquet format.

A simple method to write a pandas dataframe to parquet:

Assuming df is the pandas dataframe, we need to import the following libraries.

import pyarrow as pa
import pyarrow.parquet as pq

First, convert the dataframe df into a pyarrow table.

# Convert DataFrame to Apache Arrow Table
table = pa.Table.from_pandas(df)

Second, write the table into a parquet file, say file_name.parquet.

# Write the table to Parquet (Snappy compression by default)
pq.write_table(table, 'file_name.parquet')

NOTE: parquet files can be further compressed while writing. The following are popular compression formats:

  • Snappy (default, requires no argument)
  • gzip
  • brotli

Parquet with Snappy compression

pq.write_table(table, 'file_name.parquet')

Parquet with GZIP compression

pq.write_table(table, 'file_name.parquet', compression='GZIP')

Parquet with Brotli compression

pq.write_table(table, 'file_name.parquet', compression='BROTLI')

A comparison of the file sizes achieved with the different parquet compression formats is shown in the reference below.
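To reproduce such a comparison locally, one can write the same table with each codec and inspect the resulting file sizes; a minimal sketch (the sizes will vary with the data):

import os

# Write the same Arrow table with each codec and report the file sizes.
for codec in ['SNAPPY', 'GZIP', 'BROTLI']:
    path = 'file_name_' + codec.lower() + '.parquet'
    pq.write_table(table, path, compression=codec)
    print(codec, os.path.getsize(path), 'bytes')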

Reference: https://tech.jda.com/efficient-dataframe-storage-with-apache-parquet/



Source: https://stackoverflow.com/questions/41066582/python-save-pandas-data-frame-to-parquet-file
