How to view Apache Parquet file in Windows?

渐次进展 2020-12-24 03:38

I couldn't find any plain-English explanations of Apache Parquet files, such as:

  1. What are they?
  2. Do I need Hadoop or HDFS to view/create/store them?
5 Answers
  • 2020-12-24 03:50

    What is Apache Parquet?

    Apache Parquet is a binary file format that stores data in a columnar fashion. Data inside a Parquet file is similar to an RDBMS-style table with columns and rows. But instead of accessing the data one row at a time, you typically access it one column at a time.
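
    For instance, a minimal sketch with pandas (file and column names hypothetical) shows the column-at-a-time access pattern:

    import pandas as pd

    # Read only the 'age' column from a (hypothetical) Parquet file.
    # A columnar reader can skip every other column on disk entirely.
    ages = pd.read_parquet('people.parquet', columns=['age'])
    print(ages.head())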

    Apache Parquet is one of the modern big data storage formats. It has several advantages, some of which are:

    • Columnar storage: efficient data retrieval, efficient compression, etc.
    • Metadata is at the end of the file: this allows Parquet files to be generated from a stream of data (common in big data scenarios)
    • Supported by all Apache big data products

    Do I need Hadoop or HDFS?

    No. Parquet files can be stored in any file system, not just HDFS. As mentioned above, it is a file format. So it's just like any other file: it has a name and a .parquet extension. What will usually happen in big data environments, though, is that one dataset will be split (or partitioned) into multiple parquet files for even more efficiency.
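
    As an illustration, here is a minimal pandas sketch (paths and column names hypothetical) of a dataset being split into several Parquet files by partitioning, then read back as one table:

    import pandas as pd

    df = pd.DataFrame({'year': [2019, 2019, 2020], 'value': [1, 2, 3]})

    # partition_cols splits the dataset into one Parquet file per
    # partition value, under subdirectories year=2019/, year=2020/, ...
    df.to_parquet('dataset_dir', partition_cols=['year'])

    # The whole directory of files reads back as a single dataframe.
    print(pd.read_parquet('dataset_dir'))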

    All Apache big data products support Parquet files by default. That is why it might seem like it can only exist in the Apache ecosystem.

    How can I create/read Parquet Files?

    As mentioned, all current Apache big data products such as Hadoop, Hive, Spark, etc. support Parquet files by default.

    So it's possible to leverage these systems to generate or read Parquet data. But this is far from practical. Imagine that in order to read or create a CSV file you had to install Hadoop/HDFS + Hive and configure them. Luckily there are other solutions.

    To create your own parquet files (a Python sketch follows this list):

    • In Java please see my following post: Generate Parquet File using Java
    • In .NET please see the following library: parquet-dotnet
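
    In Python, pyarrow can also write a Parquet file directly, with no Hadoop involved; a minimal sketch (table contents hypothetical):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Build an in-memory Arrow table and write it out as a Parquet file.
    table = pa.table({'id': [1, 2, 3], 'name': ['a', 'b', 'c']})
    pq.write_table(table, 'example.parquet')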

    To view parquet file contents (a Python inspection sketch follows below):

    • Please try the following Windows utility: https://github.com/mukunku/ParquetViewer
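
    If you prefer to stay in Python, pyarrow can also inspect a file's schema and metadata without loading the data; a minimal sketch (file name hypothetical):

    import pyarrow.parquet as pq

    pf = pq.ParquetFile('example.parquet')
    print(pf.schema)    # column names and types
    print(pf.metadata)  # row groups, row count, created-by, etc.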

    Are there other methods?

    Possibly. But not many exist, and they mostly aren't well documented. This is due to Parquet being a very complicated file format (I could not even find a formal definition). The ones I've listed are the only ones I'm aware of as I'm writing this response.

  • 2020-12-24 04:04

    In addition to @sal's extensive answer there is one further question I encountered in this context:

    How can I access the data in a parquet file with SQL?

    As we are still in the Windows context here, I don't know of many ways to do that. The best results were achieved by using Spark as the SQL engine, with Python as the interface to Spark. However, I assume the Zeppelin environment works as well, but I have not tried it out myself yet.

    There is a very well done guide by Michael Garlanyk to walk one through the installation of the Spark/Python combination.

    Once set up, I'm able to interact with parquets through:

    from os import walk
    from pyspark.sql import SparkSession
    
    # A SparkSession is all that's needed; it replaces the older
    # SparkContext/SQLContext combination.
    spark = SparkSession.builder.getOrCreate()
    
    parquetdir = r'C:\PATH\TO\YOUR\PARQUET\FILES'
    
    # Getting all parquet files in a dir.
    # There might be easier ways to access single parquets, but I had nested dirs
    dirpath, dirnames, filenames = next(walk(parquetdir), (None, [], []))
    
    # For each parquet file, i.e. table in our database, Spark creates a temp view
    # with the table name equal to the parquet filename (minus its extension)
    print('New tables available: \n')
    
    for parquet in filenames:
        print(parquet[:-8])  # strip the 8-character '.parquet' suffix
        spark.read.parquet(parquetdir + '\\' + parquet).createOrReplaceTempView(parquet[:-8])
    

    Once your parquets are loaded this way, you can interact with them through the PySpark API, e.g. via:

    my_test_query = spark.sql("""
    select
      field1,
      field2
    from parquetfilename1
    where
      field1 = 'something'
    """)
    
    my_test_query.show()
    
  • 2020-12-24 04:04

    On a Mac, if we want to view the content, we can install parquet-tools:

    • brew install parquet-tools
    • parquet-tools head filename

    We can always read the parquet file into a dataframe in Spark and see the content.
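
    For example, a minimal PySpark sketch (file name hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    spark.read.parquet('filename.parquet').show()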

    Parquet is a columnar format, more suitable for analytical environments: write once, read many. Parquet files are a better fit for read-intensive applications.

  • 2020-12-24 04:08

    Maybe too late for this thread, but here is a complement for anyone who wants to view Parquet files with a desktop application running on macOS or Linux.
    There is a desktop application to view Parquet and other binary-format data like ORC and AVRO. It's a pure Java application, so it can run on Linux, Mac, and Windows. Please check Bigdata File Viewer for details.

    It supports complex data types like array, map, etc.

  • 2020-12-24 04:09

    This is possible now through Apache Arrow, which helps to simplify communication/transfer between different data formats; see my answer here or the official docs for Python.

    Basically, this allows you to quickly read/write parquet files in a pandas-DataFrame-like fashion, giving you the benefits of using notebooks to view and handle such files as if they were regular CSV files.

    EDIT:

    As an example, given the latest version of Pandas, make sure pyarrow is installed:
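
    pip install pyarrow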

    Then you can simply use pandas to manipulate parquet files:

    import pandas as pd
    
    # read
    df = pd.read_parquet('myfile.parquet')
    
    # write
    df.to_parquet('my_newfile.parquet')
    
    df.head()
    