Hive doesn't read partitioned parquet files generated by Spark

离开以前 2020-12-14 23:28

I'm having a problem reading partitioned parquet files generated by Spark in Hive. I'm able to create the external table in Hive, but when I try to select a few lines, hive

2 Answers
  •  天涯浪人
    2020-12-15 00:09

    Even though this question was answered already, the following point may also help users who are still unable to solve the issue with MSCK REPAIR TABLE table_name; alone.

    I have an HDFS directory that is partitioned as below:

    eg: my_file.pq/column_5=test/column_6=5
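A directory layout like the one above encodes the partition columns directly in the path, one `column=value` segment per level. As a minimal sketch (function and path names are illustrative, not from the original post), this is how those pairs can be recovered from such a path:

```python
from pathlib import PurePosixPath

def parse_partitions(path):
    """Extract Hive-style column=value partition pairs from a path."""
    parts = {}
    for seg in PurePosixPath(path).parts:
        if "=" in seg:
            key, value = seg.split("=", 1)
            parts[key] = value
    return parts

print(parse_partitions("my_file.pq/column_5=test/column_6=5"))
# {'column_5': 'test', 'column_6': '5'}
```

Note that the values come back as strings; Hive casts them to the declared partition column types (here, column_6 is declared as int).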

    I created a hive table with partitions

    eg:

    CREATE EXTERNAL TABLE myschema.my_table(
    `column_1` int,
    `column_2` string,
    `column_3` string,
    `column_4` string
    )
    PARTITIONED BY (`column_5` string, `column_6` int) STORED AS PARQUET
    LOCATION
      'hdfs://u/users/iamr/my_file.pq'
    

    After this, I repaired the schema partitions using the following command

    MSCK REPAIR TABLE myschema.my_table;

    After this, it started working for me.
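MSCK REPAIR TABLE scans the table's location and registers every `column=value` directory it finds as a partition in the metastore. When repairing isn't an option, the equivalent ALTER TABLE ... ADD PARTITION statements can be issued per directory. Below is a hypothetical helper (names are mine, not the poster's) that generates that DDL from a list of partition directories:

```python
def add_partition_ddl(table, partition_dirs):
    """Generate ALTER TABLE ... ADD PARTITION statements for
    Hive-style partition directories like 'column_5=test/column_6=5'."""
    stmts = []
    for d in partition_dirs:
        pairs = [seg.split("=", 1) for seg in d.split("/") if "=" in seg]
        spec = ", ".join(f"`{k}`='{v}'" for k, v in pairs)
        stmts.append(
            f"ALTER TABLE {table} ADD IF NOT EXISTS PARTITION ({spec});"
        )
    return stmts

for stmt in add_partition_ddl("myschema.my_table",
                              ["column_5=test/column_6=5"]):
    print(stmt)
```

IF NOT EXISTS makes the statements safe to re-run, which matters when the same directory listing is replayed after new data arrives.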

    Another thing I noticed: while writing Parquet files from Spark, name the columns in lower case, otherwise Hive may not be able to map them. After renaming the columns in the Parquet file, it started working for me.

    for eg: my_file.pq/COLUMN_5=test/COLUMN_6=5 didn't work for me

    but my_file.pq/column_5=test/column_6=5 worked
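The mismatch arises because Hive stores column names lowercased in the metastore, while HDFS paths are case-sensitive, so a directory named COLUMN_5=test never matches the declared partition column column_5. A small sketch of that matching rule (the helper is illustrative, not Hive's actual code):

```python
def hive_partition_match(declared_column, directory_segment):
    """Hive lowercases declared column names, but HDFS path
    comparison is case-sensitive, so COLUMN_5=... won't match."""
    name = directory_segment.split("=", 1)[0]
    return name == declared_column.lower()

print(hive_partition_match("column_5", "COLUMN_5=test"))  # False
print(hive_partition_match("column_5", "column_5=test"))  # True
```

This is why renaming the partition directories (or the columns before writing from Spark) to lower case fixes the empty-select symptom.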
