DataFrame.write.parquet - Parquet file cannot be read by Hive or Impala

Submitted by 情到浓时终转凉″ on 2019-12-11 04:17:57

Question


I wrote a DataFrame with pySpark into HDFS with this command:

from pyspark.sql.functions import col

df.repartition(col("year"))\
    .write.option("maxRecordsPerFile", 1000000)\
    .parquet('/path/tablename', mode='overwrite', partitionBy=["year"], compression='snappy')

When I take a look into HDFS I can see that the files are properly lying there. However, when I try to read the table with Hive or Impala, the table cannot be found.

What's going wrong here? Am I missing something?

Interestingly, df.write.format('parquet').saveAsTable("tablename") works properly.


Answer 1:


This is expected behaviour from Spark:

  • df.write.parquet(path) writes the data to the HDFS location and won't create any table in the Hive metastore.

  • but df.write.saveAsTable(tablename) creates the table in the Hive metastore and writes the data to it.

If the table already exists, the behavior of this function depends on the save mode, specified by the mode function (the default is to throw an exception). When mode is Overwrite, the schema of the DataFrame does not need to be the same as that of the existing table.

That's the reason why you are not able to find the table in Hive after performing df.write.parquet(path).
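
For completeness, here is a minimal sketch contrasting the two write paths. It assumes a SparkSession created with enableHiveSupport(), hypothetical sample data, and reuses the path and table name from the question:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Hive support is required for saveAsTable to register the table in the metastore.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Hypothetical sample data with a "year" column to partition on.
df = spark.range(10).withColumn("year", col("id") % 3)

# Writes Parquet files to the HDFS path only; nothing is registered in the
# Hive metastore, so Hive/Impala will not see a table afterwards.
df.write.parquet('/path/tablename', mode='overwrite', partitionBy=["year"], compression='snappy')

# Registers the table in the metastore and writes the data, so Hive can query it.
# (Impala usually still needs INVALIDATE METADATA tablename to pick up the new table.)
df.write.format('parquet') \
    .mode('overwrite') \
    .partitionBy('year') \
    .saveAsTable('tablename')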



Source: https://stackoverflow.com/questions/56581105/dataframe-write-parquet-parquet-file-cannot-be-read-by-hive-or-impala
