Spark HiveContext: Insert Overwrite the same table it is read from

Submitted by 左心房为你撑大大i on 2019-12-23 05:27:10

Question


I want to implement SCD1 and SCD2 using PySpark with a HiveContext. In my approach, I read the incremental data and the target table, then join them to build the upsert result. I call registerTempTable on all of the source DataFrames. When I try to write the final dataset back into the target table, I hit the error that insert overwrite is not possible into the table the query is reading from.

Please suggest a solution. I do not want to write the intermediate data to a physical table and read it back again.

Is there any property or technique that lets me store the final dataset without keeping a dependency on the table it was read from? That way it might be possible to overwrite the table.

Please suggest.
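
For reference, a minimal sketch of the pattern described above and the step that fails; the table names (dw.customer_dim, stg.customer_incremental) and the customer_id key are made up for illustration:

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext()
hiveContext = HiveContext(sc)

# Read the current target table and the incremental feed
target_df = hiveContext.table("dw.customer_dim")
incr_df = hiveContext.table("stg.customer_incremental")

target_df.registerTempTable("target")
incr_df.registerTempTable("incremental")

# Upsert: keep every incremental row, plus target rows that were not updated
merged_df = hiveContext.sql("""
    SELECT i.* FROM incremental i
    UNION ALL
    SELECT t.* FROM target t
    LEFT JOIN incremental i2 ON t.customer_id = i2.customer_id
    WHERE i2.customer_id IS NULL
""")
merged_df.registerTempTable("merged")

# This is the step that fails: the query both reads from and overwrites dw.customer_dim
hiveContext.sql("INSERT OVERWRITE TABLE dw.customer_dim SELECT * FROM merged")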


Answer 1:


You should never overwrite a table you are reading from at the same time. In case of failure it can result in anything from data corruption to complete data loss.

It is also important to point out that a correctly implemented SCD2 should never overwrite the whole table and can be implemented as a (mostly) append-only operation. As far as I am aware, SCD1 cannot be implemented efficiently without mutable storage, and is therefore not a good fit for Spark.
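
As an illustration of the append-style SCD2 this answer alludes to, here is a minimal sketch; the history table dw.customer_dim_history, the customer_id key, and the effective_from column are assumptions, and incr_df / hiveContext are the same hypothetical objects as in the sketch under the question. Old versions are never rewritten; the current row per key is resolved at read time:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Append new versions only; history rows are never touched
new_versions = incr_df.withColumn("effective_from", F.current_date())
new_versions.write.mode("append").saveAsTable("dw.customer_dim_history")

# Resolve the "current" version per key at read time instead of overwriting rows
w = Window.partitionBy("customer_id").orderBy(F.col("effective_from").desc())
current_df = (hiveContext.table("dw.customer_dim_history")
              .withColumn("rn", F.row_number().over(w))
              .where(F.col("rn") == 1)
              .drop("rn"))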




Answer 2:


I was going through the Spark documentation and something clicked when I came across one of the configuration properties.

Since my table is stored as Parquet, I made Spark read it through the Hive metastore (SerDe) rather than its built-in Parquet reader by setting this property to false:

hiveContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")

This solution is working fine for me.
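
Wiring the setting into the flow from the question might look roughly like this (same hypothetical table names and key as in the sketch above); the key point is that the property has to be set before the Parquet-backed target table is first read:

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext()
hiveContext = HiveContext(sc)

# Set before dw.customer_dim is read, so the scan goes through the Hive SerDe
# instead of Spark's native Parquet reader
hiveContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")

hiveContext.table("dw.customer_dim").registerTempTable("target")
hiveContext.table("stg.customer_incremental").registerTempTable("incremental")

hiveContext.sql("""
    INSERT OVERWRITE TABLE dw.customer_dim
    SELECT * FROM (
        SELECT i.* FROM incremental i
        UNION ALL
        SELECT t.* FROM target t
        LEFT JOIN incremental i2 ON t.customer_id = i2.customer_id
        WHERE i2.customer_id IS NULL
    ) merged
""")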



Source: https://stackoverflow.com/questions/46143084/spark-hivecontext-insert-overwrite-the-same-table-it-is-read-from
