Spark incremental loading: overwrite old records

Backend · Unresolved · 3 answers · 1349 views
被撕碎了的回忆 2020-12-16 07:39

I have a requirement to do incremental loading into a table using Spark (PySpark).

Here's the example:

Day 1

id | value
-----------
1  | abc
2  | def

Day 2

id | value
-----------
2  | cde
3  | xyz

Desired result after the incremental load (the old record for id 2 is overwritten by the newer value):

id | value
-----------
1  | abc
2  | cde
3  | xyz
3 Answers
  •  温柔的废话
    2020-12-16 08:37

    DataFrame appending is done with the union function in PySpark. I'll demonstrate with an example, creating the two DataFrames you mentioned in the question.

    from pyspark.sql import Row, SparkSession

    spark = SparkSession.builder.getOrCreate()  # Spark 2.x+ entry point
    df1 = spark.createDataFrame([Row(id=1, value="abc"), Row(id=2, value="def")])
    
    df1.show()
    +---+-----+
    | id|value|
    +---+-----+
    |  1|  abc|
    |  2|  def|
    +---+-----+
    
    df2 = spark.createDataFrame([Row(id=2, value="cde"), Row(id=3, value="xyz")])
    df2.show()
    +---+-----+
    | id|value|
    +---+-----+
    |  2|  cde|
    |  3|  xyz|
    +---+-----+
    

    Let's do a union between the two DataFrames, and you will get the desired result.

    df2.union(df1).dropDuplicates(["id"]).show()
    +---+-----+
    | id|value|
    +---+-----+
    |  1|  abc|
    |  3|  xyz|
    |  2|  cde|
    +---+-----+
    
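    A caveat: dropDuplicates does not document which of the duplicate rows it keeps, so relying on df2 coming first in the union is fragile. Here is a minimal sketch of a more deterministic merge, using the same df1 (old data) and df2 (new data) as above: take every new row, plus only those old rows whose id does not appear in the new data.

    # Old rows whose id is absent from the new data (left anti join).
    old_only = df1.join(df2, on="id", how="left_anti")

    # New rows always win; the surviving old rows fill in the rest.
    df2.union(old_only).show()
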

    You can sort the output using asc from pyspark.sql.functions:

    from pyspark.sql.functions import asc

    df2.union(df1).dropDuplicates(["id"]).sort(asc("id")).show()
    +---+-----+
    | id|value|
    +---+-----+
    |  1|  abc|
    |  2|  cde|
    |  3|  xyz|
    +---+-----+
    
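    Finally, for the incremental load itself, the merged result has to be persisted back to the target table. A sketch under the assumption that the target is a managed table (target_table is a hypothetical name); note that Spark cannot overwrite a table it is still reading from, so the result is written under a staging name first.

    merged = df2.union(df1).dropDuplicates(["id"])

    # Write to a staging table; "target_table_staging" is a hypothetical
    # name. Swapping it into place (e.g. ALTER TABLE ... RENAME) is left
    # to your catalog/tooling.
    merged.write.mode("overwrite").saveAsTable("target_table_staging")
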
