I have a requirement to do incremental loading into a table using Spark (PySpark).
Here's the example:
Day 1
id | value
-----------
1 |
Workaround: add a date column to the dataframe, then rank the rows partitioned by id and ordered by date descending, and keep only the rows where rank == 1. That always gives you the latest record for each id.
df.("rank", rank().over(Window.partitionBy($"id").orderBy($"date".desc)))
.filter($"rank" === 1)
.drop($"rank")
.orderBy($"id")
.show