Create a group id over a window in Spark Dataframe

Backend · Unresolved · 2 answers · 1987 views

萌比男神i asked on 2021-01-01 08:17

I have a dataframe where I want to assign an id to each window partition. For example, I have

id | col |
1  |  a  |
2  |  a  |
3  |  b  |
4  |  c  |
5  |  c  |         


        
2 Answers
  •  暗喜
    暗喜 (OP)
    2021-01-01 09:09

    Using the `dense_rank` built-in function over a Window specification should give you your desired result:

    from pyspark.sql import functions as f
    from pyspark.sql.window import Window

    # dense_rank assigns consecutive ranks with no gaps, so all rows
    # sharing the same 'col' value get the same group id
    df.select('id', f.dense_rank().over(Window.orderBy('col')).alias('group')).show(truncate=False)
    

    which should give you

    +---+-----+
    |id |group|
    +---+-----+
    |1  |1    |
    |2  |1    |
    |3  |2    |
    |4  |3    |
    |5  |3    |
    +---+-----+
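
    To see why `dense_rank` (rather than `rank`) produces consecutive group ids, here is a minimal pure-Python sketch of the same ranking logic, using the sample data from the question. This is an illustration of the semantics, not Spark code; the helper `dense_rank` below is hypothetical.

    ```python
    # Sample rows mirroring the question: (id, col)
    rows = [(1, 'a'), (2, 'a'), (3, 'b'), (4, 'c'), (5, 'c')]

    def dense_rank(values):
        """Dense rank over an already-sorted list: equal values share a
        rank, and the rank increases by exactly 1 at each new value."""
        ranks, current = [], 0
        prev = object()  # sentinel that never equals a real value
        for v in values:
            if v != prev:
                current += 1
                prev = v
            ranks.append(current)
        return ranks

    ordered = sorted(rows, key=lambda r: r[1])          # order by 'col'
    groups = dense_rank([col for _, col in ordered])    # group id per row
    result = list(zip([i for i, _ in ordered], groups))
    print(result)  # [(1, 1), (2, 1), (3, 2), (4, 3), (5, 3)]
    ```

    One caveat for the Spark version: a `Window.orderBy` without a `partitionBy` moves all rows into a single partition, so Spark logs a performance warning on large datasets.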
    
