Create a group id over a window in Spark Dataframe

Backend · 2 answers · 1,981 views
萌比男神i — 2021-01-01 08:17

I have a dataframe where I want to assign an id within each Window partition. For example I have

id | col |
1  |  a  |
2  |  a  |
3  |  b  |
4  |  c  |
5  |  c  |

and I want every row with the same col value to share one group id.
2 answers
  • 2021-01-01 09:00

    You can assign a row_number to each distinct col value and then join back to the original dataframe.

    import spark.implicits._
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number
    
    val data = Seq(
      (1, "a"),
      (2, "a"),
      (3, "b"),
      (4, "c"),
      (5, "c")
    ).toDF("id", "col")
    
    // Number the distinct col values; each value gets a stable group id.
    val df2 = data.select("col").distinct()
      .withColumn("group", row_number().over(Window.orderBy("col")))
    
    // Join back so every row picks up the group id for its col value.
    val result = data.join(df2, Seq("col"), "left")
      .drop("col")
    

    The code is in Scala but is easily adapted to PySpark.

    Hope this helps
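    For intuition, the same two-step logic (number the distinct values, then join the mapping back onto every row) can be sketched in plain Python, without Spark:

    ```python
    rows = [(1, "a"), (2, "a"), (3, "b"), (4, "c"), (5, "c")]

    # Step 1: "row_number" over the sorted distinct col values.
    groups = {c: i + 1 for i, c in enumerate(sorted({c for _, c in rows}))}

    # Step 2: "join" each row to its group id and drop col.
    result = [(id_, groups[c]) for id_, c in rows]
    print(result)  # [(1, 1), (2, 1), (3, 2), (4, 3), (5, 3)]
    ```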

  • 2021-01-01 09:09

    Using the built-in dense_rank function over a Window should give you the desired result:

    from pyspark.sql import Window
    import pyspark.sql.functions as f
    
    df.select('id', f.dense_rank().over(Window.orderBy('col')).alias('group')).show(truncate=False)
    

    which should give you

    +---+-----+
    |id |group|
    +---+-----+
    |1  |1    |
    |2  |1    |
    |3  |2    |
    |4  |3    |
    |5  |3    |
    +---+-----+
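    For intuition, dense_rank assigns consecutive ranks with no gaps: equal values share a rank, and the rank increases by exactly 1 at each new value. A minimal plain-Python model of that behavior (not Spark code):

    ```python
    # Model of dense_rank over an ordered column: equal values share a
    # rank, and ranks increase by 1 with no gaps.
    def dense_rank(values):
        pairs, rank, prev = [], 0, object()
        for v in sorted(values):
            if v != prev:
                rank += 1
                prev = v
            pairs.append((v, rank))
        return dict(pairs)

    cols = ["a", "a", "b", "c", "c"]
    mapping = dense_rank(cols)
    print([mapping[c] for c in cols])  # [1, 1, 2, 3, 3]
    ```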
    