Create a group id over a window in a Spark DataFrame

Submitted by 爱⌒轻易说出口 on 2019-12-01 14:10:44

Simply using the built-in dense_rank function over a Window should give you your desired result:

from pyspark.sql.window import Window
import pyspark.sql.functions as f

# dense_rank gives rows with equal 'col' values the same rank,
# with no gaps between consecutive group ids
df.select('id', f.dense_rank().over(Window.orderBy('col')).alias('group')).show(truncate=False)

which should give you

+---+-----+
|id |group|
+---+-----+
|1  |1    |
|2  |1    |
|3  |2    |
|4  |3    |
|5  |3    |
+---+-----+
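
Here df is assumed to already hold the sample data used in the Scala answer below; a minimal sketch to create it, assuming an active SparkSession named spark:

# hypothetical setup matching the output above
df = spark.createDataFrame(
    [(1, "a"), (2, "a"), (3, "b"), (4, "c"), (5, "c")],
    ["id", "col"],
)

Note that a Window.orderBy without a partitionBy moves all rows into a single partition (Spark logs a warning about this), so both approaches here are best suited to data that fits comfortably in one partition.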

Alternatively, you can assign a row_number to the distinct values of col and join the result back to the original dataframe:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number
import spark.implicits._  // for toDF; `spark` is the active SparkSession

val data = Seq(
  (1, "a"),
  (2, "a"),
  (3, "b"),
  (4, "c"),
  (5, "c")
).toDF("id", "col")

// number the distinct values of 'col' in sorted order
val df2 = data.select("col").distinct()
  .withColumn("group", row_number().over(Window.orderBy("col")))

// attach the group id to every original row, then drop the join key
val result = data.join(df2, Seq("col"), "left")
  .drop("col")

The code is in Scala but can easily be translated to PySpark.
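
For completeness, a rough PySpark equivalent of the same approach (reusing the df defined above) might look like this:

from pyspark.sql.window import Window
import pyspark.sql.functions as f

# number the distinct values of 'col', then join the group id back
df2 = (df.select("col").distinct()
         .withColumn("group", f.row_number().over(Window.orderBy("col"))))

result = df.join(df2, ["col"], "left").drop("col")
result.orderBy("id").show(truncate=False)

Sorting by id after the join should reproduce the same table as the dense_rank output above, since a join does not guarantee row order on its own.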

Hope this helps
