Spark SQL Row_number() PartitionBy Sort Desc


Question


I've successfully created a row_number() partitionBy in Spark using Window, but would like to sort this descending instead of the default ascending. Here is my working code:

from pyspark.sql import HiveContext
from pyspark.sql.types import *
from pyspark.sql import Row, functions as F
from pyspark.sql.window import Window

data_cooccur.select("driver", "also_item", "unit_count", 
    F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count")).alias("rowNum")).show()

That gives me this result:

 +------+---------+----------+------+
 |driver|also_item|unit_count|rowNum|
 +------+---------+----------+------+
 |   s10|      s11|         1|     1|
 |   s10|      s13|         1|     2|
 |   s10|      s17|         1|     3|

And here I add desc() to sort in descending order:

data_cooccur.select("driver", "also_item", "unit_count", F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count").desc()).alias("rowNum")).show()

And get this error:

AttributeError: 'WindowSpec' object has no attribute 'desc'

What am I doing wrong here?


Answer 1:


desc should be applied to a column, not to the window definition. You can use either the method on a column:

from pyspark.sql.functions import col  

F.rowNumber().over(Window.partitionBy("driver").orderBy(col("unit_count").desc()))

or a standalone function:

from pyspark.sql.functions import desc

F.rowNumber().over(Window.partitionBy("driver").orderBy(desc("unit_count")))
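
For reference, here is a minimal, self-contained sketch of the first variant (the sample DataFrame is hypothetical, and note that in Spark 2.x+ the function is named row_number rather than rowNumber):

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.master("local[*]").appName("RowNumberDesc").getOrCreate()

# Hypothetical sample data mirroring the columns in the question
data_cooccur = spark.createDataFrame(
    [("s10", "s11", 1), ("s10", "s13", 2), ("s10", "s17", 3)],
    ["driver", "also_item", "unit_count"])

# row_number() is the Spark 2.x+ name for rowNumber()
data_cooccur.select(
    "driver", "also_item", "unit_count",
    F.row_number()
        .over(Window.partitionBy("driver").orderBy(F.col("unit_count").desc()))
        .alias("rowNum")).show()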



Answer 2:


Or you can write the equivalent query in Spark SQL:

from pyspark.sql import SparkSession

spark = SparkSession\
    .builder\
    .master('local[*]')\
    .appName('Test')\
    .getOrCreate()

spark.sql("""
    select driver
        ,also_item
        ,unit_count
        ,ROW_NUMBER() OVER (PARTITION BY driver ORDER BY unit_count DESC) AS rowNum
    from data_cooccur
""").show()



Answer 3:


Update: Actually, I looked into this more, and it appears not to work (in fact, it throws an error). The reason I originally thought it worked is that I had this code after a call to display() in Databricks, and code after the display() call is never run. It seems the orderBy() on a DataFrame and the orderBy() on a window are not actually the same. I will keep this answer up just as negative confirmation.

As of PySpark 2.4 (and probably earlier), simply adding the keyword ascending=False to the orderBy call appeared to work for me.

For example:

personal_recos.withColumn("row_number", F.row_number().over(Window.partitionBy("COLLECTOR_NUMBER").orderBy("count", ascending=False)))

and

personal_recos.withColumn("row_number", F.row_number().over(Window.partitionBy("COLLECTOR_NUMBER").orderBy(F.col("count").desc())))

seemed to give me the same behaviour.
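
For completeness, here is a minimal sketch contrasting the two calls above (the personal_recos DataFrame is a hypothetical stand-in; in the PySpark versions I'm aware of, WindowSpec.orderBy(*cols) accepts only columns, so the ascending= keyword is expected to raise a TypeError, consistent with the update above):

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.master("local[*]").appName("WindowOrderByCheck").getOrCreate()

# Hypothetical stand-in for personal_recos with the two columns used above
personal_recos = spark.createDataFrame(
    [("c1", 5), ("c1", 9), ("c2", 3)],
    ["COLLECTOR_NUMBER", "count"])

# Works: order the window by a descending column expression
ok = Window.partitionBy("COLLECTOR_NUMBER").orderBy(F.col("count").desc())
personal_recos.withColumn("row_number", F.row_number().over(ok)).show()

# Expected to fail: WindowSpec.orderBy takes no ascending keyword
try:
    Window.partitionBy("COLLECTOR_NUMBER").orderBy("count", ascending=False)
except TypeError as err:
    print("ascending= is not supported on a window:", err)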



Source: https://stackoverflow.com/questions/35247168/spark-sql-row-number-partitionby-sort-desc
