How to use window functions in PySpark?


To be able to use window functions you have to create a window first. The definition is pretty much the same as in normal SQL: you can define a partition, an ordering, or both. First, let's create some dummy data:

import numpy as np
np.random.seed(1)

keys = ["foo"] * 10 + ["bar"] * 10
values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])

df = sqlContext.createDataFrame([
   {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])

Make sure you're using HiveContext (Spark < 2.0 only; in Spark 2.0+ the standard SparkSession supports window functions out of the box):

from pyspark.sql import HiveContext

assert isinstance(sqlContext, HiveContext)

Create a window:

from pyspark.sql.window import Window

w = Window.partitionBy(df.k).orderBy(df.v)

which is equivalent to

(PARTITION BY k ORDER BY v) 

in SQL.
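
If you prefer plain SQL, the same window can be used directly in a query. A minimal sketch, assuming the df created above (the temporary table name tmp is an arbitrary choice; in Spark 2.0+ use createOrReplaceTempView instead of registerTempTable):

df.registerTempTable("tmp")

sqlContext.sql("""
    SELECT k, v, PERCENT_RANK() OVER (PARTITION BY k ORDER BY v) AS percent_rank
    FROM tmp
""")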

As a rule of thumb, window definitions should always contain a PARTITION BY clause; otherwise Spark moves all the data to a single partition. An ORDER BY clause is required for some functions (such as ranking functions), while for others (typically aggregates) it is optional.
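
For example, an aggregate like avg can run over a partition-only window. A quick sketch using the df defined above (w_agg is just an illustrative name):

from pyspark.sql.functions import avg

# Partition-only window: every row gets the mean of its whole group
w_agg = Window.partitionBy(df.k)
df.select("k", "v", avg("v").over(w_agg).alias("group_avg"))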

There are also two optional clauses that can be used to define the window frame: ROWS BETWEEN and RANGE BETWEEN. These won't be needed in this particular scenario.
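
For completeness, here is a sketch of a frame specification computing a per-key running average. It assumes Spark 2.1+ for the Window.unboundedPreceding and Window.currentRow constants; older releases take raw integer offsets, e.g. rowsBetween(-sys.maxsize, 0):

from pyspark.sql.functions import avg

# Frame from the start of the partition up to the current row: a running average
w_running = (Window.partitionBy(df.k)
             .orderBy(df.v)
             .rowsBetween(Window.unboundedPreceding, Window.currentRow))
df.select("k", "v", avg("v").over(w_running).alias("running_avg"))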

Finally, we can use the window in a query:

from pyspark.sql.functions import percent_rank, ntile  # percentRank in Spark < 1.6

df.select(
    "k", "v",
    percent_rank().over(w).alias("percent_rank"),
    ntile(3).over(w).alias("ntile3")
).show()

Note that ntile is not related in any way to quantiles; it simply splits each ordered partition into n buckets with roughly equal row counts.
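
If you actually need quantiles, one option is the percentile_approx aggregate. A sketch, assuming a HiveContext (or Spark 2.1+, where percentile_approx is a built-in SQL function):

from pyspark.sql.functions import expr

# Approximate per-key median; contrast with ntile, which only numbers buckets
df.groupBy("k").agg(expr("percentile_approx(v, 0.5)").alias("median"))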
