PySpark: Take average of a column after using filter function

Submitted by 喜夏-厌秋 on 2019-11-30 13:45:40

Question


I am using the following code to get the average age of people whose salary is greater than some threshold.

dataframe.filter(df['salary'] > 100000).agg({"avg": "age"})

The column age is numeric (float), but I still get this error:

py4j.protocol.Py4JJavaError: An error occurred while calling o86.agg. 
: scala.MatchError: age (of class java.lang.String)

Is there another way to obtain the average (and similar aggregates) without using the groupBy function or SQL queries?


Answer 1:


In the dictionary passed to agg, the column name should be the key and the aggregation function the value:

dataframe.filter(df['salary'] > 100000).agg({"age": "avg"})

Alternatively you can use pyspark.sql.functions:

from pyspark.sql.functions import col, avg

dataframe.filter(df['salary'] > 100000).agg(avg(col("age")))

It is also possible to use a CASE .. WHEN expression, which computes the conditional average without a separate filter step:

from pyspark.sql.functions import avg, when

dataframe.select(avg(when(df['salary'] > 100000, df['age'])))
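Why the CASE .. WHEN version gives the same result: when() without an otherwise() produces NULL for non-matching rows, and avg ignores NULLs. A plain-Python sketch of that semantics (made-up sample data, not PySpark itself):

```python
# Sample rows as (salary, age) pairs -- illustrative data only.
rows = [(120000, 45.0), (90000, 30.0), (150000, 50.0)]

# Approach 1: filter rows first, then average the ages.
filtered = [age for salary, age in rows if salary > 100000]
avg_filtered = sum(filtered) / len(filtered)

# Approach 2: CASE .. WHEN analogue -- non-matching rows become None
# (standing in for SQL NULL), which the average then skips.
conditional = [age if salary > 100000 else None for salary, age in rows]
kept = [a for a in conditional if a is not None]
avg_conditional = sum(kept) / len(kept)

print(avg_filtered, avg_conditional)  # both 47.5
```

Both approaches average only the ages 45.0 and 50.0, so they agree.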


Source: https://stackoverflow.com/questions/32550478/pyspark-take-average-of-a-column-after-using-filter-function
