PySpark: Take average of a column after using filter function

Asked by 南旧 on 2020-12-30 01:30 · 2 answers · 1282 views

I am using the following code to get the average age of people whose salary is greater than some threshold:

dataframe.filter(df['salary'] > 100000).agg({"avg": "age"})

but this does not work.
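For reference, here is a minimal setup the snippets below can run against; the sample rows are hypothetical, and only the salary/age schema is taken from the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data; only the salary/age columns matter here.
dataframe = spark.createDataFrame(
    [(120000, 34), (90000, 28), (150000, 45)],
    ["salary", "age"],
)
df = dataframe  # the question uses both names for the same DataFrame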
2 Answers
  • 2020-12-30 02:05

    You can try this too:

    dataframe.filter(df['salary'] > 100000).groupBy().avg('age')
    
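    This returns a one-row DataFrame rather than a plain number. If you want the value itself, one way (just a sketch) is to read it off the first row:

    dataframe.filter(df['salary'] > 100000).groupBy().avg('age').first()[0]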
  • 2020-12-30 02:23

    The aggregation function should be the value and the column name the key:

    dataframe.filter(df['salary'] > 100000).agg({"age": "avg"})
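
    With the dictionary form, the result column is named avg(age), so you can pull the plain value from the single returned row, e.g.:

    dataframe.filter(df['salary'] > 100000).agg({"age": "avg"}).first()["avg(age)"]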
    

    Alternatively, you can use pyspark.sql.functions:

    from pyspark.sql.functions import col, avg
    
    dataframe.filter(df['salary'] > 100000).agg(avg(col("age")))
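
    If you want a friendlier result column name, you can also alias the aggregate (avg_age is just an illustrative name):

    dataframe.filter(df['salary'] > 100000).agg(avg(col("age")).alias("avg_age"))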
    

    It is also possible to use CASE ... WHEN:

    from pyspark.sql.functions import avg, when
    
    dataframe.select(avg(when(df['salary'] > 100000, df['age'])))
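
    This works because when without an otherwise yields NULL for the rows that fail the condition, and avg ignores NULLs, so the result matches the filter-based versions. A quick sanity check, assuming the hypothetical setup above:

    from pyspark.sql.functions import avg, when

    filtered = dataframe.filter(df['salary'] > 100000).agg(avg(df['age'])).first()[0]
    cased = dataframe.select(avg(when(df['salary'] > 100000, df['age']))).first()[0]
    assert filtered == cased  # both ignore rows with salary <= 100000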
    