What's the syntax for using a group-by with having in Spark without an sqlContext/hiveContext? I know I can do
DataFrame df = some_df
df.registerTempTable("df");
Say, for example, I want to find the products in each category with fees less than 3200, keeping only categories whose count is greater than 10:
sqlContext.sql("select Category,count(*) as
count from hadoopexam where HadoopExamFee<3200
group by Category having count>10")
The same query can be written with the DataFrame API alone:

from pyspark.sql.functions import col, count

result = (df.filter(df.HadoopExamFee < 3200)        # WHERE: filter rows before grouping
            .groupBy('Category')
            .agg(count('Category').alias('count'))
            .filter(col('count') > 10))             # HAVING: filter on the aggregated column
Yes, a separate having clause doesn't exist in the DataFrame API. You express the same logic with agg followed by where:
df.groupBy(someExpr).agg(someAgg).where(somePredicate)
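For illustration, here is a minimal, self-contained sketch of that pattern applied to the earlier hadoopexam example; the SparkSession setup and the sample rows are invented purely so the snippet runs end to end:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count

spark = SparkSession.builder.getOrCreate()

# Sample data, made up only to have something runnable
df = spark.createDataFrame(
    [("Books", 1500), ("Books", 2100), ("Toys", 4000), ("Toys", 2900)],
    ["Category", "HadoopExamFee"],
)

result = (df.where(col("HadoopExamFee") < 3200)   # WHERE: row filter before grouping
            .groupBy("Category")                  # GROUP BY
            .agg(count("*").alias("count"))       # aggregate per group
            .where(col("count") > 10))            # HAVING: filter on the aggregated value

result.show()  # empty here, since no category reaches a count above 10 in this tiny sample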