What's the syntax for using a groupby-having in Spark without an sql/hiveContext? I know I can do
DataFrame df = some_df;
df.registerTempTable("df");
df = sqlContext.sql("SELECT col1, count(*) FROM df GROUP BY col1 HAVING count(*) > 10");
Yes, it doesn't exist. You express the same logic with agg followed by where:
df.groupBy(someExpr).agg(someAgg).where(somePredicate)
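For instance, here is a minimal sketch in the Java DataFrame API, equivalent to the HAVING query from the question (the column names col1 and value and the threshold 10 are illustrative, and df is assumed to be an existing DataFrame):

import org.apache.spark.sql.DataFrame;
import static org.apache.spark.sql.functions.*;

// Equivalent of:
//   SELECT col1, count(value) AS cnt FROM df GROUP BY col1 HAVING cnt > 10
DataFrame result = df
    .groupBy(col("col1"))              // GROUP BY col1
    .agg(count("value").alias("cnt"))  // aggregate and name the result
    .where(col("cnt").gt(10));         // HAVING cnt > 10

Because where is applied after agg, the predicate filters on the aggregated column, which is exactly what HAVING does in SQL.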