Question
I would like to apply a window function (concretely, a moving average) over all columns of a DataFrame.
I can do it this way:
from pyspark.sql import SparkSession, functions as func
df = ...
# windowSpec is assumed to be defined earlier
df.select([func.avg(df[col]).over(windowSpec).alias(col) for col in df.columns])
but I'm afraid this isn't very efficient. Is there a better way to do it?
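For illustration, here is a minimal, self-contained sketch of that approach; the sample data, the column names, and the 3-row moving-average windowSpec are all assumptions, since the question does not show them:

from pyspark.sql import SparkSession, functions as func
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical data standing in for the question's df.
df = spark.createDataFrame(
    [(1, 10.0, 100.0), (2, 20.0, 200.0), (3, 30.0, 300.0)],
    ["t", "a", "b"],
)

# Assumed window: a moving average over the current row and the two before it,
# ordered by t. The question's actual windowSpec may differ.
windowSpec = Window.orderBy("t").rowsBetween(-2, 0)

df.select([func.avg(df[col]).over(windowSpec).alias(col) for col in df.columns]).show()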
Answer 1:
An alternative that may be better is to create a new DataFrame: group by the columns used in the window specification, apply the average to the remaining columns, and then left-join the result back to the original DataFrame. For large DataFrames that spill to disk (or cannot be persisted in memory), this will definitely be more efficient.
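A minimal sketch of that idea, assuming the window would have been partitioned by a single column g (all column names and data here are hypothetical):

from pyspark.sql import SparkSession, functions as func

spark = SparkSession.builder.getOrCreate()

# Hypothetical data; g plays the role of the window's partition column.
df = spark.createDataFrame(
    [("x", 10.0, 100.0), ("x", 20.0, 200.0), ("y", 30.0, 300.0)],
    ["g", "a", "b"],
)

# Group by the partition column, average the remaining columns, then
# left-join the aggregates back onto the original rows.
value_cols = [c for c in df.columns if c != "g"]
aggs = df.groupBy("g").agg(*[func.avg(c).alias(c + "_avg") for c in value_cols])
result = df.join(aggs, on="g", how="left")
result.show()

Note that this computes a per-group average rather than a moving average, so it only matches the window approach when the window spans the whole partition.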
Source: https://stackoverflow.com/questions/43545864/apply-window-function-over-multiple-columns