pyspark: rolling average using timeseries data

时光取名叫无心 2020-12-02 11:38

I have a dataset consisting of a timestamp column and a dollars column. I would like to find the average number of dollars per week, ending at the timestamp of each row.

4 Answers
  •  广开言路
    2020-12-02 11:55

    I figured out the correct way to calculate a moving/rolling average using this Stack Overflow answer:

    Spark Window Functions - rangeBetween dates

    The basic idea is to cast your timestamp column to a long (seconds since the epoch), and then use the rangeBetween function in the pyspark.sql.Window class to include exactly the rows that fall within each trailing seven-day window.

    Here's the solved example:

    %pyspark
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window
    
    
    # convert a number of days to the equivalent number of seconds,
    # since the window range is expressed in the units of the long-cast timestamp
    days = lambda i: i * 86400
    
    df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00"),
                                (13, "2017-03-15T12:27:18+00:00"),
                                (25, "2017-03-18T11:27:18+00:00")],
                               ["dollars", "timestampGMT"])
    df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
    
    # order by the timestamp cast to long (seconds since the epoch) and
    # take the range from 7 days back up to and including the current row
    w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))
    
    df = df.withColumn('rolling_average', F.avg("dollars").over(w))
    

    This results in the exact column of rolling averages that I was looking for:

    dollars   timestampGMT            rolling_average
    17        2017-03-10 15:27:18.0   17.0
    13        2017-03-15 12:27:18.0   15.0
    25        2017-03-18 11:27:18.0   19.0
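
    To see where the numbers come from: the 2017-03-15 row averages the two values inside its trailing seven days, (17 + 13) / 2 = 15.0, while the 2017-03-18 row averages only 13 and 25, giving 19.0, because 2017-03-10 15:27:18 falls just outside its seven-day range.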
    
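    One caveat: the window above has no partitionBy, so Spark moves all rows into a single partition to sort them and will log a performance warning on larger data. If the rolling average should instead be computed independently per group, a partition column can be added to the window. Here is a minimal sketch, assuming a hypothetical customer_id column; it reuses the imports and the days helper from above:

    # hypothetical per-group variant: each customer gets its own rolling average
    df2 = spark.createDataFrame([("a", 17, "2017-03-10T15:27:18+00:00"),
                                 ("a", 13, "2017-03-15T12:27:18+00:00"),
                                 ("b", 25, "2017-03-18T11:27:18+00:00")],
                                ["customer_id", "dollars", "timestampGMT"])
    df2 = df2.withColumn('timestampGMT', df2.timestampGMT.cast('timestamp'))
    
    # partitionBy restricts each window to one customer's rows and lets Spark
    # distribute the work instead of sorting everything in a single partition
    w2 = (Window.partitionBy('customer_id')
                .orderBy(F.col('timestampGMT').cast('long'))
                .rangeBetween(-days(7), 0))
    
    df2 = df2.withColumn('rolling_average', F.avg('dollars').over(w2))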
