How to compute the sum of orders over a 12 months period sliding by 1 month per customer in Spark


Question


I am relatively new to Spark with Scala. Currently I am trying to aggregate order data in Spark over a 12-month period that slides by one month.

Below is a simple sample of my data. I tried to format it so you can easily test it:

import spark.implicits._
import org.apache.spark.sql._
import org.apache.spark.sql.functions._


var sample = Seq(("C1","01/01/2016", 20), ("C1","02/01/2016", 5), 
 ("C1","03/01/2016", 2),  ("C1","04/01/2016", 3), ("C1","05/01/2017", 5),
 ("C1","08/01/2017", 5), ("C1","01/02/2017", 10), ("C1","01/02/2017", 10),  
 ("C1","01/03/2017", 10)).toDF("id","order_date", "orders")

sample = sample.withColumn("order_date",
  to_date(unix_timestamp($"order_date", "dd/MM/yyyy").cast("timestamp")))

sample.show 
 +---+----------+------+
 | id|order_date|orders|
 +---+----------+------+
 | C1|2016-01-01|    20|
 | C1|2016-01-02|     5|
 | C1|2016-01-03|     2|
 | C1|2016-01-04|     3|
 | C1|2017-01-05|     5|
 | C1|2017-01-08|     5|
 | C1|2017-02-01|    10|
 | C1|2017-02-01|    10|
 | C1|2017-03-01|    10|
 +---+----------+------+

The outcome required of me is the following:

id      period_start    period_end  rolling
C1      2015-01-01      2016-01-01  30
C1      2016-01-01      2017-01-01  40
C1      2016-02-01      2017-02-01  30
C1      2016-03-01      2017-03-01  40

What I have tried so far

I collapsed the dates per customer to the first day of the month (e.g. 2016-01-[1..31] >> 2016-01-01):

import org.joda.time._

// Map (month, year) to the first day of that month, formatted as yyyy-MM-dd
val collapse_month = (month: Integer, year: Integer) => {
  val dt = new DateTime().withYear(year)
    .withMonthOfYear(month)
    .withDayOfMonth(1)
  dt.toString("yyyy-MM-dd")
}

val collapse_month_udf = udf(collapse_month)


sample = sample.withColumn("period_end",
           collapse_month_udf(
           month(col("order_date")),
           year(col("order_date"))
           ).as("date"))

sample.groupBy($"id",  $"period_end")
              .agg(sum($"orders").as("orders"))
              .orderBy("period_end").show
 +---+----------+------+
 | id|period_end|orders|
 +---+----------+------+
 | C1|2016-01-01|    30|
 | C1|2017-01-01|    10|
 | C1|2017-02-01|    20|
 | C1|2017-03-01|    10|
 +---+----------+------+

I tried the built-in window function, but I was not able to express a 12-month window that slides by one month.

I am really not sure what the best way to proceed from this point is, one that would not take 5 hours given how much data I have to work with.

Any help would be appreciated.


Answer 1:


I tried the built-in window function, but I was not able to express a 12-month window that slides by one month.

You can still use window with longer intervals, but all parameters have to be expressed in days or weeks:

window($"order_date", "365 days", "28 days")

Unfortunately a window like this won't respect month or year boundaries, so it won't be that useful for you.
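For completeness, here is a sketch of how such a day-based sliding window would be used (the 365-day / 28-day buckets drift away from calendar months, which is why this route is not pursued further):

val byWindow = sample
  .groupBy($"id", window($"order_date".cast("timestamp"), "365 days", "28 days"))
  .agg(sum($"orders").as("orders"))
  // the generated "window" column is a struct with start/end timestamps
  .select($"id", $"window.start".as("period_start"), $"window.end".as("period_end"), $"orders")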

Personally I would aggregate data first:

val byMonth = sample
  .groupBy($"id", trunc($"order_date", "month").alias("order_month"))
  .agg(sum($"orders").alias("orders"))
+---+-----------+-----------+                                                   
| id|order_month|     orders|
+---+-----------+-----------+
| C1| 2017-01-01|         10|
| C1| 2016-01-01|         30|
| C1| 2017-02-01|         20|
| C1| 2017-03-01|         10|
+---+-----------+-----------+

Create reference date range:

import java.time.temporal.ChronoUnit

val Row(start: java.sql.Date, end: java.sql.Date) = byMonth
  .select(min($"order_month"), max($"order_month"))
  .first

val months = (0L to ChronoUnit.MONTHS.between(
    start.toLocalDate, end.toLocalDate))
  .map(i => java.sql.Date.valueOf(start.toLocalDate.plusMonths(i)))
  .toDF("order_month")
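On Spark 2.4 or later (an assumption about your version), the same reference range can also be built without leaving the DataFrame API, using the sequence function:

// Alternative sketch: generate one row per month between min and max order_month
val months = byMonth
  .select(min($"order_month").as("start"), max($"order_month").as("end"))
  .select(explode(expr("sequence(start, end, interval 1 month)")).as("order_month"))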

And combine with unique ids:

val ref = byMonth.select($"id").distinct.crossJoin(months)

and join back with the source:

val expanded = ref.join(byMonth, Seq("id", "order_month"), "leftouter")
+---+-----------+------+ 
| id|order_month|orders|
+---+-----------+------+
| C1| 2016-01-01|    30|
| C1| 2016-02-01|  null|
| C1| 2016-03-01|  null|
| C1| 2016-04-01|  null|
| C1| 2016-05-01|  null|
| C1| 2016-06-01|  null|
| C1| 2016-07-01|  null|
| C1| 2016-08-01|  null|
| C1| 2016-09-01|  null|
| C1| 2016-10-01|  null|
| C1| 2016-11-01|  null|
| C1| 2016-12-01|  null|
| C1| 2017-01-01|    10|
| C1| 2017-02-01|    20|
| C1| 2017-03-01|    10|
+---+-----------+------+

With data prepared like this you can use window functions:

import org.apache.spark.sql.expressions.Window

val w = Window.partitionBy($"id")
  .orderBy($"order_month")
  .rowsBetween(-12, Window.currentRow)

expanded.withColumn("rolling", sum("orders").over(w))
  .na.drop(Seq("orders"))
  .select(
      $"order_month" - expr("INTERVAL 12 MONTHS") as "period_start",
      $"order_month" as "period_end",
      $"rolling")
+------------+----------+-------+
|period_start|period_end|rolling|
+------------+----------+-------+
|  2015-01-01|2016-01-01|     30|
|  2016-01-01|2017-01-01|     40|
|  2016-02-01|2017-02-01|     30|
|  2016-03-01|2017-03-01|     40|
+------------+----------+-------+

Please be advised this is a very expensive operation, requiring at least two shuffles:

== Physical Plan ==
*Project [cast(cast(order_month#104 as timestamp) - interval 1 years as date) AS period_start#1387, order_month#104 AS period_end#1388, rolling#1375L]
+- *Filter AtLeastNNulls(n, orders#55L)
   +- Window [sum(orders#55L) windowspecdefinition(id#7, order_month#104 ASC NULLS FIRST, ROWS BETWEEN 12 PRECEDING AND CURRENT ROW) AS rolling#1375L], [id#7], [order_month#104 ASC NULLS FIRST]
      +- *Sort [id#7 ASC NULLS FIRST, order_month#104 ASC NULLS FIRST], false, 0
         +- Exchange hashpartitioning(id#7, 200)
            +- *Project [id#7, order_month#104, orders#55L]
               +- *BroadcastHashJoin [id#7, order_month#104], [id#181, order_month#49], LeftOuter, BuildRight
                  :- BroadcastNestedLoopJoin BuildRight, Cross
                  :  :- *HashAggregate(keys=[id#7], functions=[])
                  :  :  +- Exchange hashpartitioning(id#7, 200)
                  :  :     +- *HashAggregate(keys=[id#7], functions=[])
                  :  :        +- *HashAggregate(keys=[id#7, trunc(order_date#14, month)#1394], functions=[])
                  :  :           +- Exchange hashpartitioning(id#7, trunc(order_date#14, month)#1394, 200)
                  :  :              +- *HashAggregate(keys=[id#7, trunc(order_date#14, month) AS trunc(order_date#14, month)#1394], functions=[])
                  :  :                 +- LocalTableScan [id#7, order_date#14]
                  :  +- BroadcastExchange IdentityBroadcastMode
                  :     +- LocalTableScan [order_month#104]
                  +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true], input[1, date, true]))
                     +- *HashAggregate(keys=[id#181, trunc(order_date#14, month)#1395], functions=[sum(cast(orders#183 as bigint))])
                        +- Exchange hashpartitioning(id#181, trunc(order_date#14, month)#1395, 200)
                           +- *HashAggregate(keys=[id#181, trunc(order_date#14, month) AS trunc(order_date#14, month)#1395], functions=[partial_sum(cast(orders#183 as bigint))])
                              +- LocalTableScan [id#181, order_date#14, orders#183]
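For reference, an equivalent plan can be inspected by calling explain() on the final query:

expanded.withColumn("rolling", sum("orders").over(w))
  .na.drop(Seq("orders"))
  .select(
      $"order_month" - expr("INTERVAL 12 MONTHS") as "period_start",
      $"order_month" as "period_end",
      $"rolling")
  .explain()  // prints the physical plan, including the Exchange (shuffle) operators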

It is also possible to express this using a rangeBetween frame, but you have to encode the data first:

val encoded = byMonth
  .withColumn("order_month_offset",
      // Choose "zero" date appropriate in your scenario
      months_between($"order_month", to_date(lit("1970-01-01"))))


val w = Window.partitionBy($"id")
  .orderBy($"order_month_offset")
  .rangeBetween(-12, Window.currentRow)

encoded.withColumn("rolling", sum($"orders").over(w))
+---+-----------+------+------------------+-------+                             
| id|order_month|orders|order_month_offset|rolling|
+---+-----------+------+------------------+-------+
| C1| 2016-01-01|    30|             552.0|     30|
| C1| 2017-01-01|    10|             564.0|     40|
| C1| 2017-02-01|    20|             565.0|     30|
| C1| 2017-03-01|    10|             566.0|     40|
+---+-----------+------+------------------+-------+

This would make the join with the reference obsolete and simplify the execution plan.
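If you also need the period_start / period_end columns in this variant, the same interval arithmetic used above applies to the encoded result (a sketch reusing the expressions from the first approach):

encoded.withColumn("rolling", sum($"orders").over(w))
  .select(
      $"id",
      $"order_month" - expr("INTERVAL 12 MONTHS") as "period_start",
      $"order_month" as "period_end",
      $"rolling")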



Source: https://stackoverflow.com/questions/47531381/how-to-compute-the-sum-of-orders-over-a-12-months-period-sliding-by-1-month-per
