Efficiently calculate row totals of a wide Spark DF

Submitted by 試著忘記壹切 on 2019-12-02 01:01:35

You're out of luck here. One way or another you are going to hit some recursion limit (even if you bypass the SQL parser, a sufficiently large sum of expressions will crash the query planner). There are some slower workarounds available:

  • Use spark_apply (at the cost of conversion to and from R):

    wide_sdf %>% spark_apply(function(df) { data.frame(total = rowSums(df)) })
    
  • Convert to long format and aggregate (at the cost of an explode and a shuffle):

    key_expr <- "monotonically_increasing_id() AS key"
    
    value_expr <- paste(
      "explode(array(", paste(colnames(wide_sdf), collapse = ","), ")) AS value"
    )
    
    wide_sdf %>% 
      spark_dataframe() %>% 
      # Add id and explode. We need a separate invoke so id is applied
      # before "lateral view"
      sparklyr::invoke("selectExpr", list(key_expr, "*")) %>% 
      sparklyr::invoke("selectExpr", list("key", value_expr)) %>% 
      sdf_register() %>% 
      # Aggregate by id
      group_by(key) %>% 
      summarize(total = sum(value)) %>% 
      arrange(key)
    
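Either variant can be smoke-tested on a small frame before running it on the real data. A minimal sketch, assuming an active sparklyr connection `sc` (the toy frame and its column names are illustrative):

```r
library(sparklyr)
library(dplyr)

# Hypothetical 3-column wide frame; the row totals
# should come out as 9 and 12.
wide_sdf <- sdf_copy_to(
  sc,
  data.frame(x = c(1, 2), y = c(3, 4), z = c(5, 6)),
  overwrite = TRUE
)
```

With `wide_sdf` defined this way, both snippets above can be pasted in unchanged.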

To get something more efficient, you should consider writing a Scala extension and applying the sum directly on each Row object, without exploding:

package com.example.sparklyr.rowsum

import org.apache.spark.sql.{DataFrame, Encoders}

object RowSum {
  // Sum the selected numeric columns of each Row without exploding,
  // returning a Dataset[Double] with one total per input row.
  def apply(df: DataFrame, cols: Seq[String]) = df.map {
    row => cols.map(c => row.getAs[Double](c)).sum
  }(Encoders.scalaDouble)
}
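To package the object into a jar, something like the following sbt build definition would do; it is only a sketch, and the project name and the Spark and Scala versions are illustrative (match them to your cluster):

```scala
// Hypothetical build.sbt for the extension above.
name := "rowsum"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0" % "provided"
```

`sbt package` then produces the jar under `target/`.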

and then, from R:

invoke_static(
  sc, "com.example.sparklyr.rowsum.RowSum", "apply",
  wide_sdf %>% spark_dataframe(), colnames(wide_sdf)
) %>% sdf_register()