How to send each group at a time to the spark executors?


Question


I'm unable to send each group of the dataframe, one at a time, to the executors.

I have data as below in the company_model_vals_df dataframe.

 ----------------------------------------------------------------------------------------
 | model_id  |  fiscal_year  | fiscal_quarter | col1 | col2 | col3 | col4 | col5 | col6 |
 ----------------------------------------------------------------------------------------
 |    1      | 2018          |   1             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    1      | 2018          |   2             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    1      | 2018          |   1             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    1      | 2018          |   2             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    1      | 2018          |   1             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    2      | 2017          |   3             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    2      | 2017          |   1             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    2      | 2017          |   3             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    2      | 2017          |   3             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 |    2      | 2017          |   1             | r1   | r2   | r3   |  r4  | r5   |  r6 |
 ----------------------------------------------------------------------------------------

I want to send each group of data to an executor, so that each group is processed one at a time.

For that I am doing the following:

var dist_company_model_vals_df =  company_model_vals_df.select("model_id","fiscal_quarter","fiscal_year").distinct()

// Want to send each group, one at a time, to be written by the executors.

dist_company_model_vals_df.foreach(rowDf => {
  writeAsParquet(rowDf , parquet_file)    // this simply writes the data as parquet file
})

Error:

This throws a NullPointerException, as rowDf is not available on the executor side. What is the correct way to handle this in Spark SQL using Scala 2.11?

Part 2: Question

When I do company_model_vals_df.groupBy("model_id","fiscal_quarter","fiscal_year"), a lot of data spills to disk even after I increased the memory, i.e. company_model_vals_df is a huge dataframe and a lot of spilling happens while doing the groupBy.

The same happens below, i.e. with partitionBy:

company_model_vals_df.write.partitionBy("model_id","fiscal_quarter","fiscal_year")

PSEUDO CODE: So, in order to avoid this, I would first collect the distinct group-key tuples: val groups = company_model_vals_df.select("model_id","fiscal_quarter","fiscal_year").distinct().collect()

groups.foreach { group =>
   // I want to prepare a child dataframe for each group from company_model_vals_df

   val child_df = company_model_vals_df.where(
     $"model_id" === group.getAs[Int]("model_id") &&
     $"fiscal_quarter" === group.getAs[Int]("fiscal_quarter") &&
     $"fiscal_year" === group.getAs[Int]("fiscal_year"))

   // this child_df I want to write to a file, i.e. child_df.write.parquet(path)
}

Is there any way to do this? Are there any Spark functions or APIs that would be useful here? Please suggest a way to resolve this.


Answer 1:


There are a few options here:

  • You can fork the dataset into several datasets and work on them individually, like:
var dist_company_model_vals_list = company_model_vals_df
  .select("model_id","fiscal_quarter","fiscal_year").distinct().collectAsList()

Then filter company_model_vals_df with the rows in dist_company_model_vals_list, which gives several datasets that you can work with independently, like:

def rowList = {
  import org.apache.spark.sql.DataFrame
  import org.apache.spark.sql.functions.col
  import scala.collection.JavaConverters._

  var dfList: Seq[DataFrame] = Seq()
  for (row <- dist_company_model_vals_list.asScala) {
    // build a filter that matches exactly one (model_id, fiscal_quarter, fiscal_year) combination
    val filterCol = col("model_id") === row.getInt(0) &&
                    col("fiscal_quarter") === row.getInt(1) &&
                    col("fiscal_year") === row.getInt(2)
    val resultDf = company_model_vals_df.filter(filterCol)
    dfList = dfList :+ resultDf
  }
  dfList
}
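
A minimal usage sketch of rowList (the base path output_base_path below is an assumption, not part of the original answer): each per-group dataframe is written to its own parquet directory.

// Hypothetical usage: write every per-group dataframe to its own directory.
// output_base_path is an assumed example path.
val output_base_path = "/tmp/company_model_vals"
rowList.zipWithIndex.foreach { case (childDf, idx) =>
  childDf.write.mode("overwrite").parquet(s"$output_base_path/group_$idx")
}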
  • If your objective is to write the data, you can use the partitionBy("model_id","fiscal_quarter","fiscal_year") method on DataFrameWriter to write them out separately.



Answer 2:


If I understand your question correctly, you want to manipulate the data separately for each "model_id","fiscal_quarter","fiscal_year".

If that's correct, you would do it with a groupBy(), for example:

company_model_vals_df.groupBy("model_id","fiscal_quarter","fiscal_year").agg(avg($"col1") as "average")

If what you're looking for is to write each logical group into a separate folder, you can do that by writing:

company_model_vals_df.write.partitionBy("model_id","fiscal_quarter","fiscal_year").parquet("path/to/save")
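
With the sample data above, this should produce one sub-directory per distinct key combination (exact part-file names will vary), roughly:

path/to/save/model_id=1/fiscal_quarter=1/fiscal_year=2018/part-...parquet
path/to/save/model_id=1/fiscal_quarter=2/fiscal_year=2018/part-...parquet
path/to/save/model_id=2/fiscal_quarter=1/fiscal_year=2017/part-...parquet
path/to/save/model_id=2/fiscal_quarter=3/fiscal_year=2017/part-...parquet

Spark can later prune these directories when a query filters on the partition columns.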


Source: https://stackoverflow.com/questions/55037648/how-to-send-each-group-at-a-time-to-the-spark-executors
