Question
I am generating a hierarchy for a table by determining the parent-child relationship.
Below is the configuration I am using; even with it I still get the "Too large frame" error:
Spark properties
--conf spark.yarn.executor.memoryOverhead=1024mb \
--conf yarn.nodemanager.resource.memory-mb=12288mb \
--driver-memory 32g \
--driver-cores 8 \
--executor-cores 32 \
--num-executors 8 \
--executor-memory 256g \
--conf spark.maxRemoteBlockSizeFetchToMem=15g
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions._

lazy val sparkSession = SparkSession.builder.enableHiveSupport().getOrCreate()
import sparkSession.implicits._

// Read the employee table and spread it over more partitions up front.
// (repartition returns a new DataFrame, so the result has to be kept.)
val hiveEmp: DataFrame = sparkSession.sql("select * from db.employee").repartition(300)

val nestedLevel = 3

// Self-join nestedLevel times; each level wd{i} joins in the parent of wd{i-1}.
val empHierarchy = (1 to nestedLevel).foldLeft(hiveEmp.as("wd0")) { (wDf, i) =>
  val j = i - 1
  wDf.join(hiveEmp.as(s"wd$i"), col(s"wd$j.parent_id") === col(s"wd$i.id"), "left_outer")
}.select(
  col("wd0.id") :: col("wd0.parent_id") ::
  col("wd0.amount").as("amount") :: col("wd0.payment_id").as("payment_id") :: (
    (1 to nestedLevel).toList.map(i => col(s"wd$i.amount").as(s"amount_$i")) :::
    (1 to nestedLevel).toList.map(i => col(s"wd$i.payment_id").as(s"payment_id_$i"))
  ): _*)

empHierarchy.write.saveAsTable("employee4")
Error
Caused by: org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:204)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
... 3 more
Caused by: org.apache.spark.shuffle.FetchFailedException: Too large frame: 5454002341
at org.apache.spark.storage.ShuffleBlockFetcherIterator.throwFetchFailedException(ShuffleBlockFetcherIterator.scala:361)
at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:336)
Answer 1:
Set the Spark config spark.maxRemoteBlockSizeFetchToMem to a value below 2g.
Since there are many issues with partitions larger than 2 GB (they cannot be shuffled and cannot be cached on disk), Spark throws FetchFailedException: Too large frame.
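For illustration, a minimal sketch of setting this property when building the session (the 1g value and the app name are placeholders; any value below 2 GB has the same effect, and the property has to be in place before the SparkContext is created):

import org.apache.spark.sql.SparkSession

// Sketch: remote shuffle blocks bigger than this threshold are fetched to disk
// instead of into memory, so they never hit the ~2 GB frame limit.
val spark = SparkSession.builder
  .appName("employee-hierarchy")                        // placeholder app name
  .config("spark.maxRemoteBlockSizeFetchToMem", "1g")   // any value below 2 GB
  .enableHiveSupport()
  .getOrCreate()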
Answer 2:
Suresh is right. Here's a better documented & formatted version of his answer with some useful background info:
- bug report (link to the fix is at the very bottom)
- fix (fixed as of 2.2.0 - already mentioned by Jared)
- change of config's default value (changed as of 2.4.0)
If you're on version 2.2.x or 2.3.x, you can achieve the same effect by setting the config to Int.MaxValue - 512, i.e. by setting spark.maxRemoteBlockSizeFetchToMem=2147483135. See here for the default value used as of September 2019.
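As a sketch of that workaround (following this answer; the SparkConf-based setup is just one way to pass the value before the context starts):

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Sketch: pin the threshold just below 2 GB (Int.MaxValue - 512 = 2147483135 bytes),
// the value that later became the default in 2.4.0.
val conf = new SparkConf()
  .set("spark.maxRemoteBlockSizeFetchToMem", (Int.MaxValue - 512).toString)
val spark = SparkSession.builder
  .config(conf)
  .enableHiveSupport()
  .getOrCreate()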
Answer 3:
This means that the size of your dataset partitions is enormous. You need to repartition your dataset into more partitions.
You can do this using:
df.repartition(n)
Here, n depends on the size of your dataset.
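A minimal sketch using the DataFrame name from the question (the count of 1000 is only a placeholder and should be tuned to the data volume):

// Sketch: spread the joined result over more, smaller partitions before writing,
// so no single shuffle block approaches the 2 GB frame limit.
val reshuffled = empHierarchy.repartition(1000)   // 1000 is a placeholder partition count
reshuffled.write.saveAsTable("employee4")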
Answer 4:
I got the exact same error when trying to backfill a few years of data. It turns out it happens because your partitions are larger than 2 GB.
You can either bump up the number of partitions (using repartition()) so that each partition stays under 2 GB; keep your partitions close to 128 MB to 256 MB, i.e. close to the HDFS block size.
Or you can bump up the shuffle limit to more than 2 GB, as mentioned above (avoid this). Partitions holding a large amount of data also result in tasks that take a long time to finish.
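A minimal sketch of that sizing rule, assuming you can estimate the volume of data being shuffled (the 500 GB estimate and the 256 MB target are illustrative, not measured values):

// Sketch: derive a partition count so each partition lands in the 128-256 MB range.
val estimatedBytes       = 500L * 1024 * 1024 * 1024   // hypothetical: ~500 GB to shuffle
val targetPartitionBytes = 256L * 1024 * 1024          // aim for ~256 MB per partition
val numPartitions        = math.max(1, (estimatedBytes / targetPartitionBytes).toInt)

val sized = empHierarchy.repartition(numPartitions)    // empHierarchy from the question
sized.write.saveAsTable("employee4")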
Note: repartition(n) will result in n part files in the output when writing to S3/HDFS.
Read this for more info: http://www.russellspitzer.com/2018/05/10/SparkPartitions/
Source: https://stackoverflow.com/questions/51278275/spark-failure-caused-by-org-apache-spark-shuffle-fetchfailedexception-too-la