Spark: Exception in thread "main" org.apache.spark.sql.catalyst.errors.package

Submitted by ╄→尐↘猪︶ㄣ on 2021-02-05 04:43:14

Question


While running my spark-submit job, I get the error below. It is a Scala file that performs joins.

I am just curious to know what this TreeNodeException error is.

Why does this error occur?

Please share your ideas on this TreeNodeException error:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:

Answer 1:


I encountered this exception when joining dataframes too:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:

To fix it, I simply reversed the order of the join. That is, instead of doing df1.join(df2, on="A"), I did df2.join(df1, on="A"). I am not sure why this is the case, but my intuition tells me the logical plan tree that Spark builds is messier for the former command but not for the latter. You can think of it in terms of the number of comparisons Spark would have to make on column "A" in my toy example to join both dataframes. I know it's not a definitive answer, but I hope it helps.
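The workaround above can be sketched in PySpark as follows. The dataframe names, column "A", and the sample rows are all placeholders for illustration, not taken from the original question:

    from pyspark.sql import SparkSession

    # Minimal local sketch of the join-order workaround described above.
    spark = SparkSession.builder.appName("JoinOrderSketch").getOrCreate()

    # "df1" and "df2" stand in for the dataframes from the question.
    df1 = spark.createDataFrame([(1, "x"), (2, "y")], ["A", "left_val"])
    df2 = spark.createDataFrame([(1, "p"), (2, "q")], ["A", "right_val"])

    # Original order that triggered the TreeNodeException for the answerer:
    # joined = df1.join(df2, on="A")

    # Reversed order that worked around it:
    joined = df2.join(df1, on="A")
    joined.show()

Whether the reversal helps depends on the query planner's choices for each side, so treat this as an experiment to try rather than a guaranteed fix.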




Answer 2:


OK, so the stack trace given above is not sufficient to understand the root cause, but since you mentioned you are using a join, that is most probably where it's happening. I faced the same issue with a join; if you dig into your stack trace, you will see something like this:

+- *HashAggregate(keys=[], functions=[partial_count(1)], output=[count#73300L])
   +- *Project
      +- *BroadcastHashJoin
...
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]

This gives a hint about why it's failing: Spark tries to join using a broadcast hash join, which has both a timeout and a broadcast size threshold, and exceeding either causes the error above. To fix it, depending on the underlying error:

Increase "spark.sql.broadcastTimeout" (the default is 300 seconds):

spark = SparkSession \
    .builder \
    .appName("AppName") \
    .config("spark.sql.broadcastTimeout", "1800") \
    .getOrCreate()

Or increase the broadcast threshold (the default is 10 MB, i.e. 10485760 bytes):

spark = SparkSession \
    .builder \
    .appName("AppName") \
    .config("spark.sql.autoBroadcastJoinThreshold", "20485760") \
    .getOrCreate()

Or disable broadcast joins entirely by setting the value to -1:

spark = SparkSession \
    .builder \
    .appName("AppName") \
    .config("spark.sql.autoBroadcastJoinThreshold", "-1") \
    .getOrCreate()

More details can be found here: https://spark.apache.org/docs/latest/sql-performance-tuning.html
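As an aside, the same properties can also be passed at submit time instead of being hard-coded in the application. This is a hedged config fragment; "your_app.py" is a placeholder for your actual script:

    # Pass the broadcast-join settings via spark-submit rather than in code.
    spark-submit \
      --conf spark.sql.broadcastTimeout=1800 \
      --conf spark.sql.autoBroadcastJoinThreshold=-1 \
      your_app.py

Settings supplied this way take effect for the whole session without recompiling or editing the application.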



Source: https://stackoverflow.com/questions/46930591/spark-exception-in-thread-main-org-apache-spark-sql-catalyst-errors-package
