Why does pyspark fail with “Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder'”?


Question


For the life of me I cannot figure out what is wrong with my PySpark install. I have installed all dependencies, including Hadoop, but PySpark can't find it. Am I diagnosing this correctly?

See the full error message below; ultimately it fails in PySpark SQL:

pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

nickeleres@Nicks-MBP:~$ pyspark
Python 2.7.10 (default, Feb  7 2017, 00:08:15) 
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/opt/spark-2.2.0/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
17/10/24 21:21:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
Traceback (most recent call last):
  File "/opt/spark/python/pyspark/shell.py", line 45, in <module>
    spark = SparkSession.builder\
  File "/opt/spark/python/pyspark/sql/session.py", line 179, in getOrCreate
    session._jsparkSession.sessionState().conf().setConfString(key, value)
  File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/opt/spark/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
>>> 

Answer 1:


tl;dr Close all the other Spark processes and start over.

The following WARN messages say that another process (or several processes) is already holding the ports.

I'm sure that the process(es) are Spark processes, e.g. pyspark sessions or Spark applications.

17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.

That's why, once Spark/pyspark finally found a free port for the web UI, it went on to instantiate HiveSessionStateBuilder and failed.

pyspark failed because you cannot have more than one Spark application up and running that uses the same local Hive metastore.
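
If one of those lingering sessions lives in your own Python process (for example from an earlier notebook cell), a minimal sketch of shutting it down before starting over might look like the following; nothing in it comes from the original answer, and otherwise you should simply quit the other pyspark shells or kill the other Spark applications first.

from pyspark.sql import SparkSession

# getActiveSession() returns the session already running in this process,
# or None if there is none (available from Spark 3.0; on Spark 2.x keep a
# reference to the session you created and call stop() on it instead).
active = SparkSession.getActiveSession()
if active is not None:
    # Stopping it releases the SparkUI port and the local Hive metastore
    # lock, so a fresh pyspark shell or application can start cleanly.
    active.stop()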




Answer 2:


Why does this happen?

Because we try to create a new Spark session more than once, for example in different browser tabs of a Jupyter notebook.

Solution:

Start the session in a single Jupyter notebook tab and avoid creating new sessions in different tabs:

from pyspark.sql import SparkSession

# Reuses the already-running session if there is one; only creates a new one otherwise.
spark = SparkSession.builder.appName('EXAMPLE').getOrCreate()
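
As a hedged illustration, continuing from the snippet above (not part of the original answer): calling the builder again in the same Python process does not start a second session, it simply hands back the one that already exists, which is why reusing getOrCreate() is safe while spawning separate sessions from separate tabs is not.

# Calling the builder a second time in the same process returns the
# session created above rather than instantiating a new one.
spark_again = SparkSession.builder.appName('EXAMPLE').getOrCreate()
assert spark_again is spark  # the same underlying session is reused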



Answer 3:


Another possible cause is that the Spark application failed to start because the cluster's minimum resource requirements were not met.

In the Application history tab:

Diagnostics:Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=5, maxVirtualCores=4
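
A hedged sketch of keeping the request within that limit, assuming the 4-core maximum reported above; spark.executor.cores and spark.driver.cores are standard Spark configuration keys, but the values here are purely illustrative.

from pyspark.sql import SparkSession

# Ask for no more virtual cores than YARN is configured to allocate
# (maxVirtualCores=4 in the diagnostics above); otherwise the application
# is rejected before the Hive session state can even be built.
spark = (
    SparkSession.builder
    .appName('EXAMPLE')
    .config('spark.executor.cores', '4')  # must be <= yarn.scheduler.maximum-allocation-vcores
    .config('spark.driver.cores', '1')
    .getOrCreate()
)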


Source: https://stackoverflow.com/questions/46924010/why-does-pyspark-fail-with-error-while-instantiating-org-apache-spark-sql-hive
