How to use Avro on HDInsight Spark/Jupyter?

Submitted 2020-01-24 03:39:10

Question


I am trying to read an Avro file in an HDInsight Spark/Jupyter cluster but got:

u'Failed to find data source: com.databricks.spark.avro. Please find an Avro package at http://spark.apache.org/third-party-projects.html;'
Traceback (most recent call last):
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/readwriter.py", line 159, in load
    return self._df(self._jreader.load(path))
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
AnalysisException: u'Failed to find data source: com.databricks.spark.avro. Please find an Avro package at http://spark.apache.org/third-party-projects.html;'

when running:

df = spark.read.format("com.databricks.spark.avro").load("wasb://containername@aaa...aaa.blob.core.windows.net/...")

How do I resolve this? It seems like I need to install the package, but how can I do that on HDInsight?


Answer 1:


Follow the steps in this article:

https://docs.microsoft.com/en-in/azure/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages

For HDInsight 3.3 and HDInsight 3.4

Add the following cell to your notebook:

%%configure 
{ "packages":["com.databricks:spark-avro_2.10:0.1"] }

For HDInsight 3.5

Add the following cell to your notebook:

%%configure
{ "conf": {"spark.jars.packages": "com.databricks:spark-avro_2.10:0.1" }}

For HDInsight 3.6

Add the following cell to your notebook:

%%configure
{ "conf": {"spark.jars.packages": "com.databricks:spark-avro_2.11:4.0.0" }}
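Note that the %%configure cell must run before any Spark code, because it (re)creates the Livy session with the extra package; if the session has already started, use %%configure -f to force a restart. Once the package is loaded, the original read should work. A minimal sketch for an HDInsight 3.6 / Spark 2.x notebook follows; the container name, storage account, and file path are placeholders you must replace with your own:

```python
# Run this in a notebook cell AFTER the %%configure cell above has taken effect.
# "spark" is the SparkSession that HDInsight Jupyter kernels provide automatically.
# The wasb:// path below is a placeholder -- substitute your container, account, and path.
df = spark.read.format("com.databricks.spark.avro") \
    .load("wasb://mycontainer@mystorageaccount.blob.core.windows.net/path/to/data.avro")

df.printSchema()   # inspect the schema inferred from the Avro file
df.show(5)         # preview the first few rows
```

If the AnalysisException persists, the package was not attached to the session; restart the kernel so the %%configure cell runs first.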


Source: https://stackoverflow.com/questions/49596821/how-to-use-avro-on-hdinsight-spark-jupyter
