How to add any new library like spark-csv in Apache Spark prebuilt version

死守一世寂寞 2020-12-12 19:35

I have built spark-csv and am able to use it from the pyspark shell with the following command

bin/spark-shell --packages com.databricks:spark-csv_2.10:1         


        
6 Answers
  •  执念已碎
    2020-12-12 20:06

    Instead of placing the jars in any specific folder, a simple fix is to start the pyspark shell with the following argument:

    bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3
    

    This will automatically load the required spark-csv jars.

    Then do the following to read the csv file:

    from pyspark.sql import SQLContext

    # Spark 1.x: create an SQLContext from the existing SparkContext (sc)
    sqlContext = SQLContext(sc)

    # Read via the spark-csv data source; header='true' uses the first row as column names
    df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('file.csv')
    df.show()
    
