unable to install pyspark


Question


I am trying to install pyspark like this:

python setup.py install

I get this error:

Could not import pypandoc - required to package PySpark

pypandoc is already installed.

Any ideas on how I can install pyspark?


Answer 1:


I faced the same issue and solved it as below: install pypandoc before installing pyspark.

pip install pypandoc
pip install pyspark
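
If the install succeeds, a quick sanity check (a one-liner sketch; it assumes pip installed pyspark for the same interpreter you are calling):

python -c "import pyspark; print(pyspark.__version__)"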



Answer 2:


You need to use findspark or spark-submit to use pyspark. After installing Scala and Java, download Apache Spark and put it in some folder. Then try one of these two ways. In the shell:

pip install findspark

In code:

import findspark
findspark.init('pathToSpark')  # replace with the path to your Spark folder
import pyspark  # should now import without errors

Or submit the script in the shell:

/path/to/spark/bin/spark-submit somecode.py
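
For the spark-submit route, somecode.py has to create its own SparkContext; here is a minimal sketch (the filename comes from the answer above, the contents are purely illustrative):

# somecode.py
from pyspark import SparkContext

sc = SparkContext(appName="somecode")  # spark-submit supplies the master
rdd = sc.parallelize(range(10))        # tiny RDD just to exercise Spark
print(rdd.sum())                       # prints 45 if everything works
sc.stop()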



Answer 3:


Steps to install PySpark API for jupyter notebook:

  1. Go to https://spark.apache.org/downloads.html to download the latest Spark. The file will be downloaded in .tgz format. Extract this tgz file into the directory where you want to install PySpark.

  2. After extracting the tgz file, you will need the Hadoop winutils binary, because Apache Spark requires Hadoop on Windows. Download 'winutils.exe' from https://github.com/steveloughran/winutils/blob/master/hadoop-2.7.1/bin/winutils.exe and copy it into the 'bin/' directory of your Spark installation (spark-2.2.0-bin-hadoop2.7/bin).

  3. If you have Anaconda installed, there will be a .condarc file in your user folder under C:\Users\. Open it and change ssl_verify from true to false. This lets you install Python libraries directly from the prompt (useful if you are on a restricted network).

  4. Open the Anaconda prompt and type 'conda install findspark' to install the findspark Python module. If you are not able to install it, go to https://github.com/minrk/findspark, download the ZIP, extract it, open the Anaconda prompt, go to the extracted path, and run 'python setup.py install'.

  5. Open This PC >> Properties >> Advanced System Settings (you need admin access for that). Click on Environment Variables and then add new user environment variables.

  6. After creating the 4 user variables and adding the Spark path to the 'PATH' system variable, open a Jupyter notebook and run this code:

    import findspark
    findspark.init()  # locate Spark via the SPARK_HOME variable set above
    import pyspark
    from pyspark.sql import SQLContext
    from pyspark import SparkContext

    sc = SparkContext("local", "First App")  # local master, app name
    sqlContext = SQLContext(sc)
    

    If you don't get any errors, the installation has completed successfully.
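
    As an extra check, you can push a little data through the SQLContext created above (a minimal sketch; the sample rows and column names are made up):

    df = sqlContext.createDataFrame([(1, 'spark'), (2, 'pyspark')], ['id', 'name'])
    df.show()  # prints a two-row table if everything is wired up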




Answer 4:


If you are using Windows, follow these steps:
1) Install the JDK from this link: https://www.oracle.com/technetwork/java/javase/downloads/index.html

2) Set the environment variable JAVA_HOME=/path/where/you/installed/jdk, then add %JAVA_HOME%\bin to PATH.

3) Download Spark from https://spark.apache.org/downloads.html. The file comes compressed, with a name like spark-2.3.1-bin-hadoop2.7.tgz; extract it, move the resulting folder to the C: directory, and set the environment variable

SPARK_HOME=/path/to/spark

4) Download the Scala IDE from http://scala-ide.org/, extract the file, and copy the Eclipse folder to the C: directory.

5) Now open cmd and type spark-shell; it will open the Scala shell for you.
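
Since the question is about PySpark, note that the same bin folder also contains a pyspark launcher (pyspark.cmd on Windows); assuming the folder name from step 3, running it should open the Python shell instead:

C:\spark-2.3.1-bin-hadoop2.7\bin\pyspark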




Answer 5:


2018 version:

Install PySpark on Windows 10 for Jupyter Notebook with Anaconda Navigator.

STEP 1

Download Packages

1) spark-2.2.0-bin-hadoop2.7.tgz

2) Java JDK 8

3) Anaconda v5.2

4) scala-2.12.6.msi

5) Hadoop v2.7.1

STEP 2

Create a SPARK folder in the C:/ drive, extract Hadoop and Spark into it, and install Scala there using scala-2.12.6.msi. Going by the environment variables below, the resulting layout is C:\spark\hadoop, C:\spark\scala, and C:\spark\spark.

Note: During installation of Scala, specify the C:/Spark folder

STEP 3

Now set the Windows environment variables:

  1. HADOOP_HOME=C:\spark\hadoop

  2. JAVA_HOME=C:\Program Files\Java\jdk1.8.0_151

  3. SCALA_HOME=C:\spark\scala\bin

  4. SPARK_HOME=C:\spark\spark\bin

  5. PYSPARK_PYTHON=C:\Users\user\Anaconda3\python.exe

  6. PYSPARK_DRIVER_PYTHON=C:\Users\user\Anaconda3\Scripts\jupyter.exe

  7. PYSPARK_DRIVER_PYTHON_OPTS=notebook

  8. Now add Spark to the PATH:

    Click on Edit and add a new entry

    Add "C:\spark\spark\bin" to the "Path" variable

STEP 4

  • Make a folder where you want to store your Jupyter notebook outputs and files
  • After that, open the Anaconda command prompt and cd into that folder
  • Then enter pyspark

That's it: your browser will pop up with Jupyter on localhost.

STEP 5

Check whether PySpark is working.

Type some simple code and run it:

from pyspark.sql import Row
a = Row(name = 'Vinay' , age=22 , height=165)
print("a: ",a)



Answer 6:


As of version 2.2, you can install pyspark directly using pip:

pip install pyspark
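
A quick smoke test for the pip-installed package (a minimal sketch; SparkSession has been the standard entry point since Spark 2.0):

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("smoke-test").getOrCreate()
print(spark.version)  # should match the version pip installed
spark.stop()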



Answer 7:


Try installing pypandoc under Python 3 with pip3 install pypandoc.



Source: https://stackoverflow.com/questions/51500288/unable-to-install-pyspark
