Running pyspark after pip install pyspark

遇见更好的自我 2020-12-14 03:12

I wanted to install pyspark on my home machine. I did

pip install pyspark
pip install jupyter

Both seemed to work well.

4 Answers
  • 2020-12-14 03:24

    To install Spark, make sure you have Java 8 or higher installed. Then go to the Spark downloads page, select the latest Spark release (pre-built package for Hadoop), and download it. Unzip the file and move it to /opt (or any other folder, but remember where you moved it):

    mv spark-2.4.4-bin-hadoop2.7 /opt/spark-2.4.4
    

    Then create a symbolic link. This way you can download and use multiple Spark versions:

    ln -s /opt/spark-2.4.4 /opt/spark
    

    Add the following to your .bash_profile to tell bash where to find Spark:

    export SPARK_HOME=/opt/spark
    export PATH=$SPARK_HOME/bin:$PATH
    

    Finally, to set up Spark to use Python 3, add the following to the /opt/spark/conf/spark-env.sh file:

    export PYSPARK_PYTHON=/usr/local/bin/python3
    export PYSPARK_DRIVER_PYTHON=python3
    
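    A quick way to verify the setup is to start pyspark (or a plain python3 session with the pip-installed package) and run a trivial job. The snippet below is a minimal smoke test, assuming the SPARK_HOME, PATH and PYSPARK_PYTHON settings above are in place; the app name is arbitrary.

    # minimal smoke test, assuming the environment variables above are exported
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("smoke-test").getOrCreate()
    print(spark.version)              # should match the downloaded release, e.g. 2.4.4
    print(spark.range(10).count())    # tiny local job; expected output: 10
    spark.stop()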
  • 2020-12-14 03:26

    I just faced the same issue, but it turned out that pip install pyspark downloads a Spark distribution that works well in local mode. Pip just doesn't set an appropriate SPARK_HOME. But when I set this manually, pyspark works like a charm (without downloading any additional packages).

    $ pip3 install --user pyspark
    Collecting pyspark
      Downloading pyspark-2.3.0.tar.gz (211.9MB)
        100% |████████████████████████████████| 211.9MB 9.4kB/s 
    Collecting py4j==0.10.6 (from pyspark)
      Downloading py4j-0.10.6-py2.py3-none-any.whl (189kB)
        100% |████████████████████████████████| 194kB 3.9MB/s 
    Building wheels for collected packages: pyspark
      Running setup.py bdist_wheel for pyspark ... done
      Stored in directory: /home/mario/.cache/pip/wheels/4f/39/ba/b4cb0280c568ed31b63dcfa0c6275f2ffe225eeff95ba198d6
    Successfully built pyspark
    Installing collected packages: py4j, pyspark
    Successfully installed py4j-0.10.6 pyspark-2.3.0
    
    $ PYSPARK_PYTHON=python3 SPARK_HOME=~/.local/lib/python3.5/site-packages/pyspark pyspark
    Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
    [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    2018-03-31 14:02:39 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /__ / .__/\_,_/_/ /_/\_\   version 2.3.0
          /_/
    
    Using Python version 3.5.2 (default, Nov 23 2017 16:37:01)
    >>>
    
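    The same local-mode setup can also be used from an ordinary Python script or notebook instead of the pyspark shell. The sketch below is illustrative only and assumes SPARK_HOME points at the pip-installed pyspark directory, as in the session above; local[*] just means "run on all local cores".

    # local-mode sketch; assumes SPARK_HOME is set to the pip-installed pyspark
    # directory (e.g. ~/.local/lib/python3.5/site-packages/pyspark), as shown above
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("local[*]")      # run everything inside this process, on all local cores
             .appName("pip-pyspark-local")
             .getOrCreate())

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
    df.show()                         # prints the two-row DataFrame
    spark.stop()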
  • 2020-12-14 03:27

    Pyspark from PyPI (i.e. installed with pip) does not contain the full Pyspark functionality; it is only intended for use with a Spark installation in an existing cluster [EDIT: or in local mode only - see accepted answer]. From the docs:

    The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to setup your own standalone Spark cluster. You can download the full version of Spark from the Apache Spark downloads page.

    NOTE: If you are using this with a Spark standalone cluster you must ensure that the version (including minor version) matches or you may experience odd errors

    You should download a full Spark distribution as described here.
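    For the "existing cluster" use case the docs describe, the pip-installed package is enough to act as a client. The sketch below is only an illustration: spark://master-host:7077 is a placeholder for a real standalone master URL, and as the note above says, the client and cluster Spark versions must match.

    # client-only sketch: submit work from the pip-installed pyspark to an existing cluster
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("spark://master-host:7077")   # placeholder standalone master URL
             .appName("existing-cluster-client")
             .getOrCreate())

    print(spark.sparkContext.master)  # confirms which master this session is attached to
    spark.stop()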

  • 2020-12-14 03:41

    If you are on Python 3.0+, open an Anaconda prompt and execute the command below:

    pip3 install --user pyspark
