Can Apache Spark run without Hadoop?


Are there any dependencies between Spark and Hadoop?

If not, are there any features I'll miss when I run Spark without Hadoop?

10 Answers
  •  不思量自难忘°
     2020-12-12 11:00

    Spark is an in-memory distributed computing engine.

    Hadoop is a framework for distributed storage (HDFS) and distributed processing (YARN).

    Spark can run with or without the Hadoop components (HDFS/YARN).
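
    To make this concrete, here is a minimal sketch of a Spark session that never touches HDFS or YARN: it runs in local mode against the local filesystem. The input path is a placeholder, not something from the question.

    ```python
    from pyspark.sql import SparkSession

    # Local mode: no HDFS, no YARN. Spark runs on local threads and reads
    # from the local filesystem. The input path below is a placeholder.
    spark = (SparkSession.builder
             .master("local[*]")               # use all local cores
             .appName("spark-without-hadoop")
             .getOrCreate())

    df = spark.read.text("file:///tmp/input.txt")  # plain local file, no HDFS
    print(df.count())

    spark.stop()
    ```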


    Distributed Storage:

    Since Spark does not have its own distributed storage system, it has to rely on one of the following storage systems for distributed computing (see the S3 sketch after this list):

    S3 – Works for non-urgent batch jobs; it fits use cases where data locality isn't critical.

    Cassandra – Perfect for streaming data analysis, but overkill for batch jobs.

    HDFS – Great fit for batch jobs without compromising on data locality.
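
    As a sketch of the S3 option: the DataFrame API stays the same and only the URI scheme changes. This assumes the hadoop-aws connector (with a matching AWS SDK) is on the classpath; the credentials and bucket below are placeholders.

    ```python
    from pyspark.sql import SparkSession

    # Reading from S3 instead of HDFS. Assumes the hadoop-aws connector is
    # available; the keys and bucket name are placeholders.
    spark = (SparkSession.builder
             .appName("spark-on-s3")
             .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
             .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
             .getOrCreate())

    # Only the URI scheme changes; the read API is identical across backends.
    df = spark.read.json("s3a://your-bucket/events/")
    df.show(5)

    spark.stop()
    ```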


    Distributed processing:

    You can run Spark under three different cluster managers: Standalone, YARN, and Mesos (see the sketch below).
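
    The same application code runs under any of the three; only the master URL passed to the session builder changes. A rough sketch, with placeholder hostnames and ports:

    ```python
    from pyspark.sql import SparkSession

    # Identical application code for every cluster manager; only the master
    # URL differs. Hostnames and ports below are placeholders.
    spark = (SparkSession.builder
             .appName("cluster-manager-demo")
             .master("spark://master-host:7077")    # Standalone
             # .master("yarn")                      # YARN (requires HADOOP_CONF_DIR)
             # .master("mesos://mesos-host:5050")   # Mesos
             .getOrCreate())

    print(spark.sparkContext.master)
    spark.stop()
    ```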

    Have a look at the SE question below for a detailed explanation of both distributed storage and distributed processing.

    Which cluster type should I choose for Spark?
