Adding to other answers.
- Is it necessary that spark is installed on all the nodes in yarn cluster?
No. When the job is submitted to YARN (in either client or cluster mode), Spark does not have to be installed on every node; the Spark runtime is shipped to the cluster along with the application. Installing Spark on all nodes is required only for Standalone mode.
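For example, the submit command is identical in both YARN modes except for the `--deploy-mode` flag (the class and jar names below are hypothetical placeholders):

```shell
# YARN cluster mode: the driver runs inside the ApplicationMaster on a cluster node
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  /path/to/my-app.jar

# YARN client mode: the driver runs on the machine that issued spark-submit
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  /path/to/my-app.jar
```

Note that neither command points at a Spark master URL; `yarn` is enough, as explained below.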
The following diagrams visualise the Spark deployment modes.
Spark Standalone Cluster

In cluster mode the driver runs on one of the Spark Worker nodes, whereas in client mode it runs on the machine that launched the job.
YARN cluster mode

YARN client mode

This table summarises the main differences between YARN cluster and YARN client mode:

| | YARN cluster mode | YARN client mode |
| --- | --- | --- |
| Driver runs | inside the ApplicationMaster on a cluster node | on the client machine that launched the job |
| Client dependence | client can disconnect after submission | client must stay alive for the job's lifetime |
| Typical use | production jobs | interactive use and debugging (e.g. spark-shell) |

- It says in the documentation "Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster". Why does the client node have to install Hadoop when it is sending the job to the cluster?
A full Hadoop installation is not mandatory on the client, but the (client-side) configuration files are. Such submit-only machines are often called gateway nodes. The configuration is needed for two main reasons:
1. These configs are used to write to HDFS and to connect to the YARN ResourceManager. The configuration contained in HADOOP_CONF_DIR will be distributed to the YARN cluster so that all containers used by the application use the same configuration.
2. In YARN mode the ResourceManager's address is picked up from the Hadoop configuration (yarn-site.xml, falling back to yarn-default.xml) rather than being passed on the command line. Thus, the --master parameter is simply yarn.
Update (2017-01-04):
Spark 2.0+ no longer requires a fat assembly jar for production deployment. source
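Instead of the old fat assembly jar, Spark 2.x can ship its regular runtime jars to YARN once and cache them there, for example via the `spark.yarn.archive` property. A sketch of one common recipe (all paths are examples):

```shell
# Bundle the jars that ship with the Spark distribution into one archive
jar cv0f spark-libs.jar -C "$SPARK_HOME/jars/" .

# Upload the archive to HDFS so YARN containers can cache it
hdfs dfs -mkdir -p /spark
hdfs dfs -put spark-libs.jar /spark/

# Reference the cached archive instead of uploading jars on every submit.
# Class and application jar names are hypothetical placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.archive=hdfs:///spark/spark-libs.jar \
  --class com.example.MyApp \
  /path/to/my-app.jar
```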