Failed to bind to: spark-master, using a remote cluster with two workers

爱⌒轻易说出口 submitted on 2019-11-28 21:15:09

Setting the environment variable SPARK_LOCAL_IP=127.0.0.1 solved this for me.
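A minimal sketch of how that might look (assuming a standard Spark installation under $SPARK_HOME):

export SPARK_LOCAL_IP=127.0.0.1
$SPARK_HOME/bin/spark-shell

The same line can also be added to $SPARK_HOME/conf/spark-env.sh so it applies to every launch.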

I had this problem when my /etc/hosts file was mapping the wrong IP address to my local hostname.
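For illustration only (the hostname and address below are placeholders, not values from the question), the fix is to make the /etc/hosts entry for your hostname point at the address the machine actually uses:

# /etc/hosts
127.0.0.1      localhost
192.168.0.10   my-hostname   # must be the interface's real IP, not a stale one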

The BindException in your logs complains about the IP address 192.168.0.191. I assume your machine's hostname resolves to that address, and that it is not the actual IP address your network interface is using. It should work fine once you fix that.
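Assuming standard Linux tooling, you can check what the hostname resolves to and compare it with the interface's actual address:

hostname
getent hosts $(hostname)
ip addr show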

AvkashChauhan

I had Spark working on my EC2 instance. I started a new web server, and to meet its requirements I had to change the hostname to the EC2 public DNS name, i.e.

hostname ec2-54-xxx-xxx-xxx.compute-1.amazonaws.com

After that, Spark could no longer start and showed the error below:

16/09/20 21:02:22 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/20 21:02:22 ERROR SparkContext: Error initializing SparkContext.

I solved it by setting SPARK_LOCAL_IP as below:

export SPARK_LOCAL_IP="localhost"

then just launched the Spark shell as below:

$SPARK_HOME/bin/spark-shell

Possibly your master is running on a non-default port. Can you post your submit command? Have a look at https://spark.apache.org/docs/latest/spark-standalone.html#connecting-an-application-to-the-cluster
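As a hypothetical example (the host and port below are placeholders), the master URL passed on submit must match the host:port shown at the top of the master's web UI; 7077 is only the default:

$SPARK_HOME/bin/spark-submit \
  --master spark://spark-master:7077 \
  --class com.example.MyApp \
  myapp.jar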
