ConnectException: Connection refused when running MapReduce in Hadoop

Submitted by 泪湿孤枕 on 2019-12-11 06:59:14

Question


I set up Hadoop (2.6.0) in multi-machine mode: 1 namenode + 3 datanodes. When I ran the command start-all.sh, all of the daemons (namenode, datanode, resource manager, node manager) started fine. I checked with the jps command, and the results on each node were as below (a one-loop way to gather these is sketched after the listing):

NameNode:

7300 ResourceManager
6942 NameNode
7154 SecondaryNameNode

DataNodes:

3840 DataNode
3924 NodeManager
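
For reference, a quick way to collect this output from every node in one loop (a sketch, assuming passwordless SSH and the hostnames from my /etc/hosts below):

for host in hadoop-nn hadoop-dn1 hadoop-dn2 hadoop-dn3; do
    echo "== $host =="
    ssh "$host" 'jps | grep -v Jps'   # list the Hadoop daemons running on each node
done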

I also uploaded a sample text file to HDFS at /user/hadoop/data/sample.txt. There were absolutely no errors at that point.
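
The upload would have looked something like this (a sketch; the local file name is an assumption):

hdfs dfs -mkdir -p /user/hadoop/data           # create the target directory in HDFS
hdfs dfs -put sample.txt /user/hadoop/data/    # copy the local file into HDFS
hdfs dfs -ls /user/hadoop/data                 # verify the upload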

But when I tried to run a MapReduce job with Hadoop's examples jar:

hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /user/hadoop/data/sample.txt /user/hadoop/output

I got this error:

15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 running    in uber mode : false
15/04/08 03:31:26 INFO mapreduce.Job:  map 0% reduce 0%
15/04/08 03:31:26 INFO mapreduce.Job: Job job_1428478232474_0001 failed with     state FAILED due to: Application application_1428478232474_0001 failed 2 times due to Error launching appattempt_1428478232474_0001_000002. Got exception: java.net.ConnectException: Call From hadoop/127.0.0.1 to localhost:53245 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy31.startContainers(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 9 more
Failing the application.
15/04/08 03:31:26 INFO mapreduce.Job: Counters: 0
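
The key detail is the line "Call From hadoop/127.0.0.1 to localhost:53245": the ResourceManager is trying to launch the ApplicationMaster on a NodeManager that registered itself as localhost. One standard check (not in the original question) is to list how the nodes registered:

yarn node -list -all

If the node IDs show up as localhost:<port>, hostname resolution is the likely suspect, which is indeed what the accepted answer below found.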

About the configuration: the namenode can definitely SSH to the datanodes and vice versa without a password prompt. I also disabled IPv6 and modified the /etc/hosts file:

127.0.0.1 localhost hadoop
192.168.56.102 hadoop-nn
192.168.56.103 hadoop-dn1
192.168.56.104 hadoop-dn2
192.168.56.105 hadoop-dn3

I don't know why MapReduce can't run although the namenode and datanodes work fine. I'm stuck here; can you help me find the reason?

Thank you

Edit: Here is the config in hdfs-site.xml (on the namenode):

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage.</description>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>
</property>
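
For completeness: nothing in hdfs-site.xml sets the filesystem URI; that lives in core-site.xml, and it should point at the namenode hostname rather than localhost. A minimal sketch of the relevant property (the port 9000 here is an assumption, not taken from my actual config):

<!-- hypothetical core-site.xml fragment; the port is an assumption -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-nn:9000</value>
</property>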

On the datanodes:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hadoop_stores/hdfs/data/datanode</value>
    <description>DataNode directory</description>
</property>

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
</property>
<property>
     <name>dfs.namenode.http-address</name>
     <value>hadoop-nn:50070</value>
     <description>Your NameNode hostname for http access.</description>
</property>
<property>
     <name>dfs.namenode.secondary.http-address</name>
     <value>hadoop-nn:50090</value>
     <description>Your Secondary NameNode hostname for http access.</description>

Here's the result of the command hadoop fs -ls /user/hadoop/data:

hadoop@hadoop:~/DATA$ hadoop fs -ls /user/hadoop/data
Found 2 items
-rw-r--r--   3 hadoop supergroup         29 2015-04-09 00:22 /user/hadoop/data/sample.txt
-rw-r--r--   3 hadoop supergroup         27 2015-04-09 00:22 /user/hadoop/data/sample1.txt

hadoop fs -ls /user/hadoop/output

ls: `/user/hadoop/output': No such file or directory
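
That is expected while the job keeps failing, but note that wordcount also refuses to start if the output directory already exists, so before any re-run it is worth clearing it (the -f flag just suppresses the error if the directory is absent):

hadoop fs -rm -r -f /user/hadoop/output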


Answer 1:


Found the solution! See this post: yarn shows data nodes id/name as localhost

Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:56148 failed on connection exception: java.net.ConnectException: Connection refused;

Both the master and the slaves had the hostname localhost.localdomain in /etc/hostname.
I changed the slaves' hostnames to slave1 and slave2, and that worked. Thank you everyone for your time.

@kate make sure /etc/hostname on the namenode and datanodes is not set to localhost. Just type hostname in a terminal to see the current value; you can set a new one with the same command.
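
A minimal sketch of checking and changing it on Debian/Ubuntu (the name worker1 is just an example):

hostname                                  # print the current hostname
sudo hostname worker1                     # change it for the running session
echo worker1 | sudo tee /etc/hostname     # persist it across reboots

Then restart the Hadoop daemons so they re-register under the new name.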

The /etc/hosts on my master and workers (slaves) looks like this:

127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
#127.0.1.1    localhost
192.168.111.72  master
192.168.111.65  worker1
192.168.111.66  worker2
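
To check that each name resolves to the real address rather than the loopback (a generic sanity check, not from the original answer):

getent hosts worker1    # should print 192.168.111.65, not 127.0.0.1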

The hostname of worker1:

hduser@worker1:/mnt/hdfs/datanode$ cat /etc/hostname 
worker1

and of worker2:

hduser@worker2:/usr/local/hadoop/logs$ cat /etc/hostname 
worker2

Also, you probably don't want the "hadoop" hostname mapped to the loopback interface, i.e. this line:

127.0.0.1 localhost hadoop 

See point (1) in https://wiki.apache.org/hadoop/ConnectionRefused.
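
To verify which address a daemon is actually bound to (again a generic check), inspect the listening sockets on each node:

sudo netstat -tlnp | grep java    # listening TCP ports owned by the Java (Hadoop) daemons

If the NodeManager's ports show 127.0.0.1 instead of the node's real IP, the hostname mapping still needs fixing.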

Thank you.




Answer 2:


FIREWALL ISSUE:

java.net.ConnectException: Connection refused

This error might be due to firewall issues. Run the following in a terminal:

sudo apt-get install iptables-persistent
sudo iptables -L
sudo mkdir -p /usr/iptables-backup          # make sure the backup directory exists
sudo sh -c 'iptables-save > /usr/iptables-backup/iptables.v4.rules'   # the redirect must run as root, hence sh -c

Check that the backup file was actually created before continuing (it will be used to restore the firewall if something goes wrong).

Now, flush the iptables rules (i.e. stop the firewall):

sudo iptables -F

Now run:

sudo iptables -L

This command should list no rules. Now try running your map/reduce job again.
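
A less drastic alternative (my own sketch, assuming the cluster lives on the 192.168.56.0/24 subnet from the question) is to allow traffic from the cluster nodes instead of dropping every rule:

sudo iptables -I INPUT -s 192.168.56.0/24 -j ACCEPT    # trust the cluster subnet; inserted at the top of the chain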

Note: if you want to restore iptables to its previous state, type this in a terminal:

sudo iptables-restore < /usr/iptables-backup/iptables.v4.rules



Source: https://stackoverflow.com/questions/29508847/connectexception-connection-refused-when-run-mapreduce-in-hadoop
