Question
I have a CDH4.3 all-in-one VM up and running, and I am trying to set up a Hadoop client on a remote machine. I noticed that, without changing any default settings, my Hadoop cluster is listening on 127.0.0.1:8020:
[cloudera@localhost ~]$ netstat -lent | grep 8020
tcp 0 0 127.0.0.1:8020 0.0.0.0:* LISTEN 492 100202
[cloudera@localhost ~]$ telnet ${all-in-one vm external IP} 8020
Trying ${all-in-one vm external IP}...
telnet: connect to address ${all-in-one vm external IP}: Connection refused
[cloudera@localhost ~]$ telnet 127.0.0.1 8020
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'
My remote machine has all the configuration files (core-site.xml, hdfs-site.xml) pointing to the ${all-in-one vm external IP}. When I run something from the remote client, I get this:
└ $ ./bin/hdfs --config /home/${myself}/hadoop-2.0.0-cdh4.3.0/etc/hadoop dfs -ls
13/10/27 05:27:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Call From ubuntu/127.0.1.1 to ${all-in-one vm external IP}:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
On the all-in-one VM, I changed core-site.xml and hdfs-site.xml under /etc/hadoop/conf from localhost.localdomain to ${all-in-one vm external IP}, but after restarting HDFS, it still listens on localhost:8020. Any ideas? How can I make it listen on ${external IP}:8020 instead of localhost?
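For reference, the client-side setting that points at the NameNode is fs.defaultFS in core-site.xml (this is a sketch of what the question describes; the IP placeholder is kept as-is):

```xml
<!-- core-site.xml on the remote client -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- Must point at an address the NameNode actually binds to -->
    <value>hdfs://${all-in-one vm external IP}:8020</value>
  </property>
</configuration>
```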
Answer 1:
You should be able to directly set the property dfs.namenode.rpc-address to 0.0.0.0:8020 to make the NameNode Client IPC service listen on all interfaces, or set it to a specific IP to make it listen only on that interface.
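A minimal hdfs-site.xml fragment for the all-interfaces option described above (restart the NameNode afterwards and re-check with netstat):

```xml
<!-- /etc/hadoop/conf/hdfs-site.xml on the all-in-one VM -->
<configuration>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <!-- 0.0.0.0 binds the NameNode RPC server to all interfaces;
         replace with a specific IP to restrict it to one interface -->
    <value>0.0.0.0:8020</value>
  </property>
</configuration>
```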
That said, the change to ${all-in-one vm external IP} that you describe should also have worked; since the question does not include your exact configurations and logs, I cannot tell why it did not.
Source: https://stackoverflow.com/questions/19618242/fs-defaultfs-only-listens-to-localhosts-port-8020