Incorrect configuration: namenode address dfs.namenode.rpc-address is not configured

白昼怎懂夜的黑 submitted on 2019-11-30 09:04:08

I too was facing the same issue and finally found that there was a space in the fs.default.name value. Trimming the space fixed the issue. The core-site.xml above doesn't seem to have a space, so the issue may be different from what I had. My 2 cents.
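For illustration, a hypothetical entry with a stray leading space in the value, which is enough to reproduce this error (host and port are made up):

<property>
 <name>fs.default.name</name>
 <value> hdfs://localhost:9000</value>
</property>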

Hamdi Charef

These steps solved the problem for me:

  • export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  • echo $HADOOP_CONF_DIR
  • hdfs namenode -format
  • hdfs getconf -namenodes (sample output below)
  • ./start-dfs.sh
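If the configuration is picked up correctly, the getconf step should print your namenode's hostname instead of the misconfiguration error; for example (the hostname here is illustrative):

hdfs getconf -namenodes
localhost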

Evidently, your core-site.xml has a configuration error.

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://namenode:8020</value>
</property>

Your fs.defaultFS is set to hdfs://namenode:8020, but your machine's hostname is datanode1. So you just need to change namenode to datanode1 and it will be OK.
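A sketch of the corrected property, keeping the 8020 port from the original snippet:

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://datanode1:8020</value>
</property>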

Check core-site.xml under the $HADOOP_INSTALL/etc/hadoop directory. Verify that the property fs.default.name is configured correctly.
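One way to check the effective value without opening the file is the standard getconf tool:

hdfs getconf -confKey fs.default.name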

user3705189

I had the exact same issue. I found a resolution by checking the environment on the Data Node:

$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster

Make sure that the alternatives are set correctly on the Data Nodes.
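To verify which configuration directory is active, the display subcommand shows the current selection:

$ sudo update-alternatives --display hadoop-conf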

Configuring the full host name in core-site.xml, masters and slaves solved the issue for me.

Old: node1 (failed)

New: node1.krish.com (succeeded)
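As a sketch, the corresponding core-site.xml value with the fully qualified name would look like this (8020 is assumed as the default RPC port); the masters and slaves files simply list one such hostname per line:

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://node1.krish.com:8020</value>
</property>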

Creating the dfs.name.dir and dfs.data.dir directories and configuring the full hostname in core-site.xml, masters, and slaves solved my issue.
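A minimal sketch of that setup, with hypothetical paths: create the directories first, then point hdfs-site.xml at them.

mkdir -p /home/hadoop/hdfs/namenode /home/hadoop/hdfs/datanode

<property>
 <name>dfs.name.dir</name>
 <value>file:///home/hadoop/hdfs/namenode</value>
</property>
<property>
 <name>dfs.data.dir</name>
 <value>file:///home/hadoop/hdfs/datanode</value>
</property>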

In my situation, I fixed it by changing the /etc/hosts entries to lower case.
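For example (the IP and hostname are hypothetical), an entry like 192.168.1.10 NameNode would be rewritten as:

192.168.1.10 namenode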

Yousef Irman

In my case, I had wrongly set HADOOP_CONF_DIR to another Hadoop installation.

Add to hadoop-env.sh:

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/
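A quick sanity check that the export points where you expect (the path is taken from this answer):

echo $HADOOP_CONF_DIR
ls $HADOOP_CONF_DIR/core-site.xml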
ranubwj

This type of problem mainly arises if there is a space in the value or name of a property in any one of the following files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.

Just make sure you did not put any spaces or line breaks between the opening and closing name and value tags.

Code:

<property>
  <name>dfs.name.dir</name>
  <value>file:///home/hadoop/hadoop_tmp/hdfs/namenode</value>
  <final>true</final>
</property>
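For contrast, a hypothetical broken version of the same entry: the line break and leading whitespace inside the value tags become part of the path and trigger the misconfiguration error.

<property>
  <name>dfs.name.dir</name>
  <value>
  file:///home/hadoop/hadoop_tmp/hdfs/namenode</value>
  <final>true</final>
</property>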
rahul mahajan

I was facing the same issue, and formatting HDFS solved it. Don't format HDFS if you have important metadata.
Command for formatting HDFS: hdfs namenode -format

[Screenshot: when the namenode was not working]

[Screenshot: after formatting HDFS]

Check your /etc/hosts file. There must be a line like the one below; if not, add it:

127.0.0.1 namenode

Replace 127.0.0.1 with your namenode's actual IP.
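A quick way to confirm the mapping resolves on Linux (the hostname follows this answer):

getent hosts namenode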
