I am getting this error when I try to boot up a DataNode. From what I have read, the RPC parameters are only used for an HA configuration, which I am not setting up (I think).
I was facing the same issue; formatting HDFS solved it. Don't format HDFS if you have important metadata.
Command for formatting HDFS: hdfs namenode -format
[Screenshot: error shown when the namenode was not working]
[Screenshot: after formatting HDFS]
I had the exact same issue and found a resolution by checking the environment on the DataNode:
$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster
Make sure that the alternatives are set correctly on the DataNodes.
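You can verify which configuration directory the alternatives system now points at (a standard update-alternatives query, added here as a sanity check):
$ update-alternatives --display hadoop-conf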
Your core-site.xml has a configuration error.
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode:8020</value>
</property>
Your fs.defaultFS is set to hdfs://namenode:8020, but your machine's hostname is datanode1. Change namenode to datanode1 and it will work.
These steps solved the problem for me:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
echo $HADOOP_CONF_DIR
hdfs namenode -format
hdfs getconf -namenodes
./start-dfs.sh
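If the DataNode still does not show up after these steps, a quick sanity check (not part of the original answer) is to ask the NameNode which DataNodes have registered:

# Lists live and dead DataNodes as seen by the NameNode
hdfs dfsadmin -report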
Creating the dfs.name.dir and dfs.data.dir directories and configuring the full hostname in core-site.xml, masters, and slaves solved my issue, as sketched below.
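A minimal sketch of those steps; the paths and the hostname here are assumptions for illustration, not values from the original answer:

# Create the local directories that dfs.name.dir and dfs.data.dir point to
mkdir -p /home/hadoop/hadoop_tmp/hdfs/namenode
mkdir -p /home/hadoop/hadoop_tmp/hdfs/datanode
# In core-site.xml, masters, and slaves, use the full hostname
# (e.g. master.example.com) rather than localhost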
This type of problem mainly arises if there is a space in the value or name of a property in any one of the following files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.
Just make sure you did not put any spaces or line breaks between the opening and closing name and value tags.
Code:
<property>
  <name>dfs.name.dir</name>
  <value>file:///home/hadoop/hadoop_tmp/hdfs/namenode</value>
  <final>true</final>
</property>