Data Replication error in Hadoop

The solution that worked for me was to start the namenode and the datanodes one by one instead of together with bin/start-all.sh. With this approach any error is clearly visible if something goes wrong while bringing the datanodes up on the network, and many posts on Stack Overflow also suggest that the namenode needs some time to start, so it should be given a head start before the datanodes are launched. In my case I also had a namespaceID mismatch between the namenode and the datanodes, so I had to change the datanodes' ID to match the namenode's (see the sketch after the steps below).

The step-by-step procedure is:

  1. Start the namenode with bin/hadoop namenode. Check for errors, if any.
  2. Start the datanodes with bin/hadoop datanode. Check for errors, if any.
  3. Now start the TaskTracker and JobTracker using bin/start-mapred.sh.
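
For the namespaceID mismatch mentioned above, the fix is roughly the sketch below. The /path/to/name and /path/to/data directories are placeholders for whatever your dfs.name.dir and dfs.data.dir actually point to, and the VERSION file locations are those used by the old (0.x/1.x) Hadoop storage layout:

  # Stop everything first
  bin/stop-all.sh

  # See which namespaceID the namenode has recorded
  grep namespaceID /path/to/name/current/VERSION

  # Edit the datanode's VERSION file so its namespaceID matches the value above
  vi /path/to/data/current/VERSION

  # (Alternatively, wipe the datanode's data directory and let it re-register,
  # at the cost of losing the blocks it held.)
  bin/start-all.sh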

Look at your namenode (probably http://localhost:50070) and see how many datanodes it says you have.

If it is 0, then either your datanode isn't running or it isn't configured to connect to the namenode.

If it is 1, check to see how much free space it says there is in the DFS. It may be that the data node doesn't have anywhere it can write data to (data dir doesn't exist, or doesn't have write permissions).
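If you prefer the command line to the web UI, roughly the same information (live datanodes, configured capacity, remaining DFS space) can be pulled with dfsadmin; the command below uses the old bin/hadoop layout from the steps above:

  # Reports live/dead datanodes and DFS capacity/usage per node
  bin/hadoop dfsadmin -report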

Although this is already solved, I'm adding it for future readers. Cody's advice to inspect the namenode and datanode startup was useful, and further investigation led me to delete the hadoop-store/dfs directory. Doing this solved the error for me.
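
For reference, the sequence that goes with this fix looks roughly like the sketch below. The hadoop-store/dfs path is specific to that setup (it is where the dfs.name.dir and dfs.data.dir lived), wiping it destroys anything already stored in HDFS, and a freshly emptied storage directory typically needs the namenode to be re-formatted:

  bin/stop-all.sh
  rm -rf /path/to/hadoop-store/dfs   # assumed location; check your hdfs-site.xml
  bin/hadoop namenode -format        # re-format after wiping the storage directory
  bin/start-all.sh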

I had the same problem. I took a look at the datanode logs and there was a warning saying that dfs.data.dir had incorrect permissions... so I just changed them and everything worked, which is kind of odd.

Specifically, my "dfs.data.dir" was set to "/home/hadoop/hd_tmp", and the error I got was:

...
...
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /home/hadoop/hd_tmp/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxr-x
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
...
...

So I simply did the following:

  • I stopped all the daemons with bin/stop-all.sh.
  • I changed the permissions of the directory with chmod -R 755 /home/hadoop/hd_tmp.
  • I formatted the namenode again with bin/hadoop namenode -format.
  • I restarted the daemons with bin/start-all.sh.
  • And voilà, the datanode was up and running! (I checked it with the jps command, which showed a process named DataNode.)

And then everything worked fine.

In my case, I had wrongly pointed dfs.name.dir and dfs.data.dir to the same destination. The correct format is:

 <property>
   <name>dfs.name.dir</name>
   <value>/path/to/name</value>
 </property>

 <property>
   <name>dfs.data.dir</name>
   <value>/path/to/data</value>
 </property>

I tried each of the above solutions and none of them worked for me. I removed the extra properties in hdfs-site.xml and then this issue was gone. Hadoop needs to improve its error messages.

I encountered the same problem. When I looked at localhost:50070, under the cluster summary all properties were shown as 0 except "DFS Used%", which was 100. Usually this situation occurs because there are mistakes in the three *-site.xml files under HADOOP_INSTALL/conf, or in the hosts file.

In my case, the cause was that the hostname could not be resolved. I solved the problem simply by adding an "IP_Address hostname" line to /etc/hosts.
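
As an illustration (the address and hostname below are made up; use your machine's actual IP and the hostname Hadoop is configured to use), the entry looks like:

  # /etc/hosts
  192.168.1.10   hadoop-master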


In my case I had to delete the /tmp/hadoop-<user-name> folder, format the namenode again, and then start HDFS and YARN using sbin/start-dfs.sh and sbin/start-yarn.sh.
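
Spelled out as a sketch (the /tmp/hadoop-<user-name> path is only the default hadoop.tmp.dir location, and bin/hdfs namenode -format is the Hadoop 2.x spelling of the format command):

  sbin/stop-dfs.sh && sbin/stop-yarn.sh   # stop anything still running
  rm -rf /tmp/hadoop-<user-name>          # default hadoop.tmp.dir; adjust if you overrode it
  bin/hdfs namenode -format               # re-format after wiping the storage directory
  sbin/start-dfs.sh
  sbin/start-yarn.sh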
