Datanode process not running in Hadoop

2020-12-04 09:02

I set up and configured a multi-node Hadoop cluster using this tutorial.

When I type in the start-all.sh command, it shows all the processes initializing properly, but the DataNode process is not running.

30 answers
  • 2020-12-04 09:21

    Check whether the hadoop.tmp.dir property in core-site.xml is set correctly. If it is set, navigate to that directory and remove or empty it. If it is not set, navigate to its default folder, /tmp/hadoop-${user.name}, and likewise remove or empty that directory.
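
    A minimal shell sketch of that check, assuming a 2.x layout where core-site.xml lives under ${HADOOP_HOME}/etc/hadoop (in 1.x it sits under conf/) and using /tmp/hadoop-${USER} to approximate the default /tmp/hadoop-${user.name}:

      # Show the configured hadoop.tmp.dir value, if the property is set at all
      grep -A1 "hadoop.tmp.dir" ${HADOOP_HOME}/etc/hadoop/core-site.xml

      # If it is set, empty the directory that the <value> line points to instead of the default below
      rm -rf /tmp/hadoop-${USER}/*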

  • 2020-12-04 09:24

    You need to do something like this:

    • bin/stop-all.sh (or stop-dfs.sh and stop-yarn.sh in the 2.x series)
    • rm -Rf /app/tmp/hadoop-your-username/*
    • bin/hadoop namenode -format (or bin/hdfs namenode -format in the 2.x series)

    The solution was taken from http://pages.cs.brandeis.edu/~cs147a/lab/hadoop-troubleshooting/. It basically amounts to restarting from scratch, so make sure you won't lose data by formatting HDFS.
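
    As a concrete sketch for the 2.x series (run from ${HADOOP_HOME}; /app/tmp/hadoop-your-username is just the placeholder path from the steps above and should be whatever hadoop.tmp.dir points to on your machine):

      # Stop HDFS and YARN (the 2.x equivalent of stop-all.sh)
      sbin/stop-dfs.sh
      sbin/stop-yarn.sh

      # Wipe the temporary/data directory
      rm -Rf /app/tmp/hadoop-your-username/*

      # Reformat HDFS -- this destroys whatever data was stored in it
      bin/hdfs namenode -format

      # Bring the cluster back up
      sbin/start-dfs.sh
      sbin/start-yarn.sh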

  • 2020-12-04 09:24

    Follow these steps and your datanode will start again.

    1) Stop dfs.
    2) Open hdfs-site.xml.
    3) Remove the data.dir and name.dir properties from hdfs-site.xml and format the namenode again.
    4) Then start dfs again (see the sketch below).
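
    A command-line sketch of those steps, assuming the 2.x property names dfs.namenode.name.dir and dfs.datanode.data.dir and that hdfs-site.xml sits under ${HADOOP_HOME}/etc/hadoop:

      # 1) Stop dfs
      stop-dfs.sh

      # 2)/3) See which directories the two properties point to before editing them out of hdfs-site.xml
      grep -A1 -E "dfs.namenode.name.dir|dfs.datanode.data.dir" ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml

      # 3) After removing those <property> blocks, format the namenode again
      hdfs namenode -format

      # 4) Start dfs again
      start-dfs.sh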

  • 2020-12-04 09:24

    Got the same error. I tried starting and stopping dfs several times and cleared all the directories mentioned in the previous answers, but nothing helped.

    The issue was resolved only after rebooting the OS and configuring Hadoop from scratch. (Configuring Hadoop from scratch without rebooting didn't work.)

  • 2020-12-04 09:28
    1. Stop the dfs and yarn first.
    2. Remove the datanode and namenode directories as specified in the core-site.xml file.
    3. Re-create the directories.
    4. Then re-start the dfs and the yarn as follows.

      start-dfs.sh

      start-yarn.sh

      mr-jobhistory-daemon.sh start historyserver

      Hope this works fine.
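
    A rough sketch of those four steps; the two directory paths below are only placeholders and must match whatever your own configuration points to, and the namenode format is an extra step that is normally needed once the name directory has been emptied:

      # 1. Stop dfs and yarn
      stop-dfs.sh
      stop-yarn.sh

      # 2. Remove the configured datanode and namenode directories (placeholder paths)
      rm -rf /usr/local/hadoop_data/hdfs/namenode /usr/local/hadoop_data/hdfs/datanode

      # 3. Re-create them, empty
      mkdir -p /usr/local/hadoop_data/hdfs/namenode /usr/local/hadoop_data/hdfs/datanode

      # Format the namenode so it accepts the fresh, empty directory (not in the original steps, but usually required after wiping it)
      hdfs namenode -format

      # 4. Restart dfs, yarn, and the job history server
      start-dfs.sh
      start-yarn.sh
      mr-jobhistory-daemon.sh start historyserver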

  • 2020-12-04 09:28

    I applied a mix of configuration changes, and it worked for me.
    First >>
    Stop all Hadoop services using ${HADOOP_HOME}/sbin/stop-all.sh

    Second >>
    Check mapred-site.xml, which is located at ${HADOOP_HOME}/etc/hadoop/mapred-site.xml, and change localhost to master.

    Third >>
    Remove the temporary folder created by Hadoop:
    rm -rf /path/to/your/hadoop/temp/folder

    Fourth >>
    Add recursive permissions on the temp folder:
    sudo chmod -R 777 /path/to/your/hadoop/temp/folder

    Fifth >>
    Now start all the services again, and first check that every service, including the datanode, is running. A shell sketch of the five steps follows below.
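
    Put together as a sketch (the grep only shows where localhost still appears in mapred-site.xml -- the edit itself is manual -- the temp path is a placeholder, and the final jps check is an addition for verification):

      # First: stop all Hadoop services
      ${HADOOP_HOME}/sbin/stop-all.sh

      # Second: find the lines in mapred-site.xml that still point at localhost, then edit them to master
      grep -n "localhost" ${HADOOP_HOME}/etc/hadoop/mapred-site.xml

      # Third: remove Hadoop's temporary folder (placeholder path -- use your own)
      rm -rf /path/to/your/hadoop/temp/folder

      # Fourth: re-create it and add recursive permissions
      mkdir -p /path/to/your/hadoop/temp/folder
      sudo chmod -R 777 /path/to/your/hadoop/temp/folder

      # Fifth: start everything again and confirm the DataNode shows up
      ${HADOOP_HOME}/sbin/start-all.sh
      jps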
