Datanode process not running in Hadoop

慢半拍i 2020-12-04 09:02

I set up and configured a multi-node Hadoop cluster using this tutorial.

When I run the start-all.sh command, it shows all the processes initializing properly, but the DataNode process is not running.

30 Answers
  • 2020-12-04 09:17
    Erase the dfs data and name directories.

    In my case, I run Hadoop on Windows under C:/. According to core-site.xml, the directories were in tmp/Administrator/dfs/data... name, etc., so I erased them.

    Then run namenode -format and try again.
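    A minimal sketch of that sequence, assuming a Git Bash/Cygwin-style shell on Windows and the tmp/Administrator/dfs layout from above (adjust the paths to whatever your core-site.xml actually points at):

        stop-all.sh                            # stop all daemons first
        rm -rf /c/tmp/Administrator/dfs/data   # hypothetical path; check hadoop.tmp.dir in core-site.xml
        rm -rf /c/tmp/Administrator/dfs/name
        hadoop namenode -format                # re-initialize the namenode metadata
        start-all.sh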

  • 2020-12-04 09:18

    You need to check:

    /app/hadoop/tmp/dfs/data/current/VERSION and /app/hadoop/tmp/dfs/name/current/VERSION

    Compare the namespaceID recorded in those two files for the namenode and the datanode.

    The datanode will run if and only if its namespaceID is the same as the namenode's.

    If they differ, copy the namenode's namespaceID into the datanode's VERSION file using vi or gedit, save, and rerun the daemons; it will then work.
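    A small sketch of that check and fix, assuming the /app/hadoop/tmp paths above (the real locations come from your dfs.name.dir / dfs.data.dir settings):

        # compare the two namespaceID lines
        grep namespaceID /app/hadoop/tmp/dfs/name/current/VERSION
        grep namespaceID /app/hadoop/tmp/dfs/data/current/VERSION

        # if they differ, copy the namenode's ID into the datanode's VERSION file
        NSID=$(grep namespaceID /app/hadoop/tmp/dfs/name/current/VERSION | cut -d= -f2)
        sed -i "s/^namespaceID=.*/namespaceID=$NSID/" /app/hadoop/tmp/dfs/data/current/VERSION

        # restart the daemons so the datanode re-reads VERSION
        stop-all.sh && start-all.sh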

  • 2020-12-04 09:18

    Try this (a sketch of the full sequence follows the list):

    1. stop-all.sh
    2. vi hdfs-site.xml
    3. change the value of the dfs.data.dir property to a new, empty directory
    4. format the namenode (hadoop namenode -format)
    5. start-all.sh
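    A minimal sketch of those steps, using a hypothetical /usr/local/hadoop_store/hdfs/datanode2 as the new data directory:

        stop-all.sh
        # in hdfs-site.xml, point dfs.data.dir at the new directory:
        #   <property>
        #     <name>dfs.data.dir</name>
        #     <value>/usr/local/hadoop_store/hdfs/datanode2</value>
        #   </property>
        mkdir -p /usr/local/hadoop_store/hdfs/datanode2
        hadoop namenode -format
        start-all.sh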
  • 2020-12-04 09:19

    Delete the files under $hadoop_User/dfsdata and $hadoop_User/tmpdata, then run:

    hdfs namenode -format
    

    and finally run:

    start-all.sh
    

    Your problem should then be solved.
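    The whole sequence as one sketch, assuming $hadoop_User points at the directory from the answer:

        stop-all.sh                     # make sure no daemons are still running
        rm -rf $hadoop_User/dfsdata/*   # clear the datanode storage
        rm -rf $hadoop_User/tmpdata/*   # clear the temp directory
        hdfs namenode -format           # re-create the namenode metadata
        start-all.sh                    # bring everything back up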

  • 2020-12-04 09:19
        # back up the old datanode storage directory instead of deleting it
        mv /usr/local/hadoop_store/hdfs/datanode /usr/local/hadoop_store/hdfs/datanode.backup

        # create a fresh, empty datanode directory
        mkdir /usr/local/hadoop_store/hdfs/datanode

        # start the datanode alone, or restart everything
        hadoop datanode    # or: start-all.sh

        # verify that DataNode now shows up
        jps
    
  • 2020-12-04 09:20

    I faced a similar issue while running the datanode. The following steps were useful (a consolidated sketch follows the list).

    1. In the [hadoop_directory]/sbin directory, use ./stop-all.sh to stop all the running services.
    2. Remove the tmp dir using rm -r [hadoop_directory]/tmp (the path configured in [hadoop_directory]/etc/hadoop/core-site.xml).
    3. Make a new tmp directory: sudo mkdir [hadoop_directory]/tmp
    4. Go to the */hadoop_store/hdfs directory where you created namenode and datanode as sub-directories (the paths configured in [hadoop_directory]/etc/hadoop/hdfs-site.xml). Use

       rm -r namenode
       
       rm -r datanode
       
    5. In the */hadoop_store/hdfs directory, use

       sudo mkdir namenode
       
       sudo mkdir datanode
       

       In case of a permission issue, use

       chmod -R 755 namenode
       
       chmod -R 755 datanode
       
    6. In [hadoop_directory]/bin, format your namenode:

       hadoop namenode -format
       
    7. In the [hadoop_directory]/sbin directory, use ./start-all.sh or ./start-dfs.sh to start the services.
    8. Use jps to check which services are running.
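    Put together as one script (a sketch; [hadoop_directory] and the hadoop_store path below are placeholders for your own layout):

        cd [hadoop_directory]/sbin && ./stop-all.sh   # 1. stop everything
        rm -r [hadoop_directory]/tmp                  # 2. remove the tmp dir from core-site.xml
        sudo mkdir [hadoop_directory]/tmp             # 3. recreate it
        cd /usr/local/hadoop_store/hdfs               # hypothetical hadoop_store location
        rm -r namenode datanode                       # 4. wipe the old storage
        sudo mkdir namenode datanode                  # 5. recreate it
        chmod -R 755 namenode datanode                #    fix permissions if needed
        hadoop namenode -format                       # 6. format the namenode
        cd [hadoop_directory]/sbin && ./start-all.sh  # 7. restart the services
        jps                                           # 8. confirm DataNode is listed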