Writing to HDFS could only be replicated to 0 nodes instead of minReplication (=1)

谎友^ 2020-12-08 04:32

I have 3 data nodes running, and while running a job I am getting the error given below:

java.io.IOException: File /user/ashsshar/olhcache/load

7 Answers
  • 2020-12-08 05:11

    In my case, this issue was resolved by opening firewall port 50010 on the datanodes.
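
    For reference, a minimal sketch of opening that port, assuming iptables on the datanodes (use the equivalent firewalld/ufw command on other distributions):

    # Allow the HDFS DataNode data-transfer port (50010); run on each datanode
    sudo iptables -A INPUT -p tcp --dport 50010 -j ACCEPT

    # Or, on systems using firewalld:
    # sudo firewall-cmd --permanent --add-port=50010/tcp
    # sudo firewall-cmd --reload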

  • 2020-12-08 05:12

    I had the same issue; I was running very low on disk space. Freeing up disk space solved it.
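
    If you suspect the same cause, a quick check is to compare local disk space on each datanode with what the namenode reports (the data-directory path below is an example; use your configured dfs.datanode.data.dir):

    # Local free space on the datanode's data directory
    df -h /var/lib/hadoop-hdfs

    # Remaining HDFS capacity and per-datanode usage, as seen by the namenode
    hdfs dfsadmin -report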

  • 2020-12-08 05:13
    1. Check whether your DataNode is running with the command jps (see the example output below).
    2. If it is not running, wait some time and retry.
    3. If it is running, I think you have to re-format your DataNode.
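
    For example, a healthy node should list a DataNode process in jps (PIDs are illustrative):

    $ jps
    4215 DataNode
    3980 NodeManager
    5120 Jps

    # If DataNode is missing, try starting it before re-formatting
    # (Hadoop 1.x/2.x; newer releases use: hdfs --daemon start datanode)
    $ hadoop-daemon.sh start datanode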
  • 2020-12-08 05:20

    1. Stop all Hadoop daemons

    for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
    

    2. Remove all files from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name

    Eg: devan@Devan-PC:~$ sudo rm -r /var/lib/hadoop-hdfs/cache/
    

    3. Format the NameNode

    sudo -u hdfs hdfs namenode -format
    

    4. Start all Hadoop daemons

    for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
    


  • 2020-12-08 05:21

    What I usually do when this happens is go to the tmp/hadoop-username/dfs/ directory and manually delete the data and name folders (assuming you are running in a Linux environment).

    Then format the dfs by calling bin/hadoop namenode -format (make sure you answer with a capital Y when asked whether you want to format; if you are not asked, re-run the command).

    You can then start Hadoop again by calling bin/start-all.sh (a combined sketch follows below).
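
    Putting those steps together as a rough shell sketch (it assumes the default tmp/hadoop-<username> layout under /tmp; adjust the path to your hadoop.tmp.dir):

    # Stop Hadoop, then remove the local DFS data and name directories
    bin/stop-all.sh
    rm -rf /tmp/hadoop-$USER/dfs/data /tmp/hadoop-$USER/dfs/name

    # Re-format the namenode (answer Y when prompted) and restart
    bin/hadoop namenode -format
    bin/start-all.sh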

  • 2020-12-08 05:23

    A very simple fix for the same issue on Windows 8.1.
    I was using Windows 8.1 and Hadoop 2.7.2, and did the following to overcome the issue.

    1. When I ran hdfs namenode -format, I noticed there was a lock on my directory.
    2. I deleted that folder entirely and then ran hdfs namenode -format again.
    3. After performing the above two steps, I could successfully place my required files in HDFS. I used the start-all.cmd command to start YARN and the namenode (a rough batch sketch follows below).
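
    A rough batch sketch of those steps (the data directory path is an assumption; use whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in your hdfs-site.xml):

    rem Stop Hadoop, delete the locked DFS directories, re-format, then restart
    stop-all.cmd
    rmdir /S /Q C:\hadoop\data
    hdfs namenode -format
    start-all.cmd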