Writing to HDFS could only be replicated to 0 nodes instead of minReplication (=1)

谎友^ 2020-12-08 04:32

I have 3 data nodes running, but while running a job I get the following error:

java.io.IOException: File /user/ashsshar/olhcache/load
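The "replicated to 0 nodes instead of minReplication (=1)" message generally means the NameNode could not find any live DataNode willing to accept the block, even if the DataNode processes are running. One way to check what the NameNode actually sees (a minimal sketch; it assumes the `hadoop` CLI is on your PATH and the cluster configuration is loaded) is:

```shell
# Ask the NameNode how many DataNodes it considers live,
# and how much capacity each one is reporting.
# Assumes the hadoop CLI is on PATH and points at your cluster config.
hadoop dfsadmin -report
```

If the report shows 0 live DataNodes or 0 remaining capacity, the DataNodes are not successfully registered with the NameNode despite their processes being up.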

7 Answers
  •  春和景丽
    2020-12-08 05:21

    When this happens, what I usually do is go to the tmp/hadoop-username/dfs/ directory and manually delete the data and name folders (assuming you are running in a Linux environment).

    Then format the DFS by running bin/hadoop namenode -format (make sure you answer with a capital Y when asked whether you want to format; if you are not asked, re-run the command).

    You can then start Hadoop again by running bin/start-all.sh.
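    The steps above can be sketched as a shell sequence. This is a minimal sketch, not a definitive fix: the /tmp/hadoop-$USER path and the relative bin/ layout are assumptions that depend on your install and on dfs.data.dir / dfs.name.dir in your configuration, and note that it wipes all existing HDFS data.

    ```shell
    # WARNING: this deletes all data stored in HDFS. Paths below are
    # assumptions for a default single-machine install; adjust to yours.
    cd /path/to/hadoop          # hypothetical Hadoop install directory

    # Stop any running daemons before touching the storage directories
    bin/stop-all.sh

    # Remove the stale NameNode and DataNode state
    rm -rf /tmp/hadoop-$USER/dfs/data /tmp/hadoop-$USER/dfs/name

    # Reformat the filesystem; answer with a capital Y if prompted
    bin/hadoop namenode -format

    # Restart the HDFS and MapReduce daemons
    bin/start-all.sh
    ```

    The reason this works is that a mismatched or corrupted namespace ID between the NameNode and DataNode storage directories prevents DataNodes from registering, so the NameNode sees 0 usable nodes; deleting both directories and reformatting brings them back in sync at the cost of all stored data.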
