Writing to HDFS could only be replicated to 0 nodes instead of minReplication (=1)

谎友^ 2020-12-08 04:32

I have 3 data nodes running, but while running a job I am getting the error below:

java.io.IOException: File /user/ashsshar/olhcache/load
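
This error generally means the NameNode cannot find any live DataNode on which to place a block replica (for example because the DataNodes are down, full, or excluded). Assuming a standard Hadoop installation with the hdfs client on the PATH, a quick way to check how many DataNodes the NameNode actually sees is:

  # Hedged sketch: ask the NameNode for a cluster report.
  # If it reports 0 live datanodes, writes will fail with the error above.
  hdfs dfsadmin -report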

7 Answers
  • 2020-12-08 05:34

    I had this problem and I solved it as below:

    1. Find where your datanode and namenode metadata/data are saved. If you cannot find them, run the following command on macOS to locate them (they are inside a folder called "tmp"); an alternative using the Hadoop configuration is sketched after this list.

      find /usr/local/Cellar/ -name "tmp";

      The find command has the form: find <directory> -name <name or pattern to search for>

    2. After finding that directory, cd into it: /usr/local/Cellar//hadoop/hdfs/tmp

      then cd into dfs,

      then use the ls command to confirm that the data and name directories are located there.

    3. Using the remove command, delete them both:

      rm -R data
      rm -R name

    4. Go to the Hadoop installation folder and stop everything, if you have not already done so:

      sbin/stop-dfs.sh

    5. Exit from the server or localhost.

    6. Log into the server again: ssh <"server name">

    7. Start DFS:

      sbin/start-dfs.sh

    8. Format the NameNode to be sure:

      bin/hdfs namenode -format

    9. You can now use hdfs commands to upload your data into HDFS and run MapReduce jobs; a brief sketch follows below.
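
    If the find command in step 1 does not turn up the right directory, an alternative (a minimal sketch, assuming the hdfs client is on your PATH) is to ask Hadoop itself where its storage directories are configured:

      # Hedged sketch for step 1: print the configured NameNode and DataNode storage directories.
      hdfs getconf -confKey dfs.namenode.name.dir
      hdfs getconf -confKey dfs.datanode.data.dir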
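
    As a minimal sketch of step 9, assuming the cluster is back up, uploading the file from the question could look like this (the paths come from the error message above and are only illustrative):

      # Hedged sketch for step 9: recreate the target directory and upload a local file.
      hdfs dfs -mkdir -p /user/ashsshar/olhcache
      hdfs dfs -put load /user/ashsshar/olhcache/
      hdfs dfs -ls /user/ashsshar/olhcache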
