Hadoop: …be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation

Backend · Open · 10 answers · 1886 views
走了就别回头了 2020-12-03 02:41

I'm getting the following error when attempting to write to HDFS from my multi-threaded application:

    could only be replicated to 0 nodes instead of min
10 Answers
  •  [愿得一人]
    2020-12-03 03:20

    This error comes from the HDFS block replication system: the NameNode could not place even a single replica of a block in the file being written. Common causes:

    1. Only a NameNode instance is running, and it is not in safe mode
    2. There are no DataNode instances up and running, or some are dead (check the servers)
    3. NameNode and DataNode instances are both running, but they cannot communicate with each other, i.e. there is a connectivity issue between the DataNode and NameNode instances
    4. Running DataNode instances cannot talk to the server because of networking or Hadoop configuration issues (check the logs that include DataNode info)
    5. There is no hard-disk space left in the data directories configured for the DataNode instances, or the DataNode instances have run out of space (check dfs.data.dir; delete old files if any)
    6. The reserved space configured for DataNode instances in dfs.datanode.du.reserved is larger than the free space, which makes the DataNode instances conclude that there is not enough free space
    7. There are not enough threads for the DataNode instances (check the DataNode logs and the dfs.datanode.handler.count value)
    8. Make sure dfs.data.transfer.protection is not set to “authentication” while dfs.encrypt.data.transfer is set to true
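    Several of the causes above (safe mode, dead DataNodes, exhausted capacity) can be triaged quickly from the command line. The sketch below assumes a stock Apache Hadoop install with the `hdfs` CLI on the PATH; `live_datanodes` is a hypothetical helper name, and the `Live datanodes (N):` line format is assumed from stock `dfsadmin -report` output, so adjust the pattern for your distribution.

```shell
#!/bin/sh
# Hedged sketch: quick triage for "could only be replicated to 0 nodes".
# Run these on (or against) the NameNode host:
#
#   hdfs dfsadmin -safemode get   # should print "Safe mode is OFF"
#   hdfs dfsadmin -report         # lists live/dead DataNodes and capacity

# Hypothetical helper: count live DataNodes from `hdfs dfsadmin -report`
# output piped on stdin, so the check can be scripted/alerted on.
# Assumes the report contains a "Live datanodes (N):" line.
live_datanodes() {
  grep -m1 'Live datanodes' | sed 's/[^0-9]//g'
}

# Example against a saved report fragment with 3 live nodes:
echo 'Live datanodes (3):' | live_datanodes   # prints 3
```

    If the count is 0, you are in case 1/2 above; if it is nonzero but writes still fail, look at connectivity (case 3/4) or free space (case 5/6).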

    Also, please:

    • Verify the status of the NameNode and DataNode services and check the related logs
    • Verify that core-site.xml has the correct fs.defaultFS value and that hdfs-site.xml has valid values
    • Verify that hdfs-site.xml has dfs.namenode.http-address.. for all NameNode instances in case of a PHD HA configuration
    • Verify that the permissions on the directories are correct
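    To make the fs.defaultFS check concrete, a minimal core-site.xml would look like the fragment below. The host namenode.example.com and port 8020 are placeholder values; every client and DataNode must be able to resolve and reach whatever address is configured here, otherwise you end up in the connectivity cases above.

```xml
<configuration>
  <!-- Placeholder host/port: must match the address the DataNodes
       and clients actually use to reach the NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```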

    Ref: https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo

    Ref: https://support.pivotal.io/hc/en-us/articles/201846688-HDFS-reports-Configured-Capacity-0-0-B-for-datanode

    Also, please check: Writing to HDFS from Java, getting "could only be replicated to 0 nodes instead of minReplication"
