How can I increase HDFS capacity?

北海茫月 2021-01-03 03:14

How can I increase the configured capacity of my Hadoop DFS from the default 50GB to 100GB?

My present setup is Hadoop 1.2.1 running on a CentOS 6 machine with a 120GB disk.

2 Answers
  •  粉色の甜心
    2021-01-03 03:34

    Stop all the services: stop-all.sh
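    If stop-all.sh is not already on your PATH, it can be run from the Hadoop install's bin directory; a minimal sketch, assuming a typical tarball install location (the path below is an assumption, adjust it to your setup):

        # Run the stop script from the (assumed) install directory
        /usr/local/hadoop/bin/stop-all.sh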

    then add these properties to hdfs-site.xml to increase the storage size:


        
        <property>
            <name>dfs.disk.balancer.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.storage.policy.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.blocksize</name>
            <value>134217728</value>
        </property>
        <property>
            <name>dfs.namenode.handler.count</name>
            <value>100</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:///usr/local/hadoop_store/hdfs/namenode</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:///usr/local/hadoop_store/hdfs/datanode,[disk]file:///hadoop_store2/hdfs/datanode</value>
        </property>

    Also remember to prefix a path with [disk] when including an extra disk, and with [ssd] for a dedicated extra SSD drive, and always double-check the triple slash ("///") in every file:/// directory URI.
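    Before restarting, the new datanode directory on the second disk must also exist and be writable by the user that runs the datanode. A minimal sketch, using the mount point from the config above and assuming the services run as a user named hadoop (the username is an assumption):

        # Create the extra datanode directory on the second disk
        sudo mkdir -p /hadoop_store2/hdfs/datanode
        # Hand ownership to the (assumed) hadoop user that runs the datanode
        sudo chown -R hadoop:hadoop /hadoop_store2/hdfs/datanode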

    After that, format the namenode so the new settings are picked up across the Hadoop cluster:

        hadoop namenode -format

    then start the services again from the beginning: start-all.sh

    "/* remember without formating the hdfs the setting will not be activated as it will search for the Blockpool Id (BP_ID) in dfs.datanode.data.dir, and for the new location it will not found any BP_ID. "/*
