How can I increase HDFS capacity?

北海茫月 2021-01-03 03:14

How can I increase the configured capacity of my Hadoop DFS from the default 50GB to 100GB?

My present setup is Hadoop 1.2.1 running on a CentOS 6 machine with a 120GB hard disk.

2 Answers
  •  庸人自扰
    2021-01-03 03:51

    Set the HDFS storage location to a partition with more free space. For Hadoop 1.2.1 this can be done by setting hadoop.tmp.dir in hadoop-1.2.1/conf/core-site.xml:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/myUserID/hdfs</value>
        <description>base location for other hdfs directories.</description>
      </property>
    </configuration>
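
    In Hadoop 1.x, the namenode and datanode storage directories (dfs.name.dir and dfs.data.dir) default to subdirectories of hadoop.tmp.dir, so changing this one property relocates both. As a quick sanity check after the restart steps below (the path assumes the hadoop.tmp.dir value above):

    ls /home/myUserID/hdfs/dfs    # should contain name/ and data/ once HDFS is formatted and running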

    Running

    df

    showed that my /home partition took up the whole hard disk, minus about 50GB for my / (root) partition. The default location for HDFS is /tmp/hadoop-myUserId, which sits in the / partition; that is where my initial 50GB HDFS size came from.
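
    To see this on your own machine, you can check which filesystem each location belongs to (device names and sizes will differ):

    df -h /        # root partition; /tmp/hadoop-myUserId lives here by default
    df -h /home    # the larger partition targeted by the new hadoop.tmp.dir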

    I created the directory for HDFS and confirmed which partition it sits on with

    mkdir ~/hdfs
    df -P ~/hdfs | tail -1 | cut -d' ' -f 1    # prints the device backing ~/hdfs
    
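    If this prints the device for the /home filesystem rather than the one for /, the new directory really is on the larger partition.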

    I then applied the change by restarting Hadoop and reformatting the namenode:

    stop-all.sh                # stop all Hadoop daemons first
    hadoop namenode -format    # reformat the namenode while it is stopped; erases any existing HDFS data
    start-all.sh
    hadoop dfsadmin -report    # verify the new configured capacity
    

    which reports the configured capacity of the HDFS as the size of my /home partition.
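
    Note that hadoop namenode -format wipes anything already stored in HDFS, so copy out any data you need (e.g. with hadoop fs -get) before running it. In the dfsadmin -report output, the Configured Capacity line should now reflect the /home partition size.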

    Thank you jtravaglini for the comment/clue.
