Is there a way to add nodes to a running Hadoop cluster?

Backend · Unresolved · 5 answers · 1776 views

花落未央 2020-12-24 08:44

I have been playing with Cloudera: I define the number of nodes in the cluster before I start my job, then use Cloudera Manager to make sure everything is running.

I’m w

5 Answers
  •  無奈伤痛
    2020-12-24 08:55

    The following steps should help you launch the new node into the running cluster.

    1> Add the new node's hostname to /etc/hadoop/conf/slaves on the NameNode.
    2> Sync the full configuration directory /etc/hadoop/conf from the NameNode to the new DataNode (skip this if the configuration lives on a shared file system).
    3> Restart the Hadoop services on the NameNode/JobTracker and start all services on the new DataNode.
    4> Verify the new DataNode in the NameNode web UI at http://namenode:50070.
    5> Run the balancer script to redistribute data between the nodes.
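
    As a sketch, the steps above for a Hadoop 1.x cluster might look like this on the command line. The hostname `datanode5` and the CDH-style config path are assumptions for illustration, not from the original answer:

    ```shell
    # 1> On the NameNode: register the new worker (hostname is an example).
    echo "datanode5" >> /etc/hadoop/conf/slaves

    # 2> Push the configuration to the new node
    #    (skip if /etc/hadoop/conf is on shared storage).
    rsync -av /etc/hadoop/conf/ datanode5:/etc/hadoop/conf/

    # 3> On the new node: start the worker daemons (Hadoop 1.x daemon names).
    hadoop-daemon.sh start datanode
    hadoop-daemon.sh start tasktracker

    # 4> Verify the node shows up as live in the NameNode web UI:
    #    http://namenode:50070

    # 5> Rebalance block placement; -threshold is the allowed deviation
    #    (in percent) of each node's disk usage from the cluster average.
    hadoop balancer -threshold 10
    ```

    Note that these commands assume they are run as a user with Hadoop admin rights on a live cluster; on a CDH deployment managed by Cloudera Manager you would normally perform the equivalent steps through the manager UI instead.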
    

    If you don't want to restart the services on the NameNode each time you add a node, you can add the hostnames to the slaves configuration file ahead of time. Those entries will report as decommissioned/dead nodes until the machines come online, at which point you only need the DataNode-side steps above. Again, this is not best practice.
