Question
So here's my situation:
I have a mapreduce job that uses HBase. My mapper takes one line of text input and updates HBase. I have no reducer, and I'm not writing any output to the disc. I would like the ability to add more processing power to my cluster when I'm expecting a burst of utilization, and then scale back down when utilization decreases. Let's assume for the moment that I can't use Amazon or any other cloud provider; I'm running in a private cluster.
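For concreteness, a minimal sketch of the kind of map-only job I'm describing might look like the following (this uses the new org.apache.hadoop.mapreduce API available in CDH3; the table name "my_table", column family "cf", and the tab-separated input layout are placeholders, not my real schema):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class LineToHBaseJob {

    // One line of text in, one HBase Put out; there is no reduce phase and
    // nothing is written back to HDFS.
    static class LineMapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            // Hypothetical record layout: "rowkey<TAB>value"
            String[] fields = line.toString().split("\t", 2);
            Put put = new Put(Bytes.toBytes(fields[0]));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("v"), Bytes.toBytes(fields[1]));
            context.write(new ImmutableBytesWritable(put.getRow()), put);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableOutputFormat.OUTPUT_TABLE, "my_table"); // placeholder table name
        Job job = new Job(conf, "line-to-hbase");
        job.setJarByClass(LineToHBaseJob.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        job.setMapperClass(LineMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        job.setNumReduceTasks(0); // map-only: no reducer, no HDFS output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}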
One solution would be to add new machines to my cluster when I need more capacity. However, I want to be able to add and remove these machines without any waiting or hassle. I don't want to rebalance HDFS every time I need to add or remove a node.
So it would seem that a good strategy would be to have a "core" cluster, where each machine is running a tasktracker AND a datanode, and when I need added capacity, I can spin up some "disposable" machines that are running tasktrackers, but NOT datanodes. Is this possible? If so, what are the implications?
I realize that a tasktracker running on a machine with no datanode won't have the benefit of data locality. But in practice, what does this mean? I'm imagining that, when scheduling a job on one of the "disposable" machines, the jobtracker will send a line of input over the network to the tasktracker, which then takes that line of input and feeds it directly to a Mapper, without writing anything to the disc. Is this what happens?
Oh, and I'm using Cloudera cdh3u3. Don't know if that matters.
Answer 1:
I'm imagining that, when scheduling a job on one of the "disposable" machines, the jobtracker will send a line of input over the network to the tasktracker, which then takes that line of input and feeds it directly to a Mapper, without writing anything to the disc. Is this what happens?
Not quite. The JobTracker tasks a TaskTracker to run a map task to process an input split. The JobTracker does not pass the data to the TaskTracker; rather, it passes the serialized split information (file name, start offset, and length). The TaskTracker runs the MapTask, and it is the MapTask that instantiates the InputFormat and the associated RecordReader for that split, which feeds the input key/value pairs to the Mapper.
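To make that concrete, here is a rough, simplified sketch of what the MapTask does with the split description it receives, written against the old org.apache.hadoop.mapred API that ships with CDH3 (the HDFS path, offsets, and host name are made up; the real MapTask is of course more involved):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class SplitReadSketch {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();

        // This is essentially all the JobTracker ships to the TaskTracker:
        // a description of the split, not the data itself.
        FileSplit split = new FileSplit(
                new Path("hdfs://namenode/data/input.txt"), // made-up input path
                0L,                                         // start offset
                64L * 1024 * 1024,                          // length (say, one 64 MB block)
                new String[] { "datanode-07" });            // hosts holding a replica (locality hint)

        // On the TaskTracker, the MapTask opens a RecordReader for that split and
        // streams records into the Mapper; the bytes are read from HDFS, over the
        // network if no replica lives on the local DataNode.
        TextInputFormat inputFormat = new TextInputFormat();
        inputFormat.configure(conf);
        RecordReader<LongWritable, Text> reader =
                inputFormat.getRecordReader(split, conf, Reporter.NULL);
        LongWritable key = reader.createKey();
        Text value = reader.createValue();
        while (reader.next(key, value)) {
            // mapper.map(key, value, ...) would be called here for each record
        }
        reader.close();
    }
}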
If there is no local DataNode, or there is a local DataNode but the block is not replicated on it, the data will be read across the network from another DataNode (hopefully a rack-local one, but it could still come from somewhere else in the cluster).
You can see statistics on how often a map task's input block was local to the node running the task, or at least local to its rack, in the Hadoop counters output for the job.
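For example, a small helper like this (a sketch only; it assumes you submitted with a new-API org.apache.hadoop.mapreduce.Job handle and relies on the locality counters having internal names such as DATA_LOCAL_MAPS and RACK_LOCAL_MAPS) prints those counters after the job finishes:

import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.CounterGroup;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

public class LocalityCounters {

    // Call after job.waitForCompletion(true): dumps the data-local / rack-local
    // map task counters that the JobTracker web UI and console output also show.
    public static void print(Job job) throws Exception {
        Counters counters = job.getCounters();
        for (CounterGroup group : counters) {
            for (Counter counter : group) {
                if (counter.getName().contains("LOCAL_MAPS")) {
                    System.out.println(counter.getDisplayName() + " = " + counter.getValue());
                }
            }
        }
    }
}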
Source: https://stackoverflow.com/questions/10343397/with-hadoop-can-i-create-a-tasktracker-on-a-machine-that-isnt-running-a-datano