how to kill hadoop jobs


Depending on your Hadoop version, do:

version <2.3.0

Kill a hadoop job:

hadoop job -kill $jobId

You can get a list of all jobIds by running:

hadoop job -list
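
For example, a minimal session might look like this (the jobId below is a placeholder; substitute one reported by the list command):

hadoop job -list
hadoop job -kill job_201403101234_0005    # placeholder jobId taken from the -list output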

version >=2.3.0

Kill a hadoop job:

yarn application -kill $ApplicationId

You can get a list of all ApplicationIds by running:

yarn application -list
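
For example, with a placeholder application ID:

yarn application -list
yarn application -kill application_1428487296152_25597    # placeholder ApplicationId from the -list output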

Use of the following commands is deprecated:

hadoop job -list
hadoop job -kill $jobId

Consider using instead:

mapred job -list
mapred job -kill $jobId
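
For example, you can check a job's status before killing it (the jobId is a placeholder):

mapred job -list
mapred job -status job_1428487296152_0001    # placeholder jobId; prints completion and counters
mapred job -kill job_1428487296152_0001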

Run the list command to show all jobs, then use the jobId/ApplicationId in the appropriate kill command.

Kill mapred jobs:

mapred job -list
mapred job -kill <jobId>

Kill yarn jobs:

yarn application -list
yarn application -kill <ApplicationId>
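
If you need to kill every running YARN application at once, a small shell sketch (assuming a Unix shell and that each running job appears as one application_... ID in the list output) is:

for app in $(yarn application -list -appStates RUNNING 2>/dev/null | grep -o 'application_[0-9]*_[0-9]*'); do
    yarn application -kill "$app"    # kill each running application in turn
done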

An unhandled exception (assuming it is repeatable, such as bad data, as opposed to transient read errors from a particular data node) will eventually fail the job anyway.

You can configure the maximum number of times a particular map or reduce task can fail before the entire job fails through the following properties:

  • mapred.map.max.attempts - The maximum number of attempts per map task; in other words, the framework will try to execute a map task this many times before giving up on it.
  • mapred.reduce.max.attempts - The same as above, but for reduce tasks.

If you want the job to fail out at the first task failure, set these values from their default of 4 to 1, as sketched below.
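
For example, a driver launched through ToolRunner can take these as generic options on the command line (the jar, class, and paths here are hypothetical):

hadoop jar my-analysis.jar com.example.MyDriver \
    -D mapred.map.max.attempts=1 \
    -D mapred.reduce.max.attempts=1 \
    input/ output/

On newer Hadoop 2.x releases these properties are named mapreduce.map.maxattempts and mapreduce.reduce.maxattempts; the older names should still be accepted as deprecated aliases.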


Simply kill the process ID forcefully; the Hadoop job will be killed as well. Use this command:

kill -9 <process_id> 

e.g., process ID 4040 (NameNode):

username@hostname:~$ kill -9 4040
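
If you are not sure of the process ID, jps lists the Java processes for the current user (a job started with hadoop jar usually shows up as RunJar), so a typical session looks like:

username@hostname:~$ jps
username@hostname:~$ kill -9 4040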