I have a running Spark application that occupies all the cores, so my other applications won't be allocated any resources.
I did some quick research, and people suggested killing the application via YARN, but I'm not sure of the right way to do that.
This might not be the ideal or preferred solution, but it helps in environments where you can't access a console to kill the job using the yarn application command.
The steps are:
1. Go to the application master page of the Spark job.
2. Click on the Jobs section.
3. Click on the active job's active stage.
4. You will see a "kill" button right next to the active stage.
This works if the succeeding stages are dependent on the currently running stage, though it marks the job as "Killed By User".
It may be time-consuming to get all the application IDs from YARN and kill them one by one. You can use a Bash for loop to accomplish this repetitive task quickly and more efficiently, as shown below:
Kill all applications on YARN which are in ACCEPTED state:
for x in $(yarn application -list -appStates ACCEPTED | awk 'NR > 2 { print $1 }'); do yarn application -kill $x; done
Kill all applications on YARN which are in RUNNING state:
for x in $(yarn application -list -appStates RUNNING | awk 'NR > 2 { print $1 }'); do yarn application -kill $x; done
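If you only want to kill applications matching a particular name rather than every running application, a small variation of the same loop works. This is just a sketch; MySparkApp is a placeholder for your application's name:
for x in $(yarn application -list -appStates RUNNING | grep "MySparkApp" | awk '{ print $1 }'); do yarn application -kill $x; done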
You can also kill an application through the YARN ResourceManager REST API (Cluster Application State API):
https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_State_API
Send a PUT request to the application's state endpoint with a JSON body that sets the state to KILLED:
PUT http://{rm http address:port}/ws/v1/cluster/apps/{appid}/state
{
  "state": "KILLED"
}
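As a minimal sketch, assuming the ResourceManager web service is reachable at a host named rm-host on the default port 8088 (both placeholders) and using application_1428487296152_25597 as an example application ID, the request can be sent with curl:
curl -X PUT -H "Content-Type: application/json" -d '{"state":"KILLED"}' http://rm-host:8088/ws/v1/cluster/apps/application_1428487296152_25597/state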
First use:
yarn application -list
Note down the application ID. Then, to kill it, use:
yarn application -kill application_id
For example:
yarn application -kill application_1428487296152_25597
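To confirm the application was actually killed, you can check its state afterwards with the same CLI, reusing the example application ID above:
yarn application -status application_1428487296152_25597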