What is the best way to wait for a Kubernetes Job to complete? I noticed a lot of suggestions to use:
kubectl wait --for=condition=complete job/myjob
My workaround is to use oc get --wait: the --wait option makes the command exit once the target resource is updated. So I watch the status section of the Job with oc get --wait until the status is updated; an update to the status section means the Job has finished with some status conditions.
If the Job completes successfully, status.conditions[*].type is set to Complete immediately. If the Job fails, its pod is restarted automatically regardless of whether restartPolicy is OnFailure or Never. But we can deem the Job Failed if the first status update is not Complete.
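The decision rule above can be sketched as a small shell helper. This is a minimal sketch: the function name classify_job is mine, and it only encodes the rule that anything other than Complete on the first status update is treated as Failed.

```shell
#!/bin/sh
# classify_job: hypothetical helper encoding the rule above.
# Takes the first value seen in status.conditions[*].type and
# prints the verdict; anything other than Complete means Failed.
classify_job() {
  case "$1" in
    *Complete*) echo "Complete" ;;
    *)          echo "Failed" ;;
  esac
}
```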
Here is my test evidence.
# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-wle", "exit 0"]
      restartPolicy: Never
The result is Complete if the Job completes successfully.
# oc create -f job.yml &&
oc get job/pi -o=jsonpath='{.status}' -w &&
oc get job/pi -o=jsonpath='{.status.conditions[*].type}' | grep -i -E 'failed|complete' || echo "Failed"
job.batch/pi created
map[startTime:2019-03-09T12:30:16Z active:1]Complete
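The watch-then-grep sequence above can be wrapped in a reusable shell function. This is a sketch under my assumptions: the function name wait_for_job is mine, and it relies on the behavior shown in the evidence above, where the watch exits after the first status update; substitute kubectl for oc on plain Kubernetes.

```shell
#!/bin/sh
# wait_for_job: hypothetical wrapper around the sequence above.
# Blocks until the Job's status is first updated, then prints
# Complete or Failed based on the recorded condition types.
wait_for_job() {
  job="$1"
  # Block until the Job's status section is updated for the first time.
  oc get "job/$job" -o=jsonpath='{.status}' -w >/dev/null
  # Then inspect the condition types to decide the verdict.
  if oc get "job/$job" -o=jsonpath='{.status.conditions[*].type}' \
      | grep -qiE 'complete'; then
    echo "Complete"
  else
    echo "Failed"
  fi
}
```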
# vim job.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-wle", "exit 1"]
      restartPolicy: Never
The result is Failed because the first status update is not Complete. Test after deleting the existing Job resource.
# oc delete job pi
job.batch "pi" deleted
# oc create -f job.yml &&
oc get job/pi -o=jsonpath='{.status}' -w &&
oc get job/pi -o=jsonpath='{.status.conditions[*].type}' | grep -i -E 'failed|complete' || echo "Failed"
job.batch/pi created
map[active:1 startTime:2019-03-09T12:31:05Z]Failed
I hope it helps you. :)