How to run containers sequentially as a Kubernetes job?


Question


I'm trying to replace my legacy job scheduler with Kubernetes Jobs and am wondering how to express sequential steps as a Kubernetes Job.

First, I wrote the following manifest to execute job1 and job2 in that order, but it didn't work as I expected.

apiVersion: batch/v1
kind: Job
metadata:
  name: sequential
spec:
  activeDeadlineSeconds: 100
  template:
    metadata:
      name: sequential-jobs
    spec:
      containers:
      - name: job1
        image: image1
      - name: job2
        image: image2
      restartPolicy: Never

The job described above seems to run job1 and job2 in parallel. Is there any good way to run job1 and job2 in the written order?

Addendum:

I recently found https://github.com/argoproj/argo to be a very good fit for my use case.


Answer 1:


After a few attempts, I arrived at the following, which solves the basic problem (similar to what the OP posted). This configuration ensures that job-1 completes before job-2 begins, and if job-1 fails, the job-2 container is never started. I still need to work on retries and failure handling, but the basics work. Hopefully this will help others:

apiVersion: v1
kind: Pod
metadata:
  name: sequential-job
spec:
  initContainers:
  - name: job-1
    image: busybox
    # runs for 15 seconds; echoes job name and timestamp
    command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" && sleep 5s; done;']
  - name: job-2
    image: busybox
    # runs for 15 seconds; echoes job name and timestamp
    command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" && sleep 5s; done;']
  # I don't really need 'containers' here, but the syntax requires
  # it, so I'm using it as a place to report the completion status
  containers:
  - name: job-done
    image: busybox
    command: ['sh', '-c', 'echo "job-1 and job-2 completed"']
  restartPolicy: Never
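
To try this out (a usage sketch; it assumes the manifest above is saved as sequential-job.yaml):

kubectl create -f sequential-job.yaml
kubectl logs sequential-job -c job-1   # output of the first step
kubectl logs sequential-job -c job-2   # output of the second step

This works because init containers run one at a time, in the order listed, and each must exit successfully before the next one starts.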

Update

The same configuration as above also works inside a Job spec:

apiVersion: batch/v1
kind: Job
metadata:
  name: sequential-jobs
spec:
  template:
    metadata:
      name: sequential-job
    spec:
      initContainers:
      - name: job-1
        image: busybox
        command: ['sh', '-c', 'for i in 1 2 3; do echo "job-1 `date`" && sleep 5s; done;']
      - name: job-2
        image: busybox
        command: ['sh', '-c', 'for i in 1 2 3; do echo "job-2 `date`" && sleep 5s; done;']
      containers:
      - name: job-done
        image: busybox
        command: ['sh', '-c', 'echo "job-1 and job-2 completed"']
      restartPolicy: Never
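
For the retries mentioned above, the Job's backoffLimit field caps how many times the controller recreates a failed pod (it defaults to 6). A minimal sketch of what to add under the Job's spec in the manifest above:

spec:
  # give up after 2 pod recreations; each retry reruns the init
  # containers from the beginning, so job-1 executes again even
  # if only job-2 failed
  backoffLimit: 2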



Answer 2:


Broadly speaking, Kubernetes has no built-in notion of sequencing or dependency tracking across containers and pods.

In your case, if you have two containers in a Job spec (or even a plain pod spec), there is no ordering between those two containers: they are started together. Similarly, if you fire two Jobs one after another, there is no notion of sequencing between those Jobs either.

Ideally, if anything requires sequencing, you should capture it within a single unit (one container).
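
For example, a minimal sketch of capturing the sequence inside one container (the name, image, and commands are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: sequential-in-one
spec:
  template:
    spec:
      containers:
      - name: all-steps
        image: busybox
        # '&&' stops the chain as soon as a step fails, so the
        # second step only runs after the first one succeeds
        command: ['sh', '-c', 'echo "step 1" && echo "step 2"']
      restartPolicy: Never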


Slightly tangential to your question, here is another common pattern I've seen when a Job depends on another service already existing (say, a Deployment fronted by a k8s Service):

The container in the Job makes a request to the k8s Service and fails if the service does not respond as expected. That way the Job keeps restarting, and eventually, once the service is up, the Job executes and completes successfully.
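
A rough sketch of that pattern (the Service name my-service and the /healthz path are made up for illustration):

apiVersion: batch/v1
kind: Job
metadata:
  name: wait-for-service
spec:
  template:
    spec:
      containers:
      - name: check-then-work
        image: busybox
        # wget exits non-zero while the service is unreachable, so the
        # container fails and is restarted until the check succeeds
        command: ['sh', '-c', 'wget -qO- http://my-service:80/healthz && echo "service is up, doing the real work"']
      # OnFailure makes the kubelet restart the container in place
      restartPolicy: OnFailure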




Answer 3:


Have you looked at Brigade (https://brigade.sh)? It lets you script simple and complex workflows using JavaScript, chain containers together running them in parallel or serially, and fire scripts based on times, GitHub events, Docker pushes, or any other trigger. Brigade is a tool for creating pipelines for Kubernetes.




Answer 4:


Argo Workflows fits your use case: Argo supports sequential, parallel, and DAG job processing.

# This template demonstrates a steps template and how to control sequential vs. parallel steps.
# In this example, hello1 completes before the hello2a and hello2b steps, which run in parallel.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-
spec:
  entrypoint: hello-hello-hello

  templates:
  - name: hello-hello-hello
    steps:
    - - name: hello1
        template: whalesay
        arguments:
          parameters: [{name: message, value: "hello1"}]
    - - name: hello2a
        template: whalesay
        arguments:
          parameters: [{name: message, value: "hello2a"}]
      - name: hello2b
        template: whalesay
        arguments:
          parameters: [{name: message, value: "hello2b"}]

  - name: whalesay
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
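
If you want to try it, the workflow can be submitted with the Argo CLI (assuming the manifest is saved as steps.yaml):

argo submit --watch steps.yaml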




Answer 5:


Just came across this. As stated above, there is no notion of job dependencies in Kubernetes so far as I know, but I've been working with a commercial entity (Univa) that has an add-on providing this (and other) capabilities.

The offering is called Navops Command, and it allows you to annotate Kubernetes jobs with a simple dependency notation. There is a blog post with a brief explanation and an example. Basically, Navops installs as a set of containers on Kubernetes, exposes its own UI and CLI, and supplements the Kubernetes scheduler with additional capabilities. You can download it at http://navops.io.

The technology comes from the Grid Engine scheduler used in HPC, where complex workflows, array jobs, and the like are common.



Source: https://stackoverflow.com/questions/40713573/how-to-run-containers-sequentially-as-a-kubernetes-job
