How do I prevent two Jenkins pipeline jobs of the same type from running in parallel on the same node?

Asked 2020-12-01 02:18 · 10 answers · 1944 views

I do not want to allow two jobs of the same type (same repository) to run in parallel on the same node.

How can I do this using Groovy inside a Jenkinsfile?

10 answers
  • 2020-12-01 03:08

    If you're like my team, then you like having user-friendly parameterized Jenkins jobs that pipeline scripts trigger in stages, instead of maintaining all that declarative/Groovy soup. Unfortunately, that means each pipeline build takes up 2+ executor slots (one for the pipeline script and others for the triggered job(s)), so the danger of deadlock becomes very real.

    I've looked everywhere for solutions to that dilemma, and disableConcurrentBuilds() only prevents the same job (branch) from running twice. It won't make pipeline builds for different branches queue up and wait instead of taking up precious executor slots.
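    For reference, a minimal sketch of that option in a scripted pipeline (the stage name is an arbitrary example):

    ```groovy
    // disableConcurrentBuilds() only serializes builds of the *same*
    // job/branch; builds of other branches are unaffected.
    properties([disableConcurrentBuilds()])

    node {
        stage('build') {
            // A second build of this same branch will queue here,
            // but builds of other branches still run in parallel.
            echo "building"
        }
    }
    ```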

    A hacky (yet surprisingly elegant) solution for us was to limit the master node's executor count to 1 and make the pipeline scripts stick to using it (and only it), then hook up a local slave agent to Jenkins to take care of all other jobs.

  • 2020-12-01 03:10

    The "Throttle Concurrent Builds Plugin" now supports pipeline since throttle-concurrents-2.0. So now you can do something like this:

    Run the pipeline below twice, one build immediately after the other, and you will see the throttling in action. You can trigger this manually by clicking "Build Now" twice, or by invoking the job from a parallel step in another job.

    stage('pre'){
        echo "I can run in parallel"
        sleep(time: 10, unit:'SECONDS')
    }
    throttle(['my-throttle-category']) {
        
        // Only the node block below is actually throttled,
        // so this step still runs in parallel.
        echo "I can also run in parallel" 
        
        node('some-node-label') {
            
            echo "I can only run alone"
            
            stage('work') {
                
                echo "I also can only run alone"
                sleep(time: 10, unit:'SECONDS')
                
            }
        }
    }
    stage('post') {
        echo "I can run in parallel again"
        // Let's wait enough for the next execution to catch
        // up, just to illustrate.
        sleep(time: 20, unit:'SECONDS')
    }
    

    In the pipeline stage view you can watch the second build wait at the node allocation until the first build releases it.

    However, please be advised that this only works for node blocks within the throttle block. I do have other pipelines where I first allocate a node, then do some work which doesn't need throttling and then some which does.

    node('some-node-label') {
    
        //do some concurrent work
    
        //This WILL NOT work.
        throttle(['my-throttle-category']) {
            //do some non-concurrent work
        }
    }
    

    Here the throttle step doesn't solve the problem, because it sits inside the node step rather than the other way around: the executor is already allocated by the time throttling is evaluated. In this case the lock step is better suited for the task.
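    A sketch of that alternative, assuming the Lockable Resources Plugin is installed (the resource name `my-repo-build` is an arbitrary example):

    ```groovy
    node('some-node-label') {

        // Work here can still run concurrently with other builds.
        echo "concurrent work"

        // Unlike throttle, lock works inside a node block: the executor
        // stays allocated, but the enclosed steps run one build at a time
        // across every job that locks the same resource name.
        lock(resource: 'my-repo-build') {
            echo "non-concurrent work"
        }
    }
    ```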

  • 2020-12-01 03:18

    I think there is more than one approach to this problem.

    Pipeline

    • Use latest version of Lockable Resources Plugin and its lock step, as suggested in other answer.
    • If building the same project:
      • Uncheck Execute concurrent builds if necessary.
    • If building different projects:
      • Set different node or label for each project.

    Jenkins

    • Limit the number of a node's executors to 1.

    Plug-ins

    • Build Blocker Plugin - supposedly supports Pipeline projects
    • Throttle Concurrent Builds Plugin - not compatible with Pipeline projects
  • 2020-12-01 03:22

    Until the "Throttle Concurrent Builds" plugin has Pipeline support, a solution would be to effectively run one executor of the master with a label that your job requires.

    To do this, create a new node in Jenkins, for example an SSH node that connects to localhost. You could also use the command option to run slave.jar/swarm.jar, depending on your setup. Give the node a single executor and a label such as "resource-foo", and give your job the same label. Now only one job with the label "resource-foo" can run at a time, because there is only one executor carrying that label. If you set the node to be used as much as possible (the default) and reduce the number of master executors by one, it should behave exactly as desired without changing the total number of executors.
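    Assuming such a node exists, the Jenkinsfile side is then just a matter of requesting the label (a sketch; "resource-foo" matches the label configured above):

    ```groovy
    // Because the "resource-foo" label maps to a single executor,
    // builds that request it queue up instead of running in parallel.
    node('resource-foo') {
        stage('build') {
            echo "running exclusively on the single-executor node"
        }
    }
    ```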
