run jenkins pipeline agent with sudo


Question


I have a Jenkins server running in a Docker container, with access to Docker on the host system, and so far it is working well. Now I want to set up a pipeline that tests a script inside a Docker container.

Jenkinsfile:

pipeline {
    agent { docker 'nginx:1.11' }
    stages {
        stage('build') {
            steps {
                sh 'nginx -t'
            }
        }
    }
}

Error Message:

> + docker pull nginx:1.11
> 
> Warning: failed to get default registry endpoint from daemon (Got
> permission denied while trying to connect to the Docker daemon socket
> at unix:///var/run/docker.sock: Get
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/info: dial unix
> /var/run/docker.sock: connect: permission denied). Using system
> default: https://index.docker.io/v1/
> 
> Got permission denied while trying to connect to the Docker daemon
> socket at unix:///var/run/docker.sock: Post
> http://%2Fvar%2Frun%2Fdocker.sock/v1.29/images/create?fromImage=nginx&tag=1.11:
> dial unix /var/run/docker.sock: connect: permission denied
> 
> script returned exit code 1

My problem is that Jenkins needs to run the docker command with sudo, but how do I tell the agent to run the command with sudo?


Answer 1:


I faced the same issue. After analysing the console log, I found that the reason is that the Docker Jenkins plugin starts the new container with a specific option, -u 107:112:

...
docker run -t -d -u 107:112 ...
...

I tried many options, such as adding jenkins to the sudo group (which did not work because the jenkins user does not exist in the container) and adding USER root to the Dockerfile, but none of them did the trick.

Finally I found a solution: use args in the docker agent block to override the -u option. This is my Jenkinsfile:

pipeline {
    agent {
        docker {
            image 'ubuntu'
            args '-u root:sudo -v $HOME/workspace/myproject:/myproject'
        }
    }
    stages {
        stage("setup_env") {
            steps {
                sh 'apt-get update -y'
                sh 'apt-get install -y git build-essential gcc cmake make'
            }
        }

        stage("install_dependencies") {
            steps {
                sh 'apt-get install -y libxml2-dev'
            }
        }
        stage("compile_dpi") {
            steps {
                sh 'cd /myproject && make clean && make -j4'
            }
        }

        stage("install_dpi") {
            steps {
                sh 'cd /myproject && make install'
            }
        }

        stage("test") {
            steps {
                sh 'do some test here'
            }
        }
    }
    post {
        success {
            echo 'Do something when it is successful'
            bitbucketStatusNotify(buildState: 'SUCCESSFUL')
        }
        failure {
            echo 'Do something when it is failed'
            bitbucketStatusNotify(buildState: 'FAILED')
        }
    }
}

There may be a security issue here, but it is not a problem in my case.




Answer 2:


I'd solve the problem differently, by matching the group id inside the container to that of the docker socket you've mounted as a volume. I do this with an entrypoint that runs as root, looks up the gid of the socket, and, if it doesn't match the gid inside the current container, runs groupmod to correct it inside the container. Then I drop privileges to the jenkins user to launch Jenkins. This entrypoint runs on every startup, but fairly transparently to the Jenkins app that is launched.

All the steps to perform this are included in this github repo: https://github.com/sudo-bmitch/jenkins-docker/
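A minimal sketch of such an entrypoint (my own simplification, not the repo's exact script; it assumes the socket is mounted at /var/run/docker.sock, the image already has a docker group and a jenkins user, and gosu is installed for dropping privileges):

#!/bin/sh
# entrypoint.sh - runs as root, fixes the docker group gid, then drops to the jenkins user
set -e

SOCK=/var/run/docker.sock

if [ -S "$SOCK" ]; then
    # gid owning the mounted socket on the host
    SOCK_GID=$(stat -c '%g' "$SOCK")
    # gid of the docker group inside the container
    CUR_GID=$(getent group docker | cut -d: -f3)
    if [ -n "$SOCK_GID" ] && [ "$SOCK_GID" != "$CUR_GID" ]; then
        groupmod -g "$SOCK_GID" docker
    fi
    # make sure jenkins is a member of that group
    usermod -aG docker jenkins
fi

# drop privileges and start Jenkins
exec gosu jenkins "$@"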




Answer 3:


You can work around that by:

1- In your Dockerfile add jenkins to the sudoers file:

RUN echo "jenkins ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2- Add an extra step in your Jenkinsfile to give jenkins the right permissions to use docker:

pipeline {

    agent none

    stages {

        stage("Fix the permission issue") {

            agent any

            steps {
                sh "sudo chown root:jenkins /run/docker.sock"
            }

        }

        stage('Step 1') {

            agent {
                docker {
                    image 'nezarfadle/tools'
                    reuseNode true
                }
            }

            steps {
                sh "ls /"
            }

        }

    }
}



Answer 4:


What worked for me was

node() {
    String jenkinsUserId = sh(returnStdout: true, script: 'id -u jenkins').trim()
    String dockerGroupId = sh(returnStdout: true, script: 'getent group docker | cut -d: -f3').trim()
    String containerUserMapping = "-u $jenkinsUserId:$dockerGroupId "
    docker.image('image')
        .inside(containerUserMapping + ' -v /var/run/docker.sock:/var/run/docker.sock:ro') {
             sh "..."
         }
}

This way the user in the container still runs with the jenkins user id and group id, avoiding permission conflicts with shared data, but is also a member of the docker group inside the container, which is required to access the docker socket (/var/run/docker.sock).

I prefer this solution as it doesn't require any additional scripts or Dockerfiles.




Answer 5:


I just had the exact same issue. You need to add the jenkins user to the docker group:

DOCKER_SOCKET=/var/run/docker.sock
DOCKER_GROUP=docker
JENKINS_USER=jenkins

if [ -S ${DOCKER_SOCKET} ]; then
    # Create (or reuse) a docker group with the same gid as the socket,
    # then add the jenkins user to it
    DOCKER_GID=$(stat -c '%g' ${DOCKER_SOCKET})
    sudo groupadd -for -g ${DOCKER_GID} ${DOCKER_GROUP}
    sudo usermod -aG ${DOCKER_GROUP} ${JENKINS_USER}
fi

# Restart Jenkins so it picks up the new group membership
sudo service jenkins restart

After you run the above, pipelines successfully start docker
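As a quick sanity check (my own addition, not part of the original answer), you can confirm the jenkins user can now reach the daemon:

sudo -u jenkins docker info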




Answer 6:


I might have found a reasonably good solution for this.

Setup

I run Jenkins as a container and use it to build containers on the Docker host it's running on. To do this, I pass /var/run/docker.sock as a volume to the container.

Just to reiterate the disclaimer some other people already stated: Giving access to the docker socket is essentially like giving root access to the machine - be careful!

I assume that you've already installed docker into your Jenkins Image.
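For reference, running the Jenkins container with the socket mounted typically looks something like this (a sketch of such a setup; the image name and volume names are placeholders):

docker run -d \
    -p 8080:8080 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-jenkins-image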

Solution

This is based on the fact that the docker binary is not in the first directory of $PATH. We basically place a shell script earlier in $PATH that runs sudo docker instead of the plain docker command (and passes the parameters along).

Add a file like this to your jenkins repository and call it docker_sudo_overwrite.sh:

#! /bin/sh
# This basically is a workaround to add sudo to the docker command, because aliases don't seem to work
# To be honest, this is a horrible workaround that depends on the order in $PATH
# This file needs to be placed in /usr/local/bin with execute permissions
sudo /usr/bin/docker "$@"

Then extend your Jenkins Dockerfile like this:

# Now we need to allow jenkins to run docker commands! (This is not elegant, but at least it's semi-portable...)
USER root

## allowing jenkins user to run docker without specifying a password
RUN echo "jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker" >> /etc/sudoers

# Create our alias file that allows us to use docker as sudo without writing sudo
COPY docker_sudo_overwrite.sh /usr/local/bin/docker
RUN chmod +x /usr/local/bin/docker

# switch back to the jenkins-user
USER jenkins

This gives the jenkins service user the ability to run the docker binary as root with sudo (without providing a password). Then we copy our script to /usr/local/bin/docker which "overlays" the actual binary and runs it with sudo. If it helps, you can look at my example on Github.
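To confirm the wrapper shadows the real binary inside the image (a check I'm adding here, assuming /usr/local/bin precedes /usr/bin in $PATH, as it does in most base images):

which docker      # should print /usr/local/bin/docker
docker version    # now effectively runs "sudo /usr/bin/docker version"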




Answer 7:


Same issue here:

[...]
agent { docker 'whatever_I_try_doesnt_work' } // sudo, jenkins user in dockerroot group, etc.
[...]

So my workaround is to add it as one of the steps in the build stage of the pipeline, as follows:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo docker pull python:3.5.1'
            }
        }
    }
}
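Note that this assumes the jenkins user on the agent can run docker via sudo without a password; a sudoers entry like the following (my assumption, not part of the original answer, and similar to the one shown in Answer 6) makes that work:

# /etc/sudoers.d/jenkins-docker - allow jenkins to run docker via sudo without a password
jenkins ALL=(ALL) NOPASSWD: /usr/bin/docker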


Source: https://stackoverflow.com/questions/44791060/run-jenkins-pipeline-agent-with-sudo
