Jenkins declarative pipeline with Docker/Dockerfile agent from SCM


You could try declaring an agent for each stage separately: use some default agent for the checkout stage and a Docker agent for the others.

pipeline {
    agent none
    stages {
        stage('Checkout') {
            agent any
            steps {
                git(
                    url: 'https://www.github.com/...',
                    credentialsId: 'CREDENTIALS',
                    branch: 'develop'
                )
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.ci'
                }
            }
            steps {
                [...]
            }
        }
        [...]
    }
}


If you're using a Multibranch Pipeline, it automatically checks out your SCM before evaluating the agent, so in that case you can specify the agent from a file in the SCM.
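
For example, a Jenkinsfile along these lines should work once the job is a Multibranch Pipeline or a Pipeline from SCM, because the repository -- including the Dockerfile -- has already been checked out by the time the agent is evaluated (the Dockerfile.ci name and the make build step are only placeholders for illustration):

// Minimal sketch, assuming a Dockerfile.ci sits next to the Jenkinsfile in the repository.
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.ci'   // assumed name; defaults to 'Dockerfile' if omitted
        }
    }
    stages {
        stage('Build') {
            steps {
                // The workspace already contains the checked-out sources.
                sh 'make build'   // hypothetical build command
            }
        }
    }
}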


The answer is in the Jenkins documentation on the Dockerfile parameter:

In order to use this option, the Jenkinsfile must be loaded from either a Multibranch Pipeline or a Pipeline from SCM.

Just scroll down to the Dockerfile section, and it's documented there.

The obvious problem with this approach is that it impairs pipeline development: instead of testing code in the pipeline script field on the server, every testable change must be committed to the source repository. Note also that the Jenkinsfile checkout cannot be sparse or lightweight, as that will only pick up the script -- and not any accompanying Dockerfile to be built.

I can think of a couple ways to work around this.

  1. Develop against agents in nodes with the reuseNode true directive (see the sketch after this list). Then, when the code is stable, the separate agent blocks can be combined into a single agent at the top of the Jenkinsfile, which must then be loaded from the SCM.
  2. Develop using the dir() solution that specifies the exact workspace directory, or alternatively use one of the other examples in this solution.
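
A rough sketch of the first workaround, assuming a Dockerfile.ci at the repository root and a placeholder make build step. The reuseNode true option runs the per-stage Docker agent in the workspace of the node that performed the checkout, so the Dockerfile is already present even while the script is still being edited in the job's pipeline field:

// Sketch only: an explicit checkout stage followed by a per-stage dockerfile
// agent that reuses the same node and workspace.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://www.github.com/...', credentialsId: 'CREDENTIALS', branch: 'develop'
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.ci'   // assumed name
                    reuseNode true             // build and run in the workspace that holds the checkout
                }
            }
            steps {
                sh 'make build'   // hypothetical build command
            }
        }
    }
}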