Jenkins scripted pipeline or declarative pipeline


When Jenkins Pipeline was first created, Groovy was selected as the foundation. Jenkins has long shipped with an embedded Groovy engine to provide advanced scripting capabilities for admins and users alike. Additionally, the implementors of Jenkins Pipeline found Groovy to be a solid foundation upon which to build what is now referred to as the "Scripted Pipeline" DSL.

As it is a fully featured programming environment, Scripted Pipeline offers a tremendous amount of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn’t typically desirable for all members of a given team, so Declarative Pipeline was created to offer a simpler and more opinionated syntax for authoring Jenkins Pipeline.

The two are fundamentally the same Pipeline sub-system underneath. Both are durable implementations of "Pipeline as code," both can use steps built into Pipeline or provided by plugins, and both are able to utilize Shared Libraries.
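For illustration, loading a Shared Library looks the same in either style. This is a minimal sketch: 'my-shared-library' stands in for a library configured in Jenkins, and sayHello is a hypothetical custom step defined in the library's vars/sayHello.groovy.

```groovy
// 'my-shared-library' is a placeholder for a Shared Library
// configured under Manage Jenkins » System » Global Pipeline Libraries.
@Library('my-shared-library') _

pipeline {
    agent any
    stages {
        stage('Greet') {
            steps {
                // sayHello is a hypothetical step from vars/sayHello.groovy
                sayHello 'world'
            }
        }
    }
}
```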

Where they differ, however, is in syntax and flexibility. Declarative limits what is available to the user with a stricter, pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted imposes very few limits, insofar as the only constraints on structure and syntax tend to be defined by Groovy itself rather than any Pipeline-specific system, making it an ideal choice for power users and those with more complex requirements. As the names imply, Declarative Pipeline encourages a declarative programming model, whereas Scripted Pipeline follows a more imperative programming model.
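To make the contrast concrete, here is the same single-stage build sketched in both styles (shown side by side for comparison; each would normally live in its own Jenkinsfile):

```groovy
// Declarative: a fixed outer structure of pipeline / agent / stages / steps
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}

// Scripted: plain Groovy — variables, loops, and conditionals anywhere
node {
    stage('Build') {
        sh 'make'
    }
}
```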

(Copied from the Jenkins documentation's Syntax Comparison.)


Another thing to consider is that Declarative Pipelines have a script step, which can contain arbitrary Scripted Pipeline code. So my recommendation would be to use Declarative Pipelines and, where needed, drop into script blocks for Scripted Pipeline code. That way you get the best of both worlds.
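A minimal sketch of that approach: a Declarative Pipeline that uses a script block for logic (a loop over a list) that Declarative syntax alone does not express.

```groovy
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                // Ordinary Declarative step
                echo 'Hello'
                // Drop into Scripted Pipeline where Declarative is too restrictive
                script {
                    def browsers = ['chrome', 'firefox']
                    for (b in browsers) {
                        echo "Testing on ${b}"
                    }
                }
            }
        }
    }
}
```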


I recently made the switch from Scripted to Declarative, using the Kubernetes agent. Up until July 2018, Declarative Pipelines didn't have the full ability to specify Kubernetes pods. However, with the addition of the yamlFile option you can now read your pod template from a YAML file in your repo.

This then lets you use, e.g., VS Code's great Kubernetes plugin to validate your pod template, then read it into your Jenkinsfile and use its containers in steps as you please.

```groovy
pipeline {
  agent {
    kubernetes {
      label 'jenkins-pod'
      yamlFile 'jenkinsPodTemplate.yml'
    }
  }
  stages {
    stage('Checkout code and parse Jenkinsfile.json') {
      steps {
        container('jnlp') {
          script {
            inputFile = readFile('Jenkinsfile.json')
            config = new groovy.json.JsonSlurperClassic().parseText(inputFile)
            containerTag = env.BRANCH_NAME + '-' + env.GIT_COMMIT.substring(0, 7)
            println "pipeline config ==> ${config}"
          } // script
        } // container('jnlp')
      } // steps
    } // stage
  } // stages
} // pipeline
```

As mentioned above, you can add script blocks. Below is an example pod template with custom jnlp and docker containers.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave:3.23-1
    imagePullPolicy: IfNotPresent
    tty: true
  - name: rsync
    image: mrsixw/concourse-rsync-resource
    imagePullPolicy: IfNotPresent
    tty: true
    volumeMounts:
      - name: nfs
        mountPath: /dags
  - name: docker
    image: docker:17.03
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
    volumeMounts:
      - name: docker
        mountPath: /var/run/docker.sock
  volumes:
  - name: docker
    hostPath:
      path: /var/run/docker.sock
  - name: nfs
    nfs:
      server: 10.154.0.3
      path: /airflow/dags
```