How to build my own docker images in CloudBees Core on Modern Cloud Platforms

Issue

  • I would like to build my own docker images in CloudBees Core on Modern Cloud Platforms

Environment

  • CloudBees Core on Modern Cloud Platforms

Resolution

There are two approaches:

  • 1) Use DinD (Docker-in-Docker) by creating a new Pod Template that includes the required Container Template
  • 2) Use Kaniko to build Dockerfiles without a Docker daemon

1) DinD approach

Pod Templates can be created at CJOC or Master level.

  • In CJOC, go to the CJOC main page, click on the All view, and open the kubernetes shared cloud configuration page.
  • In a Master, go to Manage Jenkins > Kubernetes Pod Template.

Add a new Pod Template

Click on the Add Pod Template button and fill in the following fields:

Name   : pod-dind
Labels : pod-dind
Usage  : Only build jobs with label expressions matching the node

Add the DinD container

Click on the Add Container button and use one of the official Docker DinD images:

Name                   : dind
Docker image           : docker:18.06.1-ce-dind
Working directory      : /home/jenkins
Allocate pseudo-TTY    : checked
Run in privileged mode : checked
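
As an alternative to defining the Pod Template in the UI, the same DinD container can be declared inline in the Pipeline itself using the Kubernetes plugin's `containerTemplate` syntax. This is a minimal sketch (not part of the UI steps above), reusing the same image and settings:

```groovy
// Inline equivalent of the UI-defined Pod Template (Kubernetes plugin syntax).
// The privileged flag is required for DinD, just like the UI checkbox.
podTemplate(label: 'pod-dind', containers: [
    containerTemplate(
        name: 'dind',
        image: 'docker:18.06.1-ce-dind',
        workingDir: '/home/jenkins',
        ttyEnabled: true,
        privileged: true
    )
]) {
    node('pod-dind') {
        container('dind') {
            // quick smoke test against the in-pod Docker daemon
            sh 'docker info'
        }
    }
}
```

This variant keeps the pod definition versioned alongside the Pipeline code rather than in the CJOC/Master configuration.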

Testing job

To test the Pod Template, create a new Pipeline job with the following script:

podTemplate() {
    node('pod-dind') {
        container('dind') {

            stage('Build My Docker Image') {
                // create a minimal Dockerfile
                sh 'echo "FROM centos:7" > Dockerfile'
                sh 'cat Dockerfile'

                // verify that the Docker daemon is reachable
                sh 'docker -v'
                sh 'docker info'

                // build the image
                sh 'docker build -t my-centos:1 .'
            }
        }
    }
}

Expected output:

[Pipeline] node
Agent pod-dind-xwhnz is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (pod-dind): 
* [dind] docker:18.06.1-ce-dind(resourceRequestCpu: , resourceRequestMemory: , resourceLimitCpu: , resourceLimitMemory: )

Running on pod-dind-xwhnz in /home/jenkins/workspace/dind-test
...
[Pipeline] sh
[dind-test] Running shell script
+ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
...
[Pipeline] sh
[dind-test] Running shell script
+ docker build -t my-centos:1 .
Sending build context to Docker daemon  2.048kB

Step 1/1 : FROM centos:7
7: Pulling from library/centos
aeb7866da422: Pulling fs layer
aeb7866da422: Verifying Checksum
aeb7866da422: Download complete
aeb7866da422: Pull complete
Digest: sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b
Status: Downloaded newer image for centos:7
 ---> 75835a67d134
Successfully built 75835a67d134
Successfully tagged my-centos:1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS

2) Kaniko approach

Pre-requisites

Kaniko builds an image and pushes it to a registry. Therefore, a secret holding the registry credentials must be created in the Kubernetes cluster:

kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

Note: the secret name must be regcred. See Pull an Image from a Private Registry
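
To confirm the secret was created correctly, you can inspect the generated Docker config it holds. This uses the standard inspection command from the Kubernetes documentation and requires kubectl access to the cluster:

```shell
# Decode the .dockerconfigjson payload stored in the regcred secret;
# the output should contain your registry server and an auth entry
kubectl get secret regcred \
  --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
```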

Create a Pipeline job

Declare the pod and its containers in the Pipeline code using one of the following snippets.

Scripted Pipeline

def label = "kaniko-${UUID.randomUUID().toString()}"
 
podTemplate(name: 'kaniko', label: label, yaml: """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /root
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: .docker/config.json
"""
  ) {
 
   node(label) {
     stage('Dockerfile provision') {
       // obtain or generate a valid Dockerfile,
       // e.g. check out the Dockerfile from https://github.com/jenkinsci/docker-jnlp-slave.git
       git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
     }

     stage('Build with Kaniko') {
       container(name: 'kaniko', shell: '/busybox/sh') {
           sh '''#!/busybox/sh
           /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --destination=<$PROJECT/$IMAGE:$TAG>
           '''
       }
     }
   }
 }

Declarative Pipeline

pipeline {
  agent {
    kubernetes {
      label 'kaniko'
      yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: Always
    command:
    - /busybox/cat
    tty: true
    volumeMounts:
      - name: jenkins-docker-cfg
        mountPath: /root
  volumes:
  - name: jenkins-docker-cfg
    projected:
      sources:
      - secret:
          name: regcred
          items:
            - key: .dockerconfigjson
              path: .docker/config.json
"""
    }
  }
  stages {
    stage('Dockerfile provision') {
      steps {
        // obtain or generate a valid Dockerfile,
        // e.g. check out the Dockerfile from https://github.com/jenkinsci/docker-jnlp-slave.git
        git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
      }
    }

    stage('Build with Kaniko') {
      environment {
        PATH = "/busybox:$PATH"
      }
      steps {
        container(name: 'kaniko', shell: '/busybox/sh') {
            sh '''#!/busybox/sh
            /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --destination=<$PROJECT/$IMAGE:$TAG>
            '''
        }
      }
    }
  }
}

Note: replace --destination=<$PROJECT/$IMAGE:$TAG> with the proper values for your registry, image name, and tag.
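
For instance, with the placeholders filled in for a Docker Hub repository, the executor call might look like the following (the registry, image name, and tag shown are purely illustrative):

```shell
# Illustrative only — substitute your own registry, image name, and tag;
# Kaniko reads push credentials from /root/.docker/config.json (the regcred secret)
/kaniko/executor -f `pwd`/Dockerfile -c `pwd` \
    --destination=index.docker.io/<your-name>/my-image:1.0
```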

Resources
