Issue
- I would like to build my own docker images in CloudBees Core on Modern Cloud Platforms
Environment
Resolution
There are two approaches:
- 1) Use Kaniko to build Dockerfiles without Docker
- 2) Use DinD by creating a new Pod Template and including the required Container Template
CloudBees recommends following the Kaniko approach.
1) Kaniko approach
Please refer to Using Kaniko with CloudBees Core
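The linked article covers Kaniko in detail; as a rough sketch, a Kaniko build runs the `/kaniko/executor` binary in a pod, with no privileged mode and no Docker daemon required. The pipeline below is illustrative only: the `--no-push` flag builds the image without requiring registry credentials, and a real build would instead push to a registry configured with a credentials secret.

```groovy
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command:
    - sleep
    args:
    - "9999999"
    tty: true
"""
        }
    }
    stages {
        stage('Build with Kaniko') {
            steps {
                container('kaniko') {
                    sh 'echo "FROM centos:7" > Dockerfile'
                    // --no-push builds without pushing, so no registry credentials are needed
                    sh '/kaniko/executor --context `pwd` --dockerfile Dockerfile --no-push'
                }
            }
        }
    }
}
```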
2) DinD approach
If Kaniko is not an option, Docker can be used with the DinD images.
Risks of DooD
There are generally two approaches to running Docker inside a Docker container:
- “Docker outside of Docker” (DooD): uses the underlying host's Docker engine from a container by bind mounting the Docker socket /var/run/docker.sock
- “Docker inside Docker” (DinD): uses its own Docker installation and engine inside a container
Using “Docker outside of Docker” (DooD) and mounting the Docker socket /var/run/docker.sock for building containers on CloudBees CI for Modern Cloud Platforms carries several risks:
- resources launched outside of the k8s scheduler’s oversight are not properly accounted for and may lead to resource over-utilisation
- resources launched by Docker will not be cleaned up by the Kubernetes scheduler and may consume CPU and memory resources until the instances are removed
- exposing the docker.sock to container processes effectively grants those processes root access to the host
Using DooD and mounting the Docker socket is strongly discouraged. Use “Docker inside Docker” (DinD) instead.
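To make the discouraged pattern easy to recognize, this is what a typical DooD container spec looks like; it is shown here only so it can be identified and avoided, and the image name is illustrative:

```yaml
# DISCOURAGED: DooD bind mounts the host Docker socket,
# effectively granting the container root access on the node
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: docker-client
    image: docker:19.03.11   # client-only image; talks to the host daemon
    tty: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
```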
Instructions
Below are two options for defining your DinD agent Pod Templates:
- Kubernetes Pod Templates GUI
- Jenkinsfile
Kubernetes Pod Templates GUI
Kubernetes Pod Templates can be added at CJOC or controller level.
- In CJOC, go to the CJOC main page, click on the All view, and open the kubernetes shared cloud configuration page.
- In a controller, go to Manage Jenkins > Kubernetes Pod Template.
Add a new Pod Template
Click on the Add Pod Template button and fill in the following fields:
Name : pod-dind
Labels : pod-dind
Usage : Only build jobs with label expressions matching the node
Add the DinD container spec
In the “Raw YAML for the Pod” field, add the following spec:
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: dind-agent
spec:
  containers:
  - name: dind
    image: docker:19.03.11-dind
    tty: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
Note: Using a volume for the Docker graph storage can improve performance. The graph storage is where the Docker daemon stores container and image layers when running and building images. Docker relies on a layered filesystem that can introduce overhead; a volume bypasses this filesystem, which reduces overhead and may speed up some operations.
Testing job
To test the Pod Template, create a new Pipeline job using the following code:
node('pod-dind') {
    container('dind') {
        stage('Build My Docker Image') {
            sh 'docker -v'
            sh 'docker info'
            sh 'echo "FROM centos:7" > Dockerfile'
            sh 'cat Dockerfile'
            sh 'docker build -t my-centos:1 .'
        }
    }
}
Jenkinsfile
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: dind-agent
spec:
  containers:
  - name: dind
    image: docker:19.03.11-dind
    imagePullPolicy: Always
    tty: true
    securityContext:
      privileged: true
    volumeMounts:
    - name: docker-graph-storage
      mountPath: /var/lib/docker
  volumes:
  - name: docker-graph-storage
    emptyDir: {}
"""
        }
    }
    stages {
        stage('Build My Docker Image') {
            steps {
                container('dind') {
                    sh 'docker -v'
                    sh 'docker info'
                    sh 'echo "FROM centos:7" > Dockerfile'
                    sh 'cat Dockerfile'
                    sh 'docker build -t my-centos:1 .'
                }
            }
        }
    }
}
Expected output
The output should be similar to the following:
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Created Pod: kubernetes cbci/mm-1-dind-test-185-lzfs5-h1xr5-84bx5
[...]
Agent mm-1-dind-test-185-lzfs5-h1xr5-84bx5 is provisioned from template mm-1_dind-test_185-lzfs5-h1xr5
[...]
Running on mm-1-dind-test-185-lzfs5-h1xr5-84bx5 in /home/jenkins/agent/workspace/dind-test
[Pipeline] sh
+ docker info
Client:
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.11
...
[Pipeline] sh
+ docker build -t my-centos:1 .
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM centos:7
7: Pulling from library/centos
2d473b07cdd5: Pulling fs layer
2d473b07cdd5: Verifying Checksum
2d473b07cdd5: Download complete
2d473b07cdd5: Pull complete
Digest: sha256:0f4ec88e21daf75124b8a9e5ca03c37a5e937e0e108a255d890492430789b60e
Status: Downloaded newer image for centos:7
---> 8652b9f0cb4c
Successfully built 8652b9f0cb4c
Successfully tagged my-centos:1
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
Depending on the version you are running, you may receive an error during your test build similar to process apparently never started in /home/jenkins/workspace/dind-test. This is due to a change in the default working directory for containers. If you receive this error, change the Working directory of the Container Template to /home/jenkins/agent. This field should be auto-populated when creating your container if you need to use this new location.
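If you define the agent pod in YAML rather than through the GUI, the same fix can be applied by setting the container's working directory explicitly. This is a sketch based on the DinD spec above; the `workingDir` value is the new default location mentioned in the note:

```yaml
spec:
  containers:
  - name: dind
    image: docker:19.03.11-dind
    workingDir: /home/jenkins/agent   # match the new default agent working directory
    tty: true
    securityContext:
      privileged: true
```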
Tested product/plugin versions
The latest update of this article has been tested with