Files owned by root after running the 2.150.3.2 or the 2.150.2.3 release

Issue

In CloudBees Core on Modern Cloud Platforms version 2.150.3.2 and version 2.150.2.3, the default container USER directive was removed, so the container ran as the root user and some files written to the JENKINS_HOME are owned by root. All Kubernetes variants except OpenShift are affected, because OpenShift always schedules containers with a generated UID.

You may not notice this issue while running the 2.150.3.2 or 2.150.2.3 release, but after upgrading to a newer release you may see the following error in the Kubernetes Pod logs for your CloudBees Jenkins Operations Center or any of your Masters:

kubectl logs cjoc-0
+ touch /var/jenkins_home/copy_reference_file.log
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
+ echo 'Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?'
+ exit 1

Other examples of errors:

ln: failed to create symbolic link ‘/var/jenkins_home/configure-jenkins.groovy.d’: Permission denied 
cp: cannot create directory ‘/var/jenkins_home/configure-jenkins.groovy.d’: Permission denied 

The specific files may differ, but you will likely see error messages such as:

  • Can not write to ... Wrong volume permissions?
  • cannot touch ... Permission denied
  • failed to create ... Permission denied
  • cannot create ... Permission denied
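To confirm that root-owned files are the cause, you can list every file under the JENKINS_HOME that is not owned by the expected UID (1000 in the CloudBees images). The following is a minimal sketch; the `JENKINS_HOME` and `EXPECTED_UID` defaults match this article, and you would run the `find` inside the affected Pod (for example via `kubectl exec cjoc-0 -- sh -c "find /var/jenkins_home ! -user 1000 -exec ls -ld {} +"`):

```shell
# List files under JENKINS_HOME not owned by EXPECTED_UID.
# Any output here indicates files with wrong ownership.
JENKINS_HOME="${JENKINS_HOME:-/var/jenkins_home}"
EXPECTED_UID="${EXPECTED_UID:-1000}"
if [ -d "$JENKINS_HOME" ]; then
  find "$JENKINS_HOME" ! -user "$EXPECTED_UID" -exec ls -ld {} +
fi
```

If the command prints nothing, ownership is already consistent and the errors have another cause.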

Resolution

The fix is documented under Known issues in the release notes for CloudBees Core on Modern Cloud Platforms version 2.164.1.2.

  1. On your local machine, create a file called patch-permissions.yaml with the following contents:
kind: StatefulSet
spec:
  template:
    spec:
      containers:
      - name: jenkins
        securityContext:
          runAsUser: 1000
      initContainers:
      - name: init-chown
        image: alpine
        env:
        - name: JENKINS_HOME
          value: /var/jenkins_home
        - name: MARKER
          value: .cplt2-5503
        - name: UID
          value: '1000'
        command:
        - sh
        - -c
        - if [ ! -f $JENKINS_HOME/$MARKER ]; then chown $UID:$UID -R $JENKINS_HOME; touch $JENKINS_HOME/$MARKER; chown $UID:$UID $JENKINS_HOME/$MARKER; fi
        volumeMounts:
        - mountPath: "/var/jenkins_home"
          name: "jenkins-home"
  2. From your local machine, execute the following patch command on the Kubernetes cluster:

kubectl patch statefulset.apps/cjoc -p "$(cat patch-permissions.yaml)"

  3. On each affected master, go into the Core UI and select Master > Configure > Advanced Configuration YAML, and add the same YAML code:
kind: StatefulSet
spec:
  template:
    spec:
      containers:
      - name: jenkins
        securityContext:
          runAsUser: 1000
      initContainers:
      - name: init-chown
        image: alpine
        env:
        - name: JENKINS_HOME
          value: /var/jenkins_home
        - name: MARKER
          value: .cplt2-5503
        - name: UID
          value: '1000'
        command:
        - sh
        - -c
        - if [ ! -f $JENKINS_HOME/$MARKER ]; then chown $UID:$UID -R $JENKINS_HOME; touch $JENKINS_HOME/$MARKER; chown $UID:$UID $JENKINS_HOME/$MARKER; fi
        volumeMounts:
        - mountPath: "/var/jenkins_home"
          name: "jenkins-home"
  4. Restart each master after applying the changes.
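The init container in the steps above uses a marker file (.cplt2-5503) to make the fix idempotent: the recursive chown runs only the first time the container starts against a given volume. The guard logic can be sketched locally as follows; here the scratch directory stands in for the volume, and chown is replaced by an echo stub because changing ownership requires root (the function names are illustrative, not part of the CloudBees images):

```shell
# Illustrative re-run of the init container's marker-file guard against a
# scratch directory. The fix body executes only on the first run.
JENKINS_HOME="$(mktemp -d)"
MARKER=.cplt2-5503
fix_permissions() {
  # Stand-in for: chown $UID:$UID -R $JENKINS_HOME (needs root)
  echo "fixing permissions in $JENKINS_HOME"
}
run_init() {
  if [ ! -f "$JENKINS_HOME/$MARKER" ]; then
    fix_permissions
    touch "$JENKINS_HOME/$MARKER"
  fi
}
run_init   # first run: fixes permissions and drops the marker file
run_init   # second run: marker file exists, so nothing happens
```

Because of the marker, restarting the Pod after the patch is applied does not trigger another full recursive chown, which matters on large JENKINS_HOME volumes where the chown can take a long time.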
