How to use Kubernetes Pod Security Policies with CloudBees Core on Modern Cloud Platforms

Resolution

Pod Security Policies (PSPs) are an optional Kubernetes feature (still beta, but stable and available from all major cloud providers), so they are not enabled by default on most Kubernetes distributions, including GCP GKE and Azure AKS. PSPs can be created and referenced from a ClusterRole or Role resource definition without enabling the PodSecurityPolicy admission controller. This is important, because once you enable the PodSecurityPolicy admission controller, any pod that does not have a PSP applied to it will not be scheduled.
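For reference, you can list the PSPs currently defined in a cluster before creating your own; on a fresh GKE or AKS cluster this typically returns nothing (or only provider-managed policies):

```sh
# PodSecurityPolicies are cluster-scoped resources
kubectl get podsecuritypolicies
```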

It is recommended that you define at least two Pod Security Policies for your Core v2 Kubernetes cluster:

  1. A restrictive Pod Security Policy used for all CloudBees components, for any additional Kubernetes services leveraged with Core v2, and for the majority of dynamic, ephemeral Kubernetes-based agents used by your Core v2 cluster.
  2. A second, almost identical Pod Security Policy, except that runAsUser is set to RunAsAny to allow running as root. This is specifically for running Kaniko containers (please refer to Using Kaniko with CloudBees Core), but you may have other use cases that require containers to run as root.

Restrictive Pod Security Policy

  1. Create the following restrictive Pod Security Policy (PSP) (or one like it):

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: cb-restricted
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
        apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
        seccomp.security.alpha.kubernetes.io/defaultProfileName:  'docker/default'
        apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    spec:
      # Prevents containers from manipulating the network stack, accessing host devices, or running Docker-in-Docker (DinD)
      privileged: false
      fsGroup:
        rule: 'MustRunAs'
        ranges:
          # Forbid adding the root group.
          - min: 1
            max: 65535
      runAsUser:
        rule: 'MustRunAs'
        ranges:
          # Don't allow containers to run as ROOT
          - min: 1
            max: 65535
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      # Allow core volume types, but do not allow host volumes - this prevents mounting the Docker socket ('/var/run/docker.sock')
      volumes:
      - 'emptyDir'
      - 'secret'
      - 'downwardAPI'
      - 'configMap'
      # persistentVolumes are required for CJOC and Managed Master StatefulSets
      - 'persistentVolumeClaim'
      - 'projected'
      hostPID: false
      hostIPC: false
      hostNetwork: false
      # Ensures that no child process of a container can gain more privileges than its parent
      allowPrivilegeEscalation: false
    
  2. Create a ClusterRole that uses the cb-restricted PSP (this can be applied to as many ServiceAccounts as necessary):

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: psp-restricted-clusterrole
    rules:
    - apiGroups:
      - policy
      resources:
      - podsecuritypolicies
      resourceNames:
      - cb-restricted
      verbs:
      - use
    
  3. Bind the restricted ClusterRole to all the ServiceAccounts in the cloudbees-core Namespace (or whichever Namespace you deployed CloudBees Core into). The following RoleBinding applies to both of the ServiceAccounts defined by CloudBees Core v2 - the cjoc ServiceAccount for provisioning Managed/Team Master StatefulSets from CJOC, and the jenkins ServiceAccount for scheduling dynamic ephemeral agent pods from Managed/Team Masters:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: cb-core-psp-restricted
      namespace: cloudbees-core
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: psp-restricted-clusterrole
    subjects:
    # All ServiceAccounts in the cloudbees-core Namespace
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    
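With the three manifests above saved to files, applying them and verifying the result might look like the following sketch (the file names are illustrative):

```sh
# Create the restrictive PSP, the ClusterRole, and the RoleBinding
kubectl apply -f cb-restricted-psp.yaml
kubectl apply -f psp-restricted-clusterrole.yaml
kubectl apply -f cb-core-psp-restricted-rolebinding.yaml

# Confirm that the jenkins ServiceAccount is now authorized to use the PSP
kubectl auth can-i use podsecuritypolicy/cb-restricted \
  -n cloudbees-core --as=system:serviceaccount:cloudbees-core:jenkins
```

The same `kubectl auth can-i` check works for the cjoc ServiceAccount.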

Kaniko Pod Security Policy (RunAsRoot)

  1. Create the following PSP for running Kaniko jobs and other Pods that must run as root:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: kaniko-psp
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
        apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
        seccomp.security.alpha.kubernetes.io/defaultProfileName:  'docker/default'
        apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    spec:
      # Prevents containers from manipulating the network stack, accessing host devices, or running Docker-in-Docker (DinD)
      privileged: false
      fsGroup:
        rule: 'RunAsAny'
      runAsUser:
        rule: 'RunAsAny'
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      # Allow core volume types, but do not allow host volumes - this prevents mounting the Docker socket ('/var/run/docker.sock')
      volumes:
      - 'emptyDir'
      - 'secret'
      - 'downwardAPI'
      - 'configMap'
      # persistentVolumes are required for CJOC and Managed Master StatefulSets
      - 'persistentVolumeClaim'
      - 'projected'
      hostPID: false
      hostIPC: false
      hostNetwork: false
      # Ensures that no child process of a container can gain more privileges than its parent
      allowPrivilegeEscalation: false
    
  2. Create a ServiceAccount, Role and RoleBindings for use with Kaniko Pods:

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kaniko
    
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: kaniko
    rules:
    - apiGroups: ['policy']
      resources: ['podsecuritypolicies']
      verbs:     ['use']
      resourceNames:
      - kaniko-psp
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kaniko
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kaniko
    subjects:
    - kind: ServiceAccount
      name: kaniko
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: cjoc-kaniko-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: cjoc-agents
    subjects:
    - kind: ServiceAccount
      name: kaniko
    
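After applying the manifests above (in the cloudbees-core Namespace), a quick authorization check can be sketched as:

```sh
# Prints "yes" once the kaniko Role and RoleBinding are in place
kubectl auth can-i use podsecuritypolicy/kaniko-psp \
  -n cloudbees-core --as=system:serviceaccount:cloudbees-core:kaniko
```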
  3. Update all Kaniko-related Pod Templates and/or Pod Template YAML to use the kaniko ServiceAccount instead of the default jenkins ServiceAccount. Here is an example YAML-based Jenkins Kubernetes Pod Template configuration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kaniko
    spec:
      serviceAccountName: kaniko
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:debug-v0.10.0
        imagePullPolicy: Always
        command:
        - /busybox/cat
        tty: true
        volumeMounts:
          - name: kaniko-secret
            mountPath: /secret
        env:
          - name: GOOGLE_APPLICATION_CREDENTIALS
            value: /secret/kaniko-secret.json
      volumes:
        - name: kaniko-secret
          secret:
            secretName: kaniko-secret
      securityContext:
        runAsUser: 0
    

Bind Restrictive PSP Role for Ingress Nginx

CloudBees recommends the ingress-nginx Ingress controller to manage external access to Core v2. The NGINX Ingress Controller is a top-level Kubernetes project and provides an example of using Pod Security Policies with the ingress-nginx Deployment. All you have to do is run the following command before installing the NGINX Ingress controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/psp/psp.yaml

The above command will create the following PSP, Role and RoleBinding:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    # Assumes apparmor available
    apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
    apparmor.security.beta.kubernetes.io/defaultProfileName:  'runtime/default'
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName:  'docker/default'
  name: ingress-nginx
spec:
  allowedCapabilities:
  - NET_BIND_SERVICE
  allowPrivilegeEscalation: true
  fsGroup:
    rule: 'MustRunAs'
    ranges:
    - min: 1
      max: 65535
  hostIPC: false
  hostNetwork: false
  hostPID: false
  hostPorts:
  - min: 80
    max: 65535
  privileged: false
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'MustRunAsNonRoot'
    ranges:
    - min: 33
      max: 65535
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  volumes:
  - 'configMap'
  - 'downwardAPI'
  - 'emptyDir'
  - 'projected'
  - 'secret'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-psp
  namespace: ingress-nginx
rules:
- apiGroups:
  - policy
  resourceNames:
  - ingress-nginx
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-psp
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-psp
subjects:
- kind: ServiceAccount
  name: default
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount

NOTE: You can also run that command after you have already installed the NGINX Ingress controller, but the PSP will only be applied after you restart or recreate the ingress-nginx Deployment.
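A restart can be sketched with `kubectl rollout restart` (available in kubectl 1.15+; the Deployment name nginx-ingress-controller comes from the upstream manifests and may differ in your installation):

```sh
kubectl rollout restart deployment/nginx-ingress-controller -n ingress-nginx
```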

Enable the Pod Security Policy Admission Controller

Once PSPs have been applied to all the ServiceAccounts in your Kubernetes cluster, you can enable the PodSecurityPolicy admission controller. Refer to the external documentation from Kubernetes and your cloud provider on enabling and using Pod Security Policies.
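As a sketch, on the managed platforms mentioned above the admission controller is enabled through the provider CLI (the cluster and resource-group names below are placeholders; on AKS this was a preview feature at the time of writing):

```sh
# GKE: enable the PodSecurityPolicy admission controller on an existing cluster
gcloud beta container clusters update my-cluster --enable-pod-security-policy

# AKS (preview): enable pod security policy on an existing cluster
az aks update --name my-cluster --resource-group my-resource-group \
  --enable-pod-security-policy
```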

Restrictive Pod Security Policies and Jenkins Kubernetes Pod Template Agents

The Jenkins Kubernetes plugin (for ephemeral K8s agents) defaults to using a K8s emptyDir volume type for the Jenkins agent workspace. This causes issues when using a restrictive PSP such as the cb-restricted PSP above: Kubernetes mounts emptyDir volumes as root:root with permissions set to 750, as detailed in this GitHub issue. To run containers in a Pod Template as a non-root user, you must specify a securityContext at the container or pod level. There are at least two ways to do this (in both cases at the pod level):

  1. In the Kubernetes cloud configuration UI via the Raw yaml for Pod field:

  2. In the raw yaml of a pod spec that you embed or load into your Jenkins job from a file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nodejs-app
    spec:
      containers:
      - name: nodejs
        image: node:10.10.0-alpine
        command:
        - cat
        tty: true
      - name: testcafe
        image: gcr.io/technologists/testcafe:0.0.2
        command:
        - cat
        tty: true
      securityContext:
        runAsUser: 1000
    
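If the agent Pod also mounts volumes that the non-root user needs to write to, you may additionally need fsGroup in the same pod-level securityContext; a sketch, where the group id 1000 is an assumption matching the runAsUser above:

```yaml
      securityContext:
        runAsUser: 1000
        # Volumes are mounted writable by this (assumed) supplemental group id
        fsGroup: 1000
```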
