How to leverage Kubernetes Shared Cloud from External Client Masters

Issue

  • I have connected external Client Masters to a CloudBees Core (Modern) Operations Center and I would like the client masters to be able to use the kubernetes shared cloud like any of the Managed Masters.

Environment

Explanation

The default “kubernetes shared cloud” is created at the root level and is visible to any connected master. As soon as an external Client Master that has the kubernetes plugin installed connects to Operations Center, the kubernetes cloud configuration is pushed to the master and can be used.

There are, however, two problems for client masters:

  • Authentication: the kubernetes shared cloud configuration does not define any means of authenticating to kubernetes. In that case, it relies on the behavior of the fabric8/kubernetes-client, which can infer authentication details from the file system:
    • either from a kubeconfig file, whose default location is $HOME/.kube/config
    • or from ServiceAccount details under /var/run/secrets/kubernetes.io/serviceaccount/
      This works out of the box inside a pod, where service account details are automatically injected at /var/run/secrets/kubernetes.io/serviceaccount/. From a client master, however, this cannot work unless the client master itself runs inside a pod in the same kubernetes cluster.
  • Routing: the Client Master URL (configured under Manage Jenkins > Configure System > Jenkins Location) must be reachable from inside the kubernetes cluster. If it is not, the KUBERNETES_JENKINS_URL system property or environment variable can be used to define a URL that takes precedence over the Master URL and that kubernetes pod agents should use to connect to the master. Managed Masters, for example, are started with the KUBERNETES_JENKINS_URL environment variable pointing to their internal endpoint (i.e. their kubernetes service URL), so agents connect directly to the masters through the kubernetes network.
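The routing override can be set in either form. A minimal sketch, using a placeholder URL in place of your Client Master's cluster-reachable address:

```shell
# Placeholder: substitute the URL of your Client Master as seen from inside
# the kubernetes cluster.
MASTER_URL="http://client-master.example.com:8080"

# As an environment variable (this is what Managed Masters are started with):
export KUBERNETES_JENKINS_URL="$MASTER_URL"

# Or as a system property on the Jenkins JVM (shown here without executing):
echo "java -DKUBERNETES_JENKINS_URL=$MASTER_URL -jar jenkins.war"
```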

Resolution

There are two viable solutions to leverage the “kubernetes shared cloud” from a client master:

  • either provide a kubeconfig file to the client master
  • or inject the service account details at the expected location /var/run/secrets/kubernetes.io/serviceaccount/

The advantage of these solutions is that they do not require any specific configuration in Jenkins or the Operations Center.

Pre-Requisites

  • A Service Account that has the permissions required by the kubernetes plugin in the namespace where agents must be spun up. By default, CloudBees Core defines a jenkins service account in the cloudbees-core namespace, which is used in the examples here.
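As a rough sketch of what "required permissions" means, the Role below grants the pod-level verbs the kubernetes plugin relies on (creating and watching agent pods, plus exec and log access); check the plugin documentation for the exact set your version needs. The role name is illustrative.

```shell
# Sketch: a minimal Role covering typical kubernetes plugin permissions.
# The name "jenkins-agents" is illustrative; verbs should be confirmed
# against the kubernetes plugin documentation for your version.
cat > jenkins-agent-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-agents
  namespace: cloudbees-core
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/exec", "pods/log"]
  verbs: ["get", "list", "watch", "create"]
EOF

# To verify the jenkins service account on a live cluster (requires access):
# kubectl auth can-i create pods \
#   --as=system:serviceaccount:cloudbees-core:jenkins -n cloudbees-core
```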

Solution 1: Create a kubeconfig file for the Client Master

  1. Create a kubeconfig file at $HOME/.kube/config, where $HOME is the home directory of the user running the Client Master, and populate it:

    apiVersion: v1
    kind: Config
    current-context: default-context
    clusters:
    - cluster:
        certificate-authority-data: ${CLUSTER_CA_CERT_BASE64}
        server: ${KUBERNETES_API_SERVER_URL}
      name: remote-cluster
    contexts:
    - context:
        cluster: remote-cluster
        namespace: ${NAMESPACE}
        user: jenkins
      name: default-context
    users:
    - name: jenkins
      user:
        token: ${SERVICEACCOUNT_TOKEN_CLEAR}
    

    Replace the variables with the appropriate values:

    • $KUBERNETES_API_SERVER_URL is the Kubernetes server URL: kubectl config view --minify | grep server
    • $SERVICEACCOUNT_TOKEN_CLEAR is the Service Account token in clear: kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.token} | base64 --decode
    • $CLUSTER_CA_CERT_BASE64 is the Kubernetes API Server CA Certificate in base64: kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.'ca\.crt'}
    • $NAMESPACE is the default namespace (where agents should be spun up by default): cloudbees-core
  2. Add the system property -DKUBERNETES_JENKINS_URL=$MASTER_URL to the Client Master startup arguments, where $MASTER_URL is the URL of the Client Master, and restart the master.

Note: Make sure that the user running the Client Master has permissions to read its $HOME/.kube/config.
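The kubeconfig assembly in step 1 can be scripted end to end. In this sketch the three lookups are replaced with placeholder values so the substitution is visible; on a real cluster, obtain them with the kubectl commands listed above, and copy the generated file to $HOME/.kube/config on the Client Master host.

```shell
# Placeholders -- replace with the output of the kubectl commands from step 1.
NAMESPACE="cloudbees-core"
KUBERNETES_API_SERVER_URL="https://203.0.113.10:6443"   # placeholder API server URL
SERVICEACCOUNT_TOKEN_CLEAR="placeholder-token"          # placeholder decoded token
CLUSTER_CA_CERT_BASE64="cGxhY2Vob2xkZXI="               # placeholder base64 CA cert

# Written to the working directory here; copy to $HOME/.kube/config on the
# Client Master host afterwards.
cat > client-master-kubeconfig <<EOF
apiVersion: v1
kind: Config
current-context: default-context
clusters:
- cluster:
    certificate-authority-data: ${CLUSTER_CA_CERT_BASE64}
    server: ${KUBERNETES_API_SERVER_URL}
  name: remote-cluster
contexts:
- context:
    cluster: remote-cluster
    namespace: ${NAMESPACE}
    user: jenkins
  name: default-context
users:
- name: jenkins
  user:
    token: ${SERVICEACCOUNT_TOKEN_CLEAR}
EOF
```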

Solution 2: Inject the Service Account details in ‘/var/run/secrets/kubernetes.io/serviceaccount/’

  1. On the Client Master’s host, create the directory /var/run/secrets/kubernetes.io/serviceaccount and make sure that the user running the client master’s service (for example jenkins) has permission to read the files in it:

    sudo mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
    sudo chown -R jenkins:jenkins /var/run/secrets/kubernetes.io/serviceaccount
    
  2. Then create the following files:
    • /var/run/secrets/kubernetes.io/serviceaccount/token: must contain the jenkins service account token in clear: kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.token} | base64 --decode
    • /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: must contain the Kubernetes API Server CA Certificate in clear: kubectl get secret $(kubectl get sa jenkins -n cloudbees-core -o jsonpath={.secrets[0].name}) -n cloudbees-core -o jsonpath={.data.'ca\.crt'} | base64 --decode
    • /var/run/secrets/kubernetes.io/serviceaccount/namespace: must contain the name of the default namespace, for example cloudbees-core
  3. Add the system property -DKUBERNETES_JENKINS_URL=$MASTER_URL to the Client Master startup arguments, where $MASTER_URL is the URL of the Client Master, and restart the master.
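The three files from step 2 can be staged in a scratch directory before being copied into place. Placeholder values stand in for the kubectl lookups; the final copy is shown as a comment because it requires root on the Client Master host.

```shell
# Stage the three service account files locally. The token and CA values are
# placeholders -- on a real cluster, use the kubectl commands from step 2.
STAGE="$(mktemp -d)"
printf '%s' "placeholder-token"   > "$STAGE/token"      # decoded service account token
printf '%s' "placeholder-ca-cert" > "$STAGE/ca.crt"     # decoded ca.crt certificate
printf '%s' "cloudbees-core"      > "$STAGE/namespace"  # default agent namespace

# On the Client Master host (requires root):
# sudo mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
# sudo cp "$STAGE"/token "$STAGE"/ca.crt "$STAGE"/namespace \
#   /var/run/secrets/kubernetes.io/serviceaccount/
```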

Tested product/plugin versions

Resources
