Required Data: NGINX Ingress Controller


  • I have deployed the NGINX Ingress Controller but I am not able to reach my instances


This article describes how to collect the minimum required information for the NGINX Ingress Controller on a CloudBees Core on Modern installation, so that issues can be troubleshot efficiently.

If the required data is larger than 50 MB, you will not be able to upload everything through ZenDesk. In that case, we encourage you to use our upload service to attach all the required information.


Automatic Data collection

This is the preferred method if you are using a product supported by the cbsupport CLI.

Products that currently support collecting ingress data are:

Steps to follow are:

  1. Install and configure cbsupport following Using cbsupport CLI to collect the requested data
  2. Run cbsupport required-data ingress
  3. At the command prompt, select the namespace where the ingress controller has been deployed, commonly ingress-nginx
  4. Collect the archive generated in the working directory of cbsupport and attach it to the ticket using our upload service

Manual Data collection

Required Data check list

  •  Details about the Load Balancer solution
  •  Kubernetes CloudBees Core resources details
  •  Kubernetes NGINX Ingress Controller resources details
  •  Kubernetes NGINX Ingress Controller pod logs
  •  Reachability of CloudBees Core via DNS from the workstation
  •  Reachability of CloudBees Core via Load Balancer from the workstation
  •  Reachability of CloudBees Core from Kubernetes nodes


To facilitate the retrieval of data, export the following variables, replacing each placeholder with the value from your environment:


  • <domain-name>: the domain used for CloudBees Core
  • <cloudbees-core-namespace>: the namespace where CloudBees Core is deployed
  • <ingress-namespace>: the namespace where the NGINX Ingress Controller is deployed, usually ingress-nginx
  • <nginx-service-name>: the name of the ingress controller service (not the “default backend” service), which you can retrieve with kubectl get svc -n $NGINX_NAMESPACE. It is usually ingress-nginx for a manual installation or nginx-ingress-controller for a Helm installation.
  • <loadbalancer-external-ip>: the IP address that the DNS resolves to in the output of the nslookup <domain-name> command. You may also retrieve the load balancer IP with kubectl get svc $SERVICE_NAME -n $NGINX_NAMESPACE -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
  • <nginx-application-label>: the label of the NGINX ingress application resources. If installed with Helm, the label should be app=nginx-ingress. If installed manually, the label should be
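As a concrete sketch, the exports can look like this. Every value below is a placeholder from a hypothetical environment; substitute your own:

```shell
# All values below are examples only; replace them with the values
# from your own environment.
export DOMAIN_NAME="cloudbees.example.com"   # <domain-name>
export CB_NAMESPACE="cloudbees-core"         # <cloudbees-core-namespace>
export NGINX_NAMESPACE="ingress-nginx"       # <ingress-namespace>
export SERVICE_NAME="ingress-nginx"          # <nginx-service-name>
export EXTERNAL_IP="203.0.113.10"            # <loadbalancer-external-ip>
export NGINX_LABEL="app=nginx-ingress"       # <nginx-application-label>
```

The commands in the rest of this article reference these variables (for example $NGINX_NAMESPACE and $SERVICE_NAME).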

Load Balancer solution

In a non-cloud-managed environment, or when not using a service of type LoadBalancer, please provide details about the Load Balancer solution in front of the Ingress:

  • What solution? (HAProxy, F5 VIP, AWS ELB, …)
  • What type? (Layer 4, Layer 7, …)
  • Configuration files and/or evidence of the load balancer configuration (port mappings, header settings, SSL, proxy protocol, …)

Kubernetes CloudBees Core resources details

Resources of CloudBees Core deployment in the cluster:

kubectl get node,pod,svc,ing,ep -o yaml -n $CB_NAMESPACE > k8s-details.yaml
kubectl get node,pod,svc,ing,ep -o wide -n $CB_NAMESPACE > k8s-details.txt

Kubernetes NGINX Ingress Controller resources details

Resources of the NGINX Ingress Controller deployment in the cluster:

kubectl get daemonset,deployment,pod,svc,ep,cm -o wide -n $NGINX_NAMESPACE > ingress-nginx-details.txt
kubectl get daemonset,deployment,pod,svc,ep,cm -o yaml -n $NGINX_NAMESPACE > ingress-nginx-details.yaml

NGINX Ingress Controller pod logs

Collect the logs of each ingress-controller pod. You can find the NGINX ingress controller pods by running:

kubectl get pod -l "${NGINX_LABEL}" -o wide -n $NGINX_NAMESPACE

Then, for each pod, collect the logs:

kubectl logs $POD_NAME -n $NGINX_NAMESPACE > $POD_NAME.log
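The two steps above can be combined into a single loop. A minimal sketch, assuming the variables exported earlier are set (the collect_ingress_logs helper name is illustrative):

```shell
# Illustrative helper: collect the logs of every ingress controller pod.
# Assumes NGINX_LABEL and NGINX_NAMESPACE are already exported.
collect_ingress_logs() {
  for POD_NAME in $(kubectl get pod -l "${NGINX_LABEL}" -n "$NGINX_NAMESPACE" \
      -o jsonpath='{.items[*].metadata.name}'); do
    kubectl logs "$POD_NAME" -n "$NGINX_NAMESPACE" > "$POD_NAME.log"
  done
}
```

Run collect_ingress_logs from the directory where you want the log files written.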

Reachability of CloudBees Core via DNS from the workstation

Provide the output of the following commands to ensure that the DNS resolves to an IP address and that CJOC can be reached:

nslookup $DOMAIN_NAME > nslookup.log
curl -IvL http://$DOMAIN_NAME/cjoc --max-time 10 > curl-ing-through-dns-http.log 2>&1
curl -IkvL https://$DOMAIN_NAME/cjoc --max-time 10 > curl-ing-through-dns-https.log 2>&1

Reachability of CloudBees Core via Load Balancer IP from the workstation

Get the “External IP” or IP that the DNS resolves to:

nslookup $DOMAIN_NAME
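Alternatively, the load balancer IP can be read from the service status, using the jsonpath shown earlier. A sketch, where the get_external_ip helper name is illustrative (note that .ip is empty for load balancers that publish a hostname instead of an IP):

```shell
# Illustrative helper: read the load balancer IP from the service status.
# Assumes SERVICE_NAME and NGINX_NAMESPACE are already exported.
get_external_ip() {
  EXTERNAL_IP=$(kubectl get svc "$SERVICE_NAME" -n "$NGINX_NAMESPACE" \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo "$EXTERNAL_IP"
}
```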

Test if the ingress controller can be reached through the Load Balancer:

curl -IvL -H "Host: $DOMAIN_NAME" --resolve $DOMAIN_NAME:80:$EXTERNAL_IP http://$DOMAIN_NAME/cjoc --max-time 10 > curl-ing-through-lb-http.log 2>&1
curl -IkvL -H "Host: $DOMAIN_NAME" --resolve $DOMAIN_NAME:443:$EXTERNAL_IP https://$DOMAIN_NAME/cjoc --max-time 10 > curl-ing-through-lb-https.log 2>&1

Reachability of CloudBees Core from Kubernetes Nodes

Retrieve the HTTP and HTTPS node ports that the NGINX ingress controller service is exposing:

kubectl get svc $SERVICE_NAME -n $NGINX_NAMESPACE -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
kubectl get svc $SERVICE_NAME -n $NGINX_NAMESPACE -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'
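These values can be captured into the $HTTP_NODE_PORT and $HTTPS_NODE_PORT variables used in the node tests below. A sketch, assuming the service ports are named http and https (the get_node_ports helper name is illustrative):

```shell
# Illustrative helper: capture the HTTP/HTTPS node ports into variables.
# Assumes SERVICE_NAME and NGINX_NAMESPACE are already exported.
get_node_ports() {
  HTTP_NODE_PORT=$(kubectl get svc "$SERVICE_NAME" -n "$NGINX_NAMESPACE" \
    -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
  HTTPS_NODE_PORT=$(kubectl get svc "$SERVICE_NAME" -n "$NGINX_NAMESPACE" \
    -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
}
```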

Retrieve the nodes where the NGINX Ingress Controller pods are running:

kubectl get pods -l "${NGINX_LABEL}" -o jsonpath='{.items[*].spec.nodeName}' -n $NGINX_NAMESPACE

Log in to each Kubernetes node and test if the ingress controller can be reached directly from there (first export $DOMAIN_NAME, $HTTP_NODE_PORT and $HTTPS_NODE_PORT on the node with their corresponding values):

curl -IvL -H "Host: $DOMAIN_NAME" http://localhost:$HTTP_NODE_PORT/cjoc --max-time 10 > curl-ing-from-node-http.log 2>&1
curl -IkvL -H "Host: $DOMAIN_NAME" https://localhost:$HTTPS_NODE_PORT/cjoc --max-time 10 > curl-ing-from-node-https.log 2>&1
curl -IvL -H "Host: $DOMAIN_NAME" http://localhost:$HTTP_NODE_PORT/cjoc --max-time 10 --haproxy-protocol --ipv4 > curl-ing-from-node-http-proxy-protocol.log 2>&1
curl -IkvL -H "Host: $DOMAIN_NAME" https://localhost:$HTTPS_NODE_PORT/cjoc --haproxy-protocol --ipv4 --max-time 10 > curl-ing-from-node-https-proxy-protocol.log 2>&1

Collect the files.
