Restarting a managed controller fails after upgrading the Docker image

Issue

Restarting a managed controller fails after upgrading the Docker image.

When a managed controller is updated, the restart process deletes all of its previous Kubernetes objects, such as the Ingress, Service, and StatefulSet, and then creates a new set of Kubernetes objects.
In some cases, the new objects are created before the deletion process completes.

This race results in an invalid service error because, from the point of view of the Kubernetes API, an immutable field such as spec.clusterIP is being modified.
Here is an example of the error:

Message: Service "xxxxx" is invalid: spec.clusterIP: Invalid value: "": field is immutable. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.clusterIP, message=Invalid value: "": field is immutable, reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Service, name=jeff-test, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Service "jeff-test" is invalid: spec.clusterIP: Invalid value: "": field is immutable, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
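You can confirm this scenario by checking whether the old Service object still exists and still holds its immutable cluster IP. The sketch below is a hedged example: the namespace is hypothetical, and the Service name jeff-test is taken from the example error above; substitute your own values.

```shell
# Hypothetical namespace -- replace with the namespace of your controller.
NS="cloudbees-core"

# The Service name below (jeff-test) comes from the example error message.
# A non-empty clusterIP here means the old Service was never deleted, so
# re-creating it with an empty clusterIP trips "field is immutable".
kubectl get service jeff-test -n "$NS" -o jsonpath='{.spec.clusterIP}'

# Recent events for the object can show the failed create attempt.
kubectl get events -n "$NS" --field-selector involvedObject.name=jeff-test
```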

Environment

Resolution

To resolve the error:

  1. Acknowledge the error.
  2. If the Managed controller is still running, stop it.
  3. Once the Managed controller has fully stopped, restart it.
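The steps above can be verified from the command line. This is a hedged sketch: the namespace and controller name are hypothetical placeholders, and it assumes the controller's Kubernetes objects share the controller's name. Waiting for the old objects to be fully deleted before restarting avoids recreating the new objects over the old ones.

```shell
# Hypothetical values -- substitute your controller's namespace and name.
NS="cloudbees-core"
CONTROLLER="jeff-test"

# After stopping the controller, block until its old objects are gone.
kubectl wait --for=delete statefulset/"$CONTROLLER" -n "$NS" --timeout=120s
kubectl wait --for=delete service/"$CONTROLLER" -n "$NS" --timeout=120s

# Once both commands return, restart the Managed controller.
```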
