Can not delete pods in Kubernetes


I faced the same issue. Run this command:

kubectl get deployment

You will see the deployment that owns your pod. Copy its name and then run:

kubectl delete deployment xyz

Then check again: no new pods will be created.
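For example, a quick walk-through, assuming a hypothetical deployment named my-app that owns the stuck pods:

kubectl get deployment
# NAME     READY   UP-TO-DATE   AVAILABLE   AGE
# my-app   1/1     1            1           2d

kubectl delete deployment my-app
# deployment.apps "my-app" deleted

kubectl get pods
# the pods previously owned by my-app terminate and are not recreated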


The link provided by the OP may be unavailable. See the update section below.

Since you created your dgraph server using https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml, just use that same manifest to delete the resources it created:

$ kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/master/contrib/config/kubernetes/dgraph-single.yaml
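To confirm that nothing from the manifest is left behind, a generic check of the namespace is enough (this is not specific to dgraph):

$ kubectl get all
# the pods, services, etc. created by the manifest should no longer be listed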

Update

Basically, this is an explanation of why this happens.

Kubernetes has several workload resources, i.e. those whose manifests contain a PodTemplate. Here is who controls whom:

  • ReplicationController -> Pod(s)
  • ReplicaSet -> Pod(s)
  • Deployment -> ReplicaSet(s) -> Pod(s)
  • StatefulSet -> Pod(s)
  • DaemonSet -> Pod(s)
  • Job -> Pod
  • CronJob -> Job(s) -> Pod

a -> b means a creates and controls b, and the .metadata.ownerReferences field in b's manifest contains a reference to a. For example,

apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
  ...

This way, deleting the parent object also deletes the child object via garbage collection.
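You can see this ownership from the command line by printing a pod's ownerReferences directly (the pod name and namespace below are placeholders):

$ kubectl get pod {pod name} -n {namespace} -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'
ReplicaSet/my-repset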

So, a's controller ensures that a's current status matches a's spec. Say one deletes b: b will be deleted, but a is still alive, and a's controller sees that there is a difference between a's current status and a's spec. So a's controller recreates a new b object to match a's spec.
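You can watch this reconciliation in action: delete one pod that belongs to a Deployment and the ReplicaSet controller creates a replacement almost immediately (names are placeholders):

$ kubectl delete pod {pod name} -n {namespace}
$ kubectl get pods -n {namespace}
# a new pod with a different random suffix appears to match the Deployment's spec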

The OP created a Deployment, which created a ReplicaSet, which in turn created the Pod(s). So the solution here was to delete the root object, which was the Deployment.

$ kubectl get deploy -n {namespace}
$ kubectl delete deploy {deployment name} -n {namespace}

Note

Another problem that may arise during deletion is the following: if there are any finalizers in the .metadata.finalizers[] section, the object is deleted only after the associated controller has completed the task(s) the finalizers stand for. If you want to delete the object without waiting for the finalizers' actions, you have to remove those finalizers first. For example,

$ kubectl patch -n {namespace} deploy {deployment name} --patch '{"metadata":{"finalizers":[]}}'
$ kubectl delete -n {namespace} deploy {deployment name}
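Before patching, you can check whether any finalizers are actually set on the object (placeholders as above):

$ kubectl get deploy {deployment name} -n {namespace} -o jsonpath='{.metadata.finalizers}'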


You can perform a graceful pod deletion with the following command:

kubectl delete pods <pod>

If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:

kubectl delete pods <pod> --grace-period=0 --force

If you’re using any version of kubectl <= 1.4, you should omit the --force option and use:

kubectl delete pods <pod> --grace-period=0

If the pod is still stuck in the Unknown state even after these commands, use the following command to remove it from the cluster:

kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
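After any of these commands you can confirm the pod is really gone (the pod name is a placeholder):

kubectl get pod <pod>
# Error from server (NotFound): pods "<pod>" not found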