In a Kubernetes environment, pods can fail for various reasons, most often configuration issues or capacity constraints. In such cases you may see pod statuses like “Pending”, “CrashLoopBackOff”, “Evicted”, “Failed”, “ContainerCannotRun”, “Error”, “ContainerCreating”, etc. As a DevOps engineer, you either need to set up a periodic job to clean up the failed pods or clean them up manually using the “kubectl” command. This article gives you simple commands to quickly clean up failed pods from all the namespaces.
Note: For the OpenShift environment, kindly use the “oc” command instead of “kubectl”.
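If you would rather automate the cleanup than run the commands by hand, the periodic job mentioned above can be as simple as a cron entry on an admin host that already has kubectl configured. Below is a minimal sketch, assuming a hypothetical kubectl path, log file and nightly schedule (none of these are from the article); the command itself is the Failed-pod cleanup covered later in this post.

# Hypothetical crontab entry: every night at 01:00, remove Failed pods
# from all namespaces and append the result to a log file.
0 1 * * * /usr/local/bin/kubectl delete pods --field-selector status.phase=Failed --all-namespaces >> /var/log/pod-cleanup.log 2>&1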
Cleanup/Delete the “CrashLoopBackOff” state pods:
You can delete all pods in the “CrashLoopBackOff” state from all namespaces in a Kubernetes cluster using the following command.
1. List the running pods
uxpro-$ kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-7fcd4fd975-hjtbr   2/2     Running            2          18d
nginx-web1               0/1     CrashLoopBackOff   1          6s
uxpro-$
2. Delete the “CrashLoopBackOff” pods.
uxpro-$ kubectl delete pods $(kubectl get pod --all-namespaces -o jsonpath='{.items[?(@.status.containerStatuses[*].state.waiting.reason=="CrashLoopBackOff")].metadata.name}') pod "nginx-web1" deleted uxpro-$
This command uses kubectl get to retrieve all pods in all namespaces and filters them on the waiting reason in their “containerStatuses”. The resulting pod names are passed to kubectl delete pods, which removes the pods in the “CrashLoopBackOff” state. Keep in mind that the inner kubectl get collects names from every namespace, while the outer kubectl delete pods only acts on the current namespace; a namespace-aware variant is sketched below.
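For clusters where the broken pods live in several namespaces, here is a minimal namespace-aware sketch (not from the original article) that prints namespace/name pairs and deletes each pod in its own namespace:

# Print "<namespace> <name>" for every CrashLoopBackOff pod, then delete
# each one in its own namespace.
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[?(@.status.containerStatuses[*].state.waiting.reason=="CrashLoopBackOff")]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
  | while read -r ns pod; do
      kubectl delete pod -n "$ns" "$pod"
    done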
Cleanup/Delete the “ImagePullBackOff” state pods:
You can delete all pods in the “ImagePullBackOff” state from all namespaces in a Kubernetes cluster using the following command.
1. List the pods using the kubectl command.
uxpro-$ kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-7fcd4fd975-hjtbr   2/2     Running            2          18d
nginx-web1               0/1     ImagePullBackOff   0          47h
nginx-web2               0/1     ImagePullBackOff   0          47h
2. Use the following command to delete the “ImagePullBackOff” state pods from all the namespaces.
uxpro-$ kubectl delete pods $(kubectl get pod --all-namespaces -o jsonpath='{.items[?(@.status.containerStatuses[*].state.waiting.reason=="ImagePullBackOff")].metadata.name}') pod "nginx-web1" deleted pod "nginx-web2" deleted uxpro-$
Cleanup/Delete the “ErrImagePull” state pods:
Similar to the “ImagePullBackOff” state pods, you can also clean up the “ErrImagePull” state pods. Only the waiting reason in the jsonpath filter changes to “ErrImagePull”; a generalized version is sketched after the example below.
1. List the running pods.
uxpro-$ kubectl get pods
NAME                     READY   STATUS         RESTARTS   AGE
nginx-7fcd4fd975-hjtbr   2/2     Running        2          18d
nginx-web1               0/1     ErrImagePull   0          58s
2. Delete the “ErrImagePull” pods from all the namespaces.
uxpro-$ kubectl delete pods $(kubectl get pod --all-namespaces -o jsonpath='{.items[?(@.status.containerStatuses[*].state.waiting.reason=="ErrImagePull")].metadata.name}') pod "nginx-web1" deleted uxpro-$
How to delete the Pending pods?
You can clean up the Pending pods using the following command on Kubernetes. Since “Pending” is a pod phase, a simple field selector is enough here. For OpenShift, kindly substitute “kubectl” with “oc”.
1. List the pods.
uxpro-$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7fcd4fd975-hjtbr   2/2     Running   2          18d
nginx-web1               0/1     Pending   0          5s
2. Delete the pending pods using the following command.
uxpro-$ kubectl delete pods --field-selector status.phase=Pending --all-namespaces
pod "nginx-web1" deleted
uxpro-$
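Unlike “CrashLoopBackOff”, which is a container waiting reason, “Pending” is a pod phase, which is why the field selector works here. The same selector can be used with kubectl get to review the pods first:

# List the Pending pods across all namespaces before deleting them
uxpro-$ kubectl get pods --field-selector status.phase=Pending --all-namespaces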
How to delete the Evicted/Failed pods?
If the pod status is Failed or Evicted, you can clean them up using the following command. Evicted pods are reported with the phase “Failed”, so the same selector covers both.
uxpro-$ kubectl delete pods --field-selector status.phase=Failed --all-namespaces
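If you only want to remove the evicted pods and keep the other failed ones around for troubleshooting, the status reason “Evicted” can be used as a filter. Here is a small namespace-aware sketch (not from the original article):

# Delete only the pods whose status reason is "Evicted"
kubectl get pods --all-namespaces --field-selector status.phase=Failed \
  -o jsonpath='{range .items[?(@.status.reason=="Evicted")]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
  | while read -r ns pod; do
      kubectl delete pod -n "$ns" "$pod"
    done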
Conclusion:
Kubernetes clusters are robust and powerful computing platforms that orchestrate containers effectively. Sometimes, though, you need to clean up non-functional pods to free up resources. Leaving failed pods in the environment clutters the cluster and wastes resources, so it’s a good practice to clean them up and keep the environment tidy. Hopefully, the above commands help you perform the cleanup quickly by operating on all the namespaces in one go.
Hope this article is informative to you.