This article gives a high-level view of Kubernetes Pods and Services. Kubernetes runs containers, but always inside Pods: you cannot deploy a container directly in Kubernetes without a Pod. The shared context of a Pod is a set of Linux namespaces, cgroups, and other facets of isolation. A Docker container uses the same mechanisms, but inside a Pod the application gets a further level of sub-isolation, because the Pod hosts the container runtime's containers (Docker or rkt) within it. A Pod can consist of one or more containers, and the containers within a Pod share the same IP address and port space. The Pod tells the cluster how and where its containers should run.
In the VMware world, the virtual machine is the atomic unit of deployment, and in the Docker world it is the container. In the Kubernetes world, the Pod is the atomic unit.
A Pod is the highest-level ring-fenced environment: it creates the network stack, kernel namespaces, and so on. All containers in a Pod share that Pod environment, including the kernel namespaces and shared memory.
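To make this concrete, here is a minimal Pod manifest (the Pod name, container names, and images are illustrative, not taken from the article). Both containers run inside the same Pod, so they share its network namespace and IP address and must not bind the same port:

```yaml
# Minimal Pod sketch with two containers sharing the Pod environment.
# Names and images are illustrative; the containers can reach each other
# over localhost because they share the Pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: log-sidecar
    image: busybox:1.35
    command: ["sh", "-c", "tail -f /dev/null"]
```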
Pod Lifecycle:
A Pod's lifecycle is similar to a human life. Born! Live !! Die !!! There is no way to restart or reboot a Pod. When a Pod dies, a brand new Pod is deployed by the replication controller in the Kubernetes cluster.
| Value | Description |
| --- | --- |
| Pending | The Pod has been accepted by the Kubernetes cluster, but one or more of the container images have not yet been created. This includes time before being scheduled as well as time spent downloading images over the network, which can take a while. |
| Running | The Pod has been bound to a node, and all of its containers have been created. At least one container is still running, or is in the process of starting or restarting. |
| Succeeded | All containers in the Pod have terminated successfully and will not be restarted. |
| Failed | All containers in the Pod have terminated, and at least one container has terminated in failure; that is, the container either exited with a non-zero status or was terminated by the system. |
| Unknown | For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod. |
Refer: https://kubernetes.io/docs/
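As a rough illustration, inspecting a Pod with `kubectl get pod <name> -o yaml` returns a `status` section whose `phase` field carries one of the values from the table above. A trimmed example (the IPs and values below are illustrative) might look like this:

```yaml
# Trimmed status section of a running Pod; values are illustrative.
status:
  phase: Running
  podIP: 10.244.1.23
  hostIP: 192.168.1.10
  conditions:
  - type: Ready
    status: "True"
  containerStatuses:
  - name: web
    ready: true
    restartCount: 0
```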
If any running Pod dies, it is re-deployed somewhere within the cluster with a new IP address. In the example below, the database Pod on Node 2 has failed and a brand new Pod has been re-deployed on Node 5 with a new Pod IP.
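As a sketch of how the replication controller mentioned earlier keeps Pods alive, a minimal ReplicationController manifest might look like the following (the names, labels, and image are assumptions for illustration). When one of the two database Pods dies, the controller creates a replacement on any available node, and that replacement receives a new IP:

```yaml
# Minimal ReplicationController sketch; names, labels, and image are illustrative.
# It keeps two database Pods running; a failed Pod is replaced automatically.
apiVersion: v1
kind: ReplicationController
metadata:
  name: db-rc
spec:
  replicas: 2
  selector:
    app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "example"
```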
Why is a Service needed in Kubernetes?
In the example below, three front-end application Pods communicate with two backend database Pods. If any of the backend database Pods dies or terminates, a brand new Pod is deployed with a new IP address but the exact configuration of the terminated Pod.
- When Pods are re-deployed with new IP addresses, the front-end application servers might not be aware of the change.
- We can hit the same issue when scaling the environment up or down, since new Pods spin up with new IPs.
- All existing Pods are replaced with new ones when you perform rolling updates.
How to overcome the above-mentioned limitations? Service !!!
A Service creates a bridge between the front-end and back-end Pods in a Kubernetes cluster. It also provides load balancing across the Pods. A Service is a Kubernetes object and is defined in a YAML manifest. Once the Service object is in place, it provides a stable IP address and DNS name in front of the backend Pods.
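A minimal Service manifest, assuming the backend database Pods carry the label `app: db` (an illustrative label, matching the sketch above), might look like this:

```yaml
# Minimal Service sketch; name, label, and ports are illustrative.
# The selector matches the backend DB Pods, so the Service gives them a
# stable ClusterIP and DNS name and load-balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  selector:
    app: db
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
```

Clients inside the cluster can then reach the databases via the stable name `db-service` (resolved through cluster DNS) instead of tracking individual Pod IPs.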
In the above example, the frontend Pods reach the Service object, which load-balances to the backend database Pods behind a stable IP and DNS name. If one of the Pods dies and gets replaced with another, the Service updates and maintains the replaced Pod's IP details.
If you scale up the DB Pods, the Service picks up the newly created Pods' IPs and spreads incoming requests across them to balance the load. In the upcoming article, we will discuss Labels in Kubernetes.
Hope this article is informative to you. Share it! Be Sociable !!!