What Is Kubernetes Pod Disruption?

Kubernetes pods are the smallest deployable units in the Kubernetes platform. Each pod represents a single instance of a running process in the cluster and runs on a node (a worker machine in Kubernetes), which may be a virtual or physical machine. 

Occasionally, pod disruptions may occur within a cluster, from either voluntary or involuntary causes. Pod disruptions are a particular concern for owners of highly available applications and for cluster administrators who perform automated cluster actions. 

Essentially, pods remain in Kubernetes until a user or controller removes them or a system fault destroys them. Cluster administrators may apply Kubernetes pod disruption budgets (PDBs) to keep workloads running during disruptions by limiting how many pods of an application can be down simultaneously. 

What is Kubernetes Pod Disruption?

Every pod in Kubernetes follows a defined life cycle across phases such as Pending, Running, Succeeded, or Failed. Within the Kubernetes API, each pod has a specification (spec) and a status made up of a set of conditions. A pod is scheduled to a node only once; it then runs on that node until it stops or is terminated and is never rescheduled to a different node. 

In some scenarios, Kubernetes nodes may run low on memory or disk space, which forces the system (the kubelet) to disrupt pods, i.e., end their life cycles, to keep the node running. Cluster administrators or controllers may also deliberately disrupt pods (voluntary disruption), or disruption may occur because of a software or hardware fault (involuntary disruption). 

What Is a Pod Disruption Budget (PDB)?

A PDB is a Kubernetes object that guards pods managed by controllers such as ReplicaSet, StatefulSet, ReplicationController, and Deployment against voluntary disruption. PDBs help prevent downtime and outages by stopping too many pods from being shut down in a given period. 

In practical terms, a PDB maintains the minimum number of pods required to meet an SLA (service-level agreement). Kubernetes users may also think of a PDB as a platform object that specifies the minimum number of available replicas needed to keep a workload functioning stably during a voluntary eviction. 

PDBs are used by the Cluster Autoscaler to determine how to drain a node during scale-down operations, and they control the pace of pod eviction during node upgrades. For example, for a service with four pods and a minAvailable setting of three, the drain process will evict one pod and wait for the ReplicaSet controller to replace it with a new one before evicting the next pod.
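
Expressed as a manifest, the budget in that example might look like the following (the name nginx-pdb and the label app: nginx are placeholders for illustration):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 3          # at least three pods must stay available during voluntary disruptions
  selector:
    matchLabels:
      app: nginx           # matches the pods backing the four-pod service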

To set a pod disruption budget for a service running NGINX, use the following command:

kubectl create poddisruptionbudget my-pdb --selector=app=nginx --min-available=80%

In the example above, the PDB requires that 80% of the nginx pods stay healthy at all times. When a user requests a pod eviction, the cluster allows the graceful eviction only if it does not violate the PDB. 

Before starting with a PDB, users should consider a few points. 

First, users should establish which type of application the PDB protects and examine how that application responds to pod disruptions. Users then write the PDB definition as a YAML file and create the PDB object from that file, as shown below. 
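
As a minimal sketch, assuming an application labeled app: zookeeper and a file named pdb.yaml, the definition and the command that creates the object could look like this:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  maxUnavailable: 1        # allow at most one pod to be voluntarily disrupted at a time
  selector:
    matchLabels:
      app: zookeeper       # label of the pods the budget protects

kubectl apply -f pdb.yaml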

However, users must note that PDBs apply only to voluntary disruptions triggered by deliberate admin or user actions. PDBs do not protect applications or pods against involuntary disruptions. 

If users attempt to voluntarily disrupt more pods than the budget allows, the eviction request fails with an HTTP 429 (Too Many Requests) error, and the pod is not evicted because doing so would violate the PDB. 
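
As an illustration of the mechanism, an eviction can be requested directly through the pod's eviction subresource; the pod name nginx-abc123 below is a placeholder. If the eviction would violate the PDB, the API server responds with 429 Too Many Requests:

kubectl proxy --port=8001 &
curl -X POST -H 'Content-Type: application/json' \
  http://localhost:8001/api/v1/namespaces/default/pods/nginx-abc123/eviction \
  -d '{"apiVersion":"policy/v1","kind":"Eviction","metadata":{"name":"nginx-abc123","namespace":"default"}}'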

What’s the Difference Between a Voluntary and Involuntary Disruption?

There are two main types of Kubernetes pod disruptions: voluntary disruptions, caused by the deliberate actions of controllers and users, and involuntary disruptions, which result from unavoidable hardware or software faults. 

Common examples of involuntary disruptions include hardware failure of physical machines, nodes disappearing because of a network partition, and kernel panics. Examples of voluntary pod disruptions include cluster administrator actions such as draining nodes to scale a cluster down or removing a pod from a node for system updates and maintenance. 

It is important to remember that PDBs only apply to voluntary pod disruptions/evictions, where users and administrators temporarily evict pods for specific cluster actions. Users may apply other solutions for involuntary pod disruptions, such as replicating applications and spreading them across zones. 

Pod disruptions may also occur in the form of node-pressure eviction, where the kubelet proactively terminates pods to reclaim resources on a node and avoid starving it. In such cases, the kubelet ignores PDBs. By contrast, an API-initiated eviction respects the preconfigured PDB and the pod's terminationGracePeriodSeconds (the time allowed for a graceful shutdown of the pod). 

The graceful shutdown of pods, which has a default time frame of 30 seconds, is essential for Kubernetes cluster management, preventing potential workload disruptions and facilitating proper clean-up procedures. From a business/organizational perspective, a graceful termination of pods enables faster system recovery with minimal impact on the end-user experience. 
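
For illustration, the grace period can be extended beyond the 30-second default in the pod spec; the 60-second value and the preStop sleep below are arbitrary choices for the sketch:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  terminationGracePeriodSeconds: 60   # kubelet waits up to 60s after sending SIGTERM
  containers:
    - name: nginx
      image: nginx:1.25
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]   # give load balancers time to drain connections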

Therefore, PDB is not a foolproof solution for all instances of unavailability but rather an object specifically for API-initiated evictions. 

How to Specify a Disruption Budget

A PDB spec comprises three fields: .spec.selector, .spec.minAvailable, and .spec.maxUnavailable. Essentially, .spec.selector is a label selector that identifies the set of pods the budget applies to.  

With a PDB in place, users can control pod disruption through the .spec.minAvailable and .spec.maxUnavailable fields: .spec.minAvailable sets the number (or percentage) of pods that must remain available at all times, while .spec.maxUnavailable sets the maximum number (or percentage) of pods that may be unavailable during a voluntary disruption. 

Cluster administrators may specify only one of .spec.minAvailable and .spec.maxUnavailable per PDB. Setting .spec.maxUnavailable to 0 (or .spec.minAvailable to 100%) forbids voluntary pod evictions entirely. 

Additionally, there are some prerequisites to check before specifying a Kubernetes PDB. Users should be running Kubernetes v1.21 or later, which provides the stable policy/v1 PDB API; if not, it is necessary to upgrade the cluster first. 

Additionally, the users applying a PDB should be the owners of applications on the cluster that require high availability, such as quorum-based applications. It is also essential to confirm that the cluster owner or service provider agrees to the use of disruption budgets before beginning (if the user requires their permission). 

Understand Application Reactions 

Different application types respond differently to the pod disruption process. Therefore, users should always base their PDB implementation on the type of Kubernetes application they manage. By assessing how an application reacts, users can make the most of PDBs and avoid unnecessary configuration in some scenarios. 

For example, for restartable batch jobs that only need to run to completion, the Job controller will create replacement pods, so no PDB is needed. Similarly, for single-instance stateful applications whose downtime requires prior approval, users may either tolerate the downtime without a PDB, or set a PDB with maxUnavailable=0, coordinate the downtime or update, delete the PDB beforehand, and recreate it afterward if necessary. 
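
For the second option, a blocking budget could look like the following (the name and label are placeholders); the PDB is deleted just before the planned maintenance window and recreated afterward:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: db-pdb
spec:
  maxUnavailable: 0        # forbid all voluntary evictions while this PDB exists
  selector:
    matchLabels:
      app: db

kubectl delete poddisruptionbudget db-pdb   # run just before the planned maintenance window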

Rounding Logic 

Users may express the required value of a PDB as an integer or a percentage. For example, a minAvailable of 8 means that at least eight pods must be available at all times, while a minAvailable of 50% means that at least half of the total pods must always remain available. 

Kubernetes rounds percentages up. For example, in a cluster scenario with a total of nine pods and a minAvailable of 50%, the PDB ensures that at least five pods stay online at all times. 
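
In manifest form, that scenario could be written as follows (the name and label are placeholders):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: half-available-pdb
spec:
  minAvailable: "50%"      # with 9 matching pods, 4.5 rounds up to 5 required healthy pods
  selector:
    matchLabels:
      app: my-app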

Assessing Pod Disruption Status 

Kubernetes users should regularly check the PDB status to better understand system behavior and keep workloads online. Important status fields include the current number of healthy pods, the minimum number of desired healthy pods (derived from the .spec.minAvailable or .spec.maxUnavailable value), and the condition reason stating whether disruption is allowed (e.g., SufficientPods, meaning the budget has enough healthy pods for a disruption to proceed). 
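
These values can be inspected with kubectl; for example, using the my-pdb budget created earlier:

kubectl get poddisruptionbudget my-pdb
kubectl describe poddisruptionbudget my-pdb
kubectl get poddisruptionbudget my-pdb -o yaml   # status shows currentHealthy, desiredHealthy, and disruptionsAllowed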

How to Avoid Outages Using a PodDisruptionBudget (PDB)

The first step is to create a PodDisruptionBudget resource whose selector matches the targeted pods. This resource helps the Kubernetes system time pod drain requests so that evictions remain nondisruptive.  

With a PDB in place at the start of a draining process, the eviction logic can evaluate the selector and the state of all associated pods. By doing so, users can drain nodes (e.g., during system updates) while maintaining the minimum number of available pods and avoiding a negative impact. As such, PDBs can reduce or eliminate outages and help maintain cluster performance.  
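
A typical drain command looks like the following (node-1 is a placeholder node name); the drain pauses whenever evicting the next pod would violate a PDB:

kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data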

Other Useful Details in Kubernetes PDB

Kubernetes 1.21 brought a number of updates and changes to the platform, including to the PDB API. Notably, an empty selector previously matched zero pods, but with the policy/v1 API it matches every pod in the namespace. 

At times, users may experience various PDB configuration complications during a node upgrade or cluster action. Therefore, it is crucial to identify some common scenarios to facilitate a quick response and minimal downtime. 

Here are some potential PDB issues:

Caution When Using Horizontal Pod Autoscalers

Horizontal Pod Autoscalers let users scale Kubernetes workloads up and down based on a chosen metric of system load. However, a poorly configured PDB can conflict with the autoscaler: the budget is evaluated against the pods that currently exist and does not follow the shifting replica counts set by the autoscaler, which can lead to a mismatch between the two. 

As a best practice when combining an autoscaler with a PDB, users should define the PDB deliberately for applications or system pods that may block a scale-down. Alternatively, users may run pause pods, low-priority placeholder pods that give the system spare capacity to absorb additional requests during a spike in activity. 
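
As one possible approach (not the only one), expressing the budget as maxUnavailable keeps the allowance valid as the HPA changes the replica count. The sketch below assumes a Deployment named web with the label app: web; the names and thresholds are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  maxUnavailable: 1        # stays meaningful whether the HPA runs 3 or 10 replicas
  selector:
    matchLabels:
      app: web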

Additionally, some users may not realize that their clusters run PDBs, since budgets may come packaged with Kubernetes software extensions such as Operators. Therefore, users must pay close attention to PDB configurations and the complications that can stem from platform actions such as node upgrades. 

PDB With a Single Replica

Applying a PDB to a deployment with a single pod can cause kubectl drain to hang indefinitely, because evicting the only pod would violate the budget. In such scenarios, users must manage pod drains and updates manually. Hence, users should only apply PDBs to deployments with more than one replica, which a highly available Kubernetes system needs anyway.   
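
The problematic combination looks roughly like the following (names and image are illustrative): with a single replica and minAvailable: 1, no voluntary eviction is ever allowed, so the drain blocks:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: single-app
spec:
  replicas: 1              # only one pod ever exists
  selector:
    matchLabels:
      app: single-app
  template:
    metadata:
      labels:
        app: single-app
    spec:
      containers:
        - name: app
          image: nginx:1.25
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: single-app-pdb
spec:
  minAvailable: 1          # evicting the only pod would violate the budget, so kubectl drain hangs
  selector:
    matchLabels:
      app: single-app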

Indefinite Deadlocks With Multiple PDBs

Multiple PDBs with overlapping selectors may conflict within the cluster, causing draining processes to hang indefinitely. Therefore, as a best practice, users should give each set of pods a meaningful, unambiguous selector with a matchLabels entry that matches the controller's selector. 

Summary

Kubernetes remains one of the most widely used workload management platforms worldwide, thanks in part to features such as PDBs. PDBs give users greater control over API-initiated evictions, minimizing the risk of workload disruption and outages. However, users need to note that PDBs have their share of limitations and should apply them only in the Kubernetes scenarios they are designed for. 

PDBs are suitable for:

  • Voluntary pod disruptions (i.e., cluster administrator actions such as running routine maintenance).
  • Highly available deployments with more than one replica.

PDBs are unsuitable for:

  • Involuntary pod disruptions (i.e., large-scale hardware or software errors).
  • Node-pressure evictions.
  • Deployments involving a single replica.

By creating a PDB for each application, users can keep applications highly available despite frequent voluntary pod disruptions (e.g., node drains during routine maintenance). 

While the Kubernetes scheduler helps allocate pods to nodes based on available resources, complexities arise when nodes must be drained or removed while some pods are still running, leading to potential downtime. With PDB resources in place, users can keep Kubernetes applications available to accept incoming requests with minimal delay.