StatefulSet is the workload API object used to manage stateful applications. It manages a set of pods that have unique, persistent IDs and stable hostnames, and it is used to run stateful applications in Kubernetes (K8s) with persistent storage.
Stateful applications are applications that save data to persistent disk storage for use by the server or by clients. An example is a database in which data is stored and from which it is retrieved by other applications.
When working in a Kubernetes environment, dealing with StatefulSets is inevitable, especially when errors are encountered. It is important to have some troubleshooting know-how to quickly restore function and avoid disruptions.
The persistent storage of a StatefulSet ensures that data is saved even if the pods that run as part of the StatefulSet shut down. This allows StatefulSets to run replicated databases with a unique and persistent ID for every individual pod. The identity of a pod is maintained even if it is moved to another data center or reassigned to a different machine. These persistent identifiers make it possible to associate storage volumes with pods throughout their existence, even if the pods shut down temporarily.
In other words, StatefulSets ensure an established order and uniqueness for their pods, which is crucial when deploying and scaling a set of pods. StatefulSets are an essential part of any solution that uses storage volumes to provide persistence to workloads.
StatefulSets are used in applications that need any or all of these attributes: stable unique network identifiers, stable persistent storage, ordered deployment and scaling, and ordered automated updates.
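As a concrete illustration, a minimal StatefulSet manifest requesting these attributes might look like the following sketch (the name “myapp”, the container image, and the storage size are placeholders, not taken from any particular application):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp          # headless Service that provides the stable network identity
  replicas: 3                 # pods are created in order: myapp-0, myapp-1, myapp-2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0      # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
  volumeClaimTemplates:       # one PersistentVolumeClaim per pod, kept across pod restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Each pod gets its own PersistentVolumeClaim stamped out from volumeClaimTemplates (data-myapp-0, data-myapp-1, and so on), which is what ties a stable identity to stable storage.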
With the release of Kubernetes 1.14, the local persistent volumes feature became generally available (it had been in beta since 1.10). It allows a local disk attached directly to a specific Kubernetes node to be exposed as persistent storage for the pods scheduled onto that node, providing fast, durable storage without relying on remote storage services. The disk can also be detached from one machine and connected to another, and its data remains usable, again without the need for remote services.
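A local persistent volume of this kind is declared as a PersistentVolume with a “local” source and a node affinity that pins it to the machine holding the disk. The sketch below assumes a node named “node-1” and a disk mounted at /mnt/disks/ssd1; both are illustrative placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # path where the disk is mounted on the node
  nodeAffinity:                 # pins the volume (and pods claiming it) to one node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1              # placeholder node name
```

Because the volume is tied to a node, pods that claim it are scheduled onto that same node by the Kubernetes scheduler.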
When creating StatefulSets, there are crucial details to bear in mind. For one, StatefulSets need to comply with the “at most one” semantic, which requires that at most one pod with a given ID be running in the cluster at any given time. Violating the “at most one” semantic can lead to system errors or data loss.
Another possible issue is the absence of a Headless Service, which must be created by the admin. This service is responsible for the network identity of the pods and is not created automatically. Failure to create it results in a StatefulSet failure.
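A Headless Service is an ordinary Service with “clusterIP: None”. The sketch below assumes the StatefulSet's pods carry the label “app: myapp” and that the StatefulSet's serviceName field references this Service by name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  clusterIP: None       # "None" is what makes this Service headless
  selector:
    app: myapp          # must match the StatefulSet's pod labels
  ports:
  - name: web
    port: 80
```

With this in place, each pod gets a stable DNS record of the form myapp-0.myapp, myapp-1.myapp, and so on.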
Additionally, it is important to remember that deleting or scaling down a StatefulSet does not delete the storage volumes associated with it. This is a data protection measure baked into the system to prevent the accidental elimination of data. The convenience of being able to instantly get rid of all StatefulSet resources upon deletion is deemed less significant than the risk of deleting crucial data as a consequence of the erasure.
Moreover, the deletion of a StatefulSet may not terminate the associated pods. To make sure that the pods are terminated, scale the StatefulSet down to 0 before deleting it, for example with “kubectl scale statefulset myapp --replicas=0” followed by “kubectl delete statefulset myapp”. This deletion complexity is an important topic in Kubernetes troubleshooting, because bigger problems can arise from the dynamics between StatefulSets and deletion. More on this below.
The first step in debugging a StatefulSet is listing all of the pods in it. To do this, run the command “kubectl get pods -l app=myapp”, where “app=myapp” is the label defined in the StatefulSet manifest.
Look out for the resulting status of each pod. If it indicates “Failed”, all containers in the pod have terminated, and at least one container was forcibly stopped by K8s or exited with a non-zero status. If it shows “Unknown”, Kubernetes is unable to obtain the pod’s status, usually because of a communication error with the node. A failing pod can then be inspected with “kubectl describe pod [pod-name]” and “kubectl logs [pod-name] --previous”.
Once the pod statuses are determined, the next step is to debug the individual faulty pods. This can be a little tricky because a StatefulSet, by design, automatically terminates and replaces faulty pods. However, StatefulSet provides a feature to enable debugging: an annotation that pauses all controller actions on a pod.
The annotation is “initialized=false” (older Kubernetes releases spelled the key “pod.alpha.kubernetes.io/initialized”), and it is set with a command of the following form:
“kubectl annotate pods [pod-name] pod.alpha.kubernetes.io/initialized=false --overwrite”
Running this command suspends all operations of the StatefulSet, including scaling down and the deletion of pods. It also makes the StatefulSet unresponsive if the pod is faulty or unavailable. Once the defective pod is located and debugged, the annotation should be set back to “true” to restore the StatefulSet to its normal state.
If the debugging is unsuccessful, it is likely due to race conditions when Kubernetes bootstrapped the StatefulSet. In that case, a third step called “step-wise initialization” is in order: the “initialized=false” annotation is added to the pod template in the StatefulSet manifest, so that the controller pauses after creating each pod, allowing it to be verified before the next one starts.
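Based on the older Kubernetes debugging documentation, the annotation goes into the pod template of the manifest. The exact key shown below (“pod.alpha.kubernetes.io/initialized”) is the spelling from those older releases and has since been removed, so treat it as an assumption rather than a current API:

```yaml
# Excerpt of a StatefulSet manifest prepared for step-wise initialization.
# With the annotation set to "false", the controller pauses after creating
# each pod; setting it to "true" on a pod lets the rollout continue.
template:
  metadata:
    labels:
      app: myapp
    annotations:
      pod.alpha.kubernetes.io/initialized: "false"
```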
Going back to the at-most-one semantic mentioned earlier, it is important to remember that a StatefulSet should not have more than one stateful pod instance with the same identity bound to a common PersistentVolumeClaim.
A StatefulSet may be deleted in the process of troubleshooting. Take note that this does not necessarily delete its pods and will not automatically delete its persistent storage volumes. If a new StatefulSet is created and new pods (with identities based on the old pods) are added, the system is set up to fail.
The admin may forget, or mistakenly presume, that the deletion of the old StatefulSet also deleted the old pods and storage volumes, and may therefore reuse pod identities based on the old pods and assign the storage volumes that were used previously. This violates the “at most one” semantic.
It is important to keep track of all pod identities, objects, services, and storage volumes when working with StatefulSets. Establishing pod identities and associating pods with persistent storage volumes should not result in having pods with duplicate identities and common storage.
The debugging and solutions described above may sound easy, but they are mostly an oversimplification of the possible problems. In real-world situations, the troubleshooting process will be considerably more complex and time-consuming. Identifying problematic pods and debugging them entails repetitive processes until the issues are resolved. It significantly helps to have a specialized Kubernetes troubleshooting tool to ensure the quick gathering of relevant information to identify errors and proceed with the appropriate solutions quickly.