Singleton Service

In this example, we see how a PodDisruptionBudget works. As in the other examples, we assume a Kubernetes installation is available. Check the INSTALL documentation for how to install Kubernetes or use an online Kubernetes playground. We can't use Minikube in this example because we need a 'real' cluster from which we can drain a node to see the effect of a PodDisruptionBudget. Still, we can use kind to simulate a multi-node cluster as described in INSTALL.

You can start a three-node kind cluster with

curl -s https://k8spatterns.io/SingletonService/kind-multinode.yml | \
kind create cluster --config -
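
For reference, a minimal kind configuration for one control-plane node and two worker nodes looks roughly like the following sketch; the actual kind-multinode.yml may differ in its details:

# Sketch of a three-node kind cluster configuration
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker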

To verify that the cluster is running, check the nodes:

kubectl get nodes

Now let’s create a Deployment with six Pods:

kubectl apply -f https://k8spatterns.io/SingletonService/deployment.yml
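
The manifest is not reproduced here, but judging from the commands in this example it defines a Deployment named random-generator with six replicas, roughly along these lines. The labels and the container image are assumptions:

# Sketch of the Deployment created above; only the name
# random-generator and the six replicas are given in this example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator
spec:
  replicas: 6
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
      - name: random-generator
        image: k8spatterns/random-generator  # image is an assumption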

We can check on which nodes the Pods are running with

kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

To ensure that at least four Pods are running at all times, we create a PodDisruptionBudget with

kubectl create -f https://k8spatterns.io/SingletonService/pdb.yml
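
The corresponding PodDisruptionBudget could look like the following sketch, assuming the selector matches the Deployment's Pods via an app: random-generator label:

# Sketch of the PodDisruptionBudget created above; minAvailable: 4
# reflects the "at least four Pods" guarantee, the rest is assumed
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: random-generator
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: random-generator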

Now let's drain a node and see how the Pods are relocated. In the kind setup described above, the second node is called kind-worker, but you can take any node from the output that is not the control-plane node.

kubectl drain --ignore-daemonsets kind-worker >/dev/null 2>&1 &

We drain the node in the background so that we can watch how the Pods are relocated:

watch kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

As you can see, at least four Pods are running at any point in time until all six Pods are eventually running on the remaining nodes.
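
While the drain is in progress, you can also inspect the budget itself; the ALLOWED DISRUPTIONS column shows how many more voluntary evictions the PodDisruptionBudget permits at that moment:

kubectl get pdb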

You can undo the drain operation with

kubectl uncordon kind-worker

And restore the Deployment with

kubectl scale deployment random-generator --replicas 0
kubectl scale deployment random-generator --replicas 6

Scaling down to zero and back up redistributes the Pods over all nodes.

To remove the kind cluster, call

kind delete cluster