diff --git a/README.md b/README.md
index 446c611..582dcaa 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,6 @@
  * [Let's Encrypt issuer](#cert-manager-le-issuer)
  * [Deploying a LE-certificate](#cert-manager-ingress)
  * [Troubleshooting](#cert-manager-troubleshooting)
-* [Keep your cluster balanced](#keep-cluster-balanced)
 * [HELM charts](#helm)
  * [Create a chart](#helm-create)
  * [Install local chart without packaging](#helm-install-without-packaging)
@@ -28,6 +27,8 @@
 * [Kubernetes in action](#kubernetes-in-action)
  * [Running DaemonSets on `hostPort`](#running-daemonsets)
  * [Running StatefulSet with NFS storage](#running-statefulset-nfs)
+ * [Keep your cluster balanced](#keep-cluster-balanced)
+ * [Node maintenance](#node-maintenance)
  * [What happens if a node goes down?](#what-happens-node-down)
  * [Dealing with disruptions](#disruptions)
 
@@ -332,10 +333,6 @@
 kubectl -n <namespace> describe challenge
 
 After successful setup perform a TLS test: `https://www.ssllabs.com/ssltest/index.html`
-
-# Keep your cluster balanced
-Kubernetes, in first place, takes care of high availability, but not of well balance of pod/node. In case of *stateless deployments* [this](https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7) project could be a solution! Pod/Node balance is not a subject to *DaemonSets*.
-
 # HELM charts
 Docs:
 * https://helm.sh/docs/intro/using_helm/
@@ -609,6 +606,42 @@
 Kubernetes knows something like a `--pod-eviction-timeout`, which is a grace period for deleting pods on failed nodes.
 Docs: https://kubernetes.io/docs/concepts/scheduling-eviction/eviction-policy/
 
+## Keep your cluster balanced
+Kubernetes primarily takes care of high availability, not of an even distribution of pods across nodes. For *stateless deployments*, [this](https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7) project could be a solution! Pod/node balancing does not apply to *DaemonSets*.
+
+## Node maintenance
+To *mark* a node for maintenance, drain it (this cordons the node and evicts its pods):
+```
+$ kubectl drain k3s-node2 --ignore-daemonsets
+
+$ kubectl get node
+NAME         STATUS                     ROLES    AGE    VERSION
+k3s-node1    Ready                      <none>   105d   v1.19.5+k3s2
+k3s-master   Ready                      master   105d   v1.19.5+k3s2
+k3s-node2    Ready,SchedulingDisabled   <none>   105d   v1.19.5+k3s2
+```
+All Deployment and StatefulSet pods have been rescheduled onto the remaining nodes; DaemonSet pods are not touched (hence `--ignore-daemonsets`)! Node maintenance can be performed now.
+
+To bring the node back into the cluster after maintenance:
+```
+$ kubectl uncordon k3s-node2
+node/k3s-node2 uncordoned
+```
+
 ## Dealing with disruptions
 * https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
-* https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
\ No newline at end of file
+* https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
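+
+To limit how many pods of an application may be taken down by such voluntary evictions, a *PodDisruptionBudget* (PDB) can be defined. A minimal sketch, assuming a Deployment whose pods carry the label `app: nginx` (the name and labels are illustrative):
+```
+apiVersion: policy/v1          # on clusters older than v1.21 (such as the v1.19 nodes above) use policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+  name: nginx-pdb              # illustrative name
+spec:
+  minAvailable: 1              # never allow evictions below one running pod of the app
+  selector:
+    matchLabels:
+      app: nginx               # must match the labels of the pods to protect
+```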
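+With such a budget in place, the eviction API used by `kubectl drain` refuses any eviction that would drop the number of ready pods below `minAvailable`, so a drain proceeds pod by pod instead of taking the whole application down at once.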