node maintenance
commit fb7c1407b5 (parent f939dbcd6d)

README.md
@@ -16,7 +16,6 @@
* [Let's Encrypt issuer](#cert-manager-le-issuer)
* [Deploying a LE-certificate](#cert-manager-ingress)
* [Troubleshooting](#cert-manager-troubleshooting)
* [Keep your cluster balanced](#keep-cluster-balanced)
* [HELM charts](#helm)
* [Create a chart](#helm-create)
* [Install local chart without packaging](#helm-install-without-packaging)
@@ -28,6 +27,8 @@
* [Kubernetes in action](#kubernetes-in-action)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [Running StatefulSet with NFS storage](#running-statefulset-nfs)
* [Keep your cluster balanced](#keep-cluster-balanced)
* [Node maintenance](#node-maintenance)
* [What happens if a node goes down?](#what-happens-node-down)
* [Dealing with disruptions](#disruptions)
@@ -332,10 +333,6 @@ kubectl -n <stage> describe challenge <object>

After a successful setup, perform a TLS test: `https://www.ssllabs.com/ssltest/index.html`
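The served certificate can also be checked quickly from the shell (here `example.com` stands in for your ingress host):
```
# print issuer and validity period of the certificate served on port 443
$ openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```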
# Keep your cluster balanced <a name="user-content-keep-cluster-balanced"></a>
Kubernetes first and foremost takes care of high availability, not of balancing pods evenly across nodes. In case of *stateless deployments*, [this](https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7) project could be a solution! Pod/node balancing is not an issue for *DaemonSets*.

# HELM charts <a name="user-content-helm"></a>
Docs:
* https://helm.sh/docs/intro/using_helm/
@@ -609,6 +606,28 @@ Kubernetes knows something like a `--pod-eviction-timeout`, which is a grace period

Docs: https://kubernetes.io/docs/concepts/scheduling-eviction/eviction-policy/
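Besides the cluster-wide `--pod-eviction-timeout`, newer clusters use taint-based eviction, so the grace period can also be tuned per pod with `tolerationSeconds`. A minimal sketch (pod name, image and the 60-second value are just placeholders):
```
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo            # placeholder pod, for illustration only
spec:
  containers:
  - name: web
    image: nginx
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 60          # evict 60s after the node becomes unreachable
EOF
```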
## Keep your cluster balanced <a name="user-content-keep-cluster-balanced"></a>
Kubernetes first and foremost takes care of high availability, not of balancing pods evenly across nodes. [This](https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7) project could be a solution! Pod/node balancing is not an issue for *DaemonSets*.
## Node maintenance <a name="user-content-node-maintenance"></a>
*Mark* a node for maintenance:
```
$ kubectl drain k3s-node2 --ignore-daemonsets

$ kubectl get node
NAME         STATUS                     ROLES    AGE    VERSION
k3s-node1    Ready                      <none>   105d   v1.19.5+k3s2
k3s-master   Ready                      master   105d   v1.19.5+k3s2
k3s-node2    Ready,SchedulingDisabled   <none>   105d   v1.19.5+k3s2
```
All Deployment as well as StatefulSet pods have been rescheduled to the remaining nodes; DaemonSet pods were not touched! Node maintenance can now be performed.
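To double-check before starting the maintenance, list what is still running on the drained node (an optional check; the node name comes from the example above, and only DaemonSet-managed pods should show up):
```
# show all pods scheduled on the drained node, across namespaces
$ kubectl get pods -A -o wide --field-selector spec.nodeName=k3s-node2
```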
To bring the maintained node back into the cluster:
```
$ kubectl uncordon k3s-node2
node/k3s-node2 uncordoned
```
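If the goal is only to keep new pods off a node without evicting the ones already running, cordoning alone is enough (same example node as above):
```
$ kubectl cordon k3s-node2
node/k3s-node2 cordoned
```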
## Dealing with disruptions <a name="user-content-disruptions"></a>
* https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
* https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
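A minimal, hypothetical example of a PodDisruptionBudget, which `kubectl drain` honours when evicting pods (the name `my-app-pdb` and the label `app=my-app` are placeholders):
```
# keep at least one replica of the app running during voluntary disruptions
$ kubectl create poddisruptionbudget my-app-pdb --selector=app=my-app --min-available=1
poddisruptionbudget.policy/my-app-pdb created
```
With this in place, a drain will wait (or abort after `--timeout`) rather than evict the last remaining replica.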