pod tolerations

Dominik Chilla 2021-11-27 16:03:32 +01:00
parent 38d8132ae3
commit 5d851a6373


```
@@ -719,7 +719,7 @@ spec:
        - name: DEMO_GREETING
          value: Hello from the environment
        image: dockreg-zdf.int.zwackl.de/alpine/latest/amd64:prod
-       imagePullPolicy: Always
+       imagePullPolicy: IfNotPresent
        name: netcat-daemonset
        ports:
        - containerPort: 23456
```
```
[...]
ds-test-c6xx8   1/1     Running   0          18m
ds-test-w45dv   1/1     Running   5          28h
```
Kubernetes has a `--pod-eviction-timeout`, a grace period (**default: 5 minutes**) before pods on failed nodes are deleted. This timeout is useful to keep pods on nodes that are only rebooted for maintenance. So, first of all, nothing happens to pods on a failed node until the *pod eviction timeout* is exceeded. Once it expires, Kubernetes re-schedules *workloads* (Deployments, StatefulSets) onto healthy nodes. As *DaemonSets* are bound to a specific node, they will not be re-scheduled onto other nodes. Kubernetes supports [Pod tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration) (see also https://kubernetes.io/docs/concepts/scheduling-eviction/eviction-policy/), which are configured by default with a *timeout* of `300s` (5 minutes!). This means that affected pods *remain* on a *broken* node for 300s before eviction takes place:
```
$ kubectl -n <namespace> describe pod <pod-name>
[...]
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
[...]
```
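These tolerations match the taints that the node controller puts on a failed node. A minimal way to inspect this is sketched below; the node name is a placeholder and the `grep` filter is just one way to trim the output:
```
# On a NotReady/unreachable node the node controller sets taints such as
# node.kubernetes.io/unreachable:NoExecute, which the tolerations above refer to.
$ kubectl describe node <node-name> | grep -i -A 2 taints
```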
To react faster, the Pod tolerations can be configured explicitly as follows:
```
kind: Deployment or StatefulSet
apiVersion: apps/v1
metadata:
  [...]
spec:
  [...]
  template:
    [...]
    spec:
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 30
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 30
      [...]
```
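One way to verify that the shortened tolerations actually ended up in the running pods (namespace and pod name are placeholders):
```
$ kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.spec.tolerations}'
```
Note that changing the pod template of a `Deployment`/`StatefulSet` triggers a rolling restart, so the new tolerations only apply to the re-created pods.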
## Keep your cluster balanced <a name="user-content-keep-cluster-balanced"></a>
Kubernetes, in the first place, takes care of high availability, but not of a well-balanced distribution of pods across nodes.
For a `Deployment` or `StatefulSet`, a `topologySpreadConstraint` needs to be specified:
```
kind: Deployment or StatefulSet
apiVersion: apps/v1
metadata:
  [...]
spec:
  [...]
  template:
    [...]
    spec:
      # Prevent scheduling more than one pod per node
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: {{ .Chart.Name }}-{{ .Values.stage }}
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
      [...]
```
`DaemonSet` workloads do not support `topologySpreadConstraints` at all.
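To see how the pods are actually spread, the wide output adds the node column; namespace and label value are placeholders here:
```
$ kubectl -n <namespace> get pods -l app=<app-label> -o wide
```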
## Node maintenance <a name="user-content-node-maintenance"></a>
*Mark* a node for maintenance: