Kubernetes in action

This commit is contained in:
Dominik Chilla 2021-04-10 23:02:37 +02:00
parent 64bf4753c5
commit f939dbcd6d


@@ -16,6 +16,7 @@
* [Let´s Encrypt issuer](#cert-manager-le-issuer)
* [Deploying a LE-certificate](#cert-manager-ingress)
* [Troubleshooting](#cert-manager-troubleshooting)
* [Keep your cluster balanced](#keep-cluster-balanced)
* [HELM charts](#helm)
* [Create a chart](#helm-create)
* [Install local chart without packaging](#helm-install-without-packaging)
@@ -24,9 +25,11 @@
* [Get status of deployed chart](#helm-status)
* [Get deployment history](#helm-history)
* [Rollback](#helm-rollback)
* [Kubernetes in action](#kubernetes-in-action)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [Running StatefulSet with NFS storage](#running-statefulset-nfs)
* [What happens if a node goes down?](#what-happens-node-down)
* [Dealing with disruptions](#disruptions)
# kubectl - BASH autocompletion <a name="user-content-kubectl-bash-autocompletion"></a>
For current shell only:
@@ -110,6 +113,8 @@ https://rancher.com/docs/k3s/latest/en/storage/
## NFS <a name="user-content-pv-nfs"></a>
If you want to use NFS-based storage...
**All nodes need to have the NFS client package (Ubuntu: `nfs-common`) installed.**
```
helm3 repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
helm3 install my-nfs-client-provisioner --set nfs.server=<nfs-server/ip-addr> --set nfs.path=</data/nfs> ckotzbauer/nfs-client-provisioner
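# Note (assumption): the chart typically also creates a StorageClass, commonly
# named "nfs-client", which PersistentVolumeClaims can then reference. Verify with:
kubectl get storageclass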
@@ -163,8 +168,9 @@ ExecStart=/usr/local/bin/k3s \
```
Finally, run `systemctl daemon-reload` and `systemctl restart k3s`.
## Enable K8s own NGINX-ingress with OCSP stapling <a name="user-content-enable-nginx-ingress"></a>
### Installation <a name="user-content-install-nginx-ingress"></a>
This is the Helm chart of the Kubernetes project's own NGINX ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
```
@@ -327,6 +333,8 @@ kubectl -n <stage> describe challenge <object>
After a successful setup, perform a TLS test: `https://www.ssllabs.com/ssltest/index.html`
# Keep your cluster balanced <a name="user-content-keep-cluster-balanced"></a>
Kubernetes primarily takes care of high availability, but not of keeping pods evenly balanced across nodes. For *stateless deployments*, [this](https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7) project could be a solution! Pod/node balancing does not apply to *DaemonSets*.
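Since Kubernetes 1.19 there is also a built-in, scheduling-time mechanism: *topology spread constraints*. They only influence where **new** pods are placed and do not move pods that are already running, so rebalancing after a node failure still needs a tool like the one linked above. A minimal sketch for a stateless Deployment (the `my-app` names and labels are placeholders):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Spread the replicas as evenly as possible across nodes;
      # ScheduleAnyway makes this a soft preference instead of a hard rule.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: nginx:alpine
```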
# HELM charts <a name="user-content-helm"></a>
Docs:
@@ -460,7 +468,7 @@ NOTES:
kubectl --namespace default port-forward $POD_NAME 8080:80
```
# Kubernetes in action <a name="user-content-kubernetes-in-action"></a>
## Running DaemonSets on `hostPort` <a name="user-content-running-daemonsets"></a>
* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4
@@ -521,6 +529,13 @@ spec:
## Running StatefulSet with NFS storage <a name="user-content-running-statefulset-nfs"></a>
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
* [NFS dynamic volume provisioning deployed](#pv-nfs)
**Be careful:** *StatefulSets* are designed for stateful applications (like databases). To avoid split-brain scenarios, StatefulSets behave as statically as possible. If a node goes down, the StatefulSet controller will **not** reschedule the pods to other functioning nodes; only stateless *Deployments* are rescheduled automatically! In this case you need to force the rescheduling by hand, like this:
`kubectl delete pod web-1 --grace-period=0 --force`
More details on this can be found [here](https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/).
If you want DaemonSet-like node affinity for StatefulSets, read [this](https://medium.com/@johnjjung/building-a-kubernetes-daemonstatefulset-30ad0592d8cb) (a minimal node-affinity sketch follows the example below).
```
---
apiVersion: v1
@@ -571,4 +586,29 @@ spec:
      resources:
        requests:
          storage: 32Mi
```
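For the DaemonSet-like node affinity mentioned above, a minimal sketch is to pin the StatefulSet's pod template to a node label. `disktype=nfs-local` is a hypothetical label you would first have to set yourself, e.g. `kubectl label node k3s-node1 disktype=nfs-local`:
```
# Goes into the StatefulSet's pod template (spec.template.spec):
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype          # hypothetical node label, see above
          operator: In
          values:
          - nfs-local
```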
## What happens if a node goes down? <a name="user-content-what-happens-node-down"></a>
If a node goes down, Kubernetes marks this node as *NotReady*, but nothing else happens at first:
```
$ kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k3s-node2    Ready      <none>   103d   v1.19.5+k3s2
k3s-master   Ready      master   103d   v1.19.5+k3s2
k3s-node1    NotReady   <none>   103d   v1.19.5+k3s2
$ kubectl get pod
NAME                                         READY   STATUS    RESTARTS   AGE
ds-test-5mlkt                                1/1     Running   14         28h
my-nfs-client-provisioner-57ff8c84c7-p75ck   1/1     Running   0          31m
web-1                                        1/1     Running   0          26m
web-2                                        1/1     Running   0          26m
ds-test-c6xx8                                1/1     Running   0          18m
ds-test-w45dv                                1/1     Running   5          28h
```
Kubernetes has a `--pod-eviction-timeout`, a grace period (**default: 5 minutes**) before pods on failed nodes are deleted. This timeout is useful to keep pods on nodes that are only rebooted for maintenance. So, first of all, nothing happens to the pods on a failed node until the *pod eviction timeout* has expired. Once it expires, Kubernetes reschedules the pods of *stateless Deployments* to working nodes. Pods of *DaemonSets* as well as *StatefulSets* are not rescheduled to other nodes at all.
Docs: https://kubernetes.io/docs/concepts/scheduling-eviction/eviction-policy/
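On current clusters this grace period is applied per pod via taint-based eviction: the failed node gets the taints `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable`, and every pod carries default tolerations for them with `tolerationSeconds: 300`. If 5 minutes is too long for a particular workload, the tolerations can be overridden in the pod template (the values here are just examples):
```
# Goes into the pod template (spec.template.spec) of a Deployment:
tolerations:
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60   # evict after 1 minute instead of the default 5
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 60
```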
## Dealing with disruptions <a name="user-content-disruptions"></a>
* https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
* https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
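Closely related to safely draining nodes: a *PodDisruptionBudget* limits how many pods of an application may be taken down voluntarily (e.g. by `kubectl drain`). A minimal sketch, assuming a Deployment labelled `app: my-app`:
```
apiVersion: policy/v1beta1   # use policy/v1 on Kubernetes >= 1.21
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1            # keep at least one replica running during a drain
  selector:
    matchLabels:
      app: my-app
```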