* [kubectl - BASH autocompletion](#kubectl-bash-autocompletion)
* [Install k3s](#install-k3s)
* [Configure upstream DNS-resolver](#upstream-dns-resolver)
* [Change NodePort range](#nodeport-range)
* [Namespaces and resource limits](#namespaces-limits)
* [Persistent volumes (StorageClass - dynamic provisioning)](#pv)
  * [Rancher Local](#pv-local)
  * [Rancher Longhorn - distributed in local cluster](#pv-longhorn)
  * [NFS](#pv-nfs)
* [Ingress controller](#ingress-controller)
  * [Disable Traefik-ingress](#disable-traefik-ingress)
  * [Enable NGINX-ingress with OCSP stapling](#enable-nginx-ingress)
    * [Installation](#install-nginx-ingress)
* [Cert-Manager (references ingress controller)](#cert-manager)
  * [Installation](#cert-manager-install)
  * [Let's Encrypt issuer](#cert-manager-le-issuer)
  * [Deploying a LE-certificate](#cert-manager-ingress)
  * [Troubleshooting](#cert-manager-troubleshooting)
* [HELM charts](#helm)
  * [Create a chart](#helm-create)
  * [Install local chart without packaging](#helm-install-without-packaging)
  * [List deployed helm charts](#helm-list)
  * [Upgrade local chart without packaging](#helm-upgrade)
  * [Get status of deployed chart](#helm-status)
  * [Get deployment history](#helm-history)
  * [Rollback](#helm-rollback)
* [Kubernetes in action](#kubernetes-in-action)
  * [Running DaemonSets on `hostPort`](#running-daemonsets)
  * [Running StatefulSet with NFS storage](#running-statefulset-nfs)
  * [Keep your cluster balanced](#keep-cluster-balanced)
  * [Node maintenance](#node-maintenance)
  * [What happens if a node goes down?](#what-happens-node-down)
  * [Dealing with disruptions](#disruptions)
# kubectl - BASH autocompletion
For current shell only:
```
source <(kubectl completion bash)
```
Persistent:
```
echo "source <(kubectl completion bash)" >> ~/.bashrc
```
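If you also use a short alias for `kubectl`, the completion can be hooked up to it as well; the alias name `k` below is just a common convention, not something the completion script requires:
```
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
```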
# Install k3s
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
```
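Once the installer has finished, the bundled kubectl can be used to verify that the node comes up with STATUS `Ready`:
```
k3s kubectl get nodes
```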
If desired, set a memory limit on the systemd unit like so:
```
root#> mkdir /etc/systemd/system/k3s.service.d
root#> vi /etc/systemd/system/k3s.service.d/limits.conf
[Service]
MemoryMax=1024M
root#> systemctl daemon-reload
root#> systemctl restart k3s
root#> systemctl status k3s
k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/k3s.service.d
             └─limits.conf
     Active: active (running) since Thu 2020-11-26 10:46:26 CET; 13min ago
       Docs: https://k3s.io
    Process: 9618 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 9619 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 9620 (k3s-server)
      Tasks: 229
     Memory: 510.6M (max: 1.0G)
     CGroup: /system.slice/k3s.service
```
# Upstream DNS-resolver
Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/
Default upstream resolver: 8.8.8.8 => local domains are not resolved!
1. Create a local /etc/resolv.k3s.conf pointing to the IP of your DNS resolver (127.0.0.1 **does not work!**)
2. vi /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
server [...] --resolv-conf /etc/resolv.k3s.conf \
```
3. Re-load systemd config: `systemctl daemon-reload`
4. Re-start k3s: `systemctl restart k3s.service`
5. Re-deploy coredns-pods: `kubectl -n kube-system delete pod name-of-coredns-pods`
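A minimal `/etc/resolv.k3s.conf` only needs a nameserver line; the IP below is just an example for a typical home router and has to be replaced with your local resolver:
```
# /etc/resolv.k3s.conf - example only, use the IP of your local DNS resolver
nameserver 192.168.178.1
```
To verify, resolve a local name from inside the cluster, e.g. `kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup <local-hostname>`.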
# Change NodePort range to 1 - 65535
1. vi /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
server [...] --kube-apiserver-arg service-node-port-range=1-65535 \
```
2. Re-load systemd config: `systemctl daemon-reload`
3. Re-start k3s: `systemctl restart k3s.service`
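With the extended range a `Service` of type `NodePort` can now claim a privileged port directly; a minimal sketch (all names and ports are placeholders):
```
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport-svc
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 443    # only allowed with the extended NodePort range
```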
# Namespaces and resource limits
```
kubectl apply -f https://gitea.zwackl.de/dominik/k3s/raw/branch/master/namespaces_limits.yaml
```
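The file behind that URL is not reproduced here; as an illustration, a namespace with default container limits typically looks something like this (names and values are placeholders):
```
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limits
  namespace: example-namespace
spec:
  limits:
  - type: Container
    default:              # default limits per container
      cpu: 500m
      memory: 256Mi
    defaultRequest:       # default requests per container
      cpu: 100m
      memory: 64Mi
```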
# Persistent Volumes (StorageClass - dynamic provisioning)
## Rancher Local
https://rancher.com/docs/k3s/latest/en/storage/
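k3s ships this local-path provisioner as the default *StorageClass* (visible as `local-path (default)` in `kubectl get sc`), so a plain PVC is enough; a minimal sketch with a hypothetical name:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  storageClassName: local-path
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```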
## Longhorn (distributed in local cluster)
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/
## NFS
If you want to use NFS-based storage, deploy the `nfs-client-provisioner` Helm chart as follows.
**All nodes need to have the NFS client package (Ubuntu: `nfs-common`) installed.**
```
helm3 repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
helm3 install my-nfs-client-provisioner --set nfs.server= --set nfs.path= ckotzbauer/nfs-client-provisioner
```
Check if NFS *StorageClass* is available:
```
$ kubectl get sc
NAME                   PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path                         Delete          WaitForFirstConsumer   false                  101d
nfs-client             cluster.local/my-nfs-client-provisioner       Delete          Immediate              true                   172m
```
Now you can use `nfs-client` as StorageClass like so:
```
apiVersion: apps/v1
kind: StatefulSet
[...]
  volumeClaimTemplates:
  - metadata:
      name: nfs-backend
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 32Mi
```
or so:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-1
  namespace:
spec:
  storageClassName: "nfs-client"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 32Mi
```
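A pod can then mount the claim from above like any other volume; pod name, container and mount path are placeholders:
```
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc-1
```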
# Ingress controller
## Disable Traefik-ingress
Edit /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
server --disable traefik --resolv-conf /etc/resolv.conf \
[...]
```
Finally, run `systemctl daemon-reload` and `systemctl restart k3s`.
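Once the old pods are gone, the following should return nothing:
```
kubectl -n kube-system get pods,svc | grep -i traefik
```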
## Enable the Kubernetes NGINX ingress controller with OCSP stapling
### Installation
This is the Helm chart of the Kubernetes community's NGINX ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
```
kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx -n ingress-nginx
```
`kubectl -n ingress-nginx get all`:
```
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/svclb-my-release-ingress-nginx-controller-m6gxl         2/2     Running   0          110s
pod/my-release-ingress-nginx-controller-695774d99c-t794f    1/1     Running   0          110s

NAME                                                     TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
service/my-release-ingress-nginx-controller-admission    ClusterIP      10.43.116.191   <none>            443/TCP                      110s
service/my-release-ingress-nginx-controller              LoadBalancer   10.43.55.41     192.168.178.116   80:31110/TCP,443:31476/TCP   110s

NAME                                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-my-release-ingress-nginx-controller    1         1         1       1            1           <none>          110s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-release-ingress-nginx-controller   1/1     1            1           110s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/my-release-ingress-nginx-controller-695774d99c   1         1         1       110s
```
As the NGINX ingress controller is hungry for memory, let's enable OCSP stapling and reduce the number of worker processes to 1:
```
kubectl -n ingress-nginx edit configmap my-release-ingress-nginx-controller

apiVersion: v1
data:                      # <- add this section
  enable-ocsp: "true"
  worker-processes: "1"
kind: ConfigMap
[...]
```
Finally the deployment needs to be restarted:
`kubectl -n ingress-nginx rollout restart deployment my-release-ingress-nginx-controller`
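To double-check that the reduced worker count made it into the rendered nginx configuration, you can grep inside the controller (deployment name taken from the installation above):
```
kubectl -n ingress-nginx exec deploy/my-release-ingress-nginx-controller -- grep worker_processes /etc/nginx/nginx.conf
```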
**If you are facing deployment problems like the following:**
```
Error: UPGRADE FAILED: cannot patch "gitea-ingress-staging" with kind Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://my-release-ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: context deadline exceeded
```
A possible fix: `kubectl -n ingress-nginx delete ValidatingWebhookConfiguration my-release-ingress-nginx-admission`
# Cert-Manager (references ingress controller)
## Installation
Docs: https://hub.helm.sh/charts/jetstack/cert-manager
```
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
```
## Let's Encrypt issuer
Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer
```
ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way,
but they do not belong to a single namespace and can be referenced by Certificate resources from
multiple different namespaces.
```
lets-encrypt-cluster-issuers.yaml:
```
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-issuer
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
```
`kubectl apply -f lets-encrypt-cluster-issuers.yaml`
## Deploying a LE-certificate
All you need is an `Ingress` resource of class `nginx` which references a ClusterIssuer (`letsencrypt-prod-issuer`) resource:
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace:
  name: some-ingress-name
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod-issuer"
spec:
  tls:
  - hosts:
    - some-certificate.name.san
    secretName: target-certificate-secret-name
  rules:
  - host: some-certificate.name.san
    http:
      paths:
      - path: /
        backend:
          serviceName: some-target-service
          servicePort: some-target-service-port
```
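cert-manager's ingress-shim then creates a `Certificate` resource (named after the `secretName` above) and works through `CertificateRequest`, `Order` and `Challenge` resources; these are the first places to look while the certificate is still pending (replace the namespace placeholder):
```
kubectl -n <namespace> get certificate,certificaterequest,order,challenge
kubectl -n <namespace> describe certificate target-certificate-secret-name
```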
## Troubleshooting
Docs: https://cert-manager.io/docs/faq/acme/
ClusterIssuers are cluster-scoped, so no namespace is needed when querying them:
```
kubectl get clusterissuer
kubectl describe clusterissuer