* [Install k3s](#install-k3s)
* [Configure upstream DNS-resolver](#upstream-dns-resolver)
* [Namespaces and resource limits](#namespaces)
  * [devel](#namespace-devel)
  * [staging](#namespace-staging)
  * [prod](#namespace-prod)
* [Persistent volumes](#pv)
  * [Local provider](#pv-local)
  * [Longhorn - distributed/lightweight provider](#pv-longhorn)
* [Ingress controller](#ingress-controller)
  * [Disable Traefik-ingress](#disable-traefik-ingress)
  * [Enable NGINX-ingress](#enable-nginx-ingress)
    * [Installation](#install-nginx-ingress)
    * [Change service type from NodePort to LoadBalancer](#nginx-ingress-loadbalancer)
    * [Enable nginx-ingress tcp- and udp-services for apps other than http/s](#nginx-ingress-tcp-udp-enabled)
  * [Enable client-IP transparency and expose TCP-port 9000](#enable-client-ip-transp-expose-tcp-9000)
    * [Deploy my-nginx-service](#deploy-my-nginx-service)
  * [Stick the nginx-ingress controller and my-nginx app together](#stick-nginx-ingress-and-tcp-service)
  * [Test exposed app on TCP-port 9000](#test-nginx-ingress-and-tcp-service)
* [Cert-Manager (references ingress controller)](#cert-manager)
  * [Installation](#cert-manager-install)
  * [Let's Encrypt issuer](#cert-manager-le-issuer)
  * [Troubleshooting](#cert-manager-troubleshooting)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [HELM charts](#helm)
  * [Create a chart](#helm-create)
  * [Install local chart without packaging](#helm-install-without-packaging)
  * [List deployed helm charts](#helm-list)
  * [Upgrade local chart without packaging](#helm-upgrade)
  * [Get status of deployed chart](#helm-status)
  * [Get deployment history](#helm-history)
  * [Rollback](#helm-rollback)

# Install k3s

https://k3s.io/:

```
curl -sfL https://get.k3s.io | sh -
```

# Upstream DNS-resolver

Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/

Default: 8.8.8.8 => does not resolve local domains!

1. Point the local /etc/resolv.conf to the IP of your DNS resolver (127.0.0.1 **does not work!**)
2. vi /etc/systemd/system/k3s.service:
   ```
   [...]
   ExecStart=/usr/local/bin/k3s \
       server --resolv-conf /etc/resolv.conf \
   ```
3. Re-load the systemd config: `systemctl daemon-reload`
4. Re-start k3s: `systemctl restart k3s.service`
5. Re-deploy the coredns pods: `kubectl -n kube-system delete pod <name-of-coredns-pod>`
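Once coredns is back up, resolution of internal names can be verified from a throwaway pod. A quick check, assuming `intranet.example.local` stands in for one of your local hostnames:

```
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup intranet.example.local
```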
# Namespaces and resource limits

## devel

namespace-devel-limitranges.yaml:

```
---
apiVersion: v1
kind: Namespace
metadata:
  name: devel
  labels:
    name: devel
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-devel
  namespace: devel
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```

`kubectl apply -f namespace-devel-limitranges.yaml`

## staging

namespace-staging-limitranges.yaml:

```
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    name: staging
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-staging
  namespace: staging
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```

`kubectl apply -f namespace-staging-limitranges.yaml`

## prod

namespace-prod-limitranges.yaml:

```
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    name: prod
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-prod
  namespace: prod
spec:
  limits:
  - defaultRequest:
      cpu: 50m
      memory: 4Mi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```

`kubectl apply -f namespace-prod-limitranges.yaml`

# Persistent Volumes

## Local provider (local - out-of-the-box)

https://rancher.com/docs/k3s/latest/en/storage/

## Longhorn provider (lightweight/distributed)

* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
  * Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/
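Either provider can be smoke-tested with a minimal PVC plus a consumer pod. A sketch assuming the bundled `local-path` storage class (swap in `longhorn` accordingly); note that local-path only binds the claim once the pod is scheduled (WaitForFirstConsumer):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: local-path-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/test && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-path-test
```

`kubectl get pvc local-path-test` should report the claim as *Bound* once the pod is running.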
# Ingress controller

## Disable Traefik-ingress

edit /etc/systemd/system/k3s.service:

```
[...]
ExecStart=/usr/local/bin/k3s \
    server --disable traefik --resolv-conf /etc/resolv.conf \
[...]
```

Finally `systemctl daemon-reload` and `systemctl restart k3s`
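After the restart there should be no traefik pods left in kube-system. A quick check (on an existing cluster, previously created traefik resources may also need manual cleanup):

```
kubectl -n kube-system get pods | grep -i traefik   # should return nothing
```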
## Enable NGINX-ingress

### Installation

https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal
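The linked guide boils down to applying a single, version-specific manifest. As a sketch only: the URL changes per controller release, so copy the exact one from the docs page rather than from here:

```
# bare-metal manifest; replace <version> with the current controller release from the docs
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-<version>/deploy/static/provider/baremetal/deploy.yaml
```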
### Change service type from NodePort to LoadBalancer

`kubectl edit service -n ingress-nginx ingress-nginx-controller` and change `type: NodePort` to `type: LoadBalancer`

Ports 80 and 443 should now be listening on an *External-IP*, `kubectl get all --all-namespaces`:

```
[...]
NAMESPACE       NAME                                                 TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
[...]
ingress-nginx   service/ingress-nginx-controller-admission           ClusterIP      10.43.174.128   <none>         443/TCP                      35m
ingress-nginx   service/ingress-nginx-controller                     LoadBalancer   10.43.237.255   10.62.94.246   80:30312/TCP,443:30366/TCP   35m
[...]
```
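The same change can also be made non-interactively, which is handy for scripted setups; a one-liner equivalent to the edit above:

```
kubectl -n ingress-nginx patch service ingress-nginx-controller -p '{"spec":{"type":"LoadBalancer"}}'
```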
Test: `curl -s http://<EXTERNAL-IP>` should return the well-known nginx 404 page:

```
dominik@muggler:~$ curl -s http://10.62.94.246
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
```
### Enable nginx-ingress tcp- and udp-services for apps other than http/s

Docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/

`kubectl edit deployment -n ingress-nginx ingress-nginx-controller` and search for the `spec:`/`template`/`spec`/`containers` section:

```
[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
>>> ADD
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --udp-services-configmap=ingress-nginx/udp-services
<<< ADD
        env:
[...]
```

## Enable client-IP transparency and expose TCP-port 9000

Enable client-IP transparency (X-Original-Forwarded-For) and expose the my-nginx app on nginx-ingress TCP-port 9000:

`kubectl edit service -n ingress-nginx ingress-nginx-controller`

Find the `ports:`-section of the `ingress-nginx-controller` service and *ADD* the definition for port 9000:

```
[...]
spec:
  clusterIP: 10.43.237.255
>>> CHANGE externalTrafficPolicy from Cluster to Local if the original client-IP is desired
  externalTrafficPolicy: Local
<<< CHANGE
  ports:
  - name: http
    nodePort: 30312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30366
    port: 443
    protocol: TCP
    targetPort: https
>>> ADD
  - name: proxied-tcp-9000
    port: 9000
    protocol: TCP
    targetPort: 9000
<<< ADD
[...]
```

Verify the nginx-ingress-controller is a LoadBalancer and listening on port 9000 with `kubectl get services -n ingress-nginx`:

```
[...]
NAMESPACE       NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                     AGE
[...]
ingress-nginx   service/ingress-nginx-controller           LoadBalancer   10.43.237.255   10.62.94.246   80:30312/TCP,443:30366/TCP,9000:31460/TCP   71m
[...]
```

### Deploy my-nginx deployment and service

my-nginx-deployment.yml:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```

Apply with `kubectl apply -f my-nginx-deployment.yml`:

```
deployment.apps/my-nginx created
service/my-nginx created
```

Test: `kubectl get all | grep my-nginx`:

```
pod/my-nginx-65c68bbcdf-xkhqj             1/1     Running     4          2d7h
service/my-nginx                          ClusterIP   10.43.118.13   <none>   80/TCP   2d7h
deployment.apps/my-nginx                  1/1     1      1      2d7h
replicaset.apps/my-nginx-65c68bbcdf       1       1      1      2d7h
```
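Before wiring the service into the ingress controller, it can be smoke-tested from inside the cluster. A quick check using `curlimages/curl` as a throwaway client (the service DNS name follows from the names above):

```
kubectl run curltest --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://my-nginx.default.svc.cluster.local
```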
## Stick the nginx-ingress controller and my-nginx app together

Finally, the nginx-ingress controller needs a port-mapping pointing to the my-nginx app. This is done with a config-map `nginx-ingress-tcp-services-config-map.yml`, referenced earlier in the nginx-ingress deployment definition:

```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": default/my-nginx:80
```

Apply with `kubectl apply -f nginx-ingress-tcp-services-config-map.yml`:

```
configmap/tcp-services created
```

Subsequently the config-map can be edited with `kubectl -n ingress-nginx edit configmap tcp-services`

**Changes to config-maps do not take effect on running pods! A re-scale to 0 and back can solve this problem: https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes**
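A gentler alternative to scaling down and up, assuming kubectl 1.15 or newer, is a rollout restart of the controller deployment:

```
kubectl -n ingress-nginx rollout restart deployment ingress-nginx-controller
```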
## Test exposed app on TCP-port 9000

```
dominik@muggler:~$ curl -s http://10.62.94.246:9000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Check the logs of the ingress-nginx-controller POD:

```
root@k3s-master:~# kubectl get pods --all-namespaces |grep ingress-nginx
[...]
ingress-nginx   ingress-nginx-controller-d88d95c-khbv4   1/1   Running   0   4m36s
[...]
```

```
root@k3s-master:~# kubectl logs ingress-nginx-controller-d88d95c-khbv4 -f -n ingress-nginx
[...]
[10.62.94.1] [23/Aug/2020:16:38:33 +0000] TCP 200 850 81 0.001
[...]
```

Check the logs of the my-nginx POD:

```
root@k3s-master:/k3s# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-65c68bbcdf-xkhqj   1/1     Running   0          90m
```

```
kubectl logs my-nginx-65c68bbcdf-xkhqj -f
[...]
10.42.0.18 - - [23/Aug/2020:16:38:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
[...]
```

# Cert-Manager (references ingress controller)

## Installation

Docs: https://hub.helm.sh/charts/jetstack/cert-manager

```
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
```
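Before creating real issuers, the installation can be smoke-tested with a throwaway self-signed issuer, along the lines of the official verification steps (namespace and names here are arbitrary):

```
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-test
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
  - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
```

`kubectl -n cert-manager-test describe certificate selfsigned-cert` should report the certificate as issued; clean up with `kubectl delete namespace cert-manager-test`.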
## Let's Encrypt issuer

Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer

```
ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way, but they do not belong to a single namespace and can be referenced by Certificate resources from multiple different namespaces.
```

lets-encrypt-cluster-issuers.yaml:

```
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-issuer
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
```

`kubectl apply -f lets-encrypt-cluster-issuers.yaml`

## Deploying a LE-certificate

All you need is an `Ingress` resource of class `nginx` which references a ClusterIssuer (`letsencrypt-prod-issuer`) resource:

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: some-namespace
  name: some-ingress-name
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod-issuer"
spec:
  tls:
  - hosts:
    - some-certificate.name.san
    secretName: target-certificate-secret-name
  rules:
  - host: some-certificate.name.san
    http:
      paths:
      - path: /
        backend:
          serviceName: some-target-service
          servicePort: some-target-service-port
```

## Troubleshooting

Docs: https://cert-manager.io/docs/faq/acme/

ClusterIssuers are cluster-scoped, so they are queried without a namespace:

```
kubectl get clusterissuer
kubectl describe clusterissuer
```

All other ingress-specific cert-manager resources live in their respective namespaces:

```
kubectl -n <namespace> get certificaterequest
kubectl -n <namespace> describe certificaterequest
kubectl -n <namespace> get certificate
kubectl -n <namespace> describe certificate
kubectl -n <namespace> get secret
kubectl -n <namespace> describe secret
kubectl -n <namespace> get challenge
kubectl -n <namespace> describe challenge
```

After a successful setup, perform a TLS test: `https://www.ssllabs.com/ssltest/index.html`

# Running DaemonSets on `hostPort`

* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4

In this case no Service networking needs to be configured: the pods bind directly to a port on every node via `hostPort`.
This setup is suitable for legacy scenarios where static IP-addresses are required:

* inbound mailserver
* dns server

```
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: netcat-daemonset
  labels:
    app: netcat-daemonset
spec:
  selector:
    matchLabels:
      app: netcat-daemonset
  template:
    metadata:
      labels:
        app: netcat-daemonset
    spec:
      containers:
      - command:
        - nc
        - -lk
        - -p
        - "23456"
        - -v
        - -e
        - /bin/true
        env:
        - name: DEMO_GREETING
          value: Hello from the environment
        image: dockreg-zdf.int.zwackl.de/alpine/latest/amd64:prod
        imagePullPolicy: Always
        name: netcat-daemonset
        ports:
        - containerPort: 23456
          hostPort: 23456
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 64Mi
          requests:
            cpu: 50m
            memory: 32Mi
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
```

# HELM charts

Docs:

* https://helm.sh/docs/intro/using_helm/

Prerequisites:

* running kubernetes installation
* kubectl with ENV[KUBECONFIG] pointing to the appropriate config file
* helm

## Create a chart

`helm create helm-test`

```
~/kubernetes/helm$ tree helm-test/
helm-test/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
```

## Install local chart without packaging

`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --wait`

or just a *dry-run*:

`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --dry-run`

```
--wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable) Pods in ready state and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as successful. It will wait for as long as the --timeout value. If timeout is reached, the release will be marked as FAILED.

Note: In scenarios where Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of rolling update strategy, --wait will return as ready as it has satisfied the minimum Pod in ready condition.
```

## List deployed helm charts

```
~/kubernetes/helm$ helm list
NAME            NAMESPACE   REVISION   UPDATED                                   STATUS     CHART             APP VERSION
helm-test-dev   default     4          2020-08-27 12:30:38.98457042 +0200 CEST   deployed   helm-test-0.1.0   1.16.0
```

## Upgrade local chart without packaging

```
~/kubernetes/helm$ helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s
Release "helm-test-dev" has been upgraded. Happy Helming!
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```

`helm upgrade [...] --wait` is synchronous: it exits with 0 on success and >0 on failure. By default it waits up to 5 minutes; the `--timeout` flag overrides this. That makes it a good fit for CI/CD deployments with Jenkins, as sketched below.
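For a Jenkins pipeline this typically collapses into one idempotent call. A sketch reusing the release and chart from above; `--atomic` rolls the release back automatically if the upgrade fails, and `${IMAGE_TAG}` is an illustrative pipeline variable:

```
helm upgrade helm-test-dev helm-test/ \
  --install --atomic --timeout 60s \
  --set image.tag="${IMAGE_TAG}"
```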
## Get status of deployed chart

```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```

## Get deployment history

```
~/kubernetes/helm$ helm history helm-test-dev
REVISION   UPDATED                    STATUS            CHART             APP VERSION   DESCRIPTION
10         Thu Aug 27 12:56:33 2020   failed            helm-test-0.1.0   1.16.0        Upgrade "helm-test-dev" failed: timed out waiting for the condition
11         Thu Aug 27 13:08:34 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
12         Thu Aug 27 13:09:59 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
13         Thu Aug 27 13:10:24 2020   superseded        helm-test-0.1.0   1.16.0        Rollback to 11
14         Thu Aug 27 13:23:22 2020   failed            helm-test-0.1.1   blubb         Upgrade "helm-test-dev" failed: timed out waiting for the condition
15         Thu Aug 27 13:26:43 2020   pending-upgrade   helm-test-0.1.1   blubb         Preparing upgrade
16         Thu Aug 27 13:27:12 2020   superseded        helm-test-0.1.1   blubb         Upgrade complete
17         Thu Aug 27 14:32:32 2020   superseded        helm-test-0.1.1                 Upgrade complete
18         Thu Aug 27 14:33:58 2020   superseded        helm-test-0.1.1                 Upgrade complete
19         Thu Aug 27 14:36:49 2020   failed            helm-test-0.1.1   cosmetics     Upgrade "helm-test-dev" failed: timed out waiting for the condition
```

## Rollback

`helm rollback helm-test-dev 18 --wait`

```
~/kubernetes/helm$ helm history helm-test-dev
REVISION   UPDATED                    STATUS            CHART             APP VERSION   DESCRIPTION
10         Thu Aug 27 12:56:33 2020   failed            helm-test-0.1.0   1.16.0        Upgrade "helm-test-dev" failed: timed out waiting for the condition
11         Thu Aug 27 13:08:34 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
12         Thu Aug 27 13:09:59 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
13         Thu Aug 27 13:10:24 2020   superseded        helm-test-0.1.0   1.16.0        Rollback to 11
14         Thu Aug 27 13:23:22 2020   failed            helm-test-0.1.1   blubb         Upgrade "helm-test-dev" failed: timed out waiting for the condition
15         Thu Aug 27 13:26:43 2020   pending-upgrade   helm-test-0.1.1   blubb         Preparing upgrade
16         Thu Aug 27 13:27:12 2020   superseded        helm-test-0.1.1   blubb         Upgrade complete
17         Thu Aug 27 14:32:32 2020   superseded        helm-test-0.1.1                 Upgrade complete
18         Thu Aug 27 14:33:58 2020   superseded        helm-test-0.1.1                 Upgrade complete
19         Thu Aug 27 14:36:49 2020   failed            helm-test-0.1.1   cosmetics     Upgrade "helm-test-dev" failed: timed out waiting for the condition
20         Thu Aug 27 14:37:36 2020   deployed          helm-test-0.1.1                 Rollback to 18
```

```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 14:37:36 2020
NAMESPACE: default
STATUS: deployed
REVISION: 20
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```
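`helm rollback` can also be called without a revision number, in which case it rolls back to the previous release:

```
helm rollback helm-test-dev --wait
```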