* [Install k3s](#install-k3s)
* [Configure upstream DNS-resolver](#upstream-dns-resolver)
* [Change NodePort range](#nodeport-range)
* [Namespaces and resource limits](#namespaces)
  * [devel](#namespace-devel)
  * [staging](#namespace-staging)
  * [prod](#namespace-prod)
* [Persistent volumes](#pv)
  * [Local provider](#pv-local)
  * [Longhorn - distributed/lightweight provider](#pv-longhorn)
* [Ingress controller](#ingress-controller)
  * [Disable Traefik-ingress](#disable-traefik-ingress)
  * [Enable NGINX-ingress with OCSP stapling](#enable-nginx-ingress)
    * [Installation](#install-nginx-ingress)
* [Cert-Manager (references ingress controller)](#cert-manager)
  * [Installation](#cert-manager-install)
  * [Let's Encrypt issuer](#cert-manager-le-issuer)
  * [Deploying a LE-certificate](#cert-manager-ingress)
  * [Troubleshooting](#cert-manager-troubleshooting)
* [HELM charts](#helm)
  * [Create a chart](#helm-create)
  * [Install local chart without packaging](#helm-install-without-packaging)
  * [List deployed helm charts](#helm-list)
  * [Upgrade local chart without packaging](#helm-upgrade)
  * [Get status of deployed chart](#helm-status)
  * [Get deployment history](#helm-history)
  * [Rollback](#helm-rollback)
* [Examples](#examples)
  * [Enable nginx-ingress tcp- and udp-services for apps other than http/s](#nginx-ingress-tcp-udp-enabled)
  * [Enable client-IP transparency and expose TCP-port 9000](#enable-client-ip-transp-expose-tcp-9000)
  * [Deploy my-nginx deployment and service](#deploy-my-nginx-service)
  * [Stick the nginx-ingress-controller and my-nginx app together](#stick-nginx-ingress-and-tcp-service)
  * [Test exposed app on TCP-port 9000](#test-nginx-ingress-and-tcp-service)
  * [Running DaemonSets on `hostPort`](#running-daemonsets)
# Install k3s <a name="install-k3s"></a>
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
```
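A quick sanity check after the installer finishes (k3s ships its own kubectl):
```
root#> k3s kubectl get nodes
```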
If desired, set a memory consumption limit for the systemd-unit like so:
```
root#> mkdir /etc/systemd/system/k3s.service.d
root#> vi /etc/systemd/system/k3s.service.d/limits.conf
[Service]
MemoryMax=1024M
root#> systemctl daemon-reload
root#> systemctl restart k3s
root#> systemctl status k3s
k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/k3s.service.d
             └─limits.conf
     Active: active (running) since Thu 2020-11-26 10:46:26 CET; 13min ago
       Docs: https://k3s.io
    Process: 9618 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 9619 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 9620 (k3s-server)
      Tasks: 229
     Memory: 510.6M (max: 512.0M)
     CGroup: /system.slice/k3s.service
```
# Upstream DNS-resolver <a name="upstream-dns-resolver"></a>
Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/
By default CoreDNS forwards to 8.8.8.8, which does not resolve local domains!
1. Create a local /etc/resolv.k3s.conf that points to the IP of your DNS resolver (127.0.0.1 **does not work!**) - see the example after this list.
2. vi /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
server [...] --resolv-conf /etc/resolv.k3s.conf \
```
3. Re-load systemd config: `systemctl daemon-reload`
4. Re-start k3s: `systemctl restart k3s.service`
5. Re-deploy coredns-pods: `kubectl -n kube-system delete pod name-of-coredns-pods`
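A minimal `/etc/resolv.k3s.conf`, assuming a LAN resolver at the hypothetical address 192.168.178.1 (replace with the IP of your real DNS resolver):
```
nameserver 192.168.178.1
```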
# Change NodePort range to 1 - 65535 <a name="nodeport-range"></a>
1. vi /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
server [...] --kube-apiserver-arg service-node-port-range=1-65535 \
```
2. Re-load systemd config: `systemctl daemon-reload`
3. Re-start k3s: `systemctl restart k3s.service`
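To confirm the new range is active, a sketch using a hypothetical throw-away service (a node-port below 30000 would be rejected with the default range):
```
kubectl create service nodeport np-test --tcp=80:80 --node-port=8080
kubectl delete service np-test
```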
# Namespaces and resource limits <a name="namespaces"></a>
## devel <a name="namespace-devel"></a>
namespace-devel-limitranges.yaml:
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: devel
  labels:
    name: devel
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-devel
  namespace: devel
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    type: Container
```
`kubectl apply -f namespace-devel-limitranges.yaml`
## staging <a name="namespace-staging"></a>
namespace-staging-limitranges.yaml:
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    name: staging
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-staging
  namespace: staging
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    type: Container
```
`kubectl apply -f namespace-staging-limitranges.yaml`
## prod <a name="namespace-prod"></a>
namespace-prod-limitranges.yaml:
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    name: prod
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-prod
  namespace: prod
spec:
  limits:
  - defaultRequest:
      cpu: 50m
      memory: 4Mi
    type: Container
```
`kubectl apply -f namespace-prod-limitranges.yaml`
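To verify that the limits are active, describe the LimitRange objects (sketch):
```
kubectl -n devel describe limitrange limit-range-devel
kubectl -n staging describe limitrange limit-range-staging
kubectl -n prod describe limitrange limit-range-prod
```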
# Persistent Volumes <a name="pv"></a>
## Local provider (local - out-of-the-box) <a name="pv-local"></a>
https://rancher.com/docs/k3s/latest/en/storage/
Do not forget to update the container image to version `>= 0.0.14`:
`kubectl -n kube-system edit deployment.apps/local-path-provisioner` and set the image version to `0.0.14`.
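A minimal sketch of requesting a volume from the bundled provisioner (hypothetical claim name; `local-path` is the default storage class in k3s):
```
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
```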
## Longhorn provider (lightweight/distributed) <a name="pv-longhorn"></a>
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/
# Ingress controller <a name="ingress-controller"></a>
## Disable Traefik-ingress <a name="disable-traefik-ingress"></a>
edit /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
server --disable traefik --resolv-conf /etc/resolv.conf \
[...]
```
Finally `systemctl daemon-reload` and `systemctl restart k3s`
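Afterwards there should be no more traefik pods in the kube-system namespace:
```
kubectl -n kube-system get pods | grep -i traefik || echo "traefik is gone"
```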
## Enable NGINX-ingress with OCSP stapling <a name="enable-nginx-ingress"></a>
### Installation <a name="install-nginx-ingress"></a>
https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
```
kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx -n ingress-nginx
```
`kubectl -n ingress-nginx get all`:
```
NAME                                                       READY   STATUS    RESTARTS   AGE
pod/svclb-my-release-ingress-nginx-controller-m6gxl        2/2     Running   0          110s
pod/my-release-ingress-nginx-controller-695774d99c-t794f   1/1     Running   0          110s

NAME                                                     TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
service/my-release-ingress-nginx-controller-admission    ClusterIP      10.43.116.191   <none>            443/TCP                      110s
service/my-release-ingress-nginx-controller              LoadBalancer   10.43.55.41     192.168.178.116   80:31110/TCP,443:31476/TCP   110s

NAME                                                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-my-release-ingress-nginx-controller     1         1         1       1            1           <none>          110s

NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-release-ingress-nginx-controller    1/1     1            1           110s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/my-release-ingress-nginx-controller-695774d99c   1         1         1       110s
```
As nginx-ingress is hungry for memory, let's reduce the number of worker processes to 1; while editing the configmap we also enable OCSP stapling:
```
kubectl -n ingress-nginx edit configmap my-release-ingress-nginx-controller
apiVersion: v1
<<<ADD BEGIN>>>
data:
  enable-ocsp: "true"
  worker-processes: "1"
<<<ADD END>>>
kind: ConfigMap
[...]
```
Finally the deployment needs to be restarted:
`kubectl -n ingress-nginx rollout restart deployment my-release-ingress-nginx-controller`
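To check that the new setting was rendered into the controller's configuration (a sketch, assuming the release name from above):
```
kubectl -n ingress-nginx exec deploy/my-release-ingress-nginx-controller -- grep worker_processes /etc/nginx/nginx.conf
```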
**If you are facing a deployment problem like the following:**
```
Error: UPGRADE FAILED: cannot patch "gitea-ingress-staging" with kind Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://my-release-ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: context deadline exceeded
```
A possible fix: `kubectl -n ingress-nginx delete ValidatingWebhookConfiguration my-release-ingress-nginx-admission`
# Cert-Manager (references ingress controller) <a name="cert-manager"></a>
## Installation <a name="cert-manager-install"></a>
Docs: https://hub.helm.sh/charts/jetstack/cert-manager
```
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
```
## Let's Encrypt issuer <a name="cert-manager-le-issuer"></a>
Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer
```
ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way,
but they do not belong to a single namespace and can be referenced by Certificate resources from
multiple different namespaces.
```
lets-encrypt-cluster-issuers.yaml:
```
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-issuer
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
```
`kubectl apply -f lets-encrypt-cluster-issuers.yaml`
## Deploying a LE-certificate <a name="cert-manager-ingress"></a>
All you need is an `Ingress` resource of class `nginx` which references a ClusterIssuer (`letsencrypt-prod-issuer`) resource:
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: <stage>
  name: some-ingress-name
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod-issuer"
spec:
  tls:
  - hosts:
    - some-certificate.name.san
    secretName: target-certificate-secret-name
  rules:
  - host: some-certificate.name.san
    http:
      paths:
      - path: /
        backend:
          serviceName: some-target-service
          servicePort: some-target-service-port
```
## Troubleshooting <a name="cert-manager-troubleshooting"></a>
Docs: https://cert-manager.io/docs/faq/acme/
ClusterIssuers are cluster-scoped (not bound to a namespace):
```
kubectl get clusterissuer
kubectl describe clusterissuer <object>
```
All other ingress-specific cert-manager resources live in the `<stage>`-specific namespaces:
```
kubectl -n <stage> get certificaterequest
kubectl -n <stage> describe certificaterequest <object>
kubectl -n <stage> get certificate
kubectl -n <stage> describe certificate <object>
kubectl -n <stage> get secret
kubectl -n <stage> describe secret <object>
kubectl -n <stage> get challenge
kubectl -n <stage> describe challenge <object>
```
After a successful setup, perform a TLS test: `https://www.ssllabs.com/ssltest/index.html`
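OCSP stapling can also be verified from the command line (a sketch with the hypothetical hostname from above):
```
openssl s_client -connect some-certificate.name.san:443 -status < /dev/null 2>/dev/null | grep -A 2 'OCSP Response Status'
```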
# HELM charts <a name="helm"></a>
Docs:
* https://helm.sh/docs/intro/using_helm/
Prerequisites:
* a running Kubernetes installation
* kubectl with the `KUBECONFIG` environment variable pointing to the appropriate config file
* helm
## Create a chart <a name="helm-create"></a>
`helm create helm-test`
```
~/kubernetes/helm$ tree helm-test/
helm-test/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
```
## Install local chart without packaging <a name="helm-install-without-packaging"></a>
`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --wait`
or just a *dry-run*:
`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --dry-run`
```
--wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable)
Pods in ready state and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as successful.
It will wait for as long as the --timeout value. If timeout is reached, the release will be marked as FAILED. Note: In
scenarios where Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of rolling update strategy,
--wait will return as ready as it has satisfied the minimum Pod in ready condition.
```
## List deployed helm charts <a name="helm-list"></a>
```
~/kubernetes/helm$ helm list
NAME            NAMESPACE   REVISION   UPDATED                                   STATUS     CHART             APP VERSION
helm-test-dev   default     4          2020-08-27 12:30:38.98457042 +0200 CEST   deployed   helm-test-0.1.0   1.16.0
```
## Upgrade local chart without packaging <a name="helm-upgrade"></a>
```
~/kubernetes/helm$ helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s
Release "helm-test-dev" has been upgraded. Happy Helming!
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```
`helm upgrade [...] --wait` is synchronous: it exits with 0 on success and non-zero on failure. By default it waits up to 5 minutes for the release to become ready; the `--timeout` flag overrides this. This makes it usable in CI/CD deployments with Jenkins, as sketched below.
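A minimal sketch of such a deploy step (hypothetical `IMAGE_TAG` variable; `helm rollback` without a revision number rolls back to the previous release):
```
helm upgrade helm-test-dev helm-test/ --set image.tag="${IMAGE_TAG}" --wait --timeout 60s || {
  echo "upgrade failed - rolling back"
  helm rollback helm-test-dev --wait
  exit 1
}
```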
## Get status of deployed chart <a name="helm-status"></a>
```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```
## Get deployment history <a name="helm-history"></a>
```
~/kubernetes/helm$ helm history helm-test-dev
REVISION   UPDATED                    STATUS            CHART             APP VERSION   DESCRIPTION
10         Thu Aug 27 12:56:33 2020   failed            helm-test-0.1.0   1.16.0        Upgrade "helm-test-dev" failed: timed out waiting for the condition
11         Thu Aug 27 13:08:34 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
12         Thu Aug 27 13:09:59 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
13         Thu Aug 27 13:10:24 2020   superseded        helm-test-0.1.0   1.16.0        Rollback to 11
14         Thu Aug 27 13:23:22 2020   failed            helm-test-0.1.1   blubb         Upgrade "helm-test-dev" failed: timed out waiting for the condition
15         Thu Aug 27 13:26:43 2020   pending-upgrade   helm-test-0.1.1   blubb         Preparing upgrade
16         Thu Aug 27 13:27:12 2020   superseded        helm-test-0.1.1   blubb         Upgrade complete
17         Thu Aug 27 14:32:32 2020   superseded        helm-test-0.1.1                 Upgrade complete
18         Thu Aug 27 14:33:58 2020   superseded        helm-test-0.1.1                 Upgrade complete
19         Thu Aug 27 14:36:49 2020   failed            helm-test-0.1.1   cosmetics     Upgrade "helm-test-dev" failed: timed out waiting for the condition
```
## Rollback <a name="helm-rollback"></a>
`helm rollback helm-test-dev 18 --wait`
```
~/kubernetes/helm$ helm history helm-test-dev
REVISION   UPDATED                    STATUS            CHART             APP VERSION   DESCRIPTION
10         Thu Aug 27 12:56:33 2020   failed            helm-test-0.1.0   1.16.0        Upgrade "helm-test-dev" failed: timed out waiting for the condition
11         Thu Aug 27 13:08:34 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
12         Thu Aug 27 13:09:59 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
13         Thu Aug 27 13:10:24 2020   superseded        helm-test-0.1.0   1.16.0        Rollback to 11
14         Thu Aug 27 13:23:22 2020   failed            helm-test-0.1.1   blubb         Upgrade "helm-test-dev" failed: timed out waiting for the condition
15         Thu Aug 27 13:26:43 2020   pending-upgrade   helm-test-0.1.1   blubb         Preparing upgrade
16         Thu Aug 27 13:27:12 2020   superseded        helm-test-0.1.1   blubb         Upgrade complete
17         Thu Aug 27 14:32:32 2020   superseded        helm-test-0.1.1                 Upgrade complete
18         Thu Aug 27 14:33:58 2020   superseded        helm-test-0.1.1                 Upgrade complete
19         Thu Aug 27 14:36:49 2020   failed            helm-test-0.1.1   cosmetics     Upgrade "helm-test-dev" failed: timed out waiting for the condition
20         Thu Aug 27 14:37:36 2020   deployed          helm-test-0.1.1                 Rollback to 18
```
```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 14:37:36 2020
NAMESPACE: default
STATUS: deployed
REVISION: 20
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```
# Examples <a name="examples"></a>
## Enable nginx-ingress tcp- and udp-services for apps other than http/s <a name="nginx-ingress-tcp-udp-enabled"></a>
Docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
`kubectl -n ingress-nginx edit deployment.apps/my-release-ingress-nginx-controller` and search for `spec:`/`template`/`spec`/`containers` section:
```
[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        >>> ADD
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --udp-services-configmap=ingress-nginx/udp-services
        <<< ADD
        env:
        [...]
```
## Enable client-IP transparency and expose TCP-port 9000 <a name="enable-client-ip-transp-expose-tcp-9000"></a>
Enable client-IP transparency (X-Original-Forwarded-For) and expose the my-nginx app on nginx-ingress TCP-port 9000.
`kubectl edit service -n ingress-nginx ingress-nginx-controller`
Find the `ports:`-section of the `ingress-nginx-controller` service and *ADD* the definition for port 9000:
```
[...]
spec:
  clusterIP: 10.43.237.255
  >>> CHANGE externalTrafficPolicy from Cluster to Local if the original client-IP is desired
  externalTrafficPolicy: Local
  <<< CHANGE
  ports:
  - name: http
    nodePort: 30312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30366
    port: 443
    protocol: TCP
    targetPort: https
  >>> ADD
  - name: proxied-tcp-9000
    port: 9000
    protocol: TCP
    targetPort: 9000
  <<< ADD
[...]
```
Verify that the nginx-ingress-controller is now listening on port 9000 with `kubectl -n ingress-nginx get service` - the `PORT(S)` column should additionally show a `9000:<node-port>/TCP` mapping:
```
[...]
NAME                                  TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)                                           AGE
[...]
my-release-ingress-nginx-controller   LoadBalancer   10.43.55.41   192.168.178.116   80:31110/TCP,443:31476/TCP,9000:<node-port>/TCP   9m6s
[...]
```
## Deploy my-nginx deployment and service <a name="deploy-my-nginx-service"></a>
my-nginx-deployment.yml:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```
Apply with `kubectl apply -f my-nginx-deployment.yml`:
```
deployment.apps/my-nginx created
service/my-nginx created
```
Test: `kubectl get all | grep my-nginx`:
```
pod/my-nginx-65c68bbcdf-xkhqj             1/1         Running        4        2d7h
service/my-nginx                          ClusterIP   10.43.118.13   <none>   80/TCP   2d7h
deployment.apps/my-nginx                  1/1         1              1        2d7h
replicaset.apps/my-nginx-65c68bbcdf       1           1              1        2d7h
```
## Stick the nginx-ingress-controller and my-nginx app together <a name="stick-nginx-ingress-and-tcp-service"></a>
Finally, the nginx-ingress controller needs a port-mapping that points to the my-nginx app. This is done with a config-map `nginx-ingress-tcp-services-config-map.yml`, which was referenced earlier in the nginx-ingress deployment definition:
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": default/my-nginx:80
```
Apply with `kubectl apply -f nginx-ingress-tcp-services-config-map.yml`:
```
configmap/tcp-services created
```
Subsequently the config-map can be edited with `kubectl -n ingress-nginx edit configmap tcp-services`
**Changes to config-maps do not take effect in running pods! Scaling the deployment to 0 and back (or a rollout restart) solves this: https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes** - see the sketch below.
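Both variants as a sketch (assuming the deployment name from the helm install above):
```
kubectl -n ingress-nginx scale deployment my-release-ingress-nginx-controller --replicas=0
kubectl -n ingress-nginx scale deployment my-release-ingress-nginx-controller --replicas=1
# or simply:
kubectl -n ingress-nginx rollout restart deployment my-release-ingress-nginx-controller
```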
## Test exposed app on TCP-port 9000 <a name="test-nginx-ingress-and-tcp-service"></a>
```
dominik@muggler:~$ curl -s http://10.62.94.246:9000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Check logs of ingress-nginx-controller POD:
```
root@k3s-master:~# kubectl get pods --all-namespaces |grep ingress-nginx
[...]
ingress-nginx ingress-nginx-controller-d88d95c-khbv4 1/1 Running 0 4m36s
[...]
```
```
root@k3s-master:~# kubectl logs ingress-nginx-controller-d88d95c-khbv4 -f -n ingress-nginx
[...]
[10.62.94.1] [23/Aug/2020:16:38:33 +0000] TCP 200 850 81 0.001
[...]
```
Check logs of my-nginx POD:
```
root@k3s-master:/k3s# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-65c68bbcdf-xkhqj 1/1 Running 0 90m
```
```
kubectl logs my-nginx-65c68bbcdf-xkhqj -f
[...]
10.42.0.18 - - [23/Aug/2020:16:38:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
[...]
```
## Running DaemonSets on `hostPort` <a name="running-daemonsets"></a>
* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4
With `hostPort` no Service networking needs to be configured; the container port is published directly on each node's IP.
This setup is suitable for legacy scenarios where static IP-addresses are required:
* inbound mailserver
* dns server
```
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: netcat-daemonset
  labels:
    app: netcat-daemonset
spec:
  selector:
    matchLabels:
      app: netcat-daemonset
  template:
    metadata:
      labels:
        app: netcat-daemonset
    spec:
      containers:
      - command:
        - nc
        - -lk
        - -p
        - "23456"
        - -v
        - -e
        - /bin/true
        env:
        - name: DEMO_GREETING
          value: Hello from the environment
        image: dockreg-zdf.int.zwackl.de/alpine/latest/amd64:prod
        imagePullPolicy: Always
        name: netcat-daemonset
        ports:
        - containerPort: 23456
          hostPort: 23456
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 64Mi
          requests:
            cpu: 50m
            memory: 32Mi
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
```
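A quick functional test from any host that can reach the node (hypothetical node IP):
```
nc -vz 192.168.178.116 23456
```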