Update "README.md"

Dominik Chilla 2021-04-07 18:33:08 +00:00
parent aa98f7deaf
commit 5c4e817e47

README.md

@@ -5,9 +5,10 @@
* [devel](#namespace-devel)
* [staging](#namespace-staging)
* [prod](#namespace-prod)
* [Persistent volumes](#pv)
* [Local provider](#pv-local)
* [Longhorn - distributed/lightweight provider](#pv-longhorn)
* [Persistent volumes (StorageClass - dynamic provisioning)](#pv)
* [Rancher Local](#pv-local)
* [Rancher Longhorn - distributed in local cluster](#pv-longhorn)
* [NFS](#pv-nfs)
* [Ingress controller](#ingress-controller)
* [Disable Traefik-ingress](#disable-traefik-ingress)
* [Enable NGINX-ingress with OCSP stapling](#enable-nginx-ingress)
@@ -26,14 +27,10 @@
* [Get deployment history](#helm-history)
* [Rollback](#helm-rollback)
* [Examples](#examples)
* [Enable nginx-ingress tcp- and udp-services for apps other than http/s](#nginx-ingress-tcp-udp-enabled)
* [Enable client-IP transparency and expose TCP-port 9000](#enable-client-ip-transp-expose-tcp-9000)
* [Deploy my-nginx-service](#deploy-my-nginx-service)
* [Stick the nginx-ingress controller and my-nginx app together](#stick-nginx-ingress-and-tcp-service)
* [Test exposed app on TCP-port 9000](#test-nginx-ingress-and-tcp-service)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [Running StatefulSet with NFS storage](#running-statefulset-nfs)
# Install k3s <a name="install-k3s"></a>
# Install k3s <a name="#install-k3s"></a>
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
@@ -59,7 +56,7 @@ k3s.service - Lightweight Kubernetes
Process: 9619 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 9620 (k3s-server)
Tasks: 229
Memory: 510.6M (max: 512.0M)
Memory: 510.6M (max: 1.0G)
CGroup: /system.slice/k3s.service
```
@@ -178,19 +175,59 @@ spec:
`kubectl apply -f namespace-prod-limitranges.yaml`
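To verify the limits are in place (assuming the `prod` namespace from above):
```
kubectl -n prod get limitrange
kubectl -n prod describe limitrange
```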
# Persistent Volumes <a name="pv"></a>
## Local provider (local - out-of-the-box) <a name="pv-local"></a>
# Persistent Volumes (StorageClass - dynamic provisioning) <a name="pv"></a>
## Rancher Local <a name="pv-local"></a>
https://rancher.com/docs/k3s/latest/en/storage/
Do not forget to update the container image to version `>= 0.0.14`:
`kubectl -n kube-system edit deployment.apps/local-path-provisioner` and set the image version to `0.0.14`
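A minimal PVC against the default `local-path` StorageClass could look like this (claim name and size are placeholders). Note that `local-path` uses `WaitForFirstConsumer`, so the volume is only provisioned once a pod actually mounts the claim:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc-1        # placeholder name
spec:
  storageClassName: "local-path"
  accessModes:
    - ReadWriteOnce             # local volumes are node-local
  resources:
    requests:
      storage: 128Mi            # placeholder size
```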
## Longhorn provider (lightweight/distributed) <a name="pv-longhorn"></a>
## Longhorn (distributed in local cluster) <a name="pv-longhorn"></a>
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/ (see the install sketch below)
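A minimal install sketch, assuming the manifest URL documented for Longhorn 0.8.x (verify against the links above for the current version):
```
# Deploy Longhorn into the longhorn-system namespace
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.8.0/deploy/longhorn.yaml
# Wait until all pods are Running
kubectl -n longhorn-system get pods
```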
## NFS <a name="pv-nfs"></a>
If you want to use NFS-based storage, install the `nfs-client-provisioner` Helm chart:
```
helm3 repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
helm3 install my-nfs-client-provisioner --set nfs.server=<nfs-server/ip-addr> --set nfs.path=</data/nfs> ckotzbauer/nfs-client-provisioner
```
Check if NFS *StorageClass* is available:
```
$ kubectl get sc
NAME                   PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path                      Delete          WaitForFirstConsumer   false                  101d
nfs-client             cluster.local/my-nfs-client-provisioner    Delete          Immediate              true                   172m
```
Now you can use `nfs-client` as the StorageClass, e.g. in a StatefulSet's `volumeClaimTemplates`:
```
apiVersion: apps/v1
kind: StatefulSet
[...]
  volumeClaimTemplates:
  - metadata:
      name: nfs-backend
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 32Mi
```
or in a standalone PersistentVolumeClaim:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-1
  namespace: <blubb>
spec:
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 32Mi
```
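Apply and check that the claim gets bound by the NFS provisioner (the filename is hypothetical, `<blubb>` is the placeholder namespace from above):
```
kubectl apply -f nfs-pvc-1.yaml
kubectl -n <blubb> get pvc nfs-pvc-1
# STATUS should switch to "Bound" and VOLUME should show a generated pvc-... name
```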
# Ingress controller <a name="ingress-controller"></a>
## Disable Traefik-ingress <a name="disable-traefik-ingress"></a>
edit /etc/systemd/system/k3s.service:
@@ -500,204 +537,6 @@ NOTES:
```
# Examples <a name="examples"></a>
## Enable nginx-ingress tcp- and udp-services for apps other than http/s <a name="nginx-ingress-tcp-udp-enabled"></a>
Docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
`kubectl -n ingress-nginx edit deployment.apps/my-release-ingress-nginx-controller` and search for the `spec:`/`template`/`spec`/`containers` section:
```
[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        >>> ADD
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --udp-services-configmap=ingress-nginx/udp-services
        <<< ADD
        env:
        [...]
```
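To confirm the new arguments made it into the pod template (deployment name as in the edit command above):
```
kubectl -n ingress-nginx get deployment.apps/my-release-ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```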
## Enable client-IP transparency and expose TCP-port 9000 <a name="enable-client-ip-transp-expose-tcp-9000"></a>
Enable client-IP transparency (X-Original-Forwarded-For) and expose the my-nginx app on nginx-ingress TCP port 9000:
`kubectl edit service -n ingress-nginx ingress-nginx-controller`
Find the `ports:`-section of the `ingress-nginx-controller` service and *ADD* the definition for port 9000:
```
[...]
spec:
  clusterIP: 10.43.237.255
  >>> CHANGE externalTrafficPolicy from Cluster to Local if original client-IP is desirable
  externalTrafficPolicy: Local
  <<< CHANGE
  ports:
  - name: http
    nodePort: 30312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30366
    port: 443
    protocol: TCP
    targetPort: https
  >>> ADD
  - name: proxied-tcp-9000
    port: 9000
    protocol: TCP
    targetPort: 9000
  <<< ADD
[...]
```
Verify nginx-ingress-controller is listening on port 9000 with `kubectl -n ingress-nginx get service`:
```
[...]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
[...]
my-release-ingress-nginx-controller LoadBalancer 10.43.55.41 192.168.178.116 80:31110/TCP,443:31476/TCP 9m6s
[...]
```
## Deploy my-nginx deployment and service <a name="deploy-my-nginx-service"></a>
my-nginx-deployment.yml:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```
Apply with `kubectl apply -f my-nginx-deployment.yml`:
```
deployment.apps/my-nginx created
service/my-nginx created
```
Test: `kubectl get all | grep my-nginx`:
```
pod/my-nginx-65c68bbcdf-xkhqj 1/1 Running 4 2d7h
service/my-nginx ClusterIP 10.43.118.13 <none> 80/TCP 2d7h
deployment.apps/my-nginx 1/1 1 1 2d7h
replicaset.apps/my-nginx-65c68bbcdf 1 1 1 2d7h
```
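Optionally, test the service from inside the cluster before wiring it to the ingress. A throwaway pod with any curl-capable image works, e.g. `curlimages/curl` (assuming the default namespace, where my-nginx was deployed):
```
kubectl run curl-test -it --rm --restart=Never --image=curlimages/curl -- curl -s http://my-nginx
```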
## Stick the nginx-ingress-controller and my-nginx app together <a name="stick-nginx-ingress-and-tcp-service"></a>
Finally, the nginx-ingress controller needs a port mapping pointing to the my-nginx app. This is done with the config-map `nginx-ingress-tcp-services-config-map.yml`, which is referenced by the `--tcp-services-configmap` argument added to the nginx-ingress deployment earlier:
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": default/my-nginx:80
```
Apply with `kubectl apply -f nginx-ingress-tcp-services-config-map.yml`:
```
configmap/tcp-services created
```
Subsequently, the config-map can be edited with `kubectl -n ingress-nginx edit configmap tcp-services`.
**Changes to config-maps do not take effect in already running pods! Scaling the deployment down to 0 and back up solves this problem: https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes**
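For example (deployment name as used earlier in this README):
```
kubectl -n ingress-nginx scale deployment my-release-ingress-nginx-controller --replicas=0
kubectl -n ingress-nginx scale deployment my-release-ingress-nginx-controller --replicas=1
# On recent kubectl versions a rollout restart achieves the same without downtime:
kubectl -n ingress-nginx rollout restart deployment my-release-ingress-nginx-controller
```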
## Test exposed app on TCP-port 9000 <a name="test-nginx-ingress-and-tcp-service"></a>
```
dominik@muggler:~$ curl -s http://10.62.94.246:9000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Check logs of ingress-nginx-controller POD:
```
root@k3s-master:~# kubectl get pods --all-namespaces |grep ingress-nginx
[...]
ingress-nginx ingress-nginx-controller-d88d95c-khbv4 1/1 Running 0 4m36s
[...]
```
```
root@k3s-master:~# kubectl logs ingress-nginx-controller-d88d95c-khbv4 -f -n ingress-nginx
[...]
[10.62.94.1] [23/Aug/2020:16:38:33 +0000] TCP 200 850 81 0.001
[...]
```
Check logs of my-nginx POD:
```
root@k3s-master:/k3s# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-65c68bbcdf-xkhqj 1/1 Running 0 90m
```
```
kubectl logs my-nginx-65c68bbcdf-xkhqj -f
[...]
10.42.0.18 - - [23/Aug/2020:16:38:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
[...]
```
## Running DaemonSets on `hostPort` <a name="running-daemonsets"></a>
* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4
@@ -758,3 +597,55 @@ spec:
      maxUnavailable: 1
    type: RollingUpdate
```
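A quick way to check where the DaemonSet pods landed and whether the `hostPort` answers (node IP and port are placeholders; the actual values come from the spec above):
```
kubectl get daemonset -o wide
kubectl get pods -o wide
curl -s http://<node-ip>:<hostPort>/
```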
## Running StatefulSet with NFS storage <a name="running-statefulset-nfs"></a>
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
```
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: nfs-backend
          mountPath: /nfs-backend
  volumeClaimTemplates:
  - metadata:
      name: nfs-backend
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 32Mi
```
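A sketch for applying and verifying the result (filename is hypothetical): the `volumeClaimTemplates` create one PVC per replica, named `<template>-<statefulset>-<ordinal>`:
```
kubectl apply -f web-statefulset.yaml
kubectl get pvc
# Expected (illustrative) output:
# NAME                STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
# nfs-backend-web-0   Bound    pvc-...   32Mi       RWX            nfs-client
# nfs-backend-web-1   Bound    pvc-...   32Mi       RWX            nfs-client
```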