
Install k3s

https://k3s.io/:

curl -sfL https://get.k3s.io | sh -

Upstream DNS-resolver

Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/

Default: 8.8.8.8 => does not resolve local domains!

  1. Point the node's local /etc/resolv.conf to the IP of the DNS resolver (127.0.0.1 does not work!)
  2. vi /etc/systemd/system/k3s.service:
[...]
ExecStart=/usr/local/bin/k3s \
    server --resolv-conf /etc/resolv.conf \
  3. Reload the systemd config: systemctl daemon-reload
  4. Restart k3s: systemctl restart k3s.service
  5. Re-deploy the coredns pods: kubectl -n kube-system delete pod <name-of-coredns-pod>
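
If you do not want to look up the pod names first, deleting by label should work as well (assuming the default k3s CoreDNS label):

kubectl -n kube-system delete pod -l k8s-app=kube-dns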

Namespaces and resource limits

devel

namespace-devel-limitranges.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: devel
  labels:
    name: devel
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-devel
  namespace: devel
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    type: Container

kubectl apply -f namespace-devel-limitranges.yaml
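
Containers created in this namespace without explicit resources now receive the defaults; verify with:

kubectl -n devel describe limitrange limit-range-devel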

staging

namespace-staging-limitranges.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    name: staging
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-staging
  namespace: staging
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    type: Container

kubectl apply -f namespace-staging-limitranges.yaml

prod

namespace-prod-limitranges.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    name: prod
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-prod
  namespace: prod
spec:
  limits:
  - defaultRequest:
      cpu: 50m
      memory: 4Mi
    type: Container

kubectl apply -f namespace-prod-limitranges.yaml
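
To see which defaults a pod actually received, inspect its resources field, e.g. (pod name is hypothetical):

kubectl -n prod run limits-test --image=nginx:alpine
kubectl -n prod get pod limits-test -o jsonpath='{.spec.containers[0].resources}'
kubectl -n prod delete pod limits-test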

Persistent Volumes

Local provider (local - out-of-the-box)

https://rancher.com/docs/k3s/latest/en/storage/
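
A minimal PersistentVolumeClaim sketch using the local-path storage class that k3s ships out of the box (claim name and size are arbitrary):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: devel
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi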

Longhorn provider (lightweight/distributed)
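
https://longhorn.io/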

Ingress controller

Disable Traefik-ingress

edit /etc/systemd/system/k3s.service:

[...]
ExecStart=/usr/local/bin/k3s \
    server --disable traefik --resolv-conf /etc/resolv.conf \
[...]

Finally, run systemctl daemon-reload and systemctl restart k3s.service.
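
Verify that no Traefik pods are left in kube-system (if Traefik was already deployed before the change, its resources may have to be removed manually):

kubectl -n kube-system get pods | grep traefik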

Enable NGINX-ingress

Installation

https://kubernetes.github.io/ingress-nginx/deploy/#using-helm

kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx -n ingress-nginx

kubectl -n ingress-nginx get all:

NAME                                                       READY   STATUS    RESTARTS   AGE
pod/svclb-my-release-ingress-nginx-controller-m6gxl        2/2     Running   0          110s
pod/my-release-ingress-nginx-controller-695774d99c-t794f   1/1     Running   0          110s

NAME                                                    TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
service/my-release-ingress-nginx-controller-admission   ClusterIP      10.43.116.191   <none>            443/TCP                      110s
service/my-release-ingress-nginx-controller             LoadBalancer   10.43.55.41     192.168.178.116   80:31110/TCP,443:31476/TCP   110s

NAME                                                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-my-release-ingress-nginx-controller   1         1         1       1            1           <none>          110s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-release-ingress-nginx-controller   1/1     1            1           110s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/my-release-ingress-nginx-controller-695774d99c   1         1         1       110s

Cert-Manager (references ingress controller)

Installation

Docs: https://hub.helm.sh/charts/jetstack/cert-manager

helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
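
The cert-manager, cainjector and webhook pods should all reach Running state before any issuers are created.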

Let's Encrypt issuer

Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer

ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way, 
but they do not belong to a single namespace and can be referenced by Certificate resources from 
multiple different namespaces.

lets-encrypt-cluster-issuers.yaml:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-issuer
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

kubectl apply -f lets-encrypt-cluster-issuers.yaml
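
Both issuers should report READY True:

kubectl get clusterissuer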

Deploying a LE-certificate

All you need is an Ingress resource of class nginx which references a ClusterIssuer (letsencrypt-prod-issuer) resource:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: <stage>
  name: some-ingress-name
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod-issuer"
spec:
  tls:
  - hosts:
    - some-certificate.name.san
    secretName: target-certificate-secret-name
  rules:
  - host: some-certificate.name.san
    http:
      paths:
      - path: /
        backend:
          serviceName: some-target-service
          servicePort: some-target-service-port
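
cert-manager picks up the annotated Ingress, creates a Certificate for the listed hosts and stores the signed certificate in the Secret named by secretName (target-certificate-secret-name above). Issuance can be followed with:

kubectl -n <stage> get certificate -w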

Troubleshooting

Docs: https://cert-manager.io/docs/faq/acme/

ClusterIssuers are cluster-scoped, i.e. not bound to any namespace:

kubectl get clusterissuer
kubectl describe clusterissuer <object>

All other ingress-specific cert-manager resources live in the respective namespaces:

kubectl -n <stage> get certificaterequest
kubectl -n <stage> describe certificaterequest <object>
kubectl -n <stage> get certificate
kubectl -n <stage> describe certificate <object>
kubectl -n <stage> get secret
kubectl -n <stage> describe secret <object>
kubectl -n <stage> get challenge
kubectl -n <stage> describe challenge <object>

After a successful setup, perform a TLS test: https://www.ssllabs.com/ssltest/index.html

Helm charts

Docs: https://helm.sh/docs/

Prerequisites:

  • a running Kubernetes installation
  • kubectl with the KUBECONFIG environment variable pointing to the appropriate config file
  • helm

Create a chart

helm create helm-test

~/kubernetes/helm$ tree helm-test/
helm-test/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Install local chart without packaging

helm install helm-test-dev helm-test/ --set image.tag=latest --debug --wait

or just a dry-run:

helm install helm-test-dev helm-test/ --set image.tag=latest --debug --dry-run

--wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable) Pods in ready state and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as successful. It will wait for as long as the --timeout value. If the timeout is reached, the release will be marked as FAILED. Note: in scenarios where the Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of the rolling update strategy, --wait will return as ready as soon as it has satisfied the minimum Pod in ready condition.

List deployed helm charts

~/kubernetes/helm$ helm list
NAME         	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART          	APP VERSION
helm-test-dev	default  	4       	2020-08-27 12:30:38.98457042 +0200 CEST	deployed	helm-test-0.1.0	1.16.0     

Upgrade local chart without packaging

~/kubernetes/helm$ helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s
Release "helm-test-dev" has been upgraded. Happy Helming!
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

helm upgrade [...] --wait is synchronous: it exits with 0 on success and >0 on failure. By default helm upgrade waits for 5 minutes; the --timeout flag changes this. This makes it well suited for CI/CD deployments, e.g. with Jenkins.
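
A minimal CI sketch built on this behaviour (release and chart names taken from the examples above; helm rollback without a revision rolls back to the previous release):

helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 120s \
  || helm rollback helm-test-dev --wait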

Get status of deployed chart

~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

Get deployment history

~/kubernetes/helm$ helm history helm-test-dev
REVISION	UPDATED                 	STATUS         	CHART          	APP VERSION	DESCRIPTION                                                        
10      	Thu Aug 27 12:56:33 2020	failed         	helm-test-0.1.0	1.16.0     	Upgrade "helm-test-dev" failed: timed out waiting for the condition
11      	Thu Aug 27 13:08:34 2020	superseded     	helm-test-0.1.0	1.16.0     	Upgrade complete                                                   
12      	Thu Aug 27 13:09:59 2020	superseded     	helm-test-0.1.0	1.16.0     	Upgrade complete                                                   
13      	Thu Aug 27 13:10:24 2020	superseded     	helm-test-0.1.0	1.16.0     	Rollback to 11                                                     
14      	Thu Aug 27 13:23:22 2020	failed         	helm-test-0.1.1	blubb      	Upgrade "helm-test-dev" failed: timed out waiting for the condition
15      	Thu Aug 27 13:26:43 2020	pending-upgrade	helm-test-0.1.1	blubb      	Preparing upgrade                                                  
16      	Thu Aug 27 13:27:12 2020	superseded     	helm-test-0.1.1	blubb      	Upgrade complete                                                   
17      	Thu Aug 27 14:32:32 2020	superseded     	helm-test-0.1.1	           	Upgrade complete                                                   
18      	Thu Aug 27 14:33:58 2020	superseded     	helm-test-0.1.1	           	Upgrade complete                                                   
19      	Thu Aug 27 14:36:49 2020	failed         	helm-test-0.1.1	cosmetics  	Upgrade "helm-test-dev" failed: timed out waiting for the condition

Rollback

helm rollback helm-test-dev 18 --wait

~/kubernetes/helm$ helm history helm-test-dev
REVISION	UPDATED                 	STATUS         	CHART          	APP VERSION	DESCRIPTION                                                        
10      	Thu Aug 27 12:56:33 2020	failed         	helm-test-0.1.0	1.16.0     	Upgrade "helm-test-dev" failed: timed out waiting for the condition
11      	Thu Aug 27 13:08:34 2020	superseded     	helm-test-0.1.0	1.16.0     	Upgrade complete                                                   
12      	Thu Aug 27 13:09:59 2020	superseded     	helm-test-0.1.0	1.16.0     	Upgrade complete                                                   
13      	Thu Aug 27 13:10:24 2020	superseded     	helm-test-0.1.0	1.16.0     	Rollback to 11                                                     
14      	Thu Aug 27 13:23:22 2020	failed         	helm-test-0.1.1	blubb      	Upgrade "helm-test-dev" failed: timed out waiting for the condition
15      	Thu Aug 27 13:26:43 2020	pending-upgrade	helm-test-0.1.1	blubb      	Preparing upgrade                                                  
16      	Thu Aug 27 13:27:12 2020	superseded     	helm-test-0.1.1	blubb      	Upgrade complete                                                   
17      	Thu Aug 27 14:32:32 2020	superseded     	helm-test-0.1.1	           	Upgrade complete                                                   
18      	Thu Aug 27 14:33:58 2020	superseded     	helm-test-0.1.1	           	Upgrade complete                                                   
19      	Thu Aug 27 14:36:49 2020	failed         	helm-test-0.1.1	cosmetics  	Upgrade "helm-test-dev" failed: timed out waiting for the condition
20      	Thu Aug 27 14:37:36 2020	deployed       	helm-test-0.1.1	           	Rollback to 18
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 14:37:36 2020
NAMESPACE: default
STATUS: deployed
REVISION: 20
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80

Examples

Enable nginx-ingress TCP and UDP services for apps other than HTTP/S

Docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/

kubectl -n ingress-nginx edit deployment.apps/my-release-ingress-nginx-controller and search for the spec.template.spec.containers section:

[...]
spec:                                                                                  
[...]                                                                  
  template:                                                                            
    metadata:                                  
      creationTimestamp: null                  
      labels:                                  
        app.kubernetes.io/component: controller                 
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx    
    spec:                                        
      containers:                                
      - args:                                    
        - /nginx-ingress-controller              
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx                  
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443                        
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
>>> ADD 
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --udp-services-configmap=ingress-nginx/udp-services
<<< ADD
        env:     
[...]
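
Both ConfigMaps referenced by these flags must exist for the mappings to take effect; tcp-services is created further below, and a udp-services ConfigMap can be created analogously if needed.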

Enable client-IP transparency and expose TCP-port 9000

Enable client-IP transparency (X-Original-Forwarded-For) and expose the my-nginx app on nginx-ingress TCP port 9000.

kubectl edit service -n ingress-nginx ingress-nginx-controller

Find the ports:-section of the ingress-nginx-controller service and ADD the definition for port 9000:

[...]
spec:   
    clusterIP: 10.43.237.255                                                              
>>> CHANGE externalTrafficPolicy from Cluster to Local if original client-IP is desirable
    externalTrafficPolicy: Local
<<< CHANGE
    ports:
    - name: http                                                                          
      nodePort: 30312                                                                     
      port: 80
      protocol: TCP                                                                       
      targetPort: http                                                                    
    - name: https                                                                         
      nodePort: 30366                                                                     
      port: 443
      protocol: TCP                                                                       
      targetPort: https      
>>> ADD
    - name: proxied-tcp-9000
      port: 9000
      protocol: TCP
      targetPort: 9000
<<< ADD 
[...]

Verify that the nginx-ingress controller is listening on port 9000 with kubectl -n ingress-nginx get service (after the change, the PORT(S) column should additionally list 9000:<nodePort>/TCP):

[...]
NAME                                            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
[...]
my-release-ingress-nginx-controller             LoadBalancer   10.43.55.41     192.168.178.116   80:31110/TCP,443:31476/TCP   9m6s
[...]

Deploy my-nginx deployment and service

my-nginx-deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx

Apply with kubectl apply -f my-nginx-deployment.yml:

deployment.apps/my-nginx created
service/my-nginx created

Test: kubectl get all | grep my-nginx:

pod/my-nginx-65c68bbcdf-xkhqj             1/1     Running   4          2d7h
service/my-nginx     ClusterIP   10.43.118.13   <none>        80/TCP    2d7h
deployment.apps/my-nginx             1/1     1            1           2d7h
replicaset.apps/my-nginx-65c68bbcdf             1         1         1       2d7h

Stick the nginx-ingress controller and the my-nginx app together

Finally, the nginx-ingress controller needs a port mapping pointing to the my-nginx app. This is done with the ConfigMap nginx-ingress-tcp-services-config-map.yml, which was referenced earlier in the nginx-ingress deployment definition:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": default/my-nginx:80

Apply with kubectl apply -f nginx-ingress-tcp-services-config-map.yml:

configmap/tcp-services created

The ConfigMap can subsequently be edited with kubectl -n ingress-nginx edit configmap tcp-services.

Changes to ConfigMaps do not take effect in already running pods! Scaling the deployment down to 0 and back up works around this: https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes
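
A sketch of this workaround (deployment name taken from the Helm install above); alternatively, kubectl -n ingress-nginx rollout restart deployment my-release-ingress-nginx-controller achieves the same:

kubectl -n ingress-nginx scale deployment my-release-ingress-nginx-controller --replicas=0
kubectl -n ingress-nginx scale deployment my-release-ingress-nginx-controller --replicas=1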

Test exposed app on TCP-port 9000

dominik@muggler:~$ curl -s http://10.62.94.246:9000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Check logs of ingress-nginx-controller POD:

root@k3s-master:~# kubectl get pods --all-namespaces |grep ingress-nginx
[...]
ingress-nginx   ingress-nginx-controller-d88d95c-khbv4   1/1     Running     0          4m36s
[...]
root@k3s-master:~# kubectl logs ingress-nginx-controller-d88d95c-khbv4 -f -n ingress-nginx
[...]
[10.62.94.1] [23/Aug/2020:16:38:33 +0000] TCP 200 850 81 0.001
[...]

Check logs of my-nginx POD:

root@k3s-master:/k3s# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-65c68bbcdf-xkhqj   1/1     Running   0          90m
kubectl logs my-nginx-65c68bbcdf-xkhqj -f
[...]
10.42.0.18 - - [23/Aug/2020:16:38:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
[...]

Running DaemonSets on hostPort

In this case no Service-level network configuration is needed.

This setup is suitable for legacy scenarios where static IP addresses are required:

  • inbound mailserver
  • DNS server

Example manifest (the image points to a private registry; substitute your own):

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: netcat-daemonset
  labels:
    app: netcat-daemonset
spec:
  selector:
    matchLabels:
      app: netcat-daemonset
  template:
    metadata:
      labels:
        app: netcat-daemonset
    spec:
      containers:
      - command:
        - nc
        - -lk
        - -p
        - "23456"
        - -v
        - -e
        - /bin/true
        env:
        - name: DEMO_GREETING
          value: Hello from the environment
        image: dockreg-zdf.int.zwackl.de/alpine/latest/amd64:prod
        imagePullPolicy: Always
        name: netcat-daemonset
        ports:
        - containerPort: 23456
          hostPort: 23456
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 64Mi
          requests:
            cpu: 50m
            memory: 32Mi
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
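
Verify the DaemonSet is running on every node and test the hostPort from outside the cluster (substitute a real node IP):

kubectl get daemonset netcat-daemonset
nc -vz <node-ip> 23456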