* [Install k3s](#install-k3s)
* [Configure upstream DNS-resolver](#upstream-dns-resolver)
* [Namespaces and resource limits](#namespaces)
  * [devel](#namespace-devel)
  * [staging](#namespace-staging)
  * [prod](#namespace-prod)
* [Persistent volumes](#pv)
  * [Local provider](#pv-local)
  * [Longhorn - distributed/lightweight provider](#pv-longhorn)
* [Ingress controller](#ingress-controller)
  * [Disable Traefik-ingress](#disable-traefik-ingress)
  * [Enable NGINX-ingress](#enable-nginx-ingress)
    * [Installation](#install-nginx-ingress)
    * [Change service type from NodePort to LoadBalancer](#nginx-ingress-loadbalancer)
    * [Enable nginx-ingress tcp- and udp-services for apps other than http/s](#nginx-ingress-tcp-udp-enabled)
    * [Enable client-IP transparency and expose TCP-port 9000](#enable-client-ip-transp-expose-tcp-9000)
    * [Deploy my-nginx-service](#deploy-my-nginx-service)
    * [Stick the nginx-ingress controller and my-nginx app together](#stick-nginx-ingress-and-tcp-service)
    * [Test exposed app on TCP-port 9000](#test-nginx-ingress-and-tcp-service)
* [Cert-Manager (references ingress controller)](#cert-manager)
  * [Installation](#cert-manager-install)
  * [Let's Encrypt issuer](#cert-manager-le-issuer)
  * [Deploying a LE-certificate](#deploying-a-le-certificate)
  * [Troubleshooting](#cert-manager-troubleshooting)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [HELM charts](#helm)
  * [Create a chart](#helm-create)
  * [Install local chart without packaging](#helm-install-without-packaging)
  * [List deployed helm charts](#helm-list)
  * [Upgrade local chart without packaging](#helm-upgrade)
  * [Get status of deployed chart](#helm-status)
  * [Get deployment history](#helm-history)
  * [Rollback](#helm-rollback)
# Install k3s
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
```
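After the installer finishes, k3s runs as a systemd service and brings its own bundled `kubectl`. A quick sanity check (a sketch; node and pod names will differ on your machine):

```shell
# Verify the k3s service is active and the node is registered
systemctl status k3s --no-pager
k3s kubectl get nodes
# All system pods (coredns, metrics-server, ...) should reach Running/Completed
k3s kubectl get pods --all-namespaces
```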
# Upstream DNS-resolver
Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/
The default upstream resolver is 8.8.8.8, which does not resolve local domains!
1. Point the local /etc/resolv.conf at the IP of your local DNS resolver (127.0.0.1 **does not work!**)
2. Edit `/etc/systemd/system/k3s.service`:
```
[...]
ExecStart=/usr/local/bin/k3s \
server --resolv-conf /etc/resolv.conf \
```
3. Reload the systemd config: `systemctl daemon-reload`
4. Restart k3s: `systemctl restart k3s.service`
5. Re-deploy the coredns pods: `kubectl -n kube-system delete pod name-of-coredns-pod`
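To check that cluster DNS now works, the usual busybox drill is a quick test (a sketch; the pod name `dns-test` and the queried name are arbitrary):

```shell
# Run a throwaway pod and resolve a name through CoreDNS.
# busybox:1.28 is used deliberately: nslookup is broken in later busybox images.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup kubernetes.default
```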
# Namespaces and resource limits
## devel
namespace-devel-limitranges.yaml:
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: devel
  labels:
    name: devel
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-devel
  namespace: devel
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```
`kubectl apply -f namespace-devel-limitranges.yaml`
## staging
namespace-staging-limitranges.yaml:
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    name: staging
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-staging
  namespace: staging
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```
`kubectl apply -f namespace-staging-limitranges.yaml`
## prod
namespace-prod-limitranges.yaml:
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    name: prod
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-prod
  namespace: prod
spec:
  limits:
  - defaultRequest:
      cpu: 50m
      memory: 4Mi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```
`kubectl apply -f namespace-prod-limitranges.yaml`
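Whether the defaults are actually injected can be verified by starting a pod without explicit resources and reading back what the LimitRange admission controller filled in (a sketch, using the devel namespace; the pod name `limits-test` is arbitrary):

```shell
# Start a pod without any resource settings
kubectl -n devel run limits-test --image=nginx:alpine
# Read back the requests/limits injected from the LimitRange defaults
kubectl -n devel get pod limits-test -o jsonpath='{.spec.containers[0].resources}'
# Clean up
kubectl -n devel delete pod limits-test
```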
# Persistent Volumes
## Local provider (local - out-of-the-box)
https://rancher.com/docs/k3s/latest/en/storage/
## Longhorn provider (lightweight/distributed)
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/
# Ingress controller
## Disable Traefik-ingress
Edit `/etc/systemd/system/k3s.service`:
```
[...]
ExecStart=/usr/local/bin/k3s \
server --disable traefik --resolv-conf /etc/resolv.conf \
[...]
```
Finally, run `systemctl daemon-reload` and `systemctl restart k3s`.
## Enable NGINX-ingress
### Installation
https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal
### Change service type from NodePort to LoadBalancer
`kubectl edit service -n ingress-nginx ingress-nginx-controller` and change `type: NodePort` to `type: LoadBalancer`.
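Instead of editing interactively, the same change can be applied non-interactively with a patch (equivalent sketch):

```shell
# Switch the service type to LoadBalancer in one shot
kubectl -n ingress-nginx patch service ingress-nginx-controller \
  -p '{"spec":{"type":"LoadBalancer"}}'
```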
Ports 80 and 443 should now be listening on an *External-IP*; verify with `kubectl get all --all-namespaces`:
```
[...]
NAMESPACE       NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
[...]
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.43.174.128   <none>         443/TCP                      35m
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.43.237.255   10.62.94.246   80:30312/TCP,443:30366/TCP   35m
[...]
```
Test: `curl -s http://10.62.94.246` (the External-IP) should return the well-known nginx 404 page:
```
dominik@muggler:~$ curl -s http://10.62.94.246
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
```
### Enable nginx-ingress tcp- and udp-services for apps other than http/s
Docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
`kubectl edit deployment -n ingress-nginx ingress-nginx-controller` and locate the `spec.template.spec.containers` section:
```
[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
>>> ADD
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --udp-services-configmap=ingress-nginx/udp-services
<<< ADD
        env:
        [...]
```
### Enable client-IP transparency and expose TCP-port 9000
Enable client-IP transparency (X-Original-Forwarded-For) and expose my-nginx app on nginx-ingress TCP-port 9000: `kubectl edit service -n ingress-nginx ingress-nginx-controller`
Find the `ports:`-section of the `ingress-nginx-controller` service and *ADD* the definition for port 9000:
```
[...]
spec:
  clusterIP: 10.43.237.255
>>> CHANGE externalTrafficPolicy from Cluster to Local if original client-IP is desirable
  externalTrafficPolicy: Local
<<< CHANGE
  ports:
  - name: http
    nodePort: 30312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30366
    port: 443
    protocol: TCP
    targetPort: https
>>> ADD
  - name: proxied-tcp-9000
    port: 9000
    protocol: TCP
    targetPort: 9000
<<< ADD
[...]
```
Verify that the nginx-ingress-controller is a LoadBalancer and is now also listening on port 9000 with `kubectl get services -n ingress-nginx`:
```
[...]
NAMESPACE       NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                     AGE
[...]
ingress-nginx   service/ingress-nginx-controller           LoadBalancer   10.43.237.255   10.62.94.246   80:30312/TCP,443:30366/TCP,9000:31460/TCP   71m
[...]
```
### Deploy my-nginx deployment and service
my-nginx-deployment.yml:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```
Apply with `kubectl apply -f my-nginx-deployment.yml`:
```
deployment.apps/my-nginx created
service/my-nginx created
```
Test: `kubectl get all | grep my-nginx`:
```
pod/my-nginx-65c68bbcdf-xkhqj 1/1 Running 4 2d7h
service/my-nginx   ClusterIP   10.43.118.13   <none>   80/TCP   2d7h
deployment.apps/my-nginx 1/1 1 1 2d7h
replicaset.apps/my-nginx-65c68bbcdf 1 1 1 2d7h
```
### Stick the nginx-ingress-controller and my-nginx app together
Finally, the nginx-ingress controller needs a port mapping pointing to the my-nginx app. This is done with the ConfigMap `nginx-ingress-tcp-services-config-map.yml`, which was referenced earlier in the nginx-ingress deployment definition (`--tcp-services-configmap=ingress-nginx/tcp-services`):
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": default/my-nginx:80
```
Apply with `kubectl apply -f nginx-ingress-tcp-services-config-map.yml`:
```
configmap/tcp-services created
```
Subsequently the ConfigMap can be edited with `kubectl -n ingress-nginx edit configmap tcp-services`.
**Changes to ConfigMaps do not take effect on running pods! Scaling the deployment to 0 and back up solves this problem: https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes**
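With a reasonably recent kubectl (>= 1.15), a rollout restart is a less disruptive alternative to scaling to 0 and back:

```shell
# Recreate the controller pods so they re-read the tcp-services ConfigMap
kubectl -n ingress-nginx rollout restart deployment ingress-nginx-controller
# Wait until the new pods are up
kubectl -n ingress-nginx rollout status deployment ingress-nginx-controller
```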
### Test exposed app on TCP-port 9000
```
dominik@muggler:~$ curl -s http://10.62.94.246:9000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
[...]
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
Check logs of ingress-nginx-controller POD:
```
root@k3s-master:~# kubectl get pods --all-namespaces |grep ingress-nginx
[...]
ingress-nginx ingress-nginx-controller-d88d95c-khbv4 1/1 Running 0 4m36s
[...]
```
```
root@k3s-master:~# kubectl logs ingress-nginx-controller-d88d95c-khbv4 -f -n ingress-nginx
[...]
[10.62.94.1] [23/Aug/2020:16:38:33 +0000] TCP 200 850 81 0.001
[...]
```
Check logs of my-nginx POD:
```
root@k3s-master:/k3s# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-nginx-65c68bbcdf-xkhqj 1/1 Running 0 90m
```
```
kubectl logs my-nginx-65c68bbcdf-xkhqj -f
[...]
10.42.0.18 - - [23/Aug/2020:16:38:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
[...]
```
# Cert-Manager (references ingress controller)
## Installation
Docs: https://hub.helm.sh/charts/jetstack/cert-manager
```
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
```
## Let's Encrypt issuer
Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer
```
ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way,
but they do not belong to a single namespace and can be referenced by Certificate resources from
multiple different namespaces.
```
lets-encrypt-cluster-issuers.yaml:
```
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-issuer
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-issuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
```
`kubectl apply -f lets-encrypt-cluster-issuers.yaml`
## Deploying a LE-certificate
All you need is an `Ingress` resource of class `nginx` which references a ClusterIssuer (`letsencrypt-prod-issuer`) resource:
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace:
  name: some-ingress-name
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod-issuer"
spec:
  tls:
  - hosts:
    - some-certificate.name.san
    secretName: target-certificate-secret-name
  rules:
  - host: some-certificate.name.san
    http:
      paths:
      - path: /
        backend:
          serviceName: some-target-service
          servicePort: some-target-service-port
```
## Troubleshooting
Docs: https://cert-manager.io/docs/faq/acme/
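When a certificate does not appear, the ACME flow can be inspected top-down; cert-manager creates one resource per step (a sketch of the usual drill-down):

```shell
# Certificate -> CertificateRequest -> Order -> Challenge
kubectl describe certificate --all-namespaces
kubectl describe certificaterequest --all-namespaces
kubectl describe order --all-namespaces
kubectl describe challenge --all-namespaces
```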
ClusterIssuers are cluster-scoped, so they are queried without a namespace:
```
kubectl get clusterissuer
kubectl describe clusterissuer
```