diff --git a/README.md b/README.md
index c83dcbd..f02e1d6 100644
--- a/README.md
+++ b/README.md
@@ -28,9 +28,9 @@
* [Rollback](#helm-rollback)
* [Examples](#examples)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
- * [Running StatefulSet with NFS storage](#running-statefulset-dns)
+ * [Running StatefulSet with NFS storage](#running-statefulset-nfs)
-# Install k3s
+# Install k3s
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
@@ -61,7 +61,7 @@ k3s.service - Lightweight Kubernetes
```
-# Upstream DNS-resolver
+# Upstream DNS-resolver
Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/
Default: 8.8.8.8 => does not resolve local domains!
@@ -76,7 +76,7 @@ ExecStart=/usr/local/bin/k3s \
4. Re-start k3s: `systemctl restart k3s.service`
5. Re-deploy coredns-pods: `kubectl -n kube-system delete pod name-of-coredns-pods`
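The steps above can be sketched as follows; the nameserver IP is a placeholder for your local resolver, and the unit-file lines around `ExecStart` may differ on your host:
```
# /etc/rancher/k3s/resolv.conf -- example, IP is a placeholder
nameserver 192.168.1.1
```
```
# /etc/systemd/system/k3s.service -- point the kubelet at the custom resolv.conf
ExecStart=/usr/local/bin/k3s \
    server \
    --resolv-conf /etc/rancher/k3s/resolv.conf
```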
-# Change NodePort range to 1 - 65535
+# Change NodePort range to 1 - 65535
1. Edit `/etc/systemd/system/k3s.service`:
```
[...]
@@ -86,8 +86,8 @@ ExecStart=/usr/local/bin/k3s \
2. Re-load systemd config: `systemctl daemon-reload`
3. Re-start k3s: `systemctl restart k3s.service`
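A sketch of the flag the steps above refer to; surrounding lines of your unit file may differ:
```
# /etc/systemd/system/k3s.service
ExecStart=/usr/local/bin/k3s \
    server \
    --kube-apiserver-arg service-node-port-range=1-65535
```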
-# Namespaces and resource limits
-## devel
+# Namespaces and resource limits
+## devel
namespace-devel-limitranges.yaml:
```
---
@@ -118,7 +118,7 @@ spec:
```
`kubectl apply -f namespace-devel-limitranges.yaml`
-## staging
+## staging
namespace-staging-limitranges.yaml:
```
---
@@ -149,7 +149,7 @@ spec:
```
`kubectl apply -f namespace-staging-limitranges.yaml`
-## prod
+## prod
namespace-prod-limitranges.yaml:
```
---
@@ -175,16 +175,16 @@ spec:
`kubectl apply -f namespace-prod-limitranges.yaml`
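To check that the limits from the manifests above are active (requires a running cluster; namespace names are the ones used in this section):
```
kubectl -n devel describe limitrange
kubectl -n staging describe limitrange
kubectl -n prod describe limitrange
```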
-# Persistent Volumes (StorageClass - dynamic provisioning)
-## Rancher Local
+# Persistent Volumes (StorageClass - dynamic provisioning)
+## Rancher Local
https://rancher.com/docs/k3s/latest/en/storage/
-## Longhorn (distributed in local cluster)
+## Longhorn (distributed in local cluster)
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/
-## NFS
+## NFS
If you want to use NFS-based storage:
```
helm3 repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
@@ -228,8 +228,8 @@ spec:
storage: 32Mi
```
-# Ingress controller
-## Disable Traefik-ingress
+# Ingress controller
+## Disable Traefik-ingress
Edit `/etc/systemd/system/k3s.service`:
```
[...]
@@ -239,8 +239,8 @@ ExecStart=/usr/local/bin/k3s \
```
Finally, run `systemctl daemon-reload` and `systemctl restart k3s`.
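The elided flag in the unit above is typically `--disable traefik` on recent k3s releases (older releases used `--no-deploy traefik`); a sketch:
```
# /etc/systemd/system/k3s.service
ExecStart=/usr/local/bin/k3s \
    server \
    --disable traefik
```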
-## Enable NGINX-ingress with OCSP stapling
-### Installation
+## Enable NGINX-ingress with OCSP stapling
+### Installation
https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
```
@@ -291,8 +291,8 @@ Error: UPGRADE FAILED: cannot patch "gitea-ingress-staging" with kind Ingress: I
```
A possible fix: `kubectl -n ingress-nginx delete ValidatingWebhookConfiguration my-release-ingress-nginx-admission`
-# Cert-Manager (references ingress controller)
-## Installation
+# Cert-Manager (references ingress controller)
+## Installation
Docs: https://hub.helm.sh/charts/jetstack/cert-manager
```
helm repo add jetstack https://charts.jetstack.io
@@ -302,7 +302,7 @@ kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
```
-## Let´s Encrypt issuer
+## Let's Encrypt issuer
Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer
```
ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way,
@@ -353,7 +353,7 @@ spec:
```
`kubectl apply -f lets-encrypt-cluster-issuers.yaml`
-## Deploying a LE-certificate
+## Deploying a LE-certificate
All you need is an `Ingress` resource of class `nginx` which references a ClusterIssuer (`letsencrypt-prod-issuer`) resource:
```
apiVersion: networking.k8s.io/v1beta1
@@ -380,7 +380,7 @@ spec:
servicePort: some-target-service-port
```
-## Troubleshooting
+## Troubleshooting
Docs: https://cert-manager.io/docs/faq/acme/
ClusterIssuer runs in default namespace:
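The ACME flow can be inspected end-to-end; a sketch of the usual drill-down (requires a running cluster, resource names are placeholders):
```
kubectl get certificate,certificaterequest,order,challenge --all-namespaces
kubectl describe challenge name-of-challenge
kubectl -n cert-manager logs deploy/cert-manager
```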
@@ -404,7 +404,7 @@ After successful setup perform a TLS-test: `https://www.ssllabs.com/ssltest/ind
-# HELM charts
+# HELM charts
Docs:
* https://helm.sh/docs/intro/using_helm/
@@ -413,7 +413,7 @@ Prerequisites:
* kubectl with ENV[KUBECONFIG] pointing to appropriate config file
* helm
-## Create a chart
+## Create a chart
`helm create helm-test`
```
@@ -434,7 +434,7 @@ helm-test/
└── values.yaml
```
-## Install local chart without packaging
+## Install local chart without packaging
`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --wait`
or just a *dry-run*:
@@ -450,14 +450,14 @@ scenarios where Deployment has replicas set to 1 and maxUnavailable is not set t
--wait will return as ready as it has satisfied the minimum Pod in ready condition.
```
-## List deployed helm charts
+## List deployed helm charts
```
~/kubernetes/helm$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
helm-test-dev default 4 2020-08-27 12:30:38.98457042 +0200 CEST deployed helm-test-0.1.0 1.16.0
```
-## Upgrade local chart without packaging
+## Upgrade local chart without packaging
```
~/kubernetes/helm$ helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s
Release "helm-test-dev" has been upgraded. Happy Helming!
@@ -474,7 +474,7 @@ NOTES:
```
`helm upgrade [...] --wait` is synchronous: it exits with 0 on success and >0 on failure. By default `helm upgrade` waits up to 5 minutes; the `--timeout` flag changes this. The exit code makes it usable for CI/CD deployments with Jenkins.
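A sketch of gating a CI stage on helm's exit code (release and chart names taken from the examples above, rollback target is a placeholder):
```
if helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s; then
    echo "deploy ok"
else
    echo "deploy failed, rolling back"
    helm rollback helm-test-dev --wait
fi
```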
-## Get status of deployed chart
+## Get status of deployed chart
```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
@@ -489,7 +489,7 @@ NOTES:
kubectl --namespace default port-forward $POD_NAME 8080:80
```
-## Get deployment history
+## Get deployment history
```
~/kubernetes/helm$ helm history helm-test-dev
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
@@ -505,7 +505,7 @@ REVISION UPDATED STATUS CHART APP VERSION DE
19 Thu Aug 27 14:36:49 2020 failed helm-test-0.1.1 cosmetics Upgrade "helm-test-dev" failed: timed out waiting for the condition
```
-## Rollback
+## Rollback
`helm rollback helm-test-dev 18 --wait`
```
~/kubernetes/helm$ helm history helm-test-dev
@@ -536,8 +536,8 @@ NOTES:
kubectl --namespace default port-forward $POD_NAME 8080:80
```
-# Examples
-## Running DaemonSets on `hostPort`
+# Examples
+## Running DaemonSets on `hostPort`
* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4
@@ -597,7 +597,7 @@ spec:
maxUnavailable: 1
type: RollingUpdate
```
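To confirm the DaemonSet above runs one pod per node and that the `hostPort` is bound (label selector is a placeholder):
```
kubectl get daemonset -o wide
kubectl get pods -l app=name-of-daemonset -o wide
```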
-## Running StatefulSet with NFS storage
+## Running StatefulSet with NFS storage
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
```
apiVersion: v1