Update "README.md"

Dominik Chilla 2021-04-07 18:37:17 +00:00
parent 5c4e817e47
commit 63f503278b


@@ -28,9 +28,9 @@
* [Rollback](#helm-rollback)
* [Examples](#examples)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [Running StatefulSet with NFS storage](#running-statefulset-dns)
* [Running StatefulSet with NFS storage](#running-statefulset-nfs)
# Install k3s <a name="#install-k3s"></a>
# Install k3s <a name="user-content-install-k3s"></a>
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
@@ -61,7 +61,7 @@ k3s.service - Lightweight Kubernetes
```
# Upstream DNS-resolver <a name="upstream-dns-resolver"></a>
# Upstream DNS-resolver <a name="user-content-upstream-dns-resolver"></a>
Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/
Default: 8.8.8.8 => does not resolve local domains!
@@ -76,7 +76,7 @@ ExecStart=/usr/local/bin/k3s \
4. Re-start k3s: `systemctl restart k3s.service`
5. Re-deploy coredns-pods: `kubectl -n kube-system delete pod name-of-coredns-pods`
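The resolver flag added in steps 1–3 is elided by the hunk above. A minimal sketch of what the `ExecStart` typically ends up with, assuming a local DNS server at 192.168.1.1 (placeholder) and the `--resolv-conf` mechanism:
```
[Service]
ExecStart=/usr/local/bin/k3s \
    server \
    --resolv-conf /etc/rancher/k3s/resolv.conf
```
where `/etc/rancher/k3s/resolv.conf` contains a single line such as `nameserver 192.168.1.1`.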
# Change NodePort range to 1 - 65535 <a name="nodeport-range"></a>
# Change NodePort range to 1 - 65535 <a name="user-content-nodeport-range"></a>
1. vi /etc/systemd/system/k3s.service:
```
[...]
@@ -86,8 +86,8 @@ ExecStart=/usr/local/bin/k3s \
2. Re-load systemd config: `systemctl daemon-reload`
3. Re-start k3s: `systemctl restart k3s.service`
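The flag itself is not visible in the hunk above; a sketch of the line that usually does this, passing the setting through to the apiserver via `--kube-apiserver-arg`:
```
ExecStart=/usr/local/bin/k3s \
    server \
    --kube-apiserver-arg "service-node-port-range=1-65535"
```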
# Namespaces and resource limits <a name="namespaces"></a>
## devel <a name="namespace-devel"></a>
# Namespaces and resource limits <a name="user-content-namespaces"></a>
## devel <a name="user-content-namespace-devel"></a>
namespace-devel-limitranges.yaml:
```
---
@@ -118,7 +118,7 @@ spec:
```
`kubectl apply -f namespace-devel-limitranges.yaml`
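Most of the manifest body is elided in the hunk above. A minimal sketch of what a `namespace-devel-limitranges.yaml` of this shape typically contains (names and values are placeholders, not the repo's actual limits):
```
---
apiVersion: v1
kind: Namespace
metadata:
  name: devel
---
apiVersion: v1
kind: LimitRange
metadata:
  name: devel-limitranges
  namespace: devel
spec:
  limits:
  - type: Container
    default:           # default limits for containers that set none
      cpu: 500m
      memory: 256Mi
    defaultRequest:    # default requests
      cpu: 100m
      memory: 128Mi
```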
## staging <a name="namespace-staging"></a>
## staging <a name="user-content-namespace-staging"></a>
namespace-staging-limitranges.yaml:
```
---
@@ -149,7 +149,7 @@ spec:
```
`kubectl apply -f namespace-staging-limitranges.yaml`
## prod <a name="namespace-prod"></a>
## prod <a name="user-content-namespace-prod"></a>
namespace-prod-limitranges.yaml:
```
---
@@ -175,16 +175,16 @@ spec:
`kubectl apply -f namespace-prod-limitranges.yaml`
# Persistent Volumes (StorageClass - dynamic provisioning) <a name="pv"></a>
## Rancher Local <a name="pv-local"></a>
# Persistent Volumes (StorageClass - dynamic provisioning) <a name="user-content-pv"></a>
## Rancher Local <a name="user-content-pv-local"></a>
https://rancher.com/docs/k3s/latest/en/storage/
## Longhorn (distributed in local cluster) <a name="pv-longhorn"></a>
## Longhorn (distributed in local cluster) <a name="user-content-pv-longhorn"></a>
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/
## NFS <a name="pv-nfs"></a>
## NFS <a name="user-content-pv-nfs"></a>
If you want to use NFS-based storage...
```
helm3 repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
@@ -228,8 +228,8 @@ spec:
storage: 32Mi
```
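The PVC that ends in `storage: 32Mi` above is mostly cut off by the hunk. A sketch of its likely shape, assuming the provisioner creates a StorageClass named `nfs-client` (name not confirmed by this diff):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 32Mi
```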
# Ingress controller <a name="ingress-controller"></a>
## Disable Traefik-ingress <a name="disable-traefik-ingress"></a>
# Ingress controller <a name="user-content-ingress-controller"></a>
## Disable Traefik-ingress <a name="user-content-disable-traefik-ingress"></a>
edit /etc/systemd/system/k3s.service:
```
[...]
@@ -239,8 +239,8 @@ ExecStart=/usr/local/bin/k3s \
```
Finally `systemctl daemon-reload` and `systemctl restart k3s`
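The flag itself is elided in the hunk above; a sketch of the relevant `ExecStart` line (`--disable traefik` on current k3s releases, `--no-deploy traefik` on older ones):
```
ExecStart=/usr/local/bin/k3s \
    server \
    --disable traefik
```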
## Enable NGINX-ingress with OCSP stapling <a name="enable-nginx-ingress"></a>
### Installation <a name="install-nginx-ingress"></a>
## Enable NGINX-ingress with OCSP stapling <a name="user-content-enable-nginx-ingress"></a>
### Installation <a name="user-content-install-nginx-ingress"></a>
https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
```
@@ -291,8 +291,8 @@ Error: UPGRADE FAILED: cannot patch "gitea-ingress-staging" with kind Ingress: I
```
A possible fix: `kubectl -n ingress-nginx delete ValidatingWebhookConfiguration my-release-ingress-nginx-admission`
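The install block above is truncated by the hunk break. A sketch following the linked docs, enabling OCSP stapling through the controller ConfigMap (the `enable-ocsp` key is an assumption; verify it against your chart version):
```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install my-release ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.config.enable-ocsp=true
```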
# Cert-Manager (references ingress controller) <a name="cert-manager"></a>
## Installation <a name="cert-manager-install"></a>
# Cert-Manager (references ingress controller) <a name="user-content-cert-manager"></a>
## Installation <a name="user-content-cert-manager-install"></a>
Docs: https://hub.helm.sh/charts/jetstack/cert-manager
```
helm repo add jetstack https://charts.jetstack.io
@@ -302,7 +302,7 @@ kubectl create namespace cert-manager
helm install cert-manager --namespace cert-manager jetstack/cert-manager
kubectl -n cert-manager get all
```
## Let's Encrypt issuer <a name="cert-manager-le-issuer"></a>
## Let's Encrypt issuer <a name="user-content-cert-manager-le-issuer"></a>
Docs: https://cert-manager.io/docs/tutorials/acme/ingress/#step-6-configure-let-s-encrypt-issuer
```
ClusterIssuers are a resource type similar to Issuers. They are specified in exactly the same way,
@@ -353,7 +353,7 @@ spec:
```
`kubectl apply -f lets-encrypt-cluster-issuers.yaml`
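The issuer manifests are elided in the hunk above. A sketch of the production `ClusterIssuer` that a `lets-encrypt-cluster-issuers.yaml` of this kind would contain, using the `letsencrypt-prod-issuer` name referenced below (email and secret name are placeholders):
```
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # placeholder secret name
    solvers:
    - http01:
        ingress:
          class: nginx
```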
## Deploying a LE-certificate <a name="cert-manager-ingress"></a>
## Deploying a LE-certificate <a name="user-content-cert-manager-ingress"></a>
All you need is an `Ingress` resource of class `nginx` which references a ClusterIssuer (`letsencrypt-prod-issuer`) resource:
```
apiVersion: networking.k8s.io/v1beta1
@@ -380,7 +380,7 @@ spec:
servicePort: some-target-service-port
```
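Most of the resource is cut off by the hunk above. A sketch of the full shape (host, service name and port are placeholders); the `cert-manager.io/cluster-issuer` annotation is what ties the Ingress to the ClusterIssuer:
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod-issuer
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-com-tls   # cert-manager stores the issued cert here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: some-target-service
          servicePort: 80
```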
## Troubleshooting <a name="cert-manager-troubleshooting"></a>
## Troubleshooting <a name="user-content-cert-manager-troubleshooting"></a>
Docs: https://cert-manager.io/docs/faq/acme/
ClusterIssuer runs in default namespace:
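The commands that follow this line are cut off by the hunk break. A sketch of the usual drill-down through the cert-manager resources (resource names are placeholders):
```
kubectl get clusterissuer
kubectl describe clusterissuer letsencrypt-prod-issuer
kubectl -n default get certificate,certificaterequest,order,challenge
kubectl -n default describe challenge name-of-challenge
```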
@@ -404,7 +404,7 @@ After successful setup perform a TLS-test: `https://www.ssllabs.com/ssltest/ind
# HELM charts <a name="helm"></a>
# HELM charts <a name="user-content-helm"></a>
Docs:
* https://helm.sh/docs/intro/using_helm/
@@ -413,7 +413,7 @@ Prerequisites:
* kubectl with ENV[KUBECONFIG] pointing to appropriate config file
* helm
## Create a chart <a name="helm-create"></a>
## Create a chart <a name="user-content-helm-create"></a>
`helm create helm-test`
```
@@ -434,7 +434,7 @@ helm-test/
└── values.yaml
```
## Install local chart without packaging <a name="helm-install-without-packaging"></a>
## Install local chart without packaging <a name="user-content-helm-install-without-packaging"></a>
`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --wait`
or just a *dry-run*:
@@ -450,14 +450,14 @@ scenarios where Deployment has replicas set to 1 and maxUnavailable is not set t
--wait will return as ready as it has satisfied the minimum Pod in ready condition.
```
## List deployed helm charts <a name="helm-list"></a>
## List deployed helm charts <a name="user-content-helm-list"></a>
```
~/kubernetes/helm$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
helm-test-dev default 4 2020-08-27 12:30:38.98457042 +0200 CEST deployed helm-test-0.1.0 1.16.0
```
## Upgrade local chart without packaging <a name="helm-upgrade"></a>
## Upgrade local chart without packaging <a name="user-content-helm-upgrade"></a>
```
~/kubernetes/helm$ helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s
Release "helm-test-dev" has been upgraded. Happy Helming!
@@ -474,7 +474,7 @@ NOTES:
```
`helm upgrade [...] --wait` is synchronous and exits with 0 on success, otherwise with >0 on failure. By default `helm upgrade` waits for 5 minutes; the `--timeout` flag changes that. This can be used for CI/CD deployments with Jenkins.
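A sketch of how that exit-code behaviour can be wired into a Jenkins `sh` step (the tag variable is only an example):
```
if ! helm upgrade helm-test-dev helm-test/ --set image.tag="${IMAGE_TAG}" --wait --timeout 60s; then
  echo "helm upgrade failed or timed out - failing the build"
  exit 1
fi
```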
## Get status of deployed chart <a name="helm-status"></a>
## Get status of deployed chart <a name="user-content-helm-status"></a>
```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
@@ -489,7 +489,7 @@ NOTES:
kubectl --namespace default port-forward $POD_NAME 8080:80
```
## Get deployment history <a name="helm-history"></a>
## Get deployment history <a name="user-content-helm-history"></a>
```
~/kubernetes/helm$ helm history helm-test-dev
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
@@ -505,7 +505,7 @@ REVISION UPDATED STATUS CHART APP VERSION DE
19 Thu Aug 27 14:36:49 2020 failed helm-test-0.1.1 cosmetics Upgrade "helm-test-dev" failed: timed out waiting for the condition
```
## Rollback <a name="helm-rollback"></a>
## Rollback <a name="user-content-helm-rollback"></a>
`helm rollback helm-test-dev 18 --wait`
```
~/kubernetes/helm$ helm history helm-test-dev
@@ -536,8 +536,8 @@ NOTES:
kubectl --namespace default port-forward $POD_NAME 8080:80
```
# Examples <a name="examples"></a>
## Running DaemonSets on `hostPort` <a name="running-daemonsets"></a>
# Examples <a name="user-content-examples"></a>
## Running DaemonSets on `hostPort` <a name="user-content-running-daemonsets"></a>
* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4
@@ -597,7 +597,7 @@ spec:
maxUnavailable: 1
type: RollingUpdate
```
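The DaemonSet itself is largely elided in the hunk above; a condensed sketch of the pattern (name, image and port are placeholders, not the repo's manifest):
```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-hostport
spec:
  selector:
    matchLabels:
      app: nginx-hostport
  template:
    metadata:
      labels:
        app: nginx-hostport
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
          hostPort: 8080   # published directly on every node's port 8080
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
```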
## Running StatefulSet with NFS storage <a name="running-statefulset-nfs"></a>
## Running StatefulSet with NFS storage <a name="user-content-running-statefulset-nfs"></a>
* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
```
apiVersion: v1