k3d + Services
* [kubectl - BASH autocompletion](#kubectl-bash-autocompletion)
* [Install k3s](#install-k3s)
  * [On premises](#install-k3s-on-premises)
    * [Configure upstream DNS-resolver](#upstream-dns-resolver)
    * [Change NodePort range](#nodeport-range)
  * [On Docker with k3d](#install-k3s-on-docker-k3d)
* [Namespaces and resource limits](#namespaces-limits)
* [Persistent volumes (StorageClass - dynamic provisioning)](#pv)
  * [Rancher Local](#pv-local)
* [Kubernetes in action](#kubernetes-in-action)
  * [Running DaemonSets on `hostPort`](#running-daemonsets)
  * [Running StatefulSet with NFS storage](#running-statefulset-nfs)
  * [Services](#services)
    * [Client-IP transparency and loadbalancing](#services-client-ip-transparency)
    * [Session affinity/persistence](#services-session-persistence)
* [Keep your cluster balanced](#keep-cluster-balanced)
* [Node maintenance](#node-maintenance)
  * [What happens if a node goes down?](#what-happens-node-down)

```
echo "source <(kubectl completion bash)" >> ~/.bashrc
```

# Install k3s <a name="user-content-install-k3s"></a>
## On premises <a name="user-content-install-k3s-on-premises"></a>
https://k3s.io/:
```
curl -sfL https://get.k3s.io | sh -
```
Check that the k3s service is up and running (`systemctl status k3s.service`):
```
k3s.service - Lightweight Kubernetes
   CGroup: /system.slice/k3s.service
```

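k3s also ships a bundled `kubectl`; a quick check that the node has registered (assuming the default kubeconfig written to `/etc/rancher/k3s/k3s.yaml`):
```
# query the API server through the kubectl bundled with k3s
sudo k3s kubectl get nodes
```
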
### Upstream DNS-resolver <a name="user-content-upstream-dns-resolver"></a>
Docs: https://rancher.com/docs/rancher/v2.x/en/troubleshooting/dns/

Default: 8.8.8.8 => does not resolve local domains!
4. Re-start k3s: `systemctl restart k3s.service`
5. Re-deploy coredns-pods: `kubectl -n kube-system delete pod name-of-coredns-pods`

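Once coredns has been re-deployed you can verify that local names resolve from inside the cluster, e.g. with a throwaway busybox pod (`example.local` stands in for one of your local domains):
```
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup example.local
```
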
### Change NodePort range to 1 - 65535 <a name="user-content-nodeport-range"></a>
1. vi /etc/systemd/system/k3s.service:
```
[...]
ExecStart=/usr/local/bin/k3s \
[...]
```
2. Re-load systemd config: `systemctl daemon-reload`
3. Re-start k3s: `systemctl restart k3s.service`

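For reference, the relevant part of the generated unit then looks roughly like this (a sketch; your `ExecStart` may carry additional flags):
```
[Service]
[...]
ExecStart=/usr/local/bin/k3s \
    server \
    --kube-apiserver-arg service-node-port-range=1-65535
```
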
## On Docker with K3d <a name="user-content-install-k3s-on-docker-k3d"></a>
K3d is a lightweight wrapper that deploys a k3s cluster (masters and workers) directly on Docker, without the need for a virtual machine per node (master/worker).

* Prerequisites: a local Docker installation **without user-namespaces enabled**.
* **Warning**: k3d deploys privileged containers!

https://k3d.io/:
```
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
```
Create a k3s cluster without `traefik` and without `metrics-server`:
```
k3d cluster create cluster1 \
  --agents 2 \
  --k3s-server-arg '--disable=traefik' \
  --k3s-server-arg '--disable=metrics-server' \
  --k3s-server-arg '--kube-apiserver-arg=service-node-port-range=1-65535'
```
If you encounter `helm` throwing errors like this one:
```
Error: Kubernetes cluster unreachable
```
... just export a kubeconfig for the k3d cluster and point `KUBECONFIG` at it:
```
$ kubectl config view --raw > ~/kubeconfig-k3d.yaml
$ export KUBECONFIG=~/kubeconfig-k3d.yaml
```

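With `KUBECONFIG` pointing at the exported file the cluster should be reachable again; a quick sanity check:
```
k3d cluster list
kubectl get nodes -o wide
```
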
# Namespaces and resource limits <a name="user-content-namespaces-limits"></a>
```
kubectl apply -f https://gitea.zwackl.de/dominik/k3s/raw/branch/master/namespaces_limits.yaml
```

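The manifest behind that URL is not reproduced here; as an illustration, a namespace with default container limits typically looks like this (hypothetical names and values):
```
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                # hypothetical namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      default:                # limits applied when a container specifies none
        cpu: 500m
        memory: 256Mi
      defaultRequest:         # requests applied when a container specifies none
        cpu: 100m
        memory: 128Mi
```
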
# Persistent Volumes (StorageClass - dynamic provisioning) <a name="user-content-pv"></a>
Read more about [AccessModes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)

## Rancher Local <a name="user-content-pv-local"></a>
https://rancher.com/docs/k3s/latest/en/storage/

Only supports *AccessMode*: ReadWriteOnce (RWO)

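A claim against the built-in local provisioner might look like this (a minimal sketch; `local-path` is the StorageClass name k3s ships by default):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-local-pvc        # hypothetical claim name
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce           # the only mode the local provisioner supports
  resources:
    requests:
      storage: 1Gi
```
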
## Longhorn (distributed in local cluster) <a name="user-content-pv-longhorn"></a>
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Install: https://rancher.com/docs/k3s/latest/en/storage/

## NFS <a name="user-content-pv-nfs"></a>
If you want to use NFS-based storage...
For testing purposes and for simplicity you may use the following [NFS container image](https://hub.docker.com/r/itsthenetwork/nfs-server-alpine):
```
mkdir -p /data/docker/nfs-server/data/
docker run -d --name nfs-server \
  --net=host \
  --privileged \
  -v /data/docker/nfs-server/data/:/nfsshare \
  -e SHARED_DIRECTORY=/nfsshare \
  itsthenetwork/nfs-server-alpine:latest
```

**All Nodes need to have the NFS-client package (Ubuntu: `nfs-common`) installed**
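On Ubuntu/Debian nodes that usually boils down to:
```
sudo apt-get update && sudo apt-get install -y nfs-common
```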
```
helm repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
helm install my-nfs-client-provisioner --set nfs.server=<nfs-server/ip-addr> --set nfs.path=</data/nfs> ckotzbauer/nfs-client-provisioner
```
Check if NFS *StorageClass* is available:
```
kubectl get storageclass
```

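A claim using the NFS StorageClass can then request `ReadWriteMany` volumes (a sketch; `nfs-client` is assumed to be the StorageClass name created by the provisioner chart, adjust if yours differs):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-nfs-pvc           # hypothetical claim name
spec:
  storageClassName: nfs-client # assumed default name from the provisioner chart
  accessModes:
    - ReadWriteMany            # NFS allows shared read-write access
  resources:
    requests:
      storage: 1Gi
```
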
## Services <a name="user-content-services"></a>
### Client-IP transparency and loadbalancing <a name="user-content-services-client-ip-transparency"></a>
```
apiVersion: v1
kind: Service
[...]
spec:
  type: NodePort
  externalTrafficPolicy: <<Local|Cluster>>
[...]
```
`externalTrafficPolicy: Cluster` (default) spreads the incoming traffic evenly over all pods. To achieve this the client IP address has to be source-NATed, so it is not *visible* to the pods.

`externalTrafficPolicy: Local` preserves the original client IP address, which therefore is visible to the pods. In either case (`DaemonSet` or `StatefulSet`) traffic stays on the node that received it; with a `StatefulSet`, if more than one pod of the `ReplicaSet` is scheduled on the same node, the workload gets balanced over all pods on that node.

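For illustration, a complete NodePort Service that preserves the client IP might look like this (hypothetical app name and ports):
```
apiVersion: v1
kind: Service
metadata:
  name: demo-web                  # hypothetical service name
spec:
  type: NodePort
  externalTrafficPolicy: Local    # keep the original client IP visible to the pods
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```
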
### Session affinity/persistence <a name="user-content-services-session-persistence"></a>
```
apiVersion: v1
kind: Service
[...]
spec:
  type: NodePort
  sessionAffinity: <<ClientIP|None>>
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10
[...]
```
Session persistence is only possible based on the client IP (`sessionAffinity: ClientIP`); plain Services do not offer cookie-based affinity.

## What happens if a node goes down? <a name="user-content-what-happens-node-down"></a>
If a node goes down, Kubernetes marks this node as *NotReady*, but nothing else happens:
```
kubectl get nodes
```

## Dealing with disruptions <a name="user-content-disruptions"></a>
* https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
* https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/

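To keep a minimum number of replicas available while nodes are drained, a PodDisruptionBudget helps; a minimal sketch (hypothetical app label, `policy/v1` on current clusters):
```
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-web-pdb
spec:
  minAvailable: 1              # never allow voluntary disruptions below one running pod
  selector:
    matchLabels:
      app: demo-web
```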