diff --git a/README.md b/README.md
index c1a6305..0bc99e4 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,6 @@
* [Namespaces and resource limits](#namespaces-limits)
* [Persistent volumes (StorageClass - dynamic provisioning)](#pv)
* [Rancher Local (k3s default)](#pv-local)
- * [NFS](#pv-nfs)
* [Rancher Longhorn (distributed in local cluster) - MY FAVOURITE :-)](#pv-longhorn)
* [Custom StorageClass](#pv-longhorn-custom-storageclass)
* [Volume backups with S3 (compatible) storage](#pv-longhorn-s3-backup)
@@ -187,60 +186,6 @@ Read more about [AccessModes](https://kubernetes.io/docs/concepts/storage/persis
https://rancher.com/docs/k3s/latest/en/storage/
Only supports *AccessMode*: ReadWriteOnce (RWO)
-## NFS
-For testing purposes, and for the sake of simplicity, you can use the following [NFS container image](https://hub.docker.com/r/itsthenetwork/nfs-server-alpine):
-```
-mkdir -p /data/docker/nfs-server/data/
-docker run -d --name nfs-server \
- --net=host \
- --privileged \
- -v /data/docker/nfs-server/data/:/nfsshare \
- -e SHARED_DIRECTORY=/nfsshare \
- itsthenetwork/nfs-server-alpine:latest
-```
-
-**All Nodes need to have the NFS-client package (Ubuntu: `nfs-common`) installed**
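-
-Before installing the provisioner you can run a quick sanity check from one of the nodes (`showmount` ships with `nfs-common`; `<nfs-server-ip>` is a placeholder for the address of the host running the container):
-```
-showmount -e <nfs-server-ip>
-```
-With the export visible, deploy the dynamic provisioner via Helm: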
-```
-helm repo add ckotzbauer https://ckotzbauer.github.io/helm-charts
-helm install my-nfs-client-provisioner --set nfs.server=<nfs-server-ip> --set nfs.path=<exported-path> ckotzbauer/nfs-client-provisioner
-```
-Check if NFS *StorageClass* is available:
-```
-$ kubectl get sc
-NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
-local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 101d
-nfs-client cluster.local/my-nfs-client-provisioner Delete Immediate true 172m
-```
-Now you can use `nfs-client` as the StorageClass, for example in a StatefulSet's `volumeClaimTemplates`:
-```
-apiVersion: apps/v1
-kind: StatefulSet
-[...]
-  volumeClaimTemplates:
-  - metadata:
-      name: nfs-backend
-    spec:
-      accessModes: [ "ReadWriteMany" ]
-      storageClassName: "nfs-client"
-      resources:
-        requests:
-          storage: 32Mi
-```
-or in a standalone PersistentVolumeClaim:
-```
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: nfs-pvc-1
-  namespace: <your-namespace>  # placeholder - set your target namespace
-spec:
-  storageClassName: "nfs-client"
-  accessModes:
-  - ReadWriteMany
-  resources:
-    requests:
-      storage: 32Mi
-```
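-A minimal sketch of a Pod consuming the claim above (the pod name, image and mount path are just illustrative examples):
-```
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nfs-test
-spec:
-  containers:
-  - name: shell
-    image: busybox
-    command: ["sleep", "3600"]
-    volumeMounts:
-    - name: data
-      mountPath: /data
-  volumes:
-  - name: data
-    persistentVolumeClaim:
-      claimName: nfs-pvc-1
-```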
## Rancher Longhorn (distributed in local cluster) - MY FAVOURITE :-)
* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian/Ubuntu: `apt install open-iscsi`
@@ -779,67 +724,6 @@ spec:
imagePullPolicy: IfNotPresent
command: ["nc", "-lk", "-p", "23456", "-v", "-e", "/bin/true"]
```
-## Running a StatefulSet with NFS storage
-* [Docs: StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
-* [NFS dynamic volume provisioning deployed](#pv-nfs)
-
-StatefulSets are designed for stateful applications (like databases). To avoid split-brain scenarios, StatefulSets behave as statically as possible: if a node goes down, the StatefulSet controller will not reschedule its pods to another (suitable) node until the old pods are confirmed to be terminated. If you want to force a rescheduling:
-`kubectl delete pod web-1 --grace-period=0 --force`
-
-More details on this can be found [here](https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/).
-
-If you want DaemonSet-like node affinity with StatefulSets, read [this](https://medium.com/@johnjjung/building-a-kubernetes-daemonstatefulset-30ad0592d8cb).
-
-The following example runs a small nginx StatefulSet backed by the `nfs-client` StorageClass:
-```
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: nginx
-  labels:
-    app: nginx
-spec:
-  ports:
-  - port: 80
-    name: web
-  clusterIP: None
-  selector:
-    app: nginx
----
-apiVersion: apps/v1
-kind: StatefulSet
-metadata:
-  name: web
-spec:
-  selector:
-    matchLabels:
-      app: nginx
-  serviceName: "nginx"
-  replicas: 2
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      terminationGracePeriodSeconds: 10
-      containers:
-      - name: nginx
-        image: nginx:alpine
-        ports:
-        - containerPort: 80
-          name: web
-        volumeMounts:
-        - name: nfs-backend
-          mountPath: /nfs-backend
-  volumeClaimTemplates:
-  - metadata:
-      name: nfs-backend
-    spec:
-      accessModes: [ "ReadWriteMany" ]
-      storageClassName: "nfs-client"
-      resources:
-        requests:
-          storage: 32Mi
-```
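-After applying the manifests you can check that each replica got its own claim from the `nfs-client` StorageClass (`volumeClaimTemplates` create one PVC per replica, named `<template>-<statefulset>-<ordinal>`):
-```
-kubectl get pods -l app=nginx
-kubectl get pvc
-# expect claims named nfs-backend-web-0 and nfs-backend-web-1
-```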
## Services
### Client-IP transparency and loadbalancing