# Snippets for k3s

* [Install k3s](#install-k3s)
* [Configure upstream DNS-resolver](#upstream-dns-resolver)
* [Namespaces and resource limits](#namespaces)
  * [devel](#namespace-devel)
  * [staging](#namespace-staging)
  * [prod](#namespace-prod)
* [Persistent volumes](#pv)
  * [Local provider](#pv-local)
  * [Longhorn - distributed/lightweight provider](#pv-longhorn)
* [Disable Traefik-ingress](#disable-traefik-ingress)
* [Enable NGINX-ingress](#enable-nginx-ingress)
  * [Installation](#install-nginx-ingress)
  * [Change service type from NodePort to LoadBalancer](#nginx-ingress-loadbalancer)
  * [Enable nginx-ingress tcp- and udp-services for apps other than http/s](#nginx-ingress-tcp-udp-enabled)
  * [Enable client-IP transparency and expose TCP-port 9000](#enable-client-ip-transp-expose-tcp-9000)
  * [Deploy my-nginx deployment and service](#deploy-my-nginx-service)
  * [Stick the nginx-ingress controller and my-nginx app together](#stick-nginx-ingress-and-tcp-service)
  * [Test exposed app on TCP-port 9000](#test-nginx-ingress-and-tcp-service)
* [Running DaemonSets on `hostPort`](#running-daemonsets)
* [HELM charts](#helm)
  * [Create a chart](#helm-create)
  * [Install local chart without packaging](#helm-install-without-packaging)
  * [List deployed helm charts](#helm-list)
  * [Upgrade local chart without packaging](#helm-upgrade)
  * [Get status of deployed chart](#helm-status)
  * [Get deployment history](#helm-history)
  * [Rollback](#helm-rollback)

# Install k3s <a name="install-k3s"></a>

https://k3s.io/:

```
curl -sfL https://get.k3s.io | sh -
```

# Configure upstream DNS-resolver <a name="upstream-dns-resolver"></a>

Default: 8.8.8.8

1. Point the local /etc/resolv.conf at the IP of the DNS-resolver (127.0.0.1 **does not work!**)
2. vi /etc/systemd/system/k3s.service:

   ```
   [...]
   ExecStart=/usr/local/bin/k3s \
       server --resolv-conf /etc/resolv.conf \
   ```

3. Reload the systemd config: `systemctl daemon-reload`
4. Restart k3s: `systemctl restart k3s.service`
5. Redeploy the coredns pods: `kubectl -n kube-system delete pod <name-of-coredns-pod>`

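k3s hands this file to CoreDNS for upstream resolution, so it only needs one reachable, non-loopback nameserver entry. A minimal sketch (the address is a placeholder, not from this setup — substitute your LAN resolver's IP):

```
# /etc/resolv.conf -- placeholder address, use your own DNS server's IP
nameserver 192.168.1.53
```
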
# Namespaces and resource limits <a name="namespaces"></a>

## devel <a name="namespace-devel"></a>

namespace-devel-limitranges.yaml:

```
---
apiVersion: v1
kind: Namespace
metadata:
  name: devel
  labels:
    name: devel
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-devel
  namespace: devel
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```

`kubectl apply -f namespace-devel-limitranges.yaml`

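The effect of such a LimitRange: containers that declare no resources get the defaults injected on admission, and requests outside min/max are rejected. A hypothetical pod (name and image are illustrative) relying on the devel defaults:

```
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo        # hypothetical name
  namespace: devel
spec:
  containers:
  - name: limits-demo
    image: nginx:alpine
    # no resources section: the LimitRange injects
    # requests cpu=10m/memory=4Mi and limits cpu=500m/memory=1Gi
```

The injected values show up under `kubectl -n devel describe pod limits-demo` after applying.
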
## staging <a name="namespace-staging"></a>

namespace-staging-limitranges.yaml:

```
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    name: staging
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-staging
  namespace: staging
spec:
  limits:
  - default:
      cpu: 500m
      memory: 1Gi
    defaultRequest:
      cpu: 10m
      memory: 4Mi
    max:
      cpu: 500m
      memory: 1Gi
    min:
      cpu: 10m
      memory: 4Mi
    type: Container
```

`kubectl apply -f namespace-staging-limitranges.yaml`

## prod <a name="namespace-prod"></a>

namespace-prod-limitranges.yaml:

```
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    name: prod
---
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-prod
  namespace: prod
spec:
  limits:
  - defaultRequest:
      cpu: 50m
      memory: 4Mi
    min:
      cpu: 50m
      memory: 4Mi
    type: Container
```

`kubectl apply -f namespace-prod-limitranges.yaml`

# Persistent Volumes <a name="pv"></a>

## Local provider (local - out-of-the-box) <a name="pv-local"></a>

https://rancher.com/docs/k3s/latest/en/storage/

## Longhorn provider (lightweight/distributed) <a name="pv-longhorn"></a>

* Requirements: https://longhorn.io/docs/0.8.0/install/requirements/
* Debian: `apt install open-iscsi`
* Install: https://rancher.com/docs/k3s/latest/en/storage/

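Once installed, Longhorn registers a StorageClass named `longhorn`; claims against it get replicated, distributed volumes. A sketch of such a claim (claim name and size are illustrative):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-demo-pvc   # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```
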
# Disable Traefik-ingress <a name="disable-traefik-ingress"></a>

Edit /etc/systemd/system/k3s.service:

```
[...]
ExecStart=/usr/local/bin/k3s \
    server --disable traefik --resolv-conf /etc/resolv.conf \
[...]
```

Finally `systemctl daemon-reload` and `systemctl restart k3s`

# Enable NGINX-ingress <a name="enable-nginx-ingress"></a>

## Installation <a name="install-nginx-ingress"></a>

https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal

## Change service type from NodePort to LoadBalancer <a name="nginx-ingress-loadbalancer"></a>

`kubectl edit service -n ingress-nginx ingress-nginx-controller` and change `type: NodePort` to `type: LoadBalancer`

Ports 80 and 443 should now be listening on an *External-IP* (`kubectl get all --all-namespaces`):

```
[...]
NAMESPACE       NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
[...]
ingress-nginx   service/ingress-nginx-controller-admission   ClusterIP      10.43.174.128   <none>         443/TCP                      35m
ingress-nginx   service/ingress-nginx-controller             LoadBalancer   10.43.237.255   10.62.94.246   80:30312/TCP,443:30366/TCP   35m
[...]
```

Test: `curl -s http://<External-IP>` should return the well-known nginx 404 page:

```
dominik@muggler:~$ curl -s http://10.62.94.246
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.1</center>
</body>
</html>
```

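With the controller reachable, plain HTTP/S apps are published through ordinary Ingress resources; the ConfigMap route below is only needed for non-HTTP protocols. A hypothetical Ingress for the my-nginx service deployed later in this document (hostname and resource name are placeholders; clusters older than v1.19 need `networking.k8s.io/v1beta1` instead):

```
apiVersion: networking.k8s.io/v1      # v1beta1 on pre-1.19 clusters
kind: Ingress
metadata:
  name: my-nginx-ingress              # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: my-nginx.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
```
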
## Enable nginx-ingress tcp- and udp-services for apps other than http/s <a name="nginx-ingress-tcp-udp-enabled"></a>

Docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/

`kubectl edit deployment -n ingress-nginx ingress-nginx-controller` and locate the `spec:`/`template`/`spec`/`containers` section:

```
[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=ingress-nginx/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
>>> ADD
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --udp-services-configmap=ingress-nginx/udp-services
<<< ADD
        env:
        [...]
```

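Only the tcp-services ConfigMap is actually populated further down; a udp-services map uses the same `<namespace>/<service>:<port>` value format. A sketch exposing a hypothetical DNS service on UDP port 5353 (`my-dns` does not exist in this setup):

```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "5353": default/my-dns:53    # hypothetical service
```
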
## Enable client-IP transparency and expose TCP-port 9000 <a name="enable-client-ip-transp-expose-tcp-9000"></a>

Enable client-IP transparency (X-Original-Forwarded-For) and expose the my-nginx app on nginx-ingress TCP-port 9000: `kubectl edit service -n ingress-nginx ingress-nginx-controller`

Find the `ports:` section of the `ingress-nginx-controller` service and *ADD* the definition for port 9000:

```
[...]
spec:
  clusterIP: 10.43.237.255
>>> CHANGE externalTrafficPolicy from Cluster to Local if the original client-IP is desired
  externalTrafficPolicy: Local
<<< CHANGE
  ports:
  - name: http
    nodePort: 30312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30366
    port: 443
    protocol: TCP
    targetPort: https
>>> ADD
  - name: proxied-tcp-9000
    port: 9000
    protocol: TCP
    targetPort: 9000
<<< ADD
[...]
```

Verify that the nginx-ingress-controller is a LoadBalancer and listening on port 9000 with `kubectl get services -n ingress-nginx`:

```
[...]
NAMESPACE       NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                     AGE
[...]
ingress-nginx   service/ingress-nginx-controller   LoadBalancer   10.43.237.255   10.62.94.246   80:30312/TCP,443:30366/TCP,9000:31460/TCP   71m
[...]
```

## Deploy my-nginx deployment and service <a name="deploy-my-nginx-service"></a>

my-nginx-deployment.yml:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```

Apply with `kubectl apply -f my-nginx-deployment.yml`:

```
deployment.apps/my-nginx created
service/my-nginx created
```

Test: `kubectl get all | grep my-nginx`:

```
pod/my-nginx-65c68bbcdf-xkhqj             1/1     Running     4          2d7h
service/my-nginx              ClusterIP   10.43.118.13   <none>   80/TCP   2d7h
deployment.apps/my-nginx              1/1     1            1           2d7h
replicaset.apps/my-nginx-65c68bbcdf   1      1       1       2d7h
```

## Stick the nginx-ingress-controller and my-nginx app together <a name="stick-nginx-ingress-and-tcp-service"></a>

Finally, the nginx-ingress controller needs a port-mapping pointing to the my-nginx app. This is done with the config-map `nginx-ingress-tcp-services-config-map.yml`, referenced earlier in the nginx-ingress deployment definition:

```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": default/my-nginx:80
```

Apply with `kubectl apply -f nginx-ingress-tcp-services-config-map.yml`:

```
configmap/tcp-services created
```

Subsequently the config-map can be edited with `kubectl -n ingress-nginx edit configmap tcp-services`

**Changes to config-maps do not take effect on running pods! Scaling the deployment to 0 and back up can solve this problem: https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes**

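For reference, the keys of that map are listener ports on the ingress, and the values follow the `<namespace>/<service>:<port>` convention. Plain shell parameter expansion is enough to pick an entry apart, shown here on the mapping above:

```shell
# Split "default/my-nginx:80" into namespace, service and port
entry="default/my-nginx:80"
ns=${entry%%/*}      # everything before the first "/"  -> default
rest=${entry#*/}     # everything after the first "/"   -> my-nginx:80
svc=${rest%%:*}      # service name                     -> my-nginx
port=${rest##*:}     # target port                      -> 80
echo "$ns $svc $port"   # prints: default my-nginx 80
```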
## Test exposed app on TCP-port 9000 <a name="test-nginx-ingress-and-tcp-service"></a>

```
dominik@muggler:~$ curl -s http://10.62.94.246:9000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

Check the logs of the ingress-nginx-controller pod:

```
root@k3s-master:~# kubectl get pods --all-namespaces |grep ingress-nginx
[...]
ingress-nginx   ingress-nginx-controller-d88d95c-khbv4   1/1   Running   0   4m36s
[...]
```

```
root@k3s-master:~# kubectl logs ingress-nginx-controller-d88d95c-khbv4 -f -n ingress-nginx
[...]
[10.62.94.1] [23/Aug/2020:16:38:33 +0000] TCP 200 850 81 0.001
[...]
```

Check the logs of the my-nginx pod:

```
root@k3s-master:/k3s# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-65c68bbcdf-xkhqj   1/1     Running   0          90m
```

```
kubectl logs my-nginx-65c68bbcdf-xkhqj -f
[...]
10.42.0.18 - - [23/Aug/2020:16:38:33 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
[...]
```

# Running DaemonSets on `hostPort` <a name="running-daemonsets"></a>

* Docs: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
* Good article: https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4

In this case no Service networking has to be configured; each Pod binds its port directly on the node.

This setup is suitable for legacy scenarios where static IP-addresses are required:

* inbound mailserver
* dns server

```
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: netcat-daemonset
  labels:
    app: netcat-daemonset
spec:
  selector:
    matchLabels:
      app: netcat-daemonset
  template:
    metadata:
      labels:
        app: netcat-daemonset
    spec:
      containers:
      - command:
        - nc
        - -lk
        - -p
        - "23456"
        - -v
        - -e
        - /bin/true
        env:
        - name: DEMO_GREETING
          value: Hello from the environment
        image: dockreg-zdf.int.zwackl.de/alpine/latest/amd64:prod
        imagePullPolicy: Always
        name: netcat-daemonset
        ports:
        - containerPort: 23456
          hostPort: 23456
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 64Mi
          requests:
            cpu: 50m
            memory: 32Mi
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
```

# HELM charts <a name="helm"></a>

Docs:

* https://helm.sh/docs/intro/using_helm/

Prerequisites:

* running kubernetes installation
* kubectl with ENV[KUBECONFIG] pointing to the appropriate config file
* helm

## Create a chart <a name="helm-create"></a>

`helm create helm-test`

```
~/kubernetes/helm$ tree helm-test/
helm-test/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
```

## Install local chart without packaging <a name="helm-install-without-packaging"></a>

`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --wait`

or just a *dry-run*:

`helm install helm-test-dev helm-test/ --set image.tag=latest --debug --dry-run`

```
--wait: Waits until all Pods are in a ready state, PVCs are bound, Deployments have minimum (Desired minus maxUnavailable)
Pods in ready state and Services have an IP address (and Ingress if a LoadBalancer) before marking the release as
successful. It will wait for as long as the --timeout value. If timeout is reached, the release will be marked as FAILED.
Note: In scenarios where Deployment has replicas set to 1 and maxUnavailable is not set to 0 as part of rolling update
strategy, --wait will return as ready as it has satisfied the minimum Pod in ready condition.
```

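The "minimum" in that quote is simply the desired replica count minus `maxUnavailable`; a quick illustration with made-up numbers:

```shell
# --wait considers a Deployment ready once desired-minus-maxUnavailable
# Pods are ready (numbers are illustrative, not from this setup)
desired=3
max_unavailable=1
min_ready=$((desired - max_unavailable))
echo "--wait proceeds once $min_ready of $desired Pods are ready"
```

With `replicas: 1` and `maxUnavailable: 1` this minimum is 0, which is why `--wait` can return before the new Pod is actually up.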
## List deployed helm charts <a name="helm-list"></a>

```
~/kubernetes/helm$ helm list
NAME            NAMESPACE   REVISION   UPDATED                                   STATUS     CHART             APP VERSION
helm-test-dev   default     4          2020-08-27 12:30:38.98457042 +0200 CEST   deployed   helm-test-0.1.0   1.16.0
```

## Upgrade local chart without packaging <a name="helm-upgrade"></a>

```
~/kubernetes/helm$ helm upgrade helm-test-dev helm-test/ --set image.tag=latest --wait --timeout 60s
Release "helm-test-dev" has been upgraded. Happy Helming!
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```

`helm upgrade [...] --wait` is synchronous: it exits with 0 on success and >0 on failure. By default it waits for up to 5 minutes; the `--timeout` flag changes that. This makes it usable for CI/CD deployments, e.g. with Jenkins.

## Get status of deployed chart <a name="helm-status"></a>

```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 12:47:09 2020
NAMESPACE: default
STATUS: deployed
REVISION: 7
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```

## Get deployment history <a name="helm-history"></a>

```
~/kubernetes/helm$ helm history helm-test-dev
REVISION   UPDATED                    STATUS            CHART             APP VERSION   DESCRIPTION
10         Thu Aug 27 12:56:33 2020   failed            helm-test-0.1.0   1.16.0        Upgrade "helm-test-dev" failed: timed out waiting for the condition
11         Thu Aug 27 13:08:34 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
12         Thu Aug 27 13:09:59 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
13         Thu Aug 27 13:10:24 2020   superseded        helm-test-0.1.0   1.16.0        Rollback to 11
14         Thu Aug 27 13:23:22 2020   failed            helm-test-0.1.1   blubb         Upgrade "helm-test-dev" failed: timed out waiting for the condition
15         Thu Aug 27 13:26:43 2020   pending-upgrade   helm-test-0.1.1   blubb         Preparing upgrade
16         Thu Aug 27 13:27:12 2020   superseded        helm-test-0.1.1   blubb         Upgrade complete
17         Thu Aug 27 14:32:32 2020   superseded        helm-test-0.1.1                 Upgrade complete
18         Thu Aug 27 14:33:58 2020   superseded        helm-test-0.1.1                 Upgrade complete
19         Thu Aug 27 14:36:49 2020   failed            helm-test-0.1.1   cosmetics     Upgrade "helm-test-dev" failed: timed out waiting for the condition
```

## Rollback <a name="helm-rollback"></a>

`helm rollback helm-test-dev 18 --wait`

```
~/kubernetes/helm$ helm history helm-test-dev
REVISION   UPDATED                    STATUS            CHART             APP VERSION   DESCRIPTION
10         Thu Aug 27 12:56:33 2020   failed            helm-test-0.1.0   1.16.0        Upgrade "helm-test-dev" failed: timed out waiting for the condition
11         Thu Aug 27 13:08:34 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
12         Thu Aug 27 13:09:59 2020   superseded        helm-test-0.1.0   1.16.0        Upgrade complete
13         Thu Aug 27 13:10:24 2020   superseded        helm-test-0.1.0   1.16.0        Rollback to 11
14         Thu Aug 27 13:23:22 2020   failed            helm-test-0.1.1   blubb         Upgrade "helm-test-dev" failed: timed out waiting for the condition
15         Thu Aug 27 13:26:43 2020   pending-upgrade   helm-test-0.1.1   blubb         Preparing upgrade
16         Thu Aug 27 13:27:12 2020   superseded        helm-test-0.1.1   blubb         Upgrade complete
17         Thu Aug 27 14:32:32 2020   superseded        helm-test-0.1.1                 Upgrade complete
18         Thu Aug 27 14:33:58 2020   superseded        helm-test-0.1.1                 Upgrade complete
19         Thu Aug 27 14:36:49 2020   failed            helm-test-0.1.1   cosmetics     Upgrade "helm-test-dev" failed: timed out waiting for the condition
20         Thu Aug 27 14:37:36 2020   deployed          helm-test-0.1.1                 Rollback to 18
```

```
~/kubernetes/helm$ helm status helm-test-dev
NAME: helm-test-dev
LAST DEPLOYED: Thu Aug 27 14:37:36 2020
NAMESPACE: default
STATUS: deployed
REVISION: 20
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=helm-test,app.kubernetes.io/instance=helm-test-dev" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
```