11. Taints & Tolerations, NodeAffinity/pod-nodeaffinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  containers:
  - name: myapp
    image: nginx:1.20
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: type
            operator: In
            values:
            - cpu
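the kubernetes.io/os label is set automatically by the kubelet; the type=cpu label used in the preferred term has to be added by hand, for example (node name is a placeholder):
kubectl label nodes <node-name> type=cpu
kubectl get nodes --show-labels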
---===--- 11. Taints & Tolerations, NodeAffinity/pod-podaffinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 5
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - myapp
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - etcd
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: myapp-container
        image: nginx:1.20
      nodeSelector:
        type: master
---===--- 11. Taints & Tolerations, NodeAffinity/pod-with-node-name.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.20
  nodeName: worker1
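nodeName bypasses the scheduler and binds the pod directly; to verify it landed on worker1, check the NODE column:
kubectl get pod nginx -o wide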
---===--- 11. Taints & Tolerations, NodeAffinity/pod-with-node-selector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.20
  nodeSelector:
    type: cpu
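the nodeSelector only matches nodes that already carry the type=cpu label; one way to list them:
kubectl get nodes -l type=cpu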
---===--- 11. Taints & Tolerations, NodeAffinity/pod-with-tolerations.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-toleration
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.20
  tolerations:
  - effect: NoExecute
    operator: Exists
  nodeName: master
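operator: Exists with no key tolerates any NoExecute taint; an example taint to test against (key and value are arbitrary examples), and its removal with the trailing minus:
kubectl taint nodes master dedicated=special:NoExecute
kubectl taint nodes master dedicated=special:NoExecute-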
---===--- 11. Taints & Tolerations, NodeAffinity/useful-links.md
- Assigning Pods to Nodes: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
- Node Affinity: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
- InterPod Affinity & AntiAffinity: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
- Taints and Tolerations: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
---===--- 12. Readiness & Liveness Probes/pod-health-probes.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-health-probes
spec:
  containers:
  - image: nginx:1.20
    name: myapp-container
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 15
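if a probe fails, the failures show up in the pod events, for example:
kubectl describe pod myapp-health-probes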
---===--- 12. Readiness & Liveness Probes/useful-links.md
- Configure Readiness and Liveness Probes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
---===--- 13. Rolling Updates/commands.md
kubectl rollout history deployment/{depl-name}
kubectl rollout undo deployment/{depl-name}
kubectl rollout status deployment/{depl-name}
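a rollout can be triggered, for example, by changing the container image (names and tag below are placeholders), and the undo can target a specific revision:
kubectl set image deployment/{depl-name} {container-name}=nginx:1.21
kubectl rollout undo deployment/{depl-name} --to-revision=2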
---===--- 13. Rolling Updates/useful-links.md
- Deployment Strategies: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
- Rolling Update: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/
---===--- 14. Etcd Backup & Restore/commands.md
sudo apt install etcd-client
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /tmp/etcd-backup.db
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup
the restored files are located in the new directory /var/lib/etcd-backup, so now configure etcd to use that directory:
vim /etc/kubernetes/manifests/etcd.yaml
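the exact layout depends on the kubeadm version, but the part to change is usually the etcd-data hostPath volume, pointed at the restored directory, roughly like this:
  volumes:
  - hostPath:
      path: /var/lib/etcd-backup
      type: DirectoryOrCreate
    name: etcd-data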
---===--- 14. Etcd Backup & Restore/useful-links.md
- Back up etcd store: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster
- Restore etcd backup: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster
---===--- 15. K8s Rest API/commands.md
kubectl proxy --port=8081 &
curl http://localhost:8081/api/
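through the proxy no token is needed (kubectl proxy authenticates with the local kubeconfig), so other API paths can be queried directly, for example:
curl http://localhost:8081/api/v1/namespaces/default/pods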
kubectl create serviceaccount myscript
kubectl apply -f myscript-role.yaml
kubectl create rolebinding script-role-binding --role=script-role --serviceaccount=default:myscript
kubectl config view
APISERVER=https://172.31.44.88:6443
kubectl get serviceaccount myscript -o yaml
kubectl get secret xxxxx -o yaml
TOKEN=$(echo "<base64-encoded-token-from-the-secret>" | base64 --decode | tr -d "\n")
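alternatively, the token can be extracted in one step with jsonpath (same placeholder secret name as above):
TOKEN=$(kubectl get secret xxxxx -o jsonpath='{.data.token}' | base64 --decode)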
if we don't have the CA certificate available for curl, we can accept an insecure connection instead of providing it:
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
if we don't want an insecure connection, we can provide curl with the Kubernetes CA certificate:
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --cacert /etc/kubernetes/pki/ca.crt
curl -X GET $APISERVER/apis/apps/v1/namespaces/default/deployments --header "Authorization: Bearer $TOKEN" --cacert /etc/kubernetes/pki/ca.crt
curl -X GET $APISERVER/api/v1/namespaces/default/services --header "Authorization: Bearer $TOKEN" --cacert /etc/kubernetes/pki/ca.crt
curl -X GET $APISERVER/api/v1/namespaces/default/services/nginx-service --header "Authorization: Bearer $TOKEN" --cacert /etc/kubernetes/pki/ca.crt
curl -X GET $APISERVER/api/v1/namespaces/default/pods/pod-name/log --header "Authorization: Bearer $TOKEN" --cacert /etc/kubernetes/pki/ca.crt
---===--- 15. K8s Rest API/myscript-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: script-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "delete"]
---===--- 15. K8s Rest API/useful-links.md
- Access Cluster using API: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/
- API Overview: https://kubernetes.io/docs/reference/using-api/
- More about K8s API: https://kubernetes.io/docs/concepts/overview/kubernetes-api/
- K8s REST API Documentation: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#-strong-api-groups-strong-
- Bearer Token: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#putting-a-bearer-token-in-a-request