This project demonstrates how to set up and use Kubernetes (K8s) to deploy, scale, and manage a containerized application.
Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. It manages the lifecycle of containers across a cluster of servers, ensuring that the desired state of your applications is maintained.
- Scalability: Automatically scale applications up or down based on demand.
- Reliability: Ensures applications remain available by distributing containers across multiple nodes.
- Automation: Automates the creation, deletion, and management of containers.
- High Availability and Fault Tolerance: Workloads are rescheduled onto healthy nodes, so the application remains available even if some nodes fail.
- Complexity: Requires significant setup and management effort.
- Resource-Intensive: Needs considerable computational resources for clusters and nodes.
In Kubernetes, a cluster is a group of machines (nodes) working together as a single system to provide high availability and scalability for applications. A Kubernetes cluster consists of:
- Master (control plane) Nodes: Manage the cluster's state and handle scheduling and orchestration.
- Worker Nodes: Run the application containers via components such as the kubelet, a container runtime, and kube-proxy.
- Pod: The basic unit of deployment in Kubernetes. It represents one or more containers that share network resources and a filesystem. Pods are created, managed, and scaled by controllers (e.g., Deployment, StatefulSet).
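As an illustrative sketch, a minimal single-container Pod manifest might look like the following (the names `example-pod`, `app: example`, and the `nginx` image are placeholders, not values from this project):

```yaml
# pod.yaml — minimal single-container Pod (illustrative; names are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx:latest   # any container image can be used here
      ports:
        - containerPort: 80
```

It can be created with `kubectl apply -f pod.yaml`, although in practice Pods are usually created indirectly by a controller such as a Deployment.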
- Dockerfile: A text file with instructions for creating a Docker image. It defines how the application and its dependencies should be packaged into an image that can run in a Docker container.
- deployment.yaml: Describes a Kubernetes Deployment object. A Deployment manages the application's pods, keeping the desired number of replicas running and enabling scaling and rolling updates.
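A minimal sketch of such a deployment.yaml might look like this — the labels, replica count, and container port are assumptions for illustration, while the image tag and the Deployment name `k8s-deployment` follow the commands used elsewhere in this README:

```yaml
# deployment.yaml — illustrative sketch; adjust names, labels, and ports to your app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-deployment
spec:
  replicas: 2                 # desired number of pod replicas
  selector:
    matchLabels:
      app: k8s-app            # assumed label; must match the pod template below
  template:
    metadata:
      labels:
        app: k8s-app
    spec:
      containers:
        - name: app
          image: <docker-registry>/k8s:latest   # the image built and pushed earlier
          ports:
            - containerPort: 80                 # assumed application port
```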
- service.yaml: Describes a Kubernetes Service object. A Service provides a stable IP address and DNS name for a set of pods and load-balances traffic across them, simplifying access to the application.
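A minimal sketch of such a service.yaml, assuming pods labeled `app: k8s-app` listening on port 80 (the Service name, label, and ports are illustrative, not the project's actual values):

```yaml
# service.yaml — illustrative sketch; the selector must match the Deployment's pod labels
apiVersion: v1
kind: Service
metadata:
  name: k8s-service
spec:
  type: NodePort       # exposes the Service on each node's IP (convenient with Minikube)
  selector:
    app: k8s-app       # assumed label; routes traffic to pods carrying it
  ports:
    - port: 80         # port exposed by the Service
      targetPort: 80   # port the container listens on
```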
- kubectl: A command-line tool for interacting with Kubernetes clusters.
- Installation guide: Install kubectl
- minikube: A tool to run Kubernetes locally.
- Installation guide: Install minikube
- Build a Docker image of your application:
docker build -t <docker-registry>/k8s:latest .
- Push your Docker image to your Docker registry:
docker push <docker-registry>/k8s:latest
- Start a Kubernetes cluster using Minikube:
minikube start
- Apply the Kubernetes deployment and service files:
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
Restart all services:
kubectl rollout restart deployment k8s-deployment
kubectl apply -f kubernetes/service.yaml

Check cluster status:
minikube status

Delete the existing Minikube cluster:
minikube delete

Get cluster info:
kubectl cluster-info

Create a pod:
kubectl run <pod-name> --image=nginx

Check Pod status:
kubectl get pods

Get Pod information:
kubectl describe pod <pod-name>

Get Pod logs:
kubectl logs <pod-name>

Delete a Pod by name:
kubectl delete pod <pod-name>

Get node info:
kubectl get nodes

Get the node IP:
minikube ip