This repository contains a comprehensive, step-by-step guide to setting up a three-node k3s Kubernetes cluster (one server and two agents) on Ubuntu Server 24.04 LTS (or 22.04 LTS if 24.04 is not available). The tutorial incorporates Longhorn for persistent storage, Helm for package management, and Ansible for automation and configuration management. To test the cluster, we'll deploy the Plex and Jellyfin media servers.
A Glossary explaining Kubernetes and DevOps concepts used throughout is also available.
## Table of Contents

- Prerequisites
- Setting Up Ubuntu Servers
- Configuring SSH Access
- Installing Ansible
- Preparing Ansible Inventory and Playbooks
- Deploying k3s Cluster with Ansible
- Deploying Plex and Jellyfin with Ansible
- Testing the Deployment
- Conclusion
- Additional Information
## Prerequisites

- Three Ubuntu Server machines running Ubuntu 24.04 LTS or 22.04 LTS.
- Ansible Control Node: Can be one of the Ubuntu servers or a separate machine.
- SSH Access: Root or a user with sudo privileges.
- Stable Internet Connection for downloading packages and container images.
## Setting Up Ubuntu Servers

Log in to each Ubuntu server and bring the system up to date:

```bash
sudo apt update && sudo apt upgrade -y
```
Assign a unique hostname to each server.

On the master node:

```bash
sudo hostnamectl set-hostname k3s-master
```

On the worker nodes:

```bash
sudo hostnamectl set-hostname k3s-worker1
sudo hostnamectl set-hostname k3s-worker2
```
Update `/etc/hosts` on all nodes:

```bash
sudo nano /etc/hosts
```

Add the following lines (replace the IP addresses with your servers' actual addresses):

```
192.168.1.100 k3s-master
192.168.1.101 k3s-worker1
192.168.1.102 k3s-worker2
```
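To confirm the names resolve correctly, a quick sanity check from any node:

```bash
# Each command should report a reply from the matching IP address
ping -c 1 k3s-master
ping -c 1 k3s-worker1
ping -c 1 k3s-worker2
```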
## Configuring SSH Access

If you don't have an SSH key pair, generate one:

```bash
ssh-keygen -t rsa -b 4096
```
Copy your public key to each server, replacing `user` with your actual username:

```bash
ssh-copy-id user@k3s-master
ssh-copy-id user@k3s-worker1
ssh-copy-id user@k3s-worker2
```
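You can then verify that key-based login works, for example:

```bash
# Should print the remote hostname without asking for a password
ssh user@k3s-master 'hostname'
```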
## Installing Ansible

On your Ansible control node, run:

```bash
sudo apt update
sudo apt install -y ansible
ansible --version
```
## Preparing Ansible Inventory and Playbooks

Create a directory for your Ansible project:

```bash
mkdir ~/k3s-cluster && cd ~/k3s-cluster
```

Create an `inventory.ini` file:
```ini
[master]
k3s-master ansible_user=user

[workers]
k3s-worker1 ansible_user=user
k3s-worker2 ansible_user=user

[all:vars]
ansible_python_interpreter=/usr/bin/python3
```
Note: Replace `user` with your actual username on the servers.
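Before writing any playbooks, it's worth confirming that Ansible can reach every node:

```bash
# Each host should answer with "ping": "pong"
ansible all -i inventory.ini -m ping
```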
Create a `site.yml` playbook:
<details>
<summary>Click to view site.yml</summary>

```yaml
---
- name: Install Dependencies on All Nodes
  hosts: all
  become: yes
  tasks:
    - name: Install required packages
      apt:
        name:
          - curl
          - apt-transport-https
          - ca-certificates
          - open-iscsi # required on every node for Longhorn volumes
        state: present
        update_cache: yes

- name: Install k3s Master Node
  hosts: master
  become: yes
  tasks:
    - name: Install k3s master (Traefik disabled)
      shell: |
        curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
      args:
        executable: /bin/bash
        creates: /usr/local/bin/k3s # makes re-runs idempotent

    - name: Retrieve k3s token
      slurp:
        src: /var/lib/rancher/k3s/server/node-token
      register: k3s_token
      # The registered value is shared with the worker play via hostvars,
      # so no temporary token file needs to be copied around.

    - name: Ensure the user's .kube directory exists
      file:
        path: /home/{{ ansible_user }}/.kube
        state: directory
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0700'

    - name: Copy kubeconfig to user directory
      copy:
        src: /etc/rancher/k3s/k3s.yaml
        dest: /home/{{ ansible_user }}/.kube/config
        remote_src: yes
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: '0600'

    - name: Update kubeconfig server address
      replace:
        path: /home/{{ ansible_user }}/.kube/config
        regexp: '127\.0\.0\.1'
        replace: '{{ inventory_hostname }}'

- name: Install k3s Worker Nodes
  hosts: workers
  become: yes
  tasks:
    - name: Join k3s worker node
      shell: |
        curl -sfL https://get.k3s.io | K3S_URL=https://k3s-master:6443 K3S_TOKEN={{ k3s_join_token }} sh -
      args:
        executable: /bin/bash
        creates: /usr/local/bin/k3s
      vars:
        # Read the token registered on the master earlier in this run
        k3s_join_token: "{{ hostvars[groups['master'][0]].k3s_token.content | b64decode | trim }}"

- name: Install Helm on Master Node
  hosts: master
  become: yes
  tasks:
    - name: Download Helm install script
      get_url:
        url: https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
        dest: /tmp/get-helm-3
        mode: '0755'

    - name: Install Helm
      shell: /tmp/get-helm-3
      args:
        executable: /bin/bash
        creates: /usr/local/bin/helm

- name: Install Longhorn via Helm
  hosts: master
  become: yes
  tasks:
    - name: Create Longhorn namespace
      kubernetes.core.k8s:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: longhorn-system

    - name: Add Longhorn Helm repository
      shell: |
        helm repo add longhorn https://charts.longhorn.io
        helm repo update
      args:
        executable: /bin/bash

    - name: Install Longhorn using Helm
      kubernetes.core.helm:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        name: longhorn
        chart_ref: longhorn/longhorn
        release_namespace: longhorn-system
        create_namespace: false
        wait: true
```

</details>
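The `kubernetes.core.k8s` and `kubernetes.core.helm` tasks above need the `kubernetes.core` collection on the control node and the Kubernetes Python client on the master. A minimal way to satisfy both, assuming Ubuntu's packaged client:

```bash
# On the control node: install the collection the playbook's modules come from
ansible-galaxy collection install kubernetes.core

# On the master node: the k8s/helm modules run there and import the Python client
sudo apt install -y python3-kubernetes
```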
Create a `deploy-apps.yml` playbook:
<details>
<summary>Click to view deploy-apps.yml</summary>

```yaml
---
- name: Deploy Plex and Jellyfin
  hosts: master
  become: yes
  tasks:
    - name: Add k8s-at-home Helm repository
      shell: |
        helm repo add k8s-at-home https://k8s-at-home.com/charts/
        helm repo update
      args:
        executable: /bin/bash

    - name: Create namespace for media servers
      kubernetes.core.k8s:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: media-servers

    - name: Install Plex via Helm
      kubernetes.core.helm:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        name: plex
        chart_ref: k8s-at-home/plex
        release_namespace: media-servers
        values:
          persistence:
            config:
              enabled: true
              storageClass: longhorn
            data:
              enabled: true
              storageClass: longhorn
        wait: true

    - name: Install Jellyfin via Helm
      kubernetes.core.helm:
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        name: jellyfin
        chart_ref: k8s-at-home/jellyfin
        release_namespace: media-servers
        values:
          persistence:
            config:
              enabled: true
              storageClass: longhorn
            cache:
              enabled: true
              storageClass: longhorn
        wait: true
```

</details>

Note: the k8s-at-home chart repository has been archived and its charts no longer receive updates. They still work for exercising the cluster as we do here, but consider a maintained alternative for anything long-lived.
## Deploying k3s Cluster with Ansible

Ensure you're in the `~/k3s-cluster` directory and run:

```bash
ansible-playbook -i inventory.ini site.yml
```
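If your remote user needs a password for sudo, ask for it at run time:

```bash
# -K prompts for the become (sudo) password
ansible-playbook -i inventory.ini site.yml -K
```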
On the master node (or any machine with a copy of the kubeconfig), verify the cluster:

```bash
kubectl get nodes
```

You should see all three nodes in the `Ready` state.
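Longhorn can take a few minutes to come up; before deploying applications that use it, you may want to confirm its pods are running:

```bash
# All longhorn-system pods should eventually reach Running
kubectl -n longhorn-system get pods
```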
## Deploying Plex and Jellyfin with Ansible

Run the second playbook:

```bash
ansible-playbook -i inventory.ini deploy-apps.yml
```
Then check the pods:

```bash
kubectl -n media-servers get pods
```

All pods should be in the `Running` state.
## Testing the Deployment

Retrieve the service information:

```bash
kubectl -n media-servers get svc
```

You should see services for `plex` and `jellyfin`.
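Since both charts request Longhorn-backed volumes, it's also worth confirming the claims were bound:

```bash
# Each PVC should show STATUS Bound with the longhorn storage class
kubectl -n media-servers get pvc
```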
If you don't have a LoadBalancer or Ingress set up, you can use port forwarding:

```bash
kubectl -n media-servers port-forward svc/plex 32400:32400
kubectl -n media-servers port-forward svc/jellyfin 8096:8096
```

Access Plex at http://localhost:32400/web and Jellyfin at http://localhost:8096.
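Note that `kubectl port-forward` binds only to the loopback interface of the machine where it runs. If you're running it on the master and browsing from another machine, you can bind to all interfaces (fine for testing, not something to leave running):

```bash
# Listen on every interface so other LAN hosts can reach the forward
kubectl -n media-servers port-forward --address 0.0.0.0 svc/jellyfin 8096:8096
```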
## Conclusion

By automating the installation and configuration of k3s, Helm, Longhorn, and your applications using Ansible, you've set up a reproducible and manageable Kubernetes environment. This approach aligns with modern best practices, ensuring that your infrastructure is consistent and easy to maintain.
## Additional Information

- **Ansible Modules Used:**
  - `apt`: package management on Ubuntu.
  - `shell` and `get_url`: run the k3s and Helm installer scripts where no module exists.
  - `slurp`: read the k3s join token from the master.
  - `copy`, `file`, and `replace`: manage the kubeconfig on the master node.
  - `kubernetes.core.k8s`: interact with Kubernetes resources such as namespaces.
  - `kubernetes.core.helm`: install and manage Helm releases.
- **Best Practices Implemented:**
  - Infrastructure as Code: all configuration is codified in Ansible playbooks.
  - Idempotency: tasks converge to the same result no matter how many times they are executed.
  - Version Control: store your Ansible playbooks in a version control system like Git, as shown below.
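For example, a minimal way to put the project under Git (assuming Git is installed on the control node):

```bash
cd ~/k3s-cluster
git init
git add inventory.ini site.yml deploy-apps.yml
git commit -m "Initial k3s cluster playbooks"
```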
Please refer to the Glossary for explanations of Kubernetes and DevOps concepts used in this tutorial.
This project is licensed under the terms of the MIT license. See the LICENSE file for details.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or additions.