# kube-ansible

kube-ansible is a set of Ansible playbooks and roles that allows you to instantiate a vanilla Kubernetes cluster on (primarily) CentOS virtual machines or bare metal. Additionally, kube-ansible includes CNI pod networking (defaulting to Flannel, with the ability to deploy Weave and Multus).

The purpose of kube-ansible is to provide a simple lab environment for prototyping and proofs of concept. For staging and production deployments, we recommend that you use OpenShift-Ansible.
Playbooks are located in the `playbooks/` directory.
| Playbook | Inventory | Purpose |
| --- | --- | --- |
| `virthost-setup.yml` | `./inventory/virthost/` | Provision a virtual machine host |
| `bmhost-setup.yml` | `./inventory/bmhost/` | Provision a bare metal host and add it to the `nodes` group |
| `allhost-setup.yml` | `./inventory/allhosts/` | Provision both a virtual machine host and a bare metal host |
| `kube-install.yml` | `./inventory/all.local.generated` | Install and configure a k8s cluster using all hosts in the `nodes` group |
| `kube-teardown.yml` | `./inventory/all.local.generated` | Run `kubeadm reset` on all nodes to tear down k8s |
| `vm-teardown.yml` | `./inventory/virthost/` | Destroy the VMs on the virtual machine host |
| `ka-multus-cni/multus-cni.yml` | `./inventory/vms.local.generated` | Compile multus-cni |
| `ka-gluster-install/gluster-install.yml` | `./inventory/vms.local.generated` | Install a GlusterFS cluster across the VMs (requires `vm-attach-disk`) |
| `fedora-python-bootstrapper.yml` | `./inventory/vms.local.generated` | Bootstrap Python dependencies on cloud images |
| `ka-builder/builder.yml` | `./inventory/vms.local.generated` | Build a Kubernetes release in a dedicated virtual machine |
kube-ansible provides the means to install and set up KVM as a virtual host platform, on which virtual machines can be created and used as the foundation of a Kubernetes cluster installation. There are generally two steps to this deployment:

- Installation of KVM on the bare metal system and instantiation of the virtual machines
- Installation and setup of the Kubernetes environment on those virtual machines
First we configure our `virthost/` inventory to match the working environment, including the DNS name or IP address of the bare metal system that we'll install KVM onto. We also set up our network (the KVM network, whether that be a bridged interface or a NAT interface), and then define the system topology we're going to deploy (the number of virtual machines to instantiate).

We do this with the `virthost-setup.yml` playbook, which performs the basic virtual host configuration, virtual machine instantiation, and extra virtual disk creation when configuring persistent storage with GlusterFS.
During the `virthost-setup.yml` run, a `vms.local.generated` inventory file is created with the IP addresses and hostnames of the virtual machines. The `vms.local.generated` file can then be used with `kube-install.yml`.
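To give a sense of what gets generated, a `vms.local.generated` file might look roughly like the sketch below; the hostnames and addresses are illustrative only, since the real file is produced by the playbook run:

```ini
# Illustrative sketch only -- the real file is generated by virthost-setup.yml.
kube-master ansible_host=192.168.122.11
kube-node-1 ansible_host=192.168.122.12
kube-node-2 ansible_host=192.168.122.13
```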
Install the role dependencies with `ansible-galaxy`:

```bash
ansible-galaxy install -r requirements.yml
```
Copy the example `virthost` inventory into a new directory:

```bash
cp -r inventory/examples/virthost inventory/virthost/
```

Then modify `./inventory/virthost/virthost.inventory` to set up the virtual host (skip ahead to running `virthost-setup.yml` below if you already have an inventory).
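If you want a picture of what that inventory contains before editing, a minimal `virthost.inventory` looks roughly like the following sketch (the address and user are placeholders; the copied example in `inventory/virthost/` is the authoritative template):

```ini
# Placeholder values -- point these at your actual virtualization host.
virthost ansible_host=192.168.1.100 ansible_user=root

[virthost]
virthost
```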
To set up both bare metal hosts and virtual hosts, create or modify `./inventory/allhosts/allhosts.inventory` instead.
Want more VMs? Edit `inventory/virthost/group_vars/virthost.yml` and add an override list via `virtual_machines` (the template lives in `roles/ka-init/group_vars/all.yml`). You can also define separate vCPU and vRAM values for each of the virtual machines with `system_ram_mb` and `system_cpus`. The default values are set via `system_default_ram_mb` and `system_default_cpus`, which can also be overridden if you want different defaults. (The current defaults are 2048 MB of RAM and 4 vCPUs.) See the sketch below for an example.
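As a sketch, such an override might look like the following; the per-VM key names below are illustrative, so check the template in `roles/ka-init/group_vars/all.yml` for the authoritative structure:

```yaml
# inventory/virthost/group_vars/virthost.yml (illustrative values)
system_default_ram_mb: 4096   # new default vRAM, in MB
system_default_cpus: 2        # new default vCPU count

virtual_machines:
  - name: kube-master         # per-VM overrides beat the defaults
    system_ram_mb: 8192
    system_cpus: 4
  - name: kube-node-1
  - name: kube-node-2
```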
> **WARNING**: If you're not going to be connecting to the virtual machines from the same network as your source machine, you'll need to make sure you set `ssh_proxy_enabled: true` and the other related `ssh_proxy_...` variables so that the `kube-install.yml` playbook can work properly. See the next NOTE for more information.
Run the `virthost-setup.yml` playbook. Running on the virthost directly:

```bash
ansible-playbook -i inventory/virthost/ playbooks/virthost-setup.yml
```

Setting up the virthost as a jump host:

```bash
ansible-playbook -i inventory/virthost/ -e ssh_proxy_enabled=true playbooks/virthost-setup.yml
```
> **NOTE**: There are a few extra variables you may wish to set against the virtual host, which can be satisfied in the `inventory/virthost/group_vars/virthost.yml` file of the local inventory configuration in `inventory/virthost/` that you just created. Primarily, this is for overriding the default variables located in the `roles/ka-init/group_vars/all.yml` file, or overriding the default values associated with the roles.
>
> Some common variables you may wish to override include:
>
> - `bridge_networking: false` - disable the bridge networking setup
> - `images_directory: /home/images/kubelab` - override the image directory location
> - `spare_disk_location: /home/images/kubelab` - override the spare disk location
>
> The following values are used in the generation of the dynamic inventory file `vms.local.generated`:
>
> - `ssh_proxy_enabled: true` - proxy via a jump host (the remote virthost)
> - `ssh_proxy_user: root` - username to SSH into the virthost
> - `ssh_proxy_host: virthost` - hostname or IP of the virthost
> - `ssh_proxy_port: 2222` - port of the virthost (optional, default 22)
> - `vm_ssh_key_path: /home/lmadsen/.ssh/id_vm_rsa` - path to the local SSH key
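Putting those together, a hypothetical `inventory/virthost/group_vars/virthost.yml` for driving the playbooks from a laptop through the virthost as a jump host might look like this (every value is a placeholder for your environment):

```yaml
# Hypothetical overrides -- adjust each value to your environment.
bridge_networking: false                   # use NAT instead of a bridge
ssh_proxy_enabled: true                    # proxy SSH through the virthost
ssh_proxy_user: root
ssh_proxy_host: virthost.example.com       # placeholder hostname
vm_ssh_key_path: /home/you/.ssh/id_vm_rsa  # local copy of the VM SSH key
```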
During the execution of `virthost-setup.yml`, a local inventory should have been generated for you called `inventory/vms.local.generated`, containing the hosts and their IP addresses. You should be able to pass this inventory to the `kube-install.yml` playbook.
> **NOTE**: If you're not running the Ansible playbooks from the virtual host itself, it's possible to connect to the virtual machines via an SSH proxy. You can do this by setting up the `ssh_proxy_...` variables as noted above.
Alternatively, you can ignore the generated inventory, copy the example inventory directory from `inventory/examples/vms/`, and modify it to your heart's content.
```bash
ansible-playbook -i inventory/vms.local.generated playbooks/kube-install.yml
```
Once you've done that, you should be able to connect to your Kubernetes master virtual machine, run `kubectl get nodes`, and see that all your nodes are in a Ready state. (It may take some time for everything to coalesce and for the nodes to report back to the Kubernetes master node.)
In order to log in to the nodes, you may need to run `ssh-add ~/.ssh/vmhost/id_vm_rsa`. The private key created on the virtual host is automatically fetched to your local machine, allowing you to connect to the nodes when proxying.
> **Pro Tip**: You can create a `~/.bashrc` alias to SSH into the virtual machines if you're not executing the Ansible playbooks directly from your virtual host (i.e. from your laptop or desktop). To SSH into the nodes via SSH proxy, add the following alias:
>
> ```bash
> alias ssh-virthost='ssh -o ProxyCommand="ssh -W %h:%p root@virthost"'
> ```
>
> It's assumed you're logging into the virtual host as the `root` user at the hostname `virthost`. Change as required. Usage:
>
> ```bash
> source ~/.bashrc ; ssh-virthost centos@kube-master
> ```
Once you're logged into your Kubernetes master node, run the following command to check the state of your cluster:

```
$ kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
kube-master   Ready     master    10m       v1.8.3
kube-node-1   Ready     <none>    9m        v1.8.3
kube-node-2   Ready     <none>    9m        v1.8.3
kube-node-3   Ready     <none>    9m        v1.8.3
```
Everything should be marked as Ready. If so, you're good to go!
Initially inspired by: