# Accessing the Cluster

## Access the cluster using kubectl, continuous build pipelines, or other clients

If you've chosen to configure a _public_ Load Balancer for your Kubernetes Master(s) (i.e. `control_plane_subnet_access=public` or
`control_plane_subnet_access=private` _and_ `k8s_master_lb_access=public`), you can interact with your cluster using kubectl, continuous build
pipelines, or any other client over the Internet. A working kubeconfig can be found in the `./generated` folder or generated on the fly using the `kubeconfig` Terraform output variable.

```bash
# warning: 0.0.0.0/0 is wide open. Consider limiting HTTPS ingress to a smaller set of IPs.
$ terraform plan -var master_https_ingress=0.0.0.0/0
$ terraform apply -var master_https_ingress=0.0.0.0/0
# consider closing access off again using terraform apply -var master_https_ingress=10.0.0.0/16
```

```bash
$ export KUBECONFIG=`pwd`/generated/kubeconfig
$ kubectl cluster-info
$ kubectl get nodes
```
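
Alternatively, the same kubeconfig can be generated on the fly from the `kubeconfig` Terraform output variable mentioned above; a minimal sketch:

```bash
$ terraform output kubeconfig > /tmp/kubeconfig
$ export KUBECONFIG=/tmp/kubeconfig
$ kubectl cluster-info
```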
| 21 | + |
| 22 | +If you've chosen to configure a strictly _private_ cluster (i.e. `control_plane_subnet_access=private` _and_ `k8s_master_lb_access=private`), |
| 23 | +access to the cluster will be limited to the NAT instance(s) similar to how you would use a bastion host e.g. |
| 24 | + |
```bash
$ terraform plan -var public_subnet_ssh_ingress=0.0.0.0/0
$ terraform apply -var public_subnet_ssh_ingress=0.0.0.0/0
$ terraform output ssh_private_key > generated/instances_id_rsa
$ chmod 600 generated/instances_id_rsa
$ scp -i generated/instances_id_rsa generated/instances_id_rsa opc@NAT_INSTANCE_PUBLIC_IP:/home/opc/
$ ssh -i generated/instances_id_rsa opc@NAT_INSTANCE_PUBLIC_IP
nat$ ssh -i /home/opc/instances_id_rsa opc@K8SMASTER_INSTANCE_PRIVATE_IP
master$ kubectl cluster-info
master$ kubectl get nodes
```

Note: for easier access, consider setting up an SSH tunnel between your local host and a NAT instance, as shown below.
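
A sketch of such a tunnel, forwarding a local port to a master's HTTPS endpoint through the NAT instance (the placeholder addresses are the same as above; the master port of 443 matches the LB examples later on this page, so adjust both to your environment):

```bash
# forward local port 8443 to the Kubernetes API through the NAT instance
$ ssh -i generated/instances_id_rsa -N \
    -L 8443:K8SMASTER_INSTANCE_PRIVATE_IP:443 \
    opc@NAT_INSTANCE_PUBLIC_IP
# then point kubectl (or your kubeconfig's server field) at https://localhost:8443
```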

## Access the cluster using Kubernetes Dashboard

Assuming `kubectl` has access to the Kubernetes Master Load Balancer, you can use `kubectl proxy` to access the
Dashboard:

```bash
kubectl proxy &
open http://localhost:8001/ui
```
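
If local port 8001 is already in use, `kubectl proxy` can listen on another port; for example:

```bash
kubectl proxy --port=8011 &
open http://localhost:8011/ui
```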

## Verifying your cluster

If you've chosen to configure a public cluster, you can do a quick and automated verification of your cluster from
your local machine by running the `cluster-check.sh` script located in the `scripts` directory. Note that this script requires your KUBECONFIG environment variable to be set (see above), and SSH and HTTPS access to be open to the etcd and worker nodes.

To temporarily open SSH and HTTPS access for `cluster-check.sh`, add the following to your `terraform.tfvars` file:

```bash
# warning: 0.0.0.0/0 is wide open. remember to undo this.
etcd_ssh_ingress = "0.0.0.0/0"
master_ssh_ingress = "0.0.0.0/0"
worker_ssh_ingress = "0.0.0.0/0"
master_https_ingress = "0.0.0.0/0"
worker_nodeport_ingress = "0.0.0.0/0"
```
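
Changes to `terraform.tfvars` only take effect after another plan/apply cycle:

```bash
$ terraform plan
$ terraform apply
```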

```bash
$ scripts/cluster-check.sh
```
```
[cluster-check.sh] Running some basic checks on Kubernetes cluster....
[cluster-check.sh] Checking ssh connectivity to each node...
[cluster-check.sh] Checking whether instance bootstrap has completed on each node...
[cluster-check.sh] Checking Flannel's etcd key from each node...
[cluster-check.sh] Checking whether expected system services are running on each node...
[cluster-check.sh] Checking status of /healthz endpoint at each k8s master node...
[cluster-check.sh] Checking status of /healthz endpoint at the LB...
[cluster-check.sh] Running 'kubectl get nodes' a number of times through the master LB...

The Kubernetes cluster is up and appears to be healthy.
Kubernetes master is running at https://129.146.22.175:443
KubeDNS is running at https://129.146.22.175:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://129.146.22.175:443/ui
```
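
You can also spot-check the LB's `/healthz` endpoint yourself. A sketch, substituting the master LB address reported by `kubectl cluster-info` for the hypothetical `MASTER_LB_PUBLIC_IP` placeholder (`-k` skips certificate verification, which may be needed if the cluster uses self-signed certificates):

```bash
$ curl -k https://MASTER_LB_PUBLIC_IP:443/healthz
# a healthy API server responds with: ok
```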

## SSH into OCI Instances

If you've chosen to launch your control plane instances in _public_ subnets (i.e. `control_plane_subnet_access=public`), you can open
SSH access to your etcd, master, and worker nodes by adding the following to your `terraform.tfvars` file:

```bash
# warning: 0.0.0.0/0 is wide open. remember to undo this.
etcd_ssh_ingress = "0.0.0.0/0"
master_ssh_ingress = "0.0.0.0/0"
worker_ssh_ingress = "0.0.0.0/0"
```

```bash
# Create a local SSH private key file for logging into the OCI instances
$ terraform output ssh_private_key > generated/instances_id_rsa
$ chmod 600 generated/instances_id_rsa
# Retrieve the public IPs of the etcd nodes
$ terraform output etcd_public_ips
# Log in as user opc to the Oracle Linux (OEL) OS
$ ssh -i `pwd`/generated/instances_id_rsa opc@ETCD_INSTANCE_PUBLIC_IP
# Retrieve the public IPs of the k8s masters
$ terraform output master_public_ips
$ ssh -i `pwd`/generated/instances_id_rsa opc@K8SMASTER_INSTANCE_PUBLIC_IP
# Retrieve the public IPs of the k8s workers
$ terraform output worker_public_ips
$ ssh -i `pwd`/generated/instances_id_rsa opc@K8SWORKER_INSTANCE_PUBLIC_IP
```

If you've chosen to launch your control plane instances in _private_ subnets (i.e. `control_plane_subnet_access=private`), you'll
first need to SSH into a NAT instance, and from there into a worker, master, or etcd node:

```bash
$ terraform plan -var public_subnet_ssh_ingress=0.0.0.0/0
$ terraform apply -var public_subnet_ssh_ingress=0.0.0.0/0
$ terraform output ssh_private_key > generated/instances_id_rsa
$ chmod 600 generated/instances_id_rsa
$ terraform output nat_instance_public_ips
$ scp -i generated/instances_id_rsa generated/instances_id_rsa opc@NAT_INSTANCE_PUBLIC_IP:/home/opc/
$ ssh -i generated/instances_id_rsa opc@NAT_INSTANCE_PUBLIC_IP
nat$ ssh -i /home/opc/instances_id_rsa opc@PRIVATE_IP
```
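
As an alternative to copying the private key onto the NAT instance, an OpenSSH client (7.3 or later) with a running ssh-agent can hop through it in a single command; a sketch using the same placeholders:

```bash
# load the key into the agent so both hops can authenticate with it
$ ssh-add generated/instances_id_rsa
# -J uses the NAT instance as a jump host
$ ssh -J opc@NAT_INSTANCE_PUBLIC_IP opc@PRIVATE_IP
```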