- Learn how to expose a deployment using Ingress
- Learn how to use `deployments`
- Learn how to scale deployments
- Learn how to use `services` to do load balancing between the pods of a scaled deployment

## Introduction

In this exercise you will learn how to deploy an application using a `deployment`, how to
scale it, and how to expose it using an `Ingress` resource with URL routing.

## Deployments

Deployments are a higher-level abstraction than pods, and control the lifecycle
and configuration of a "deployment" of an application. They are used to manage a
set of pods, and to ensure that a specified number of pods are always running
with the desired configuration. `deployments` are a Kubernetes `kind`, and are
declared in a manifest file like any other Kubernetes resource.
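
A minimal deployment manifest for the backend of the quotes application might look like the
following sketch (the image and port here are illustrative assumptions; the exercise files
contain the authoritative version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    run: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: backend          # ties the deployment to pods carrying this label
  template:                 # pod template: what each replica will look like
    metadata:
      labels:
        run: backend
    spec:
      containers:
        - name: backend
          image: ghcr.io/eficode-academy/quotes-flask-backend:release  # assumed image
          ports:
            - containerPort: 5000  # assumed port
```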

### High Availability

To make our application stable, we want **high availability**. High availability means that
we replicate our applications, such that we have **redundant copies**, which means that _when_ an
application fails, our users are not impacted, as they will simply use one of the other copies,
while the failed instance recovers.

In Kubernetes, this is done in practice by `scaling` a deployment, e.g., by adding or removing
`replicas`. `replicas` are **identical** copies of the **same pod**.

To scale a deployment, we change the number of `replicas` in the manifest file, and then apply the
changes using `kubectl apply -f <manifest-file>`.
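
For example, going from one replica to three is a one-line change in the manifest (a sketch,
assuming the deployment manifest is `backend-deployment.yaml`):

```yaml
spec:
  replicas: 3   # was 1; the deployment now keeps three identical pods running
```

followed by `kubectl apply -f backend-deployment.yaml`.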

- Scale the deployment by adding a replicas key
- Turn frontend pod manifests into a deployment manifest
- Apply the frontend deployment manifest
- Test the service promise of high availability

> :bulb: If you get stuck somewhere along the way, you can check the solution in the **done** directory.

### Step-by-step instructions

<details>
<summary>
Step by step:
</summary>

- Go into the `deployments-ingress/start` directory.

In the directory, we have the pod manifests for the backend and frontend that were created in the
previous exercises. We also have two services, one for the backend (type ClusterIP) and one for the
frontend (type NodePort), as well as an ingress manifest for the frontend.

#### Add Ingress to frontend service
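
For orientation, an Ingress that routes a hostname to the frontend service might look like this
sketch (the service name, port, and hostname placeholders are assumptions, following the URL
format used later in this exercise):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
    - host: quotes-<yourname>.<prefix>.eficode.academy  # replace the placeholders
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend   # assumed: the frontend NodePort service
                port:
                  number: 80     # assumed service port
```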

#### Deploy the quotes application

To show how Deployments take the place of Pod manifests, we will first deploy the quotes application,
and then slowly replace the Pods with Deployments.

- Deploy the following using the `kubectl apply -f` command
  - `frontend-pod.yaml`

<details>
<summary>
How do I connect to a pod through a NodePort service?
</summary>

> :bulb: In previous exercises, you learned how to connect to a pod exposed through a NodePort service:
> you need to find the nodePort using `kubectl get service` and the IP address of one of the nodes
> using `kubectl get nodes -o wide`.
> Then combine the node IP address and nodePort with a colon between them, in a browser or using curl:
>
> `http://<node-ip>:<nodePort>`

</details>

#### Turn the backend pod manifests into a deployment manifest

Now we'll replace the Pod manifest with a Deployment manifest. In addition to the regular
manifest fields, we need to set a couple of extra things:

1. A Pod `template`. This is almost a copy of most of the Pod manifest. This defines the Pods
   that will be created by the deployment.
2. A `selectorLabel`. This is used to determine which pods will be managed by the ReplicaSet.
3. A number of `replicas`. This specifies how many simultaneous identical copies we want of the Pod.

To do this, open both the `backend-deployment.yaml` and the `backend-pod.yaml` files in your editor,
and copy the pod manifest's `metadata` and `spec` contents into the deployment's `template` section.

Now we need to set the `selectorLabel`:

- Add a `selector` key under the `spec` key
- Add a `matchLabels` key under the `selector` key
- Look up the `metadata.labels` section in `backend-pod.yaml`
  and add the labels defined there under `matchLabels` (see the sketch below)
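
The result might look like this sketch (assuming the pod's label is `run: backend`):

```yaml
spec:
  selector:
    matchLabels:
      run: backend   # must match the labels under the pod template's metadata.labels
```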

- Access the frontend again from the browser.

Now the Ingress should work, and you should be able to access the frontend from the browser using
the hostname you specified in the ingress manifest.

The URL should look something like this:

```text
http://quotes-<yourname>.<prefix>.eficode.academy
```

- If it still does not work, you can check it through the NodePort service instead.

- You should now see the backend.

- If this works, please delete the `backend-pod.yaml` file, as we have now upgraded to a deployment
  and no longer need it!

#### Scale the deployment by adding a replicas key
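
In outline: raise `spec.replicas` in `backend-deployment.yaml` (for example to `3`), apply the
file again, and watch the extra pods appear. A sketch of the commands:

```shell
kubectl apply -f backend-deployment.yaml
kubectl get pods --watch   # extra backend pods should appear shortly
```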

#### Turn frontend pod manifests into a deployment manifest

You will now do the same thing for the frontend. We will walk you through it again, but at a
higher level. If you get stuck, you can go back and double-check how you did it for the backend.

- Open both the `frontend-deployment.yaml` and the `frontend-pod.yaml` files in your editor.
- Add the `apiVersion` and `kind` keys to the `frontend-deployment.yaml` file.
- Give the deployment a name of `frontend` under the `metadata.name` key.
- Add a label of `run: frontend` under the `metadata.labels` key.
- Set `spec.replicas` to `3`.
- Copy the `metadata` and `spec` contents of the `frontend-pod.yaml` file into the
  deployment's `template` section (see the sketch below).
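
Putting the pieces together, the resulting skeleton might look like this sketch (the image and
port are assumptions, as with the backend):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    run: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      run: frontend
  template:
    metadata:
      labels:
        run: frontend
    spec:
      containers:
        - name: frontend
          image: ghcr.io/eficode-academy/quotes-flask-frontend:release  # assumed image
          ports:
            - containerPort: 5000  # assumed port
```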

- Access the frontend again from the browser.
  Note that both the frontend and backend hostname parts of the website should change periodically.

- If this works, please delete the `frontend-pod.yaml` file, as we have now upgraded to a deployment
  and no longer need it!

</details>

## Extra Exercise

Test the Kubernetes promise of resiliency and high availability

<details>
<summary>
An example of using a LoadBalancer service to route traffic to replicated pods
</summary>

We can use the `ghcr.io/eficode-academy/network-multitool` image to illustrate both high
availability and load balancing of `services`. The `network-multitool` pod will serve a tiny
webpage that dynamically contains the pod hostname and IP address of the pod. This enables us to see
which of a group of network-multitool pods served the request.

Create the network-multitool deployment:

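A sketch that matches the deployment name and replica count used in the rest of this example
(`--replicas` is a standard `kubectl create deployment` flag):

```shell
kubectl create deployment customnginx --image ghcr.io/eficode-academy/network-multitool --replicas 4
```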

We also create a service of type `LoadBalancer`:

```shell
kubectl expose deployment customnginx --port 80 --type LoadBalancer
```

> :bulb: It might take a minute to provision the LoadBalancer. If you are using AWS, then
> `kubectl get services` will show you the DNS name of the provisioned LoadBalancer immediately, but
> it will be a moment before it is ready.

When the LoadBalancer is ready, we set up a loop to keep sending requests to the pods:

```shell
while true; do curl --connect-timeout 1 -m 1 -s <loadbalancerIP> ; sleep 0.5; done
```

The output looks similar to this, cycling between the pods:

```text
Eficode Academy Network MultiTool (with NGINX) - customnginx-7fcfd947cf-zbvtd - 100.96.2.36 <BR></p>
```

We see that when we query the LoadBalancer IP, it is giving us results/content from all four pods.
None of the curl commands times out.
Now, if we kill three out of four pods, the service should still respond, without timing out.
We let the loop run in a separate terminal, and kill three pods of this deployment from another terminal.

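For example (the pod names are placeholders; pick three names from your own `kubectl get pods` output):

```shell
kubectl delete pod customnginx-7fcfd947cf-4njmd customnginx-7fcfd947cf-tbmvw customnginx-7fcfd947cf-zbvtd
```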

Meanwhile, the curl loop keeps responding without interruption, and new pod IPs start to appear:

```text
Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-5xbjc - 10.244.0.22
```

We notice that no curl commands failed, and actually, we have started seeing new IPs.

Why is that? It is because, as soon as the pods are deleted, the deployment sees that its desired
state is four pods, and there is only one running, so it immediately starts three more to reach the
desired state of four pods. And, while the pods are in the process of starting, one surviving pod serves
all of the traffic, preventing our application from missing any requests.

```shell
kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
customnginx-3557040084-fw1t3   1/1     Running   0          16m
customnginx-3557040084-xqk1n   1/1     Running   0          15s
```

This proves that Kubernetes enables high availability by using multiple replicas of a pod, and
load balancing between them.

Remember to clean up the deployment afterwards with:

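```shell
# a sketch: assumes the service created by `kubectl expose` shares the deployment's name
kubectl delete deployment customnginx
kubectl delete service customnginx
```

</details>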