Commit fc4599f

Language and spelling fixes with Grammarly
1 parent a0c2785 commit fc4599f

10 files changed, +206 -208 lines changed

accessing-your-application.md

Lines changed: 3 additions & 3 deletions
@@ -32,14 +32,14 @@ You can then access the pod on `localhost:8080`.
 <details>
 <summary>:bulb: How does this port-forward work?</summary>

-Port forwarding is a network address translation that redirects Internet packets from one IP address to another.
-with a specified port number to another `IP:PORT` set.
+Port forwarding is a network address translation that redirects Internet packets from one IP address
+to another with a specified port number to another `IP:PORT` set.

 In Kubernetes `port-forward` creates a tunnel between your local machine and the Kubernetes cluster on
 the specified `IP:PORT` pairs to establish a connection to the cluster.
 `kubectl port-forward` allows you to forward not only pods but also services, deployments, and others.

-More information can be found from [port-forward-access-application-cluste](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)
+More information can be found from [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)

 </details>

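Below is a minimal sketch of what this looks like in practice; the resource names and ports are illustrative assumptions, not taken from the exercise files:

```shell
# Forward local port 8080 to port 80 inside a single pod
kubectl port-forward pod/frontend 8080:80

# The same kind of tunnel, but targeting a service or a deployment instead of one pod
kubectl port-forward svc/frontend 8080:80
kubectl port-forward deploy/frontend 8080:80
```

While the command runs, requests to `localhost:8080` are tunnelled through the API server to the selected pod; stopping the command closes the tunnel.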
configmaps-secrets.md

Lines changed: 25 additions & 25 deletions
@@ -8,13 +8,13 @@
 ## Introduction

 Configmaps and secrets are a way to store information that is used by several deployments and pods
-in your cluster. This makes it easy to update the configuration in one place, when you want to
+in your cluster. This makes it easy to update the configuration in one place when you want to
 change it.

 Both configmaps and secrets are generic `key-value` pairs, but secrets are `base64 encoded` and
 configmaps are not.

-> :bulb: Secrets are not encrypted, they are encoded. This means that if someone gets access to the
+> :bulb: Secrets are not encrypted; they are encoded. This means that if someone gets access to the
 > cluster, they will be able to read the values.

 ## ConfigMaps
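The :bulb: note above (encoded, not encrypted) is easy to demonstrate with the `base64` tool; the value here is made up for illustration:

```shell
# Encode a value the way it would be stored in a secret
echo -n 'superSecretPassword' | base64
# c3VwZXJTZWNyZXRQYXNzd29yZA==

# Decode it again - no key or password is required
echo -n 'c3VwZXJTZWNyZXRQYXNzd29yZA==' | base64 --decode
# superSecretPassword
```

Anyone who can read the secret object can run the second command, which is why secrets should be treated as metadata for humans rather than as real protection.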
@@ -23,10 +23,10 @@ You use a ConfigMap to keep your application code separate from your configurati

 It is an important part of creating a [Twelve-Factor Application](https://12factor.net/).

-This lets you change easily configuration depending on the environment (development, production,
+This lets you change configuration easily depending on the environment (development, production,
 testing, etc.) and to dynamically change configuration at runtime.

-A ConfigMap manifest looks like this in yaml:
+A ConfigMap manifest looks like this in YAML:

 ```yaml
 apiVersion: v1
@@ -101,7 +101,7 @@ data:

 `secrets` are used for storing configuration that is considered sensitive, and well ... _secret_.

-When you create a `secret` Kubernetes will go out of it's way to not print the actual values of
+When you create a `secret`, Kubernetes will go out of its way to not print the actual values of
 secret object, to things like logs or command output.

 You should use `secrets` to store things like passwords for databases, API keys, certificates, etc.
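As a quick illustration of that behaviour (the secret name and value below are made up, not part of the exercise files):

```shell
# Create a secret imperatively from a literal value
kubectl create secret generic demo-secret --from-literal=DB_PASSWORD=superSecretPassword

# describe lists the keys and the size of each value, but not the values themselves
kubectl describe secret demo-secret

# The value is still retrievable by anyone with read access to the secret
kubectl get secret demo-secret -o jsonpath='{.data.DB_PASSWORD}' | base64 --decode
```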
@@ -113,19 +113,19 @@ source these values from environment variables.
 actual values are `base64` encoded. `base64` encoded means that the values are obscured, but can be
 trivially decoded. When values from a `secret` are used, Kubernetes handles the decoding for you.

-> :bulb: As `secrets` don't actually make their data secret for anyone with access to the cluster,
+> :bulb: As `secrets` don't make their data secret for anyone with access to the cluster,
 > you should think of `secrets` as metadata for humans, to know that the data contained within is
 > considered secret.

 ## Using ConfigMaps and Secrets in a deployment

-To use a configmap or secret in a deployment, you can either mount it in as a volume, or use it
+To use a configmap or secret in a deployment, you can either mount it as a volume or use it
 directly as an environment variable.

 ### Injecting a ConfigMap as environment variables

 This will inject all key-value pairs from a configmap as environment variables in a container.
-The keys will be the name of variables, and the values will be values of the variables.
+The keys will be the names of variables, and the values will be the values of the variables.

 ```yaml
 apiVersion: apps/v1
@@ -151,10 +151,10 @@ spec:

 - Add the database part of the application
 - Change the database user into a configmap and implement that in the backend
-- Change the database password into a secret, and implement that in the backend.
+- Change the database password to a secret, and implement that in the backend.
 - Change database deployment to use the configmap and secret.

-### Step by step instructions
+### Step-by-step instructions

 <details>
 <summary>
@@ -168,7 +168,7 @@ Step by step:
 We have already created the database part of the application, with a deployment and a service.

 - Look at the database deployment file `postgres-deployment.yaml`.
-Notice the database username and password are injected as hardcoded environment variables.
+Notice that the database username and password are injected as hardcoded environment variables.

 > :bulb: This is not a good practice, as we do not want to store these values in version control.
 > We will fix this in the next steps.
@@ -194,10 +194,10 @@ postgres-6fbd757dd7-ttpqj 1/1 Running 0 4s
 #### Refactor the database user into a configmap and implement that in the backend

 We want to change the database user into a configmap, so that we can change it in one place, and
-use it on all deployments that needs it.
+use it on all deployments that need it.

 - Create a configmap with the name `postgres-config` and filename `postgres-config.yaml` and the
-information about database configuration as follows:
+information about the database configuration as follows:

 ```yaml
 data:
@@ -238,7 +238,7 @@ data:

 </details>

-- apply the configmap with `kubectl apply -f postgres-config.yaml`
+- Apply the configmap with `kubectl apply -f postgres-config.yaml`
 - In the `backend-deployment.yaml`, change the environment variables to use the configmap instead
 of the hardcoded values.

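If you want to confirm what actually landed in the cluster before wiring it into the backend, something like this should work (assuming the configmap is named `postgres-config` as described above):

```shell
# Create or update the configmap from the manifest
kubectl apply -f postgres-config.yaml

# Print the stored key-value pairs to verify them
kubectl get configmap postgres-config -o yaml
```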
@@ -266,14 +266,14 @@ data:
 name: postgres-config
 ```

-- re-apply the backend deployment with `kubectl apply -f backend-deployment.yaml`
-- check that the website is still running.
+- Re-apply the backend deployment with `kubectl apply -f backend-deployment.yaml`
+- Check that the website is still running.

 #### Change the database password into a secret, and implement that in the backend

-We want to change the database password into a secret, so that we can change it in one place, and
-use it on all deployments that needs it.
-In order for this, we need to change the backend deployment to use the secret instead of the
+We want to change the database password to a secret, so that we can change it in one place, and
+use it on all deployments that need it.
+For this, we need to change the backend deployment to use the secret instead of the
 configmap for the password itself.

 - create a secret with the name `postgres-secret` and the following data:
@@ -340,12 +340,12 @@ envFrom:

 We are going to implement the configmap and secret in the database deployment as well.

-The standard Postgres docker image can be configured by setting specific environment variables,
+The standard Postgres Docker image can be configured by setting specific environment variables,
 ([you can see the documentation here](https://hub.docker.com/_/postgres)).
-By populating these specific values we can configure the credentials for root user and the name of
-the database to be created.
+By populating these specific values, we can configure the credentials for the root user and the name
+of the database to be created.

-This means that we need to change the way we are injecting the environment variables, in order to
+This means that we need to change the way we are injecting the environment variables, to
 make sure the environment variables have the correct names.

 - Open the `postgres-deployment.yaml` file, and change the way the environment variables are
@@ -371,8 +371,8 @@ env:
 key: DB_PASSWORD
 ```

-- re-apply the database deployment with `kubectl apply -f postgres-deployment.yaml`
-- check that the website is still running, and that the new database can be reached from the backend.
+- Re-apply the database deployment with `kubectl apply -f postgres-deployment.yaml`
+- Check that the website is still running, and that the new database can be reached from the backend.

 Congratulations! You have now created a configmap and a secret, and used them in your application.

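A quick way to sanity-check the final wiring is to print the environment seen inside a running backend container; the deployment name and the `DB_` variable prefix are assumptions based on the exercise text:

```shell
# List the database-related environment variables inside the backend container
kubectl exec deploy/backend -- env | grep DB_

# Env-injected configmap/secret values are only read at container start,
# so restart the rollout after changing them
kubectl rollout restart deployment backend
```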
deployments-ingress.md

Lines changed: 37 additions & 39 deletions
@@ -5,7 +5,7 @@
 - Learn how to expose a deployment using Ingress
 - Learn how to use `deployments`
 - Learn how to scale deployments
-- Learn how to use `services` to do loadbalance between the pods of a scaled deployment
+- Learn how to use `services` to do load balancing between the pods of a scaled deployment

 ## Introduction

@@ -14,7 +14,7 @@ scale it, and how to expose it using an `Ingress` resource with URL routing.

 ## Deployments

-Deployments are a higher level abstraction than pods, and controls the lifecycle
+Deployments are a higher-level abstraction than pods, and control the lifecycle
 and configuration of a "deployment" of an application. They are used to manage a
 set of pods, and to ensure that a specified number of pods are always running
 with the desired configuration. `deployments` are a Kubernetes `kind` and
@@ -71,13 +71,13 @@ spec:

 ### High Availability

-In order to make our application stable, we want **high availability**. High availability means that
+To make our application stable, we want **high availability**. High availability means that
 we replicate our applications, such that we have **redundant copies**, which means that _when_ an
 application fails, our users are not impacted, as they will simply use one of the other copies,
 while the failed instance recovers.

-In Kubernetes this is done in practice by `scaling` a deployment, e.g. by adding or removing
-`replicas`. `replicas` are **identical** copies of the **the same pod**.
+In Kubernetes, this is done in practice by `scaling` a deployment, e.g., by adding or removing
+`replicas`. `replicas` are **identical** copies of the **same pod**.

 To scale a deployment, we change the number of `replicas` in the manifest file, and then apply the
 changes using `kubectl apply -f <manifest-file>`.
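In practice that workflow looks roughly like this; the deployment name and replica count are examples matching the exercise, not prescribed values:

```shell
# Declarative: edit spec.replicas in the manifest, then re-apply it
kubectl apply -f backend-deployment.yaml

# Imperative alternative for a quick experiment (the manifest remains the source of truth)
kubectl scale deployment backend --replicas=3

# Watch the additional pods come up
kubectl get pods --watch
```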
@@ -135,11 +135,11 @@ ingress-controller called ALB, and the ALB will route traffic to the service.
 - Scale the deployment by adding a replicas key
 - Turn frontend pod manifests into a deployment manifest
 - Apply the frontend deployment manifest
-- Test service promise of high availability
+- Test the service promise of high availability

-> :bulb: If you get stuck somewhere along the way, you can check the solution inthe done directory.
+> :bulb: If you get stuck somewhere along the way, you can check the solution in the **done** directory.

-### Step by step instructions
+### Step-by-step instructions

 <details>
 <summary>
@@ -148,9 +148,9 @@ Step by step:

 - Go into the `deployments-ingress/start` directory.

-In the directory we have the pod manifests for the backend and frontend that have created in the
+In the directory, we have the pod manifests for the backend and frontend that were created in the
 previous exercises. We also have two services, one for the backend (type ClusterIP) and one for the
-frontend (type NodePort) as well as an ingress manifest for the frontend.
+frontend (type NodePort), as well as an ingress manifest for the frontend.

 #### Add Ingress to frontend service

@@ -192,9 +192,8 @@ It will take a while for the ingress to work, so we will continue with the backe

 #### Deploy the quotes application

-To show how Deployments take the place of Pod manifests we will
-first deploy the quotes application, and then slowly replace the
-Pods with Deployments.
+To show how Deployments take the place of Pod manifests, we will first deploy the quotes application,
+and then slowly replace the Pods with Deployments.

 - Deploy the following using the `kubectl apply -f` command
 - `frontend-pod.yaml`
@@ -210,7 +209,7 @@ Pods with Deployments.
 How do I connect to a pod through a NodePort service?
 </summary>

-> :bulb: In previous exercises you learned how connect to a pod exposed through a NodePort service,
+> :bulb: In previous exercises, you learned how to connect to a pod exposed through a NodePort service,
 > you need to find the nodePort using `kubectl get service` and the IP address of one of the nodes
 > using `kubectl get nodes -o wide`
 > Then combine the node IP address and nodePort with a colon between them, in a browser or using curl:
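A sketch of that lookup with plain kubectl and curl; the service name `frontend` and the addresses are assumptions for illustration:

```shell
# Find the nodePort assigned to the frontend service
kubectl get service frontend

# Find the IP address of one of the nodes
kubectl get nodes -o wide

# Combine the two, e.g. a node IP of 203.0.113.10 and a nodePort of 31234
curl http://203.0.113.10:31234
```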
@@ -224,12 +223,11 @@ http://<node-ip>:<nodePort>
 #### Turn the backend pod manifests into a deployment manifest

 Now we'll replace the Pod manifest with a Deployment manifest. In addition to the regular
-manifest fields, we need to set couple of extra things:
+manifest fields, we need to set a couple of extra things:

-1. A Pod `template`. This almost a copy of most of the Pod manifest. This defines the Pods
+1. A Pod `template`. This is almost a copy of most of the Pod manifest. This defines the Pods
 that will be created by the deployment.
-2. A `selectorLabel`. This is used to determine which pods to manage will be managed by the
-ReplicaSet.
+2. A `selectorLabel`. This is used to determine which pods will be managed by the ReplicaSet.
 3. A number of `replicas`. This specifies how many simultaneous identical copies we want of the Pod

 To do this, open both the `backend-deployment.yaml` and the `backend-pod.yaml` files in your editor,
@@ -243,7 +241,7 @@ Now we need to set the `selectorLabel`

 - Add a `selector` key under the `spec` key
 - Add a `matchLabels` key under the `selector` key
-- Look up the the `metadata.label` section in `backend-pod.yaml`
+- Look up the `metadata.label` section in `backend-pod.yaml`
 and the labels defined there under `matchLabels`

 <details>
@@ -353,20 +351,20 @@ Deployment. We've already deployed the Pod, so let us remove it again before app

 - Access the frontend again from the browser.

-Now the Ingress should work and you should be able to access the frontend from the browser using
+Now the Ingress should work, and you should be able to access the frontend from the browser using
 the hostname you specified in the ingress manifest.

-The url should look something like this:
+The URL should look something like this:

 ```text
 http://quotes-<yourname>.<prefix>.eficode.academy
 ```

-- If it still does not work, you can check it through NodePort service instead.
+- If it still does not work, you can check it through the NodePort service instead.

 - You should now see the backend.

-- If this works, please delete the `backend-pod.yaml` file, as we now have upgraded to a deployment
+- If this works, please delete the `backend-pod.yaml` file, as we have now upgraded to a deployment
 and no longer need it!

 #### Scale the deployment by adding a replicas key
@@ -424,12 +422,12 @@ backend-5f4b8b7b4-7x7xg 1/1 Running 0 1m

 #### Turn frontend pod manifests into a deployment manifest

-You will now do the exact same thing for the frontend, we will walk you through it again, but at a
-higher level, if get stuck you can go back and double check how you did it for the backend.
+You will now do the same thing for the frontend. We will walk you through it again, but at a
+higher level. If you get stuck, you can go back and double-check how you did it for the backend.

 - Open both the `frontend-deployment.yaml` and the `frontend-pod.yaml` files in your editor.
-- add the api-version and kind keys to the `frontend-deployment.yaml` file.
-- Give the deployment a name of `frontend` under `metadata.name` key.
+- Add the api-version and kind keys to the `frontend-deployment.yaml` file.
+- Give the deployment a name of `frontend` under the `metadata.name` key.
 - Add a label of `run: frontend` under `metadata.labels` key.
 - Set `spec.replicas` to `3`.
 - Copy the `metadata` and `spec` contents of the `frontend-pod.yaml` file into the
@@ -499,7 +497,7 @@ frontend-47b45fb8b-4x7xg 1/1 Running 0 1m
 - Access the frontend again from the browser.
 Note that both the frontend and backend hostname parts of the website should change periodically.

-- If this works, please delete the `frontend-pod.yaml` file, as we now have upgraded to a deployment
+- If this works, please delete the `frontend-pod.yaml` file, as we have now upgraded to a deployment
 and no longer need it!

 </details>
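To confirm the switch from single pods to deployments worked, you can check the replica count and the labelled pods, using the `run: frontend` label and the replica count of 3 from the steps above:

```shell
# The deployment should report 3/3 ready replicas
kubectl get deployment frontend

# All three pods should carry the label the selector matches on
kubectl get pods -l run=frontend
```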
@@ -531,7 +529,7 @@ kubectl delete -f frontend-ingress.yaml

 ## Extra Exercise

-Test Kubernetes promise of resiliency and high availability
+Test the Kubernetes promise of resiliency and high availability

 <details>
 <summary>
@@ -541,7 +539,7 @@ An example of using a LoadBalancer service to route traffic to replicated pods
 We can use the `ghcr.io/eficode-academy/network-multitool` image to illustrate both high
 availability and load balancing of `services`. The `network-multitool` pod will serve a tiny
 webpage that dynamically contains the pod hostname and IP address of the pod. This enables us to see
-which of a group of network-multitool pods that served the request.
+which of a group of network-multitool pods served the request.

 Create the network-multitool deployment:

@@ -558,11 +556,11 @@ We also create a service of type `LoadBalancer`:
 kubectl expose deployment customnginx --port 80 --type LoadBalancer
 ```

-> :bulb: It might take a minute to provision the LoadBalancer, if you are using AWS, then
+> :bulb: It might take a minute to provision the LoadBalancer. If you are using AWS, then
 > `kubectl get services` will show you the DNS name of the provisioned LoadBalancer immediately, but
 > it will be a moment before it is ready.

-When the LoadBalancer is ready we setup a loop to keep sending requests to the pods:
+When the LoadBalancer is ready, we set up a loop to keep sending requests to the pods:

 ```shell
 while true; do curl --connect-timeout 1 -m 1 -s <loadbalancerIP> ; sleep 0.5; done
@@ -578,8 +576,8 @@ Eficode Academy Network MultiTool (with NGINX) - customnginx-7fcfd947cf-zbvtd -
 Eficode Academy Network MultiTool (with NGINX) - customnginx-7fcfd947cf-zbvtd - 100.96.2.36 <BR></p>
 ```

-We see that when we query the LoadBalancer IP, it is giving us result/content from all four pods.
-None of the curl commands time out.
+We see that when we query the LoadBalancer IP, it is giving us results/content from all four pods.
+None of the curl commands times out.
 Now, if we kill three out of four pods, the service should still respond, without timing out.
 We let the loop run in a separate terminal, and kill three pods of this deployment from another terminal.

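One way to do that from the second terminal (the exact pod names will differ in your cluster):

```shell
# List the pods of the deployment, then delete three of them by name
kubectl get pods
kubectl delete pod <pod-name-1> <pod-name-2> <pod-name-3>
```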
@@ -613,11 +611,11 @@ Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-h2dbg -
 Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-5xbjc - 10.244.0.22
 ```

-We notice that no curl commands failed, and actually we have started seeing new IPs.
+We notice that no curl commands failed, and actually, we have started seeing new IPs.

-Why is that? It is because, as soon as the pods are deleted, the deployment sees that it's desired
+Why is that? It is because, as soon as the pods are deleted, the deployment sees that its desired
 state is four pods, and there is only one running, so it immediately starts three more to reach the
-desired state of four pods. And, while the pods are in process of starting, one surviving pod serves
+desired state of four pods. And, while the pods are in the process of starting, one surviving pod serves
 all of the traffic, preventing our application from missing any requests.

 ```shell
@@ -637,8 +635,8 @@ customnginx-3557040084-fw1t3 1/1 Running 0 16m
 customnginx-3557040084-xqk1n 1/1 Running 0 15s
 ```

-This proves, Kubernetes enables high availability, by using multiple replicas of a pod, and
-loadbalancing between them.
+This proves, Kubernetes enables high availability by using multiple replicas of a pod, and
+load-balancing between them.

 Remember to clean up the deployment afterwards with:
