diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml new file mode 100644 index 0000000000..aecdf963b3 --- /dev/null +++ b/.github/workflows/stale.yml @@ -0,0 +1,27 @@ +# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time. +# +# You can adjust the behavior by modifying this file. +# For more information, see: +# https://github.com/actions/stale +name: Mark stale issues and pull requests + +on: + schedule: + - cron: '20 7 * * *' + +jobs: + stale: + + runs-on: ubuntu-latest + permissions: + issues: write + pull-requests: write + + steps: + - uses: actions/stale@v5 + with: + repo-token: ${{ secrets.GITHUB_TOKEN }} + stale-issue-message: 'Stale issue message' + stale-pr-message: 'Stale pull request message' + stale-issue-label: 'no-issue-activity' + stale-pr-label: 'no-pr-activity' diff --git a/2023/day01/tasks.md b/2023/day01/README.md similarity index 69% rename from 2023/day01/tasks.md rename to 2023/day01/README.md index ade0bae42f..362e4be12e 100644 --- a/2023/day01/tasks.md +++ b/2023/day01/README.md @@ -5,7 +5,8 @@ This is the day you have to Take this challenge and start your #90DaysOfDevOps w - Fork this Repo. - Start with a DevOps Roadmap[https://youtu.be/iOE9NTAG35g] - Write a LinkedIn post or a small article about your understanding of DevOps - - What is DevOps - - What is Automation, Scaling, Infrastructure - - Why DevOps is Important, etc - \ No newline at end of file +- What is DevOps +- What is Automation, Scaling, Infrastructure +- Why DevOps is Important, etc + +[Next Day →](../day02/README.md) diff --git a/2023/day02/tasks.md b/2023/day02/README.md similarity index 73% rename from 2023/day02/tasks.md rename to 2023/day02/README.md index 16aaa1d804..342ac39c32 100644 --- a/2023/day02/tasks.md +++ b/2023/day02/README.md @@ -1,6 +1,7 @@ Day 2 Task: Basics linux command -Task: What is the linux command to +Task: What is the linux command to + 1. Check your present working directory. 2. List all the files or directories including hidden files. 3. Create a nested directory A/B/C/D/E @@ -8,3 +9,5 @@ Task: What is the linux command to Note: [Check this file for reference](basic_linux_commands.md) Check the basic_linux_commands.md file on the same directory day2 + +[← Previous Day](../day01/README.md) | [Next Day →](../day03/README.md) diff --git a/2023/day02/basic_linux_commands.md b/2023/day02/basic_linux_commands.md index e260f0a5d8..24bb7fe1e3 100644 --- a/2023/day02/basic_linux_commands.md +++ b/2023/day02/basic_linux_commands.md @@ -21,7 +21,7 @@ Examples: - ``` cd - ``` --> Go to the last working directory. -- ``` cd ..``` --> chnage directory to one step back. +- ``` cd ..``` --> change directory to one step back. - ``` cd ../..``` --> Change directory to 2 levels back. diff --git a/2023/day03/tasks.md b/2023/day03/README.md similarity index 91% rename from 2023/day03/tasks.md rename to 2023/day03/README.md index 102c78a6b9..c3d1d17563 100644 --- a/2023/day03/tasks.md +++ b/2023/day03/README.md @@ -14,5 +14,6 @@ Task: What is the linux command to 10. Add content in Colors.txt (One in each line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey. 11. To find the difference between fruits.txt and Colors.txt file. 
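As a quick sketch, tasks 10 and 11 above might look like this in the shell (assuming fruits.txt was already created in the earlier steps of this list):

```bash
# Task 10: add the colours to Colors.txt, one per line
printf "Red\nPink\nWhite\nBlack\nBlue\nOrange\nPurple\nGrey\n" > Colors.txt

# Task 11: show the difference between the two files
diff fruits.txt Colors.txt
```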
- Reference: https://www.linkedin.com/pulse/linux-commands-devops-used-day-to-day-activit-chetan-/ + +[← Previous Day](../day02/README.md) | [Next Day →](../day04/README.md) diff --git a/2023/day04/README.md b/2023/day04/README.md new file mode 100644 index 0000000000..2ffe27d9a9 --- /dev/null +++ b/2023/day04/README.md @@ -0,0 +1,31 @@ +# Day 4 Task: Basic Linux Shell Scripting for DevOps Engineers. + +## What is Kernel + +The kernel is a computer program that is the core of a computer’s operating system, with complete control over everything in the system. + +## What is Shell + +A shell is a special user program that provides an interface for the user to use operating system services. The shell accepts human-readable commands from the user and converts them into something the kernel can understand. It is a command language interpreter that executes commands read from input devices such as keyboards or from files. The shell gets started when the user logs in or starts the terminal. + +## What is Linux Shell Scripting? + +A shell script is a computer program designed to be run by a Linux shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text. + +**Tasks** + +- Explain, in your own words and with examples, what Shell Scripting is for DevOps. +- What is `#!/bin/bash`? Can we write `#!/bin/sh` as well? +- Write a Shell Script which prints `I will complete #90DaysOfDevOps challenge` +- Write a Shell Script to take user input, input from arguments, and print the variables. +- Write an example of if-else in Shell Scripting by comparing 2 numbers + +Was it difficult? + +- Post about it on LinkedIn and Let me know :) + +Article Reference: [Click here to read basic Linux Shell Scripting](https://devopscube.com/linux-shell-scripting-for-devops/) + +YouTube Video: [EASIEST Shell Scripting Tutorial for DevOps Engineers](https://www.youtube.com/watch?v=_-D6gkRj7xc&list=PLlfy9GnSVerQr-Se9JRE_tZJk3OUoHCkh&index=3) + +[← Previous Day](../day03/README.md) | [Next Day →](../day05/README.md) diff --git a/2023/day04/tasks.md b/2023/day04/tasks.md deleted file mode 100644 index a6722128c9..0000000000 --- a/2023/day04/tasks.md +++ /dev/null @@ -1,29 +0,0 @@ -# Day 4 Task: Basic Linux Shell Scripting for DevOps Engineers. - - ## What is Kernel - - The kernel is a computer program that is the core of a computer’s operating system, with complete control over everything in the system. - - ## What is Shell - - A shell is special user program which provide an interface to user to use operating system services. Shell accept human readable commands from user and convert them into something which kernel can understand. It is a command language interpreter that execute commands read from input devices such as keyboards or from files. The shell gets started when the user logs in or start the terminal. - - ## What is Linux Shell Scripting? - - A shell script is a computer program designed to be run by a linux shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text. - - **Tasks** - - - Explain in your own words and examples, what is Shell Scripting for DevOps. - - What is `#!/bin/bash?` can we write `#!/bin/sh` as well?
- - Write a Shell Script which prints `I will complete #90DaysOofDevOps challenge` - - Write a Shell Script to take user input, input from arguments and print the variables. - - Write an Example of If else in Shell Scripting by comparing 2 numbers - - Was it difficult? - - - Post about it on LinkedIn and Let me know :) - - Article Reference: [Click here to read basic Linux Shell Scripting](https://devopscube.com/linux-shell-scripting-for-devops/) - - YouTube Vedio: [EASIEST Shell Scripting Tutorial for DevOps Engineers](https://www.youtube.com/watch?v=_-D6gkRj7xc&list=PLlfy9GnSVerQr-Se9JRE_tZJk3OUoHCkh&index=3) \ No newline at end of file diff --git a/2023/day05/README.md b/2023/day05/README.md new file mode 100644 index 0000000000..d894468fd3 --- /dev/null +++ b/2023/day05/README.md @@ -0,0 +1,53 @@ +# Day 5 Task: Advanced Linux Shell Scripting for DevOps Engineers with User management + +You may have noticed that there are a total of 90 sub-directories in the directory '2023' of this repository. How do you think I created 90 directories? Manually, one by one, using a script, or with a command? + +All 90 directories were created within seconds using a simple command. + +` mkdir day{1..90}` + +### Tasks + +1. You have to do the same using a shell script, i.e. using either loops or a command, with start day and end day variables passed as arguments - + +So write a bash script createDirectories.sh such that, when the script is executed with three given arguments (the first is the directory name, the second is the start number of directories, and the third is the end number of directories), it creates the specified number of directories with a dynamic directory name. A sample implementation is sketched further below. + +Example 1: When the script is executed as + +`./createDirectories.sh day 1 90` + +then it creates 90 directories as `day1 day2 day3 .... day90` + +Example 2: When the script is executed as + +`./createDirectories.sh Movie 20 50` +then it creates 31 directories as `Movie20 Movie21 Movie22 ... Movie50` + +Notes: +You may need to use loops or commands (or both), based on your preference. [Check out this reference: https://www.geeksforgeeks.org/bash-scripting-for-loop/](https://www.geeksforgeeks.org/bash-scripting-for-loop/) + +2. Create a Script to backup all your work done till now. + +Backups are an important part of a DevOps Engineer's day-to-day activities. +The video in the References will help you understand how a DevOps Engineer takes backups (it can feel a bit difficult, but keep trying; nothing is impossible). +Watch [this video](https://youtu.be/aolKiws4Joc) + +In case of doubts, post them in the [Discord Channel for #90DaysOfDevOps](https://discord.gg/hs3Pmc5F) + +3. Read About Cron and Crontab, to automate the backup Script + +Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit or delete entries to cron. A crontab file is a user file that holds the scheduling information. + +Watch This video as a Reference to Task 2 and 3 [https://youtu.be/aolKiws4Joc](https://youtu.be/aolKiws4Joc) + +4. Read about User Management and Let me know on LinkedIn if you're ready for Day 6. + +A user is an entity, in a Linux operating system, that can manipulate files and perform several other operations. Each user is assigned an ID that is unique for each user in the operating system. In this post, we will learn about users and commands which are used to get information about the users.
After installation of the operating system, the ID 0 is assigned to the root user and the IDs 1 to 999 (both inclusive) are assigned to the system users; hence the IDs for local users begin from 1000 onwards. + +5. Create 2 users and just display their Usernames + +[Check out this reference: https://www.geeksforgeeks.org/user-management-in-linux/](https://www.geeksforgeeks.org/user-management-in-linux/) + +Post your daily work on LinkedIn and let [me](https://www.linkedin.com/in/shubhamlondhe1996/) know; writing an article is the best :) + +[← Previous Day](../day04/README.md) | [Next Day →](../day06/README.md) diff --git a/2023/day05/tasks.md b/2023/day05/tasks.md deleted file mode 100644 index 3f831b8b15..0000000000 --- a/2023/day05/tasks.md +++ /dev/null @@ -1,55 +0,0 @@ -# Day 5 Task: Advanced Linux Shell Scripting for DevOps Engineers with User management - -If you noticed that there are total 90 sub directories in the directory '2023' of this repository. What did you think, how did I create 90 directories. Manually one by one or using a script, or a command ? - -All 90 directories within seconds using a simple command. - -` mkdir day{1..90}` - -### Tasks -1) You have to do the same using Shell Script i.e using either Loops or command with start day and end day variables using arguments - - - So Write a bash script createDirectories.sh that when the script is executed with three given arguments (one is directory name and second is start number of directories and third is the end number of directories ) it creates specified number of directories with a dynamic directory name. - -Example 1: When the script is executed as - -```./createDirectories.sh day 1 90``` - -then it creates 90 directories as ```day1 day2 day3 .... day90``` - -Example 2: When the script is executed as - -```./createDirectories.sh Movie 20 50``` -then it creates 50 directories as ```Movie20 Movie21 Movie23 ...Movie50``` - -Notes: -You may need to use loops or commands (or both), based on your preference . [Check out this reference: https://www.geeksforgeeks.org/bash-scripting-for-loop/](https://www.geeksforgeeks.org/bash-scripting-for-loop/) - - - 2) Create a Script to backup all your work done till now. - - Backups are an important part of DevOps Engineers day to Day activities - The video in References will help you to understand How a DevOps Engineer takes backups (it can feel a bit difficult but keep trying, Nothing is impossible.) - Watch [this video](https://youtu.be/aolKiws4Joc) - - In case of Doubts, post it in [Discord Channel for #90DaysOfDevOps](https://discord.gg/hs3Pmc5F) - - - 3) Read About Cron and Crontab, to automate the backup Script - - Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit or delete entries to cron. A crontab file is a user file that holds the scheduling information. - - Watch This video as a Reference to Task 2 and 3 [https://youtu.be/aolKiws4Joc](https://youtu.be/aolKiws4Joc) - - - 4) Read about User Management and Let me know on Linkedin if you're ready for Day 6. - -A user is an entity, in a Linux operating system, that can manipulate files and perform several other operations. Each user is assigned an ID that is unique for each user in the operating system. In this post, we will learn about users and commands which are used to get information about the users.
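A possible implementation of the createDirectories.sh script described in Task 1 above (a sketch only; a C-style for loop or brace expansion would work equally well):

```bash
#!/bin/bash
# Usage: ./createDirectories.sh <name> <start> <end>
# Example: ./createDirectories.sh day 1 90   -> day1 day2 ... day90

name=$1
start=$2
end=$3

for i in $(seq "$start" "$end"); do
    mkdir -p "${name}${i}"
done
```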
- After installation of the operating system, the ID 0 is assigned to the root user and the IDs 1 to 999 (both inclusive) are assigned to the system users and hence the ids for local user begins from 1000 onwards. - - - 5) Create 2 users and just display their Usernames - -[Check out this reference: https://www.geeksforgeeks.org/user-management-in-linux/](https://www.geeksforgeeks.org/user-management-in-linux/) - - Post your daily work on Linkedin and le [me](https://www.linkedin.com/in/shubhamlondhe1996/) know , writing an article is the best :) - diff --git a/2023/day06/README.md b/2023/day06/README.md new file mode 100644 index 0000000000..76c9d09ab3 --- /dev/null +++ b/2023/day06/README.md @@ -0,0 +1,31 @@ +# Day 6 Task: File Permissions and Access Control Lists + +### Today is more on Reading, Learning and Implementing File permissions + +The concept of Linux file permissions and ownership is important in Linux. +Here, we will be working on Linux permissions and ownership and will do tasks on +both of them. +Let us start with the Permissions. + +1. Create a simple file and do `ls -ltr` to see the details of the files [refer to Notes](https://github.com/LondheShubham153/90DaysOfDevOps/tree/master/2023/day06/notes) + +Each of the three permissions is assigned to one of three defined categories of users. The categories are: + +- owner — The owner of the file or application. +- "chown" is used to change the ownership of a file or directory. +- group — The group that owns the file or application. +- "chgrp" is used to change the group of a file or directory. +- others — All users with access to the system (users outside the owner and the group). +- "chmod" is used to change the other users' permissions on a file or directory. + + As a task, change the user permissions of the file and note the changes after `ls -ltr` + +2. Write an article about File Permissions based on your understanding from the notes. + +3. Read about ACL and try out the commands `getfacl` and `setfacl` + +In case of any doubts, post it on [Discord Community](https://discord.gg/hs3Pmc5F) + +Happy Learning + +[← Previous Day](../day05/README.md) | [Next Day →](../day07/README.md) diff --git a/2023/day06/tasks.md b/2023/day06/tasks.md deleted file mode 100644 index 05882b1bf8..0000000000 --- a/2023/day06/tasks.md +++ /dev/null @@ -1,28 +0,0 @@ -# Day 6 Task: File Permissions and Access Control Lists - -### Today is more on Reading, Learning and Implementing File permissions - - The concept of Linux File permission and ownership is important in Linux. - Here, we will be working on Linux permissions and ownership and will do tasks on - both of them. - Let us start with the Permissions. - -1) Create a simple file and do `ls -ltr` to see the details of the files [refer to Notes](https://github.com/LondheShubham153/90DaysOfDevOps/tree/master/2023/day6/notes) - - Each of the three permissions are assigned to three defined categories of users. The categories are: -- owner — The owner of the file or application. -- "chown" is used to change the ownership permission of a file or directory. -- group — The group that owns the file or application. -- "chgrp" is used to change the gropu permission of a file or directory. -- others — All users with access to the system. (outised the users are in a group) -- "chmod" is used to change the other users permissions of a file or directory.
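A rough sketch of the permissions task above (demo.txt and the user/group names are placeholders, not part of the original task):

```bash
# Create a file and inspect its permissions and ownership
touch demo.txt
ls -ltr demo.txt

# Give the owner read/write/execute, the group read-only, others nothing
chmod u=rwx,g=r,o= demo.txt

# Change the owner and the group (placeholder names)
sudo chown someuser demo.txt
sudo chgrp somegroup demo.txt

ls -ltr demo.txt
```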
- - As a task, change the user permissions of the file and note the changes after `ls -ltr` - -2) Write an article about File Permissions based on your understanding from the notes. - -3) Read about ACL and try out the commands `getfacl` and `setfacl` - -In case of any doubts, post it on [Discord Community](https://discord.gg/hs3Pmc5F) - -Happy Learning \ No newline at end of file diff --git a/2023/day07/README.md b/2023/day07/README.md new file mode 100644 index 0000000000..d942492d95 --- /dev/null +++ b/2023/day07/README.md @@ -0,0 +1,43 @@ +# Day 7 Task: Understanding package manager and systemctl + +### What is a package manager in Linux? + +In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure and manage software packages on an operating system. The package manager can be a graphical application like a software center or a command line tool like apt-get or pacman. + +You’ll often find me using the term ‘package’ in tutorials and articles. To understand a package manager, you must understand what a package is. + +### What is a package? + +A package usually refers to an application, but it could be a GUI application, a command line tool or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration file and sometimes information about the dependencies. + +### Different kinds of package managers + +Package managers differ based on the packaging system, but the same packaging system may have more than one package manager. + +For example, RPM has the Yum and DNF package managers. For DEB, you have the apt-get and aptitude command line based package managers. + +## Tasks + +1. You have to install docker and jenkins in your system from your terminal using package managers (a sample install sketch follows below) + +2. Write a small blog or article to install these tools using package managers on Ubuntu and CentOS + +### systemctl and systemd + +systemctl is used to examine and control the state of the “systemd” system and service manager. systemd is the system and service manager for Unix-like operating systems (most of the distributions, not all). + +## Tasks + +1. Check the status of the docker service in your system (make sure you have completed the above tasks, else docker won't be installed) + +2. Stop the jenkins service and post before and after screenshots + +3. Read about the commands systemctl vs service + +eg. `systemctl status docker` vs `service docker status` + +For Reference, read [this](https://www.howtogeek.com/devops/how-to-check-if-the-docker-daemon-or-a-container-is-running/#:~:text=Checking%20With%20Systemctl&text=Check%20what%27s%20displayed%20under%20%E2%80%9CActive,running%20sudo%20systemctl%20start%20docker%20.) + +#### Post about this and bring your friends to this #90DaysOfDevOps challenge. + +[← Previous Day](../day06/README.md) | [Next Day →](../day08/README.md) diff --git a/2023/day07/tasks.md b/2023/day07/tasks.md deleted file mode 100644 index 5459be319f..0000000000 --- a/2023/day07/tasks.md +++ /dev/null @@ -1,45 +0,0 @@ -# Day 7 Task: Understanding package manager and systemctl - -### What is a package manager in Linux? - - In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure and manage software packages on an operating system. The package manager can be a graphical application like a software center or a command line tool like apt-get or pacman.
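For the install and service tasks above, a hedged sketch of what the commands might look like (package names vary by distribution, and Jenkins normally requires adding its own repository first):

```bash
# Ubuntu/Debian (APT)
sudo apt-get update
sudo apt-get install -y docker.io

# CentOS/RHEL (YUM/DNF)
sudo yum install -y docker

# Then inspect and control the services with systemctl
sudo systemctl status docker
sudo systemctl stop jenkins
```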
- - You’ll often find me using the term ‘package’ in tutorials and articles, To understand package manager, you must understand what a package is. - -### What is a package? - - A package is usually referred to an application but it could be a GUI application, command line tool or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration file and sometimes information about the dependencies. - -### Different kinds of package managers - Package Managers differ based on packaging system but same packaging system may have more than one package manager. - - For example, RPM has Yum and DNF package managers. For DEB, you have apt-get, aptitude command line based package managers. - - -## Tasks - - 1) You have to install docker and jenkins in your system from your terminal using package managers - - 2) Write a small blog or article to install these tools using package managers on Ubuntu and CentOS - - -### systemctl and systemd - - systemctl is used to examine and control the state of “systemd” system and service manager. systemd is system and service manager for Unix like operating systems(most of the distributions, not all). - - -## Tasks - - 1) check the status of docker service in your system (make sure you completed above tasks, else docker won't be installed) - - 2) stop the service jenkins and post before and after screenshots - - 3) read about the commands systemctl vs service - - eg. `systemctl status docker` vs `service docker status` - -For Reference, read [this](https://www.howtogeek.com/devops/how-to-check-if-the-docker-daemon-or-a-container-is-running/#:~:text=Checking%20With%20Systemctl&text=Check%20what%27s%20displayed%20under%20%E2%80%9CActive,running%20sudo%20systemctl%20start%20docker%20.) - - -#### Post about this and bring your friends to this #90DaysOfDevOps challenge. - diff --git a/2023/day08/tasks.md b/2023/day08/README.md similarity index 85% rename from 2023/day08/tasks.md rename to 2023/day08/README.md index e8ef6e86a9..cbaed0f8b3 100644 --- a/2023/day08/tasks.md +++ b/2023/day08/README.md @@ -1,50 +1,51 @@ -# Day 8 Task: Basic Git & GitHub for DevOps Engineers. - - -## What is Git? -Git is a version control system that allows you to track changes to files and coordinate work on those files among multiple people. It is commonly used for software development, but it can be used to track changes to any set of files. - -With Git, you can keep a record of who made changes to what part of a file, and you can revert back to earlier versions of the file if needed. Git also makes it easy to collaborate with others, as you can share changes and merge the changes made by different people into a single version of a file. - -## What is Github? -GitHub is a web-based platform that provides hosting for version control using Git. It is a subsidiary of Microsoft, and it offers all of the distributed version control and source code management (SCM) functionality of Git as well as adding its own features. GitHub is a very popular platform for developers to share and collaborate on projects, and it is also used for hosting open-source projects. - -## What is Version Control? How many types of version controls we have? -Version control is a system that tracks changes to a file or set of files over time so that you can recall specific versions later. 
It allows you to revert files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. - -There are two main types of version control systems: centralized version control systems and distributed version control systems. - -1) A centralized version control system (CVCS) uses a central server to store all the versions of a project's files. Developers "check out" files from the central server, make changes, and then "check in" the updated files. Examples of CVCS include Subversion and Perforce. - -2) A distributed version control system (DVCS) allows developers to "clone" an entire repository, including the entire version history of the project. This means that they have a complete local copy of the repository, including all branches and past versions. Developers can work independently and then later merge their changes back into the main repository. Examples of DVCS include Git, Mercurial, and Darcs. - - -## Why we use distributed version control over centralized version control? - -1) Better collaboration: In a DVCS, every developer has a full copy of the repository, including the entire history of all changes. This makes it easier for developers to work together, as they don't have to constantly communicate with a central server to commit their changes or to see the changes made by others. - -2) Improved speed: Because developers have a local copy of the repository, they can commit their changes and perform other version control actions faster, as they don't have to communicate with a central server. - -3) Greater flexibility: With a DVCS, developers can work offline and commit their changes later when they do have an internet connection. They can also choose to share their changes with only a subset of the team, rather than pushing all of their changes to a central server. - -4) Enhanced security: In a DVCS, the repository history is stored on multiple servers and computers, which makes it more resistant to data loss. If the central server in a CVCS goes down or the repository becomes corrupted, it can be difficult to recover the lost data. - -Overall, the decentralized nature of a DVCS allows for greater collaboration, flexibility, and security, making it a popular choice for many teams. - - -## Task: - -- Install Git on your computer (if it is not already installed). You can download it from the official website at https://git-scm.com/downloads -- Create a free account on GitHub (if you don't already have one). You can sign up at https://github.com/ -- Learn the basics of Git by reading through the [video](https://youtu.be/AT1uxOLsCdk) This will give you an understanding of what Git is, how it works, and how to use it to track changes to files. - -## Exercises: - -1) Create a new repository on GitHub and clone it to your local machine -2) Make some changes to a file in the repository and commit them to the repository using Git -3) Push the changes back to the repository on GitHub - - -Reff :- https://youtu.be/AT1uxOLsCdk - -Post your daily work on Linkedin and le me know , writing an article is the best :) +# Day 8 Task: Basic Git & GitHub for DevOps Engineers. + +## What is Git? + +Git is a version control system that allows you to track changes to files and coordinate work on those files among multiple people. It is commonly used for software development, but it can be used to track changes to any set of files. 
+ +With Git, you can keep a record of who made changes to what part of a file, and you can revert back to earlier versions of the file if needed. Git also makes it easy to collaborate with others, as you can share changes and merge the changes made by different people into a single version of a file. + +## What is Github? + +GitHub is a web-based platform that provides hosting for version control using Git. It is a subsidiary of Microsoft, and it offers all of the distributed version control and source code management (SCM) functionality of Git as well as adding its own features. GitHub is a very popular platform for developers to share and collaborate on projects, and it is also used for hosting open-source projects. + +## What is Version Control? How many types of version controls we have? + +Version control is a system that tracks changes to a file or set of files over time so that you can recall specific versions later. It allows you to revert files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. + +There are two main types of version control systems: centralized version control systems and distributed version control systems. + +1. A centralized version control system (CVCS) uses a central server to store all the versions of a project's files. Developers "check out" files from the central server, make changes, and then "check in" the updated files. Examples of CVCS include Subversion and Perforce. + +2. A distributed version control system (DVCS) allows developers to "clone" an entire repository, including the entire version history of the project. This means that they have a complete local copy of the repository, including all branches and past versions. Developers can work independently and then later merge their changes back into the main repository. Examples of DVCS include Git, Mercurial, and Darcs. + +## Why we use distributed version control over centralized version control? + +1. Better collaboration: In a DVCS, every developer has a full copy of the repository, including the entire history of all changes. This makes it easier for developers to work together, as they don't have to constantly communicate with a central server to commit their changes or to see the changes made by others. + +2. Improved speed: Because developers have a local copy of the repository, they can commit their changes and perform other version control actions faster, as they don't have to communicate with a central server. + +3. Greater flexibility: With a DVCS, developers can work offline and commit their changes later when they do have an internet connection. They can also choose to share their changes with only a subset of the team, rather than pushing all of their changes to a central server. + +4. Enhanced security: In a DVCS, the repository history is stored on multiple servers and computers, which makes it more resistant to data loss. If the central server in a CVCS goes down or the repository becomes corrupted, it can be difficult to recover the lost data. + +Overall, the decentralized nature of a DVCS allows for greater collaboration, flexibility, and security, making it a popular choice for many teams. + +## Task: + +- Install Git on your computer (if it is not already installed). You can download it from the official website at https://git-scm.com/downloads +- Create a free account on GitHub (if you don't already have one). 
You can sign up at https://github.com/ +- Learn the basics of Git by going through the [video](https://youtu.be/AT1uxOLsCdk). This will give you an understanding of what Git is, how it works, and how to use it to track changes to files. + +## Exercises: + +1. Create a new repository on GitHub and clone it to your local machine +2. Make some changes to a file in the repository and commit them to the repository using Git +3. Push the changes back to the repository on GitHub + +Ref: https://youtu.be/AT1uxOLsCdk + +Post your daily work on LinkedIn and let me know; writing an article is the best :) + +[← Previous Day](../day07/README.md) | [Next Day →](../day09/README.md) diff --git a/2023/day09/tasks.md b/2023/day09/README.md similarity index 58% rename from 2023/day09/tasks.md rename to 2023/day09/README.md index 364728b0f2..fd9e178d58 100644 --- a/2023/day09/tasks.md +++ b/2023/day09/README.md @@ -1,23 +1,28 @@ -# Day 9 Task: Deep Dive in Git & GitHub for DevOps Engineers. - -## Find the answers by your understandings(Shoulden't be copied by internet & used hand-made diagrams) of below quistions and Write blog on it. -1) What is Git and why is it important? -2) What is difference Between Main Branch and Master Branch?? -3) Can you explain the difference between Git and GitHub? -4) How do you create a new repository on GitHub? -5) What is difference between local & remote repository? How to connect local to remote? - -## Tasks -task-1: -- Set your user name and email address, which will be associated with your commits. - -task-2: -- Create a repository named "Devops" on GitHub -- Connect your local repository to the repository on GitHub. -- Create a new file in Devops/Git/Day-02.txt & add some content to it -- Push your local commits to the repository on GitHub - -reff :- https://youtu.be/AT1uxOLsCdk - - -Note: These steps assume that you have already installed Git on your computer and have created a GitHub account. If you need help with these prerequisites, you can refer to the [day-08](https://github.com/LondheShubham153/90DaysOfDevOps/blob/ee7c53f276edb02a85a97282027028295be17c04/2023/day08/tasks.md) +# Day 9 Task: Deep Dive in Git & GitHub for DevOps Engineers. + +## Answer the questions below in your own words (shouldn't be copied from the internet; use hand-made diagrams) and write a blog on it. + +1. What is Git and why is it important? +2. What is the difference between the Main branch and the Master branch? +3. Can you explain the difference between Git and GitHub? +4. How do you create a new repository on GitHub? +5. What is the difference between a local & a remote repository? How do you connect local to remote? + +## Tasks + +task-1: + +- Set your user name and email address, which will be associated with your commits. + +task-2: + +- Create a repository named "Devops" on GitHub +- Connect your local repository to the repository on GitHub. +- Create a new file in Devops/Git/Day-02.txt & add some content to it +- Push your local commits to the repository on GitHub + +Ref: https://youtu.be/AT1uxOLsCdk + +Note: These steps assume that you have already installed Git on your computer and have created a GitHub account.
If you need help with these prerequisites, you can refer to the [day-08](https://github.com/LondheShubham153/90DaysOfDevOps/blob/ee7c53f276edb02a85a97282027028295be17c04/2023/day08/README.md) + +[← Previous Day](../day08/README.md) | [Next Day →](../day10/README.md) diff --git a/2023/day10/README.md b/2023/day10/README.md new file mode 100644 index 0000000000..71250e5259 --- /dev/null +++ b/2023/day10/README.md @@ -0,0 +1,69 @@ +# Day 10 Task: Advance Git & GitHub for DevOps Engineers. + +## Git Branching + +Use a branch to isolate development work without affecting other branches in the repository. Each repository has one default branch, and can have multiple other branches. You can merge a branch into another branch using a pull request. + +Branches allow you to develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository. + +## Git Revert and Reset + +Two commonly used tools that git users will encounter are git reset and git revert. The benefit of both of these commands is that you can use them to remove or edit changes you’ve made in the code in previous commits. + +## Git Rebase and Merge + +### What Is Git Rebase? + +Git rebase is a command that lets users integrate changes from one branch to another, and the logs are modified once the action is complete. Git rebase was developed to overcome merging’s shortcomings, specifically regarding logs. + +### What Is Git Merge? + +Git merge is a command that allows developers to merge Git branches while the logs of commits on branches remain intact. + +The merge wording can be confusing because we have two methods of merging branches, and one of those ways is actually called “merge,” even though both procedures do essentially the same thing. + +Refer to this article for a better understanding of Git Rebase and Merge [Read here](https://www.simplilearn.com/git-rebase-vs-merge-article) + +## Task 1: + +Add a text file called version01.txt inside Devops/Git/ with “This is first feature of our application” written inside. +This should be in a branch coming from `master`, +[hint: try `git checkout -b dev`], +switch to the `dev` branch (make sure your commit message reads "Added new feature"). +[Hint: use your knowledge of creating branches and the Git commit command] + +- version01.txt should be reflected in the local repo first, followed by the remote repo for review. +  [Hint: use your knowledge of the git push and git pull commands here] + +Add a new commit in the `dev` branch after adding the below-mentioned content in Devops/Git/version01.txt: +While writing the file, make sure you write these lines + +- 1st line>> This is the bug fix in development branch +- Commit this with message “Added feature2 in development branch” + +- 2nd line>> This is gadbad code +- Commit this with message “Added feature3 in development branch” + +- 3rd line>> This feature will gadbad everything from now. +- Commit with message “Added feature4 in development branch” + +Restore the file to a previous version where the content should be “This is the bug fix in development branch” +[Hint: use git revert or reset according to your knowledge] + +## Task 2: + +- Demonstrate the concept of branches with 2 or more branches with screenshots. +- Add some changes to the `dev` branch and merge that branch into `master` +- As practice, try git rebase too and see what difference you get. + +## Note: + +We should learn and follow the branching [best practices](https://www.flagship.io/git-branching-strategies/) that the industry follows. A sample command walk-through for Task 1 is sketched below.
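One possible command sequence for Task 1 above (a sketch only; whether you finish with `git revert` or `git reset` is left to your judgement):

```bash
git checkout -b dev                       # branch off from master
mkdir -p Devops/Git
echo "This is first feature of our application" > Devops/Git/version01.txt
git add Devops/Git/version01.txt
git commit -m "Added new feature"
git push origin dev                       # local repo first, then remote for review

# ...after making the three extra commits described above:
git log --oneline                         # find the commit to return to
git revert --no-edit HEAD~2..HEAD         # undo the last two commits with new commits
# or: git reset --hard <commit-id>        # rewind history to that commit instead
```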
+ +Simple reference on branching: [video](https://youtu.be/NzjK9beT_CY) + +Advanced reference on branching: [video](https://youtu.be/7xhkEQS3dXw) + +You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :) + +[← Previous Day](../day09/README.md) | [Next Day →](../day11/README.md) diff --git a/2023/day10/tasks.md b/2023/day10/tasks.md deleted file mode 100644 index 1177d9d5c2..0000000000 --- a/2023/day10/tasks.md +++ /dev/null @@ -1,64 +0,0 @@ -# Day 10 Task: Advance Git & GitHub for DevOps Engineers. - -## Git Branching - Use a branch to isolate development work without affecting other branches in the repository. Each repository has one default branch, and can have multiple other branches. You can merge a branch into another branch using a pull request. - - Branches allow you to develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository. - -## Git Revert and Reset - Two commonly used tools that git users will encounter are those of git reset and git revert . The benefit of both of these commands is that you can use them to remove or edit changes you’ve made in the code in previous commits. - -## Git Rebase and Merge - ### What Is Git Rebase? - - Git rebase is a command that lets users integrate changes from one branch to another, and the logs are modified once the action is complete. Git rebase was developed to overcome merging’s shortcomings, specifically regarding logs. - - ### What Is Git Merge? - - Git merge is a command that allows developers to merge Git branches while the logs of commits on branches remain intact. - - The merge wording can be confusing because we have two methods of merging branches, and one of those ways is actually called “merge,” even though both procedures do essentially the same thing. - - Refer to this article for a better understanding of Git Rebase and Merge [Read here](https://www.simplilearn.com/git-rebase-vs-merge-article) - - -## Task 1: - Add a text file called version01.txt inside the Devops/Git/ with “This is first feature of our application” written inside. - This should be in a branch coming from `master`, - [hint try `git checkout -b dev`], - swithch to `dev` branch ( Make sure your commit message will reflect as "Added new feature"). - [Hint use your knowledge of creating branches and Git commit command] - - - version01.txt should reflect at local repo first followed by Remote repo for review. - [Hint use your knowledge of Git push and git pull commands here] - - Add new commit in `dev` branch after adding below mentioned content in Devops/Git/version01.txt: - While writing the file make sure you write these lines - - - 1st line>> This is the bug fix in development branch - - Commit this with message “ Added feature2 in development branch” - - - 2nd line>> This is gadbad code - - Commit this with message “ Added feature3 in development branch - - - 3rd line>> This feature will gadbad everything from now. - - Commit with message “ Added feature4 in development branch - - Restore the file to a previous version where the content should be “This is the bug fix in development branch” - [Hint use git revert or reset according to your knowledge] - -## Task 2: - - - Demonstrate the concept of branches with 2 or more branches with screenshot. - - add some changes to `dev` branch and merge that branch in `master` - - as a practice try git rebase too, see what difference you get.
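A small sketch of the merge-vs-rebase comparison suggested in Task 2 (branch names follow the task):

```bash
# Merge: preserves dev's history and adds a merge commit on master
git checkout master
git merge dev

# Rebase: replays dev's commits on top of master for a linear history
git checkout dev
git rebase master

# Compare what each approach did to the history
git log --oneline --graph --all
```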
- - -## Note: -We should learn and follow the [best practices](https://www.flagship.io/git-branching-strategies/) , industry follows for branching. - -Simple Reference on branching: [video](https://youtu.be/NzjK9beT_CY) - -Advance Reference on branching : [video](https://youtu.be/7xhkEQS3dXw) - -You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challange. Happy Learning :) diff --git a/2023/day11/tasks.md b/2023/day11/README.md similarity index 95% rename from 2023/day11/tasks.md rename to 2023/day11/README.md index 1c4878925a..08249b0568 100644 --- a/2023/day11/tasks.md +++ b/2023/day11/README.md @@ -1,49 +1,57 @@ -# Day 11 Task: Advance Git & GitHub for DevOps Engineers: Part-2 - -## Git Stash: -Git stash is a command that allows you to temporarily save changes you have made in your working directory, without committing them. This is useful when you need to switch to a different branch to work on something else, but you don't want to commit the changes you've made in your current branch yet. - -To use Git stash, you first create a new branch and make some changes to it. Then you can use the command git stash to save those changes. This will remove the changes from your working directory and record them in a new stash. You can apply these changes later. git stash list command shows the list of stashed changes. - -You can also use git stash drop to delete a stash and git stash clear to delete all the stashes. - -## Cherry-pick: -Git cherry-pick is a command that allows you to select specific commits from one branch and apply them to another. This can be useful when you want to selectively apply changes that were made in one branch to another. - -To use git cherry-pick, you first create two new branches and make some commits to them. Then you use git cherry-pick command to select the specific commits from one branch and apply them to the other. - -## Resolving Conflicts: -Conflicts can occur when you merge or rebase branches that have diverged, and you need to manually resolve the conflicts before git can proceed with the merge/rebase. -git status command shows the files that have conflicts, git diff command shows the difference between the conflicting versions and git add command is used to add the resolved files. - - -# Task-01 -- Create a new branch and make some changes to it. -- Use git stash to save the changes without committing them. -- Switch to a different branch, make some changes and commit them. -- Use git stash pop to bring the changes back and apply them on top of the new commits. - -# Task-02 -- In version01.txt of development branch add below lines after “This is the bug fix in development branch” that you added in Day10 and reverted to this commit. -- Line2>> After bug fixing, this is the new feature with minor alteration” - - Commit this with message “ Added feature2.1 in development branch” -- Line3>> This is the advancement of previous feature - - Commit this with message “ Added feature2.2 in development branch” -- Line4>> Feature 2 is completed and ready for release - - Commit this with message “ Feature2 completed” -- All these commits messages should be reflected in Production branch too which will come out from Master branch (Hint: try rebase). - -# Task-03 -- In Production branch Cherry pick Commit “Added feature2.2 in development branch” and added below lines in it: -- Line to be added after Line3>> This is the advancement of previous feature -- Line4>>Added few more changes to make it more optimized. 
-- Commit: Optimized the feature - - -## Reference [video](https://youtu.be/apGV9Kg7ics) - - -You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challange. Happy Learning :) +# Day 11 Task: Advance Git & GitHub for DevOps Engineers: Part-2 + +## Git Stash: + +Git stash is a command that allows you to temporarily save changes you have made in your working directory, without committing them. This is useful when you need to switch to a different branch to work on something else, but you don't want to commit the changes you've made in your current branch yet. + +To use Git stash, you first create a new branch and make some changes to it. Then you can use the command git stash to save those changes. This will remove the changes from your working directory and record them in a new stash. You can apply these changes later. git stash list command shows the list of stashed changes. + +You can also use git stash drop to delete a stash and git stash clear to delete all the stashes. + +## Cherry-pick: + +Git cherry-pick is a command that allows you to select specific commits from one branch and apply them to another. This can be useful when you want to selectively apply changes that were made in one branch to another. + +To use git cherry-pick, you first create two new branches and make some commits to them. Then you use git cherry-pick command to select the specific commits from one branch and apply them to the other. + +## Resolving Conflicts: + +Conflicts can occur when you merge or rebase branches that have diverged, and you need to manually resolve the conflicts before git can proceed with the merge/rebase. +git status command shows the files that have conflicts, git diff command shows the difference between the conflicting versions and git add command is used to add the resolved files. + +# Task-01 + +- Create a new branch and make some changes to it. +- Use git stash to save the changes without committing them. +- Switch to a different branch, make some changes and commit them. +- Use git stash pop to bring the changes back and apply them on top of the new commits. + +# Task-02 + +- In version01.txt of development branch add below lines after “This is the bug fix in development branch” that you added in Day10 and reverted to this commit. +- Line2>> After bug fixing, this is the new feature with minor alteration” + + Commit this with message “ Added feature2.1 in development branch” + +- Line3>> This is the advancement of previous feature + + Commit this with message “ Added feature2.2 in development branch” + +- Line4>> Feature 2 is completed and ready for release + + Commit this with message “ Feature2 completed” + +- All these commits messages should be reflected in Production branch too which will come out from Master branch (Hint: try rebase). + +# Task-03 + +- In Production branch Cherry pick Commit “Added feature2.2 in development branch” and added below lines in it: +- Line to be added after Line3>> This is the advancement of previous feature +- Line4>>Added few more changes to make it more optimized. +- Commit: Optimized the feature + +## Reference [video](https://youtu.be/apGV9Kg7ics) + +You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. 
Happy Learning :) + +[← Previous Day](../day10/README.md) | [Next Day →](../day12/README.md) diff --git a/2023/day12/README.md b/2023/day12/README.md new file mode 100644 index 0000000000..456bfe2feb --- /dev/null +++ b/2023/day12/README.md @@ -0,0 +1,17 @@ +## Finally!! 🎉 + +You have completed the Linux & Git-GitHub hands-on and I hope you have learned something interesting from it.🙌 + +Now why not make an interesting 😉 assignment, which will help not only you in the future but also the DevOps Community! + +Let’s make a well-articulated and documented **"cheat-sheet"** with all the commands you learned so far in Linux and Git-GitHub, with brief info about their usage. + +Let’s show us your knowledge mixed with your creativity😎 + +_I have added a [cheatsheet](https://education.github.com/git-cheat-sheet-education.pdf) for your reference; make sure every cheatsheet is UNIQUE_ + +Post it on LinkedIn and Spread the knowledge.😃 + +**Happy Learning :)** + +[← Previous Day](../day11/README.md) | [Next Day →](../day13/README.md) diff --git a/2023/day12/tasks.md b/2023/day12/tasks.md deleted file mode 100644 index 0ab8930078..0000000000 --- a/2023/day12/tasks.md +++ /dev/null @@ -1,14 +0,0 @@ -## Finally!! 🎉 -You have completed the Linux & Git-GitHub handson and I hope you have learned something interesting from it.🙌 - -Now why not make an interesting 😉 assignment, which not only will help you for the future but also for the DevOps Community! - -Let’s make a well articulated and documented **"cheat-sheet"** with all the commands you learned so far in Linux, Git-GitHub and brief info about its usage. - -Let’s show us your knowledge mixed with your creativity😎 - -*I have added a [cheatsheet](https://www.sqltutorial.org/wp-content/uploads/2016/04/SQL-Cheat-Sheet-2.png) for your reference, Make sure every cheatsheet must be UNIQUE* - -Post it on Linkedin and Spread the knowledge.😃 - -**Happy Learning :)** diff --git a/2023/day13/tasks.md b/2023/day13/README.md similarity index 90% rename from 2023/day13/tasks.md rename to 2023/day13/README.md index be7d16c3c8..f366710009 100644 --- a/2023/day13/tasks.md +++ b/2023/day13/README.md @@ -1,30 +1,29 @@ -Hello Dosto 😎 - -Let's Start with Basics of Python as this is also important for Devops Engineer to build the logic and Programs. - -**What is Python?** - -- Python is a Open source, general purpose, high level, and object-oriented programming language. -- It was created by **Guido van Rossum** -- Python consists of vast libraries and various frameworks like Django,Tensorflow, Flask, Pandas, Keras etc. - - -**How to Install Python?** - -You can install Python in your System whether it is window, MacOS, ubuntu, centos etc. Below are the links for the installation: -- [Windows Installation](https://www.python.org/downloads/) -- Ubuntu: apt-get install python3.6 - - - -Task1: -1. Install Python in your respective OS, and check the version. -2. Read about different Data Types in Python. - - -You can get the complete Playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM)🙌 - -Don't forget to share your Journey over linkedin. Let the community know that you have started another chapter of your Journey. - -Happy Learning, Ruko Mat Phod do😃 - +Hello Dosto 😎 + +Let's start with the basics of Python, as this is also important for a DevOps Engineer for building logic and programs. + +**What is Python?** + +- Python is an open-source, general-purpose, high-level, object-oriented programming language.
+- It was created by **Guido van Rossum**. +- Python consists of vast libraries and various frameworks like Django, TensorFlow, Flask, Pandas, Keras, etc. + +**How to Install Python?** + +You can install Python on your system, whether it is Windows, macOS, Ubuntu, CentOS, etc. Below are the links for the installation: + +- [Windows Installation](https://www.python.org/downloads/) +- Ubuntu: apt-get install python3.6 + +Task1: + +1. Install Python in your respective OS, and check the version. +2. Read about different Data Types in Python. + +You can get the complete Playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM)🙌 + +Don't forget to share your Journey over LinkedIn. Let the community know that you have started another chapter of your Journey. + +Happy Learning, Ruko Mat Phod do😃 + +[← Previous Day](../day12/README.md) | [Next Day →](../day14/README.md) diff --git a/2023/day14/README.md b/2023/day14/README.md new file mode 100644 index 0000000000..88dbb3a46c --- /dev/null +++ b/2023/day14/README.md @@ -0,0 +1,61 @@ +## Day 14 Task: Python Data Types and Data Structures for DevOps + +### New day, New Topic.... Let's learn along 😉 + +### Data Types + +- Data types are the classification or categorization of data items. A data type represents the kind of value a variable holds and tells what operations can be performed on that data. +- Since everything is an object in Python programming, data types are actually classes and variables are instances (objects) of these classes. +- Python has the following data types built-in by default: Numeric (Integer, complex, float), Sequential (string, lists, tuples), Boolean, Set, Dictionaries, etc. + +To check what the data type of a variable is, we can simply write: +`your_variable=100` +`type(your_variable)` + +### Data Structures + +Data Structures are a way of organizing data so that it can be accessed more efficiently depending upon the situation. Data Structures are fundamentals of any programming language around which a program is built. Python helps to learn the fundamentals of these data structures in a simpler way as compared to other programming languages. + +- Lists + Python Lists are just like the arrays declared in other languages: an ordered collection of data. Lists are very flexible, as the items in a list do not need to be of the same type. + +- Tuple + A Python Tuple is a collection of Python objects much like a list, but Tuples are immutable in nature, i.e. the elements in the tuple cannot be added or removed once created. Just like a List, a Tuple can also contain elements of various types. + +- Dictionary + A Python dictionary is like the hash tables in other languages, with a time complexity of O(1). It is an unordered collection of data values, used to store data values like a map. Unlike other data types that hold only a single value as an element, a Dictionary holds key:value pairs, which makes lookups more optimized. + +## Tasks + +1. Give the difference between List, Tuple and Set. Do hands-on and put screenshots as per your understanding. +2. Create the below Dictionary and use Dictionary methods to print your favourite tool just by using the keys of the Dictionary. + +``` +fav_tools = +{ + 1:"Linux", + 2:"Git", + 3:"Docker", + 4:"Kubernetes", + 5:"Terraform", + 6:"Ansible", + 7:"Chef" +} +``` + +3. Create a List of cloud service providers + eg.
+ +``` +cloud_providers = ["AWS","GCP","Azure"] +``` + +Write a program to add `Digital Ocean` to the list of cloud_providers and sort the list in alphabetical order. + +[Hint: Use the built-in functions for Lists] + +If you want to deep dive further, Watch [Python](https://youtu.be/abPgj_3hzVY) + +You can share the learning with everyone over LinkedIn and tag us along 😃 + +[← Previous Day](../day13/README.md) | [Next Day →](../day15/README.md) diff --git a/2023/day14/tasks.md b/2023/day14/tasks.md deleted file mode 100644 index e54e23c5ba..0000000000 --- a/2023/day14/tasks.md +++ /dev/null @@ -1,53 +0,0 @@ -## Day 14 Task: Python Data Types and Data Structures for DevOps - -### New day, New Topic.... Let's learn along 😉 - -### Data Types -- Data types are the classification or categorization of data items. It represents the kind of value that tells what operations can be performed on a particular data. -- Since everything is an object in Python programming, data types are actually classes and variables are instance (object) of these classes. -- Python has the following data types built-in by default: Numeric(Integer, complex, float), Sequential(string,lists, tuples), Boolean, Set, Dictionaries, etc - -To check what is the data type of the variable used, we can simply write: -```your_variable=100``` -```type(your_variable)``` - -### Data Structures - - Data Structures are a way of organizing data so that it can be accessed more efficiently depending upon the situation. Data Structures are fundamentals of any programming language around which a program is built. Python helps to learn the fundamental of these data structures in a simpler way as compared to other programming languages. - -- Lists -Python Lists are just like the arrays, declared in other languages which is an ordered collection of data. It is very flexible as the items in a list do not need to be of the same type - -- Tuple -Python Tuple is a collection of Python objects much like a list but Tuples are immutable in nature i.e. the elements in the tuple cannot be added or removed once created. Just like a List, a Tuple can also contain elements of various types. - -- Dictionary -Python dictionary is like hash tables in any other language with the time complexity of O(1). It is an unordered collection of data values, used to store data values like a map, which, unlike other Data Types that hold only a single value as an element, Dictionary holds the key:value pair. Key-value is provided in the dictionary to make it more optimized - -## Tasks -1. Give the Difference between List, Tuple and set. Do Handson and put screenshots as per your understanding. -2. Create below Dictionary and use Dictionary methods to print your favourite tool just by using the keys of the Dictionary. -``` -fav_tools = -{ - 1:"Linux", - 2:"Git", - 3:"Docker", - 4:"Kubernetes", - 5:"Terraform", - 6:"Ansible", - 7:"Chef" -} -``` -3. Create a List of cloud service providers -eg. -``` -cloud_providers = ["AWS","GCP","Azure"] -``` -Write a program to add `Digital Ocean` to the list of cloud_providers and sort the list in alphabetical order.
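For task 3 above, a minimal sketch (run directly from the shell here; the same lines work in a .py file):

```bash
python3 - <<'EOF'
cloud_providers = ["AWS", "GCP", "Azure"]

# Add Digital Ocean, then sort the list alphabetically in place
cloud_providers.append("Digital Ocean")
cloud_providers.sort()

print(cloud_providers)  # ['AWS', 'Azure', 'Digital Ocean', 'GCP']
EOF
```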
-
-[Hint: Use keys to built in functions for Lists]
-
-If you want to deep dive further, Watch [Python](https://youtu.be/abPgj_3hzVY)
-
-You can share the learning with everyone over linkedin and tag us along 😃
diff --git a/2023/day15/tasks.md b/2023/day15/README.md
similarity index 88%
rename from 2023/day15/tasks.md
rename to 2023/day15/README.md
index d4f76d4bc4..decf2b5ed9 100644
--- a/2023/day15/tasks.md
+++ b/2023/day15/README.md
@@ -2,13 +2,12 @@
 
 ### Reading JSON and YAML in Python
 
-- As a DevOps Engineer you should be able to parse files, be it txt, json, yaml, etc. 
+- As a DevOps Engineer you should be able to parse files, be it txt, json, yaml, etc.
 - You should know what all libraries one should use in Python for DevOps.
 - Python has numerous libraries like `os`, `sys`, `json`, `yaml` etc that a DevOps Engineer uses in day to day tasks.
-
-
 ## Tasks
+
 1. Create a Dictionary in Python and write it to a json File.
 
 2. Read a json file `services.json` kept in this folder and print the service names of every cloud service provider.
@@ -21,7 +20,10 @@
    azure : VM
   gcp : compute engine
 ```
+
 3. Read YAML file using python, file `services.yaml` and read the contents to convert yaml to json
 
 Python Project for your practice:
-https://youtube.com/playlist?list=PLlfy9GnSVerSzFmQ8JqP9v0XHHOAeWbjo
\ No newline at end of file
+https://youtube.com/playlist?list=PLlfy9GnSVerSzFmQ8JqP9v0XHHOAeWbjo
+
+[← Previous Day](../day14/README.md) | [Next Day →](../day16/README.md)
diff --git a/2023/day16/tasks.md b/2023/day16/README.md
similarity index 58%
rename from 2023/day16/tasks.md
rename to 2023/day16/README.md
index 4cc47ec174..981c2dc916 100644
--- a/2023/day16/tasks.md
+++ b/2023/day16/README.md
@@ -1,12 +1,12 @@
 ## Day 16 Task: Docker for DevOps Engineers.
 
-### Docker
-
-Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
+### Docker
+
+Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
 
 # Tasks
 
-As you have already installed docker in previous days tasks, now is the time to run Docker commands.
+As you have already installed docker in previous days' tasks, now is the time to run Docker commands.
 
 - Use the `docker run` command to start a new container and interact with it through the command line. [Hint: docker run hello-world]
 
@@ -22,9 +22,11 @@
 
 - Use the `docker load` command to load an image from a tar archive.
 
-These tasks involve simple operations that can be used to manage images and containers.
+These tasks involve simple operations that can be used to manage images and containers.
 
 For reference you can watch this video:
 
 https://youtu.be/Tevxhn6Odc8
 
-You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challange. Happy Learning :)
\ No newline at end of file
+You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. 
Happy Learning :)
+
+[← Previous Day](../day15/README.md) | [Next Day →](../day17/README.md)
diff --git a/2023/day17/README.md b/2023/day17/README.md
new file mode 100644
index 0000000000..430ddb1154
--- /dev/null
+++ b/2023/day17/README.md
@@ -0,0 +1,31 @@
+## Day 17 Task: Docker Project for DevOps Engineers.
+
+### You people are doing just amazing in **#90daysofdevops**. Today's challenge is special because you are going to do a DevOps project with Docker. Are you excited? 😍
+
+# Dockerfile
+
+Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile.
+
+A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts.
+
+For more about Dockerfile visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes)
+
+Task:
+
+- Create a Dockerfile for a simple web application (e.g. a Node.js or Python app)
+
+- Build the image using the Dockerfile and run the container
+
+- Verify that the application is working as expected by accessing it in a web browser
+
+- Push the image to a public or private repository (e.g. Docker Hub)
+
+For a reference project, visit [here](https://youtu.be/Tevxhn6Odc8)
+
+If you want to dive further, Watch [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u)
+
+You can share the learning with everyone over LinkedIn and tag us along 😃
+
+Happy Learning:)
+
+[← Previous Day](../day16/README.md) | [Next Day →](../day18/README.md)
diff --git a/2023/day17/tasks.md b/2023/day17/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day18/README.md b/2023/day18/README.md
new file mode 100644
index 0000000000..b57f22cf0b
--- /dev/null
+++ b/2023/day18/README.md
@@ -0,0 +1,43 @@
+# Day 18 Task: Docker for DevOps Engineers
+
+Till now you have created a Dockerfile and pushed it to the repository. Let's move forward and dig deeper into other Docker concepts.
+Let's study a bit of Docker Compose today 😃
+
+## Docker Compose
+
+- Docker Compose is a tool that was developed to help define and share multi-container applications.
+- With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down.
+- To learn more about docker-compose, [visit here](https://tecadmin.net/tutorial/docker/docker-compose/)
+
+## What is YAML?
+
+- YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for yet another markup language or YAML ain’t markup language (a recursive acronym), which emphasizes that YAML is for data, not documents.
+- YAML is popular for configuration because it is human-readable and easy to understand.
+- YAML files use a .yml or .yaml extension.
+- Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml)
+
+## Task-1
+
+Learn how to use the docker-compose.yml file, to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file. 
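+
+To see how environment variables plug in, here is a minimal sketch (assuming the compose file references `${MYSQL_ROOT_PASSWORD}` under its db service; the value shown is just the sample one used later in this file):
+
+```bash
+# export the variable on the host so Compose can substitute it into the YAML
+export MYSQL_ROOT_PASSWORD='test@123'
+
+# render the compose file with variables substituted, a quick sanity check
+docker-compose config
+
+# bring the services up in the background
+docker-compose up -d
+```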
+
+[Sample docker-compose.yaml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml)
+
+## Task-2
+
+- Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint: use the `usermod` command to add your user to the docker group). Make sure you reboot the instance after giving the user permission.
+- Inspect the container's running processes and exposed ports using the docker inspect command.
+- Use the docker logs command to view the container's log output.
+- Use the docker stop and docker start commands to stop and start the container.
+- Use the docker rm command to remove the container when you're done.
+
+## How to run Docker commands without sudo?
+
+- Make sure docker is installed and the system is updated (this has already been completed as part of previous tasks):
+- `sudo usermod -a -G docker $USER`
+- Reboot the machine.
+
+For reference you can watch this [video](https://youtu.be/Tevxhn6Odc8)
+
+You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :)
+
+[← Previous Day](../day17/README.md) | [Next Day →](../day19/README.md)
diff --git a/2023/day18/docker-compose.yaml b/2023/day18/docker-compose.yaml
new file mode 100644
index 0000000000..b11a5f4a43
--- /dev/null
+++ b/2023/day18/docker-compose.yaml
@@ -0,0 +1,12 @@
+version: "3.3"
+services:
+  web:
+    image: nginx:latest
+    ports:
+      - "80:80"
+  db:
+    image: mysql
+    ports:
+      - "3306:3306"
+    environment:
+      - "MYSQL_ROOT_PASSWORD=test@123"
diff --git a/2023/day18/tasks.md b/2023/day18/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day19/README.md b/2023/day19/README.md
new file mode 100644
index 0000000000..6ad763f8e1
--- /dev/null
+++ b/2023/day19/README.md
@@ -0,0 +1,39 @@
+# Day 19 Task: Docker for DevOps Engineers
+
+**Till now you have learned how to create a docker-compose.yml file and push it to the repository. Let's move forward and dig deeper into other docker-compose.yml concepts.**
+**Let's study a bit of Docker Volume & Docker Network today** 😃
+
+# Docker-Volume
+
+Docker allows you to create something called volumes. Volumes are like separate storage areas that can be accessed by containers. They allow you to store data, like a database, outside the container, so it doesn't get deleted when the container is deleted.
+You can also mount the same volume into more containers, so they all share the same data.
+[reference](https://docs.docker.com/storage/volumes/)
+
+# Docker Network
+
+Docker allows you to create virtual spaces called networks, where you can connect multiple containers (small packages that hold all the necessary files for a specific application to run) together. This way, the containers can communicate with each other and with the host machine (the computer on which Docker is installed).
+By default, when we run a container, its storage space is only accessible by that specific container; volumes and networks are what let containers share data and communicate. [reference](https://docs.docker.com/network/)
+
+## Task-1
+
+- Create a multi-container docker-compose file which will bring _UP_ and bring _DOWN_ containers in a single shot ( Example - Create application and database container )
+
+_hints:_
+
+- Use the `docker-compose up` command with the `-d` flag to start a multi-container application in detached mode. 
+- Use the `docker-compose scale` command (or, in newer versions, `docker-compose up --scale <service>=<count>`) to increase or decrease the number of replicas for a specific service. You can also add [`replicas`](https://stackoverflow.com/questions/63408708/how-to-scale-from-within-docker-compose-file) in the deployment file for _auto-scaling_.
+- Use the `docker-compose ps` command to view the status of all containers, and `docker-compose logs` to view the logs of a specific service.
+- Use the `docker-compose down` command to stop and remove all containers, networks, and volumes associated with the application
+
+## Task-2
+
+- Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers.
+- Create two or more containers that read and write data to the same volume using the `docker run --mount` command.
+- Verify that the data is the same in all containers by using the docker exec command to run commands inside each container.
+- Use the docker volume ls command to list all volumes and the docker volume rm command to remove the volume when you're done.
+
+## You can use this task as a _Project_ to add to your resume.
+
+You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :)
+
+[← Previous Day](../day18/README.md) | [Next Day →](../day20/README.md)
diff --git a/2023/day19/sample_project_deployment.yaml b/2023/day19/sample_project_deployment.yaml
new file mode 100644
index 0000000000..821be80f2b
--- /dev/null
+++ b/2023/day19/sample_project_deployment.yaml
@@ -0,0 +1,20 @@
+version: "3.3"
+services:
+  web:
+    image: varsha0108/local_django:latest
+    deploy:
+      replicas: 2
+    ports:
+      - "8001-8005:8001"
+    volumes:
+      - my_django_volume:/app
+  db:
+    image: mysql
+    ports:
+      - "3306:3306"
+    environment:
+      - "MYSQL_ROOT_PASSWORD=test@123"
+volumes:
+  my_django_volume:
+    external: true
+
diff --git a/2023/day19/tasks.md b/2023/day19/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day20/README.md b/2023/day20/README.md
new file mode 100644
index 0000000000..e9c4b59ba9
--- /dev/null
+++ b/2023/day20/README.md
@@ -0,0 +1,16 @@
+## Finally!! 🎉
+
+You have completed✅ the Docker hands-on and I hope you have learned something interesting from it.🙌
+
+Now it's time to take your Docker skills to the next level by creating a comprehensive cheat-sheet of all the commands you've learned so far. This cheat-sheet should include commands for both Docker and Docker-Compose, as well as brief explanations of their usage.
+This cheat-sheet will not only help you in the future but also contribute to the DevOps community by providing a useful resource for others.😊🙌
+
+So, put your knowledge and creativity to the test and create a cheat-sheet that truly stands out! 🚀
+
+_I have added a [cheatsheet](https://cdn.hashnode.com/res/hashnode/image/upload/v1670863735841/r6xdXpsap.png?auto=compress,format&format=webp) for your reference; make sure your cheatsheet is UNIQUE_
+
+Post it on LinkedIn and Spread the knowledge.😃
+
+**Happy Learning :)**
+
+[← Previous Day](../day19/README.md) | [Next Day →](../day21/README.md)
diff --git a/2023/day20/tasks.md b/2023/day20/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day21/README.md b/2023/day21/README.md
new file mode 100644
index 0000000000..efb2cfc646
--- /dev/null
+++ b/2023/day21/README.md
@@ -0,0 +1,40 @@
+## Day 21 Task: Docker Important interview Questions.
+
+## Docker Interview
+
+Docker is a good topic to ask in DevOps Engineer Interviews, mostly for freshers. 
+One must surely try these questions in order to get better at Docker
+
+## Questions
+
+- What is the difference between an Image, Container and Engine?
+- What is the difference between the Docker commands COPY vs ADD?
+- What is the difference between the Docker commands CMD vs RUN?
+- How will you reduce the size of a Docker image?
+- Why and when should you use Docker?
+- Explain the Docker components and how they interact with each other.
+- Explain the terminology: Docker Compose, Dockerfile, Docker Image, Docker Container?
+- In what real scenarios have you used Docker?
+- Docker vs Hypervisor?
+- What are the advantages and disadvantages of using docker?
+- What is a Docker namespace?
+- What is a Docker registry?
+- What is an entry point?
+- How to implement CI/CD in Docker?
+- Will data on the container be lost when the docker container exits?
+- What is a Docker swarm?
+- What are the docker commands for the following:
+  - view running containers
+  - command to run the container under a specific name
+  - command to export a docker container
+  - command to import an already existing docker image
+  - commands to delete a container
+  - command to remove all stopped containers, unused networks, build caches, and dangling images?
+- What are the common docker practices to reduce the size of a Docker Image?
+
+These questions will help you in your next DevOps Interview.
+_Write a Blog and share it on LinkedIn._
+
+**Happy Learning :)**
+
+[← Previous Day](../day20/README.md) | [Next Day →](../day22/README.md)
diff --git a/2023/day21/tasks.md b/2023/day21/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day22/README.md b/2023/day22/README.md
new file mode 100644
index 0000000000..5117b18ce1
--- /dev/null
+++ b/2023/day22/README.md
@@ -0,0 +1,30 @@
+# Day-22 : Getting Started with Jenkins 😃
+
+**Linux, Git, GitHub, and Docker are done, so now let's learn the CI-CD tool used to deploy them:**
+
+## What is Jenkins?
+
+- Jenkins is an open source continuous integration-continuous delivery and deployment (CI/CD) automation DevOps tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.
+
+- Jenkins is a tool that is used for automation: an open-source server that allows all developers to build, test and deploy software. It runs on Java, as it is written in Java. By using Jenkins we can set up continuous integration for projects (jobs) or end-to-end automation.
+
+- Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugins for that tool, for example Git, Maven 2 project, Amazon EC2, HTML publisher, etc.
+
+**Let us discuss the necessity of this tool before going ahead to the procedural part of the installation:**
+
+- Nowadays we are becoming lazier😴 day by day; even with digital screens and one-click buttons in front of us, we still need some automation.
+
+- Here, I'm referring to the kind of automation where we do not have to watch over a process (here called a job) until it completes and then start another job ourselves. For that, we have Jenkins.
+
+Note: By now Jenkins should be installed on your machine (as it was a part of previous tasks; if not, follow the [Installation Guide](https://youtu.be/OkVtBKqMt7I))
+
+## Tasks:
+
+**1. 
Write a small article in your own words about what you understand about Jenkins (don't copy directly from the Internet)**
+
+**2. Create a freestyle pipeline to print "Hello World!!"**
+Hint: Use this [Article](https://www.geeksforgeeks.org/what-is-jenkins) as a reference
+
+Don't forget to post your progress on LinkedIn. Till then, Happy Learning :)
+
+[← Previous Day](../day21/README.md) | [Next Day →](../day23/README.md)
diff --git a/2023/day22/tasks.md b/2023/day22/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day23/README.md b/2023/day23/README.md
new file mode 100644
index 0000000000..1fc2135053
--- /dev/null
+++ b/2023/day23/README.md
@@ -0,0 +1,40 @@
+# Day 23 Task: Jenkins Freestyle Project for DevOps Engineers.
+
+The Community is absolutely crushing it in the #90daysofdevops journey. Today's challenge is particularly exciting as it entails creating a Jenkins Freestyle Project, an opportunity for DevOps engineers to showcase their skills and push their limits. Who's ready to dive in and make it happen? 😍
+
+## What is CI/CD?
+
+- CI or Continuous Integration is the practice of automating the integration of code changes from multiple developers into a single codebase. It is a software development practice where the developers commit their work frequently into the central code repository (Github or Stash). Then there are automated tools that build the newly committed code and do a code review, etc as required upon integration.
+  The key goals of Continuous Integration are to find and address bugs quicker, make the process of integrating code across a team of developers easier, improve software quality and reduce the time it takes to release new feature updates.
+
+- CD or Continuous Delivery is carried out after Continuous Integration to make sure that we can release new changes to our customers quickly in an error-free way. This includes running integration and regression tests in the staging area (similar to the production environment) so that the final release is not broken in production. It automates the release process so that we have a release-ready product at all times and can deploy our application at any point in time.
+
+## What Is a Build Job?
+
+A Jenkins build job contains the configuration for automating a specific task or step in the application building process. These tasks include gathering dependencies, compiling, archiving, or transforming code, and testing and deploying code in different environments.
+
+Jenkins supports several types of build jobs, such as freestyle projects, pipelines, multi-configuration projects, folders, multibranch pipelines, and organization folders.
+
+## What are Freestyle Projects? 🤔
+
+A freestyle project in Jenkins is a type of project that allows you to build, test, and deploy software using a variety of different options and configurations. Here are a few tasks that you could complete when working with a freestyle project in Jenkins:
+
+# Task-01
+
+- Create an agent for your app (which you deployed using Docker in an earlier task).
+- Create a new Jenkins freestyle project for your app.
+- In the "Build" section of the project, add a build step to run the "docker build" command to build the image for the container.
+- Add a second step to run the "docker run" command to start a container using the image created in the previous step (a sketch of both steps is given below). 
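+
+A minimal sketch of what those two "Execute shell" build steps might look like (the image name `my-app` and the port mapping are placeholders, not part of the task):
+
+```bash
+# step 1: build the image from the Dockerfile in the job's workspace
+docker build -t my-app:latest .
+
+# step 2: start a container from that image in the background, mapping a host port
+docker run -d --name my-app-container -p 8000:8000 my-app:latest
+```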
+
+# Task-02
+
+- Create a Jenkins project to run the "docker-compose up -d" command to start the multiple containers defined in the compose file (Hint: use the day-19 Application & Database docker-compose file)
+- Set up a cleanup step in the Jenkins project to run the "docker-compose down" command to stop and remove the containers defined in the compose file.
+
+For reference on the Jenkins Freestyle Project, visit [here](https://youtu.be/wwNWgG5htxs)
+
+You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
+
+Happy Learning:)
+
+[← Previous Day](../day22/README.md) | [Next Day →](../day24/README.md)
diff --git a/2023/day23/tasks.md b/2023/day23/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day24/README.md b/2023/day24/README.md
new file mode 100644
index 0000000000..b611db8316
--- /dev/null
+++ b/2023/day24/README.md
@@ -0,0 +1,29 @@
+# Day 24 Task: Complete Jenkins CI/CD Project
+
+Let's make a beautiful CI/CD Pipeline for your Node JS Application 😍
+
+## Did you finish Day 23?
+
+- Day 23 was all about Jenkins CI/CD; make sure you have done it and understood the concepts, as today you will be doing one project end to end and adding it to your resume :)
+- As you have worked with Docker and Docker compose, it will be good to use them in a live project.
+
+# Task-01
+
+- Fork [this](https://github.com/LondheShubham153/node-todo-cicd.git) repository:
+- Create a connection to your Jenkins job and your GitHub Repository via GitHub Integration.
+- Read About [GitHub WebHooks](https://betterprogramming.pub/how-too-add-github-webhook-to-a-jenkins-pipeline-62b0be84e006) and make sure you have the CI/CD set up
+- Refer to [this](https://youtu.be/nplH3BzKHPk) video for the entire project
+
+# Task-02
+
+- In the Execute shell, run the application using Docker compose
+- You will have to make a Docker Compose file for this Project (can be a good open source contribution)
+- Run the project and give yourself a treat :)
+
+For Reference and the entire hands-on Project visit [here](https://youtu.be/nplH3BzKHPk)
+
+You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
+
+Happy Learning:)
+
+[← Previous Day](../day23/README.md) | [Next Day →](../day25/README.md)
diff --git a/2023/day24/tasks.md b/2023/day24/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day25/README.md b/2023/day25/README.md
new file mode 100644
index 0000000000..dabbc9b07e
--- /dev/null
+++ b/2023/day25/README.md
@@ -0,0 +1,31 @@
+# Day 25 Task: Complete Jenkins CI/CD Project - Continued with Documentation
+
+I can imagine catching up will be tough, so take a small breather today, complete the Jenkins CI/CD project from Day 24, and add documentation.
+
+## Did you finish Day 24?
+
+- Day 24 gives you an end-to-end project, and adding it to your resume will be a cherry on the top.
+
+- Take more time, finish the project, add documentation, add it to your resume, and post about it today.
+
+# Task-01
+
+- Document the process from cloning the repository to adding webhooks, deployment, etc. as a README; go through [this example](https://github.com/LondheShubham153/fynd-my-movie/blob/master/README.md)
+
+- A well-written README file will help others understand your project, and you will understand how to use the project again without any problems.
+
+# Task-02
+
+- Also, it's important to keep smaller goals. As it's a small task, think of a small goal you can accomplish. 
+
+- Write about it using [this template](https://www.linkedin.com/posts/shubhamlondhe1996_taking-resolutions-and-having-goals-for-an-activity-7023858409762373632-s2J8?utm_source=share&utm_medium=member_desktop)
+
+- Have small goals and strategies to achieve them, and also have a small reward for yourself.
+
+For Reference and the entire hands-on Project visit [here](https://youtu.be/nplH3BzKHPk)
+
+You can Post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
+
+Happy Learning:)
+
+[← Previous Day](../day24/README.md) | [Next Day →](../day26/README.md)
diff --git a/2023/day25/tasks.md b/2023/day25/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day26/README.md b/2023/day26/README.md
new file mode 100644
index 0000000000..b0d65accb6
--- /dev/null
+++ b/2023/day26/README.md
@@ -0,0 +1,59 @@
+# Day 26 Task: Jenkins Declarative Pipeline
+
+One of the most important parts of your DevOps and CI/CD journey is the Declarative Pipeline syntax of Jenkins
+
+## Some terms for your Knowledge
+
+**What is Pipeline -** A pipeline is a collection of steps or jobs interlinked in a sequence.
+
+**Declarative:** Declarative is a more recent and advanced implementation of pipeline-as-code.
+
+**Scripted:** Scripted was the first and most traditional implementation of pipeline-as-code in Jenkins. It was designed as a general-purpose DSL (Domain Specific Language) built with Groovy.
+
+# Why you should have a Pipeline
+
+The definition of a Jenkins Pipeline is written into a text file (called a [`Jenkinsfile`](https://www.jenkins.io/doc/book/pipeline/jenkinsfile)) which in turn can be committed to a project’s source control repository.
+This is the foundation of "Pipeline-as-code": treating the CD pipeline as a part of the application, to be versioned and reviewed like any other code.
+
+**Creating a `Jenkinsfile` and committing it to source control provides a number of immediate benefits:**
+
+- Automatically creates a Pipeline build process for all branches and pull requests.
+- Code review/iteration on the Pipeline (along with the remaining source code).
+
+# Pipeline syntax
+
+```groovy
+pipeline {
+    agent any
+    stages {
+        stage('Build') {
+            steps {
+                //
+            }
+        }
+        stage('Test') {
+            steps {
+                //
+            }
+        }
+        stage('Deploy') {
+            steps {
+                //
+            }
+        }
+    }
+}
+```
+
+# Task-01
+
+- Create a New Job, this time selecting Pipeline instead of Freestyle Project.
+- Follow the official Jenkins [Hello world example](https://www.jenkins.io/doc/pipeline/tour/hello-world/)
+- Complete the example using the Declarative pipeline
+- In case of any issues, feel free to post on any of the groups: [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
+
+You can post your progress on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge.
+
+Happy Learning:)
+
+[← Previous Day](../day25/README.md) | [Next Day →](../day27/README.md)
diff --git a/2023/day26/tasks.md b/2023/day26/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day27/README.md b/2023/day27/README.md
new file mode 100644
index 0000000000..277a2db069
--- /dev/null
+++ b/2023/day27/README.md
@@ -0,0 +1,43 @@
+# Day 27 Task: Jenkins Declarative Pipeline with Docker
+
+Day 26 was all about the Declarative pipeline; now it's time to level things up. Let's integrate Docker with your Jenkins declarative pipeline.
+
+## Use your Docker Build and Run Knowledge
+
+**docker build -** you can use `sh 'docker build . 
-t <image-name>'` in your pipeline stage block to run the docker build command (make sure you have docker installed with the correct permissions).
+
+**docker run:** you can use `sh 'docker run -d <image-name>'` in your pipeline stage block to run the container.
+
+**How will the stages look**
+
+```groovy
+stages {
+        stage('Build') {
+            steps {
+                sh 'docker build -t trainwithshubham/django-app:latest .'
+            }
+        }
+    }
+```
+
+# Task-01
+
+- Create a docker-integrated Jenkins declarative pipeline
+- Use the above-given syntax using `sh` inside the stage block
+- You will face errors if you run the job twice, as the docker container will already have been created; for that, do Task-02
+
+# Task-02
+
+- Create a docker-integrated Jenkins declarative pipeline using the `docker` groovy syntax inside the stage block.
+- You won't face errors; you can follow [this documentation](https://tempora-mutantur.github.io/jenkins.io/github_pages_test/doc/book/pipeline/docker/)
+
+- Complete your previous projects using this Declarative pipeline approach
+
+- In case of any issues, feel free to post on any of the groups: [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
+
+Are you enjoying the #90DaysOfDevOps Challenge?
+Let me know how you are feeling after 4 weeks of DevOps learning,
+
+Happy Learning:)
+
+[← Previous Day](../day26/README.md) | [Next Day →](../day28/README.md)
diff --git a/2023/day27/tasks.md b/2023/day27/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day28/README.md b/2023/day28/README.md
new file mode 100644
index 0000000000..1c388c0d38
--- /dev/null
+++ b/2023/day28/README.md
@@ -0,0 +1,49 @@
+# Day 28 Task: Jenkins Agents
+
+# Jenkins Master (Server)
+
+Jenkins’s server or master node holds all the key configurations. The Jenkins master server is like a control server that orchestrates all the workflow defined in the pipelines, for example, scheduling a job, monitoring the jobs, etc.
+
+# Jenkins Agent
+
+An agent is typically a machine or container that connects to a Jenkins master, and it is this agent that actually executes all the steps mentioned in a job. When you create a Jenkins job, you have to assign an agent to it. Every agent has a label as a unique identifier.
+
+When you trigger a Jenkins job from the master, the actual execution happens on the agent node that is configured in the job.
+
+A single, monolithic Jenkins installation can work great for a small team with a relatively small number of projects. As your needs grow, however, it often becomes necessary to scale up. Jenkins provides a way to do this called “master to agent connection.” Instead of serving the Jenkins UI and running build jobs all on a single system, you can provide Jenkins with agents to handle the execution of jobs while the master serves the Jenkins UI and acts as a control node.
+

+
+## Pre-requisites
+
+Let’s say we’re starting with a fresh Ubuntu 22.04 Linux installation. To get an agent working, make sure you install Java (the same version as the Jenkins master server) and Docker on it.
+
+`Note:- While creating an agent, be sure to separate rights, permissions, and ownership for jenkins users.`
+
+# Task-01
+
+- Create an agent by setting up a node on Jenkins
+
+- Create a new AWS EC2 Instance and connect it to the master (where Jenkins is installed)
+
+- The connection of master and agent requires SSH and a public-private key pair exchange.
+- Verify its status under the "Nodes" section.
+
+- You can follow [this article](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7017885886461698048-os5f?utm_source=share&utm_medium=member_android) for the same
+
+# Task-02
+
+- Run your previous Jobs (which you built on Day 26 and Day 27) on the new agent
+
+- Use labels for the agent; your master server should trigger builds for the agent server.
+
+- In case of any issues, feel free to post on any of the groups: [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
+
+Are you enjoying the #90DaysOfDevOps Challenge?
+
+Let me know how you are feeling after 4 weeks of DevOps learning.
+
+Happy Learning:)
+
+[← Previous Day](../day27/README.md) | [Next Day →](../day29/README.md)
diff --git a/2023/day28/tasks.md b/2023/day28/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day29/README.md b/2023/day29/README.md
new file mode 100644
index 0000000000..6563b7637e
--- /dev/null
+++ b/2023/day29/README.md
@@ -0,0 +1,33 @@
+## Day 29 Task: Jenkins Important interview Questions.
+

+
+## Jenkins Interview
+
+Here are some Jenkins-specific questions that one can use during a DevOps Engineer interview:
+
+## Questions
+
+1. What’s the difference between continuous integration, continuous delivery, and continuous deployment?
+2. What are the benefits of CI/CD?
+3. What is meant by CI-CD?
+4. What is a Jenkins Pipeline?
+5. How do you configure a job in Jenkins?
+6. Where do you find errors in Jenkins?
+7. In Jenkins, how can you find log files?
+8. Explain the Jenkins workflow and write a script for this workflow.
+9. How do you create continuous deployment in Jenkins?
+10. How do you build a job in Jenkins?
+11. Why do we use pipelines in Jenkins?
+12. Is Jenkins alone enough for automation?
+13. How will you handle secrets?
+14. Explain the different stages in a CI-CD setup.
+15. Name some of the plugins in Jenkins.
+
+These questions will help you in your next DevOps Interview.
+Write a Blog and share it on LinkedIn.
+
+_Happy Learning :)_
+
+[← Previous Day](../day28/README.md) | [Next Day →](../day30/README.md)
diff --git a/2023/day29/tasks.md b/2023/day29/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day30/README.md b/2023/day30/README.md
new file mode 100644
index 0000000000..af4d37aa2f
--- /dev/null
+++ b/2023/day30/README.md
@@ -0,0 +1,29 @@
+## Day 30 Task: Kubernetes Architecture
+

+
+## Kubernetes Overview
+
+With the widespread adoption of [containers](https://cloud.google.com/containers) among organizations, Kubernetes, the container-centric management software, has become a standard to deploy and operate containerized applications and is one of the most important parts of DevOps.
+
+Originally developed at Google and released as open source in 2014, Kubernetes builds on 15 years of running Google's containerized workloads and on the valuable contributions from the open-source community. It was inspired by Google’s internal cluster management system, [Borg](https://research.google.com/pubs/pub43438.html).
+
+## Tasks
+
+1. What is Kubernetes? Write in your own words, and explain why we call it k8s.
+
+2. What are the benefits of using k8s?
+
+3. Explain the architecture of Kubernetes; refer to [this video](https://youtu.be/FqfoDUhzyDo)
+
+4. What is the Control Plane?
+
+5. Write the difference between kubectl and kubelet.
+
+6. Explain the role of the API server.
+
+Kubernetes architecture is important, so make sure you spend a day understanding it. [This video](https://youtu.be/FqfoDUhzyDo) will surely help you.
+
+_Happy Learning :)_
+
+[← Previous Day](../day29/README.md) | [Next Day →](../day31/README.md)
diff --git a/2023/day30/tasks.md b/2023/day30/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day31/README.md b/2023/day31/README.md
new file mode 100644
index 0000000000..5b2a6b79e5
--- /dev/null
+++ b/2023/day31/README.md
@@ -0,0 +1,65 @@
+## Day 31 Task: Launching your First Kubernetes Cluster with Nginx running
+
+### Awesome! You learned the architecture of one of the most important tools, "Kubernetes", in your previous task.
+
+## What about doing some hands-on now?
+
+Let's read about minikube and implement _k8s_ on our local machine
+
+1. **What is minikube?**
+
+_Ans_:- Minikube is a tool which quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It can deploy as a VM, a container, or on bare-metal.
+
+Minikube is a pared-down version of Kubernetes that gives you all the benefits of Kubernetes with a lot less effort.
+
+This makes it an interesting option for users who are new to containers, and also for projects in the world of edge computing and the Internet of Things.
+
+2. **Features of minikube**
+
+_Ans_ :-
+
+(a) Supports the latest Kubernetes release (+6 previous minor versions)
+
+(b) Cross-platform (Linux, macOS, Windows)
+
+(c) Deploy as a VM, a container, or on bare-metal
+
+(d) Multiple container runtimes (CRI-O, containerd, docker)
+
+(e) Direct API endpoint for blazing fast image load and build
+
+(f) Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy
+
+(g) Addons for easily installed Kubernetes applications
+
+(h) Supports common CI environments
+
+## Task-01:
+
+## Install minikube on your local machine
+
+For installation, you can Visit [this page](https://minikube.sigs.k8s.io/docs/start/).
+
+If you want to try an alternative way, you can check [this](https://k8s-docs.netlify.app/en/docs/tasks/tools/install-minikube/).
+
+## Let's understand the concept of a **pod**
+
+_Ans:-_
+
+Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
+
+A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. 
A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.
+
+You can read more about Pods [here](https://kubernetes.io/docs/concepts/workloads/pods/).
+
+## Task-02:
+
+## Create your first pod on Kubernetes through minikube.
+
+We suggest you make an nginx pod, but you can always show your creativity and do it on your own.
+
+**Having an issue? Don't worry, we've added a sample yaml file for pod creation; you can always refer to it.**
+
+_Happy Learning :)_
+
+[← Previous Day](../day30/README.md) | [Next Day →](../day32/README.md)
diff --git a/2023/day31/pod.yml b/2023/day31/pod.yml
new file mode 100644
index 0000000000..cfc02a372d
--- /dev/null
+++ b/2023/day31/pod.yml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.14.2
+    ports:
+    - containerPort: 80
+
+
+# After creating this file, run the command below:
+# kubectl apply -f pod.yml
diff --git a/2023/day31/tasks.md b/2023/day31/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day32/Deployment.yml b/2023/day32/Deployment.yml
new file mode 100644
index 0000000000..8f3814196b
--- /dev/null
+++ b/2023/day32/Deployment.yml
@@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: todo-app
+  labels:
+    app: todo
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: todo
+  template:
+    metadata:
+      labels:
+        app: todo
+    spec:
+      containers:
+      - name: todo
+        image: rishikeshops/todo-app
+        ports:
+        - containerPort: 3000
diff --git a/2023/day32/README.md b/2023/day32/README.md
new file mode 100644
index 0000000000..eb2ee9c304
--- /dev/null
+++ b/2023/day32/README.md
@@ -0,0 +1,27 @@
+## Day 32 Task: Launching your Kubernetes Cluster with Deployment
+
+### Congratulations on your K8s learning on Day-31!
+
+## What is a Deployment in k8s
+
+A Deployment provides declarative configuration and updates for Pods and ReplicaSets.
+
+You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new replicas for scaling, or to remove existing Deployments and adopt all their resources with new Deployments.
+
+## Today's task: let's keep it very simple.
+
+## Task-1:
+
+**Create one Deployment file to deploy a sample todo-app on K8s using the "Auto-healing" and "Auto-Scaling" features**
+
+- Add a deployment.yml file (a sample is kept in the folder for your reference)
+- Apply the deployment to your k8s (minikube) cluster with the command
+  `kubectl apply -f deployment.yml`
+
+Let's make your resume shine with one more project ;)
+
+**Having an issue? Don't worry, we've added a sample deployment file; you can always refer to it or watch [this video](https://youtu.be/ONrbWFJXLLk)**
+
+Happy Learning :)
+
+[← Previous Day](../day31/README.md) | [Next Day →](../day33/README.md)
diff --git a/2023/day32/tasks.md b/2023/day32/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day33/README.md b/2023/day33/README.md
new file mode 100644
index 0000000000..984842c527
--- /dev/null
+++ b/2023/day33/README.md
@@ -0,0 +1,34 @@
+# Day 33 Task: Working with Namespaces and Services in Kubernetes
+
+### Congrats🎊🎉 on updating your Deployment yesterday💥🙌
+
+## What are Namespaces and Services in k8s
+
+In Kubernetes, Namespaces are used to create isolated environments for resources. Each Namespace is like a separate cluster within the same physical cluster. 
Services are used to expose your Pods and Deployments to the network. Read more about Namespaces [here](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
+
+# Today's task:
+
+## Task 1:
+
+- Create a Namespace for your Deployment
+
+- Use the command `kubectl create namespace <namespace-name>` to create a Namespace
+
+- Update the deployment.yml file to include the Namespace
+
+- Apply the updated deployment using the command:
+  `kubectl apply -f deployment.yml -n <namespace-name>`
+
+- Verify that the Namespace has been created by checking the status of the Namespaces in your cluster.
+
+## Task 2:
+
+- Read about Services, Load Balancing, and Networking in Kubernetes. Refer to the official Kubernetes documentation: [Link](https://kubernetes.io/docs/concepts/services-networking/)
+
+Need help with Namespaces? Check out this [video](https://youtu.be/K3jNo4z5Jx8) for assistance.
+
+Keep growing your Kubernetes knowledge💥🙌
+
+Happy Learning! :)
+
+[← Previous Day](../day32/README.md) | [Next Day →](../day34/README.md)
diff --git a/2023/day33/tasks.md b/2023/day33/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day34/README.md b/2023/day34/README.md
new file mode 100644
index 0000000000..9753f7ff1f
--- /dev/null
+++ b/2023/day34/README.md
@@ -0,0 +1,36 @@
+# Day 34 Task: Working with Services in Kubernetes
+
+### Congratulations🎊 on your learning of Deployments in K8s on Day-33
+
+## What are Services in K8s
+
+In Kubernetes, Services are objects that provide stable network identities to Pods and abstract away the details of Pod IP addresses. Services allow Pods to receive traffic from other Pods, Services, and external clients.
+
+## Task-1:
+
+- Create a Service for your todo-app Deployment from Day-32
+- Create a Service definition for your todo-app Deployment in a YAML file.
+- Apply the Service definition to your K8s (minikube) cluster using the `kubectl apply -f service.yml -n <namespace-name>` command.
+- Verify that the Service is working by accessing the todo-app using the Service's IP and Port in your Namespace.
+
+## Task-2:
+
+- Create a ClusterIP Service for accessing the todo-app from within the cluster
+- Create a ClusterIP Service definition for your todo-app Deployment in a YAML file.
+- Apply the ClusterIP Service definition to your K8s (minikube) cluster using the `kubectl apply -f cluster-ip-service.yml -n <namespace-name>` command.
+- Verify that the ClusterIP Service is working by accessing the todo-app from another Pod in the cluster in your Namespace.
+
+## Task-3:
+
+- Create a LoadBalancer Service for accessing the todo-app from outside the cluster
+- Create a LoadBalancer Service definition for your todo-app Deployment in a YAML file.
+- Apply the LoadBalancer Service definition to your K8s (minikube) cluster using the `kubectl apply -f load-balancer-service.yml -n <namespace-name>` command.
+- Verify that the LoadBalancer Service is working by accessing the todo-app from outside the cluster in your Namespace.
+
+Struggling with Services? Take a look at this video for a step-by-step [guide](https://youtu.be/OJths_RojFA).
+
+Need help with Services in Kubernetes? Check out the Kubernetes [documentation](https://kubernetes.io/docs/concepts/services-networking/service/) for assistance. 
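+
+If you get stuck on the Service definition, here is a minimal sketch for the Day-32 todo-app (the `app: todo` selector and `targetPort: 3000` follow the Day-32 sample Deployment.yml; the NodePort value is an assumption within the valid 30000-32767 range):
+
+```bash
+cat <<'EOF' > service.yml
+apiVersion: v1
+kind: Service
+metadata:
+  name: todo-app-service
+spec:
+  type: NodePort
+  selector:
+    app: todo
+  ports:
+    - port: 80
+      targetPort: 3000
+      nodePort: 30007
+EOF
+kubectl apply -f service.yml -n <namespace-name>
+minikube service todo-app-service -n <namespace-name> --url   # prints a reachable URL
+```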
+
+Happy Learning :)
+
+[← Previous Day](../day33/README.md) | [Next Day →](../day35/README.md)
diff --git a/2023/day34/tasks.md b/2023/day34/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day35/README.md b/2023/day35/README.md
new file mode 100644
index 0000000000..160e0030b2
--- /dev/null
+++ b/2023/day35/README.md
@@ -0,0 +1,37 @@
+# Day 35: Mastering ConfigMaps and Secrets in Kubernetes🔒🔑🛡️
+
+### 👏🎉 Yay! Yesterday we conquered Namespaces and Services 💪💻🔗🚀
+
+## What are ConfigMaps and Secrets in k8s
+
+In Kubernetes, ConfigMaps and Secrets are used to store configuration data and secrets, respectively. ConfigMaps store configuration data as key-value pairs, while Secrets store sensitive data in a base64-encoded form (with optional encryption at rest).
+
+- _Example :- Imagine you're in charge of a big spaceship (Kubernetes cluster) with lots of different parts (containers) that need information to function properly.
+  ConfigMaps are like a file cabinet where you store all the information each part needs in simple, labeled folders (key-value pairs).
+  Secrets, on the other hand, are like a safe where you keep the important, sensitive information that shouldn't be accessible to just anyone.
+  So, using ConfigMaps and Secrets, you can ensure each part of your spaceship (Kubernetes cluster) has the information it needs to work properly and keep sensitive information secure! 🚀_
+- Read more about [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) & [Secret](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+## Today's task:
+
+## Task 1:
+
+- Create a ConfigMap for your Deployment
+- Create a ConfigMap for your Deployment using a file or the command line
+- Update the deployment.yml file to include the ConfigMap
+- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n <namespace-name>`
+- Verify that the ConfigMap has been created by checking the status of the ConfigMaps in your Namespace.
+
+## Task 2:
+
+- Create a Secret for your Deployment
+- Create a Secret for your Deployment using a file or the command line
+- Update the deployment.yml file to include the Secret
+- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n <namespace-name>`
+- Verify that the Secret has been created by checking the status of the Secrets in your Namespace.
+
+Need help with ConfigMaps and Secrets? Check out this [video](https://youtu.be/FAnQTgr04mU) for assistance. 
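+
+A minimal command-line sketch for both tasks (the key names and values here are placeholders, not part of the task):
+
+```bash
+# Task 1: a ConfigMap holding plain key-value configuration
+kubectl create configmap todo-app-config --from-literal=APP_ENV=production -n <namespace-name>
+
+# Task 2: a Secret holding a sensitive value, stored base64-encoded
+kubectl create secret generic todo-app-secret --from-literal=DB_PASSWORD='test@123' -n <namespace-name>
+
+# verify both
+kubectl get configmaps,secrets -n <namespace-name>
+```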
+
+Keep learning and expanding your knowledge of Kubernetes💥🙌
+
+[← Previous Day](../day34/README.md) | [Next Day →](../day36/README.md)
diff --git a/2023/day35/tasks.md b/2023/day35/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day36/Deployment.yml b/2023/day36/Deployment.yml
new file mode 100644
index 0000000000..3c9c1c7cbc
--- /dev/null
+++ b/2023/day36/Deployment.yml
@@ -0,0 +1,26 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: todo-app-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: todo-app
+  template:
+    metadata:
+      labels:
+        app: todo-app
+    spec:
+      containers:
+      - name: todo-app
+        image: rishikeshops/todo-app
+        ports:
+        - containerPort: 8000
+        volumeMounts:
+        - name: todo-app-data
+          mountPath: /app
+      volumes:
+      - name: todo-app-data
+        persistentVolumeClaim:
+          claimName: pvc-todo-app
diff --git a/2023/day36/README.md b/2023/day36/README.md
new file mode 100644
index 0000000000..2079e66d65
--- /dev/null
+++ b/2023/day36/README.md
@@ -0,0 +1,51 @@
+# Day 36 Task: Managing Persistent Volumes in Your Deployment 💥
+
+🙌 Kudos to you for conquering ConfigMaps and Secrets in Kubernetes yesterday.
+
+🔥 You're on fire! 🔥
+
+## What are Persistent Volumes in k8s
+
+In Kubernetes, a Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. A Persistent Volume Claim (PVC) is a request for storage by a user; the claim gets bound to a matching PV. Read the official documentation of [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
+
+⏰ Wait, wait, wait! 📣 Attention all #90daysofDevOps Challengers. 💪
+
+Before diving into today's task, don't forget to share your thoughts on the #90daysofDevOps challenge 💪 Fill out our feedback form (https://lnkd.in/gcgvrq8b) to help us improve and provide the best experience 🌟 Your participation and support is greatly appreciated 🙏 Let's continue to grow together 🌱
+
+## Today's tasks:
+
+### Task 1:
+
+Add a Persistent Volume to your Deployment todo app.
+
+- Create a Persistent Volume using a file on your node. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pv.yml)
+
+- Create a Persistent Volume Claim that references the Persistent Volume. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pvc.yml)
+
+- Update your deployment.yml file to include the Persistent Volume Claim. After applying pv.yml and pvc.yml, your deployment file should look like this [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/Deployment.yml)
+
+- Apply the updated deployment using the command: `kubectl apply -f deployment.yml`
+
+- Verify that the Persistent Volume has been added to your Deployment by checking the status of the Pods and Persistent Volumes in your cluster. Use these commands: `kubectl get pods`,
+  `kubectl get pv`
+
+⚠️ Don't forget: To apply changes or create files in your Kubernetes deployments, each file must be applied separately. ⚠️
+
+### Task 2:
+
+Access the data in the Persistent Volume:
+
+- Connect to a Pod in your Deployment using the command: `kubectl exec -it <pod-name> -- /bin/bash`
+
+- Verify that you can access the data stored in the Persistent Volume from within the Pod (a quick sketch is given below)
+
+Need help with Persistent Volumes? Check out this [video](https://youtu.be/U0_N3v7vJys) for assistance. 
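+
+A quick way to check Task 2 (the pod names are placeholders; `/app` is the mountPath from the sample Deployment.yml):
+
+```bash
+# write a file into the mounted volume from inside the Pod
+kubectl exec -it <pod-name> -- /bin/bash -c 'echo "hello from the PV" > /app/test.txt'
+
+# delete the Pod; the Deployment recreates it, but the volume's data survives
+kubectl delete pod <pod-name>
+
+# read the file back from the replacement Pod
+kubectl exec -it <new-pod-name> -- cat /app/test.txt
+```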
+
+Keep up the excellent work🙌💥
+
+Happy Learning :)
+
+[← Previous Day](../day35/README.md) | [Next Day →](../day37/README.md)
diff --git a/2023/day36/pv.yml b/2023/day36/pv.yml
new file mode 100644
index 0000000000..9546aba56a
--- /dev/null
+++ b/2023/day36/pv.yml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-todo-app
+spec:
+  capacity:
+    storage: 1Gi
+  accessModes:
+    - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  hostPath:
+    path: "/tmp/data"
diff --git a/2023/day36/pvc.yml b/2023/day36/pvc.yml
new file mode 100644
index 0000000000..3d9dce14d8
--- /dev/null
+++ b/2023/day36/pvc.yml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-todo-app
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 500Mi
diff --git a/2023/day36/tasks.md b/2023/day36/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day37/README.md b/2023/day37/README.md
new file mode 100644
index 0000000000..1300e335ae
--- /dev/null
+++ b/2023/day37/README.md
@@ -0,0 +1,43 @@
+## Day 37 Task: Kubernetes Important interview Questions.
+
+## Questions
+
+1. What is Kubernetes and why is it important?
+
+2. What is the difference between Docker Swarm and Kubernetes?
+
+3. How does Kubernetes handle network communication between containers?
+
+4. How does Kubernetes handle scaling of applications?
+
+5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
+
+6. Can you explain the concept of rolling updates in Kubernetes?
+
+7. How does Kubernetes handle network security and access control?
+
+8. Can you give an example of how Kubernetes can be used to deploy a highly available application?
+
+9. What is a namespace in Kubernetes? Which namespace does a pod take if we don't specify any namespace?
+
+10. How does Ingress help in Kubernetes?
+
+11. Explain the different types of Services in Kubernetes.
+
+12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
+
+13. How does Kubernetes handle storage management for containers?
+
+14. How does the NodePort service work?
+
+15. What are a multinode cluster and a single-node cluster in Kubernetes?
+
+16. What is the difference between create and apply in Kubernetes?
+
+## These questions will help you in your next DevOps Interview.
+
+_Write a Blog and share it on LinkedIn._
+
+**_Happy Learning :)_**
+
+[← Previous Day](../day36/README.md) | [Next Day →](../day38/README.md)
diff --git a/2023/day37/tasks.md b/2023/day37/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day38/README.md b/2023/day38/README.md
new file mode 100644
index 0000000000..8f51187e87
--- /dev/null
+++ b/2023/day38/README.md
@@ -0,0 +1,30 @@
+# Day 38 Getting Started with AWS Basics☁
+
+![AWS](https://user-images.githubusercontent.com/115981550/217238286-6c6bc6e7-a1ac-4d12-98f3-f95ff5bf53fc.png)
+
+Congratulations!!! You have come so far. Don't let your excuses break your consistency. Let's begin our new journey with Cloud☁. By this time you have created multiple EC2 instances; if not, let's begin the journey:
+
+## AWS:
+
+Amazon Web Services is one of the most popular Cloud Providers, and it has a free tier for students and Cloud enthusiasts for hands-on practice while learning (create your free account today to explore more on it).
+
+Read from [here](https://aws.amazon.com/what-is-aws/)
+
+## IAM:
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. 
With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)
+
+Get to know IAM more deeply [Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)
+
+### Task1:
+
+Create an IAM user with a username of your choice and grant it EC2 access. Launch your Linux instance through the IAM user that you just created, and install Jenkins and Docker on your machine via a single shell script.
+
+### Task2:
+
+In this task you need to prepare a DevOps team of Avengers. Create 3 IAM users for the Avengers and assign them to a devops group with an IAM policy.
+
+Post your progress on LinkedIn. Till then, Happy Learning :)
+
+[← Previous Day](../day37/README.md) | [Next Day →](../day39/README.md)
diff --git a/2023/day38/tasks.md b/2023/day38/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day39/README.md b/2023/day39/README.md
new file mode 100644
index 0000000000..9a7e3e934f
--- /dev/null
+++ b/2023/day39/README.md
@@ -0,0 +1,41 @@
+# Day 39 AWS and IAM Basics☁
+
+![AWS](https://miro.medium.com/max/1400/0*dIzXLQn6aBClm1TJ.png)
+
+By this time you have created multiple EC2 instances and, post launch, manually installed applications like Jenkins, Docker, etc.
+Now let's switch to a little automation. Sounds interesting??🤯
+
+## AWS:
+
+Amazon Web Services is one of the most popular Cloud Providers, and it has a free tier for students and Cloud enthusiasts for hands-on practice while learning (create your free account today to explore more on it).
+
+Read from [here](https://aws.amazon.com/what-is-aws/)
+
+## User Data in AWS:
+
+- When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
+- You can also pass this data into the launch instance wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
+- This will save time and manual effort every time you launch an instance and want to install any application on it, like Apache, Docker, Jenkins, etc.
+
+Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html)
+
+## IAM:
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)
+
+Get to know IAM more deeply🏊[Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)
+
+### Task1:
+
+- Launch an EC2 instance with Jenkins installed via user data (a sample user-data script is sketched below). Once the server shows up in the console, hit the IP address (with Jenkins' default port 8080) in your browser, and your Jenkins page should be visible.
+- Take a screenshot of the user data and the Jenkins page; this will verify task completion.
+
+### Task2:
+
+- Read more on IAM Roles and explain IAM Users, Groups, and Roles in your own terms.
+- Create three Roles named: DevOps-User, Test-User and Admin.
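+
+A minimal user-data sketch for Task1 (paste it under "Advanced details → User data" when launching an Ubuntu instance; the Java/Jenkins steps follow the Jenkins Debian-repo install instructions and are an assumption, so adjust them for your AMI):
+
+```bash
+#!/bin/bash
+apt-get update
+apt-get install -y openjdk-17-jre docker.io
+# add the Jenkins apt repository and install Jenkins
+curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key -o /usr/share/keyrings/jenkins-keyring.asc
+echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" > /etc/apt/sources.list.d/jenkins.list
+apt-get update
+apt-get install -y jenkins
+systemctl enable --now docker jenkins
+```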
Till then Happy Learning :)
+
+[← Previous Day](../day38/README.md) | [Next Day →](../day40/README.md)
diff --git a/2023/day39/tasks.md b/2023/day39/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day40/README.md b/2023/day40/README.md
new file mode 100644
index 0000000000..ce2dbcfda3
--- /dev/null
+++ b/2023/day40/README.md
@@ -0,0 +1,49 @@
+# Day 40 AWS EC2 Automation ☁
+
+![AWS](https://www.eginnovations.com/blog/wp-content/uploads/2021/09/Amazon-AWS-Cloud-Topimage-1.jpg)
+
+I hope your journey with AWS cloud and automation is going well 😍
+
+## Automation in EC2:
+
+Amazon EC2, or Amazon Elastic Compute Cloud, can give you secure, reliable, high-performance, and cost-effective computing infrastructure to meet demanding business needs.
+
+Also, if you know a few things, you can automate many things.
+
+Read from [here](https://aws.amazon.com/ec2/)
+
+## Launch template in AWS EC2:
+
+- You can make a launch template with the configuration information you need to start an instance. You can save launch parameters in launch templates so you don't have to type them in every time you start a new instance.
+- For example, a launch template can have the AMI ID, instance type, and network settings that you usually use to launch instances.
+- You can tell the Amazon EC2 console to use a certain launch template when you start an instance.
+
+Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)
+
+## Instance Types:
+
+Amazon EC2 has a large number of instance types that are optimised for different uses. The different combinations of CPU, memory, storage and networking capacity in instance types give you the freedom to choose the right mix of resources for your apps. Each instance type comes with one or more instance sizes, so you can adjust your resources to meet the needs of the workload you want to run.
+
+Read from [here](https://aws.amazon.com/ec2/instance-types/?trk=32f4fbd0-ffda-4695-a60c-8857fab7d0dd&sc_channel=ps&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types&ef_id=CjwKCAiA0JKfBhBIEiwAPhZXD_O1-3qZkRa-KScynbwjvHd3l4UHSTfKuigd5ZPukXoDXu-v3MtC7hoCafEQAvD_BwE:G:s&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types)
+
+## AMI:
+
+An Amazon Machine Image (AMI) is an image that AWS supports and keeps up to date. It contains the information needed to start an instance. When you launch an instance, you must choose an AMI. When you need multiple instances with the same configuration, you can launch them from a single AMI.
+
+### Task1:
+
+- Create a launch template with an Amazon Linux 2 AMI and the t2.micro instance type, with Jenkins and Docker set up (you can use the Day 39 user data script for installing the required tools).
+
+- Create 3 instances using the launch template; there must be an option that sets the number of instances to be launched. Can you find it? :) (A CLI sketch of the same flow is shown below.)
+
+- You can go one step ahead and create an auto-scaling group. Sounds tough?
+
+Check [this](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#create-launch-template-for-auto-scaling) out
+
+Post your progress on Linkedin.
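+
+If you prefer the AWS CLI over the console, here is a minimal sketch of the same flow. It is only an illustration: the template name is a placeholder, `user-data.sh` stands in for the Day 39 script, and the AMI ID must be replaced with a real Amazon Linux 2 AMI ID from your region.
+
+```
+# Create a launch template that bakes in the AMI, instance type and user data
+# (launch-template UserData must be base64-encoded)
+aws ec2 create-launch-template \
+  --launch-template-name jenkins-docker-template \
+  --launch-template-data "{\"ImageId\":\"<your-ami-id>\",\"InstanceType\":\"t2.micro\",\"UserData\":\"$(base64 -w0 user-data.sh)\"}"
+
+# Launch 3 instances from the template in a single call
+aws ec2 run-instances \
+  --launch-template LaunchTemplateName=jenkins-docker-template,Version=1 \
+  --count 3
+```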
+
+Happy Learning :)
+
+[← Previous Day](../day39/README.md) | [Next Day →](../day41/README.md)
diff --git a/2023/day40/tasks.md b/2023/day40/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day41/README.md b/2023/day41/README.md
new file mode 100644
index 0000000000..0a1488f068
--- /dev/null
+++ b/2023/day41/README.md
@@ -0,0 +1,53 @@
+# Day 41: Setting up an Application Load Balancer with AWS EC2 🚀 ☁
+
+![LB2](https://user-images.githubusercontent.com/115981550/218145297-d55fe812-32b7-4242-a4f8-eb66312caa2c.png)
+
+### Hi, I hope you had a great day yesterday learning about launch templates and instances in EC2. Today, we are going to dive into one of the most important concepts in EC2: Load Balancing.
+
+## What is Load Balancing?
+
+Load balancing is the distribution of workloads across multiple servers to ensure consistent and optimal resource utilization. It is an essential aspect of any large-scale and scalable computing system, as it helps you to improve the reliability and performance of your applications.
+
+## Elastic Load Balancing:
+
+**Elastic Load Balancing (ELB)** is a service provided by Amazon Web Services (AWS) that automatically distributes incoming traffic across multiple EC2 instances. ELB provides three types of load balancers:
+
+Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)
+
+1. **Application Load Balancer (ALB)** - _operates at layer 7 of the OSI model and is ideal for applications that require advanced routing and microservices._
+
+- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
+
+2. **Network Load Balancer (NLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require high throughput and low latency._
+
+- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)
+
+3. **Classic Load Balancer (CLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require basic load balancing features._
+
+- Read more [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
+
+## 🎯 Today's Tasks:
+
+### Task 1:
+
+- Launch 2 EC2 instances with an Ubuntu AMI and use user data to install the Apache web server.
+- Modify the index.html file to include your name so that when your Apache server is hosted, it will display your name. Do the same for the 2nd instance, which should include "TrainWithShubham Community is Super Awesome :)".
+- Copy the public IP addresses of your EC2 instances.
+- Open a web browser and paste a public IP address into the address bar.
+- You should see the web page you configured for each instance.
+
+### Task 2:
+
+- Create an Application Load Balancer (ALB) in EC2 using the AWS Management Console.
+- Add the EC2 instances you launched in Task 1 to the ALB as a target group.
+- Verify that the ALB is working properly by checking the health status of the target instances and testing the load balancing capabilities.
+
+![LoadBalancer](https://user-images.githubusercontent.com/115981550/218143557-26ec33ce-99a7-4db6-a46f-1cf48ed77ae0.png)
+
+Need help with the task? Check out this [Blog for assistance](https://rushikesh-mashidkar.hashnode.dev/create-an-application-load-balancer-elastic-load-balancing-using-aws-ec2-instance).
+
+Don't forget to share your progress on LinkedIn and have a great day🙌💥
+
+Happy Learning!
😃
+
+[← Previous Day](../day40/README.md) | [Next Day →](../day42/README.md)
diff --git a/2023/day41/tasks.md b/2023/day41/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day42/README.md b/2023/day42/README.md
new file mode 100644
index 0000000000..5f8a37ff09
--- /dev/null
+++ b/2023/day42/README.md
@@ -0,0 +1,28 @@
+# Day 42: IAM Programmatic access and AWS CLI 🚀 ☁
+
+Today is more of a reading exercise and getting some programmatic access to your AWS account.
+
+## IAM Programmatic access
+
+In order to access your AWS account from a terminal or system, you can use AWS access keys and AWS secret access keys.
+Watch [this video](https://youtu.be/XYKqL5GFI-I) for more details.
+
+## AWS CLI
+
+The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+
+The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS IAM Identity Center (successor to AWS SSO), and various interactive features.
+
+## Task-01
+
+- Create an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the AWS Console.
+
+## Task-02
+
+- Set up and install the AWS CLI and configure your account credentials.
+
+Let me know if you have any issues while doing the task.
+
+Happy Learning :)
+
+[← Previous Day](../day41/README.md) | [Next Day →](../day43/README.md)
diff --git a/2023/day42/tasks.md b/2023/day42/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day43/README.md b/2023/day43/README.md
new file mode 100644
index 0000000000..b838d01544
--- /dev/null
+++ b/2023/day43/README.md
@@ -0,0 +1,32 @@
+# Day 43: S3 Programmatic access with AWS-CLI 💻 📁
+
+Hi, I hope you had a great day yesterday. Today, as part of the #90DaysofDevOps Challenge, we will be exploring the most commonly used service in AWS, i.e. S3.
+
+![s3](https://user-images.githubusercontent.com/115981550/218308379-a2e841cf-6b77-4d02-bfbe-20d1bae09b20.png)
+
+# S3
+
+Amazon Simple Storage Service (Amazon S3) is an object storage service that provides a secure and scalable way to store and access data on the cloud. It is designed for storing any kind of data, such as text files, images, videos, backups, and more.
+Read more [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
+
+## Task-01
+
+- Launch an EC2 instance using the AWS Management Console and connect to it using Secure Shell (SSH).
+- Create an S3 bucket and upload a file to it using the AWS Management Console.
+- Access the file from the EC2 instance using the AWS Command Line Interface (AWS CLI).
+
+Read more about S3 using aws-cli [here](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html)
+
+## Task-02
+
+- Create a snapshot of the EC2 instance and use it to launch a new EC2 instance.
+- Download a file from the S3 bucket using the AWS CLI.
+- Verify that the contents of the file are the same on both EC2 instances.
+
+Added some useful commands to complete the task.
[Click here for commands](https://github.com/LondheShubham153/90DaysOfDevOps/blob/833a67ac4ec17b992934cd6878875dccc4274f56/2023/day43/aws-cli.md)
+
+Let me know if you have any questions or face any issues while doing the tasks.🚀
+
+Happy Learning :)
+
+[← Previous Day](../day42/README.md) | [Next Day →](../day44/README.md)
diff --git a/2023/day43/aws-cli.md b/2023/day43/aws-cli.md
new file mode 100644
index 0000000000..8c0f23fe2f
--- /dev/null
+++ b/2023/day43/aws-cli.md
@@ -0,0 +1,21 @@
+Here are some commonly used AWS CLI commands for Amazon S3:
+
+`aws s3 ls` - This command lists all of the S3 buckets in your AWS account.
+
+`aws s3 mb s3://bucket-name` - This command creates a new S3 bucket with the specified name.
+
+`aws s3 rb s3://bucket-name` - This command deletes the specified S3 bucket.
+
+`aws s3 cp file.txt s3://bucket-name` - This command uploads a file to an S3 bucket.
+
+`aws s3 cp s3://bucket-name/file.txt .` - This command downloads a file from an S3 bucket to your local file system.
+
+`aws s3 sync local-folder s3://bucket-name` - This command syncs the contents of a local folder with an S3 bucket.
+
+`aws s3 ls s3://bucket-name` - This command lists the objects in an S3 bucket.
+
+`aws s3 rm s3://bucket-name/file.txt` - This command deletes an object from an S3 bucket.
+
+`aws s3 presign s3://bucket-name/file.txt` - This command generates a pre-signed URL for an S3 object, which can be used to grant temporary access to the object.
+
+`aws s3api list-buckets` - This command retrieves a list of all S3 buckets in your AWS account, using the S3 API.
diff --git a/2023/day43/tasks.md b/2023/day43/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day44/README.md b/2023/day44/README.md
new file mode 100644
index 0000000000..c836c86b29
--- /dev/null
+++ b/2023/day44/README.md
@@ -0,0 +1,23 @@
+# Day 44: Relational Database Service in AWS
+
+Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
+
+## Task-01
+
+- Create a Free tier RDS instance of MySQL
+- Create an EC2 instance
+- Create an IAM role with RDS access
+- Assign the role to EC2 so that your EC2 Instance can connect with RDS
+- Once the RDS instance is up and running, get the credentials and connect your EC2 instance using a MySQL client.
+
+Hint:
+
+You should install a MySQL client on the EC2 instance and connect to the RDS host and port with this client.
+
+Post the screenshots once your EC2 instance can connect to the MySQL server; that will be a small win for you.
+
+Watch [this video](https://youtu.be/MrA6Rk1Y82E) for reference.
+
+Happy Learning
+
+[← Previous Day](../day43/README.md) | [Next Day →](../day45/README.md)
diff --git a/2023/day44/tasks.md b/2023/day44/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day45/README.md b/2023/day45/README.md
new file mode 100644
index 0000000000..c2c11a93b2
--- /dev/null
+++ b/2023/day45/README.md
@@ -0,0 +1,18 @@
+# Day 45: Deploy WordPress website on AWS
+
+Over 30% of all websites on the internet use WordPress as their content management system (CMS). It is most often used to run blogs, but it can also be used to run e-commerce sites, message boards, and many other popular things. This guide will show you how to set up a WordPress blog site.
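+
+As a rough sketch of what the EC2 side of this setup involves (package names assume an Ubuntu instance; the database name, user, password, and RDS endpoint below are placeholders you substitute from your own RDS setup):
+
+```
+#!/bin/bash
+# Install Apache, PHP and a MySQL client, then fetch WordPress
+apt-get update -y
+apt-get install -y apache2 php php-mysql mysql-client
+wget https://wordpress.org/latest.tar.gz
+tar -xzf latest.tar.gz -C /var/www/html --strip-components=1
+
+# Point WordPress at the RDS database (all values below are placeholders)
+cp /var/www/html/wp-config-sample.php /var/www/html/wp-config.php
+sed -i "s/database_name_here/wordpress/" /var/www/html/wp-config.php
+sed -i "s/username_here/admin/" /var/www/html/wp-config.php
+sed -i "s/password_here/CHANGE_ME/" /var/www/html/wp-config.php
+sed -i "s/localhost/<your-rds-endpoint>/" /var/www/html/wp-config.php
+systemctl restart apache2
+```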
+
+## Task-01
+
+- As WordPress requires a MySQL database to store its data, create an RDS instance as you did in Day 44
+
+To configure this WordPress site, you will create the following resources in AWS:
+
+- An Amazon EC2 instance to install and host the WordPress application.
+- An Amazon RDS for MySQL database to store your WordPress data.
+- Set up the server and post your new WordPress app.
+
+Read [this](https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/) for a detailed explanation
+Happy Learning :)
+
+[← Previous Day](../day44/README.md) | [Next Day →](../day46/README.md)
diff --git a/2023/day45/tasks.md b/2023/day45/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day46/README.md b/2023/day46/README.md
new file mode 100644
index 0000000000..a44ae2f101
--- /dev/null
+++ b/2023/day46/README.md
@@ -0,0 +1,35 @@
+# Day-46: Set up CloudWatch alarms and SNS topic in AWS
+
+Hey learners, you have been using AWS services for at least the last 45 days. Have you ever wondered what happens if a service keeps charging you continuously and you don't find out until you lose all your pocket money?
+
+Hahahaha😁, Well! We, as a responsible community, always try to keep things within the free tier, but it's good to know about and set up something which will inform you whenever the bill touches a threshold.
+
+## What is Amazon CloudWatch?
+
+Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.
+
+Read more about CloudWatch from the official documentation [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)
+
+## What is Amazon SNS?
+
+Amazon Simple Notification Service is a notification service provided as part of Amazon Web Services since 2010. It provides a low-cost infrastructure for mass delivery of messages, predominantly to mobile users.
+
+Read more about it [here](https://docs.aws.amazon.com/sns/latest/dg/welcome.html)
+
+## Task :
+
+- Create a CloudWatch alarm that monitors your billing and sends an email to you when it reaches $2.
+
+(You can keep it for your future use)
+
+- Delete the billing alarm that you just created.
+
+(Now you also know how to delete it as well.)
+
+Need help with CloudWatch? Check out this [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) for assistance.
+
+Keep growing your AWS knowledge💥🙌
+
+Happy Learning! :)
+
+[← Previous Day](../day45/README.md) | [Next Day →](../day47/README.md)
diff --git a/2023/day46/tasks.md b/2023/day46/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day47/README.md b/2023/day47/README.md
new file mode 100644
index 0000000000..7d3dc37e37
--- /dev/null
+++ b/2023/day47/README.md
@@ -0,0 +1,64 @@
+# Day 47: AWS Elastic Beanstalk
+Today, we explore a new AWS service: Elastic Beanstalk. We'll also cover deploying a small web application (game) on this platform.
+
+## What is AWS Elastic Beanstalk?
+![image](https://github.com/Simbaa815/90DaysOfDevOps/assets/112085387/75f69087-d769-4586-b4a7-99a87feaec92)
+
+- AWS Elastic Beanstalk is a service used to deploy and scale web applications developed by developers.
+- It supports multiple programming languages and runtime environments such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.
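+
+If you later want to drive the deployment from a terminal instead of the console, the EB CLI offers a quick path. A minimal sketch, assuming the EB CLI is installed via pip and using placeholder application/environment names and an example platform:
+
+```
+pip install awsebcli            # install the Elastic Beanstalk CLI
+eb init -p php my-2048-game     # initialise an EB application (platform and name are examples)
+eb create my-2048-env           # create an environment and deploy the current directory
+eb open                         # open the deployed app in a browser
+```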
+
+## Why do we need AWS Elastic Beanstalk?
+- Previously, developers faced challenges in sharing software modules across geographically separated teams.
+- AWS Elastic Beanstalk solves this problem by providing a service to easily share applications across different devices.
+
+## Advantages of AWS Elastic Beanstalk
+- Highly scalable
+- Fast and simple to begin
+- Quick deployment
+- Supports multi-tenant architecture
+- Simplifies operations
+- Cost efficient
+
+## Components of AWS Elastic Beanstalk
+- Application Version: Represents a specific iteration or release of an application's codebase.
+- Environment Tier: Defines the infrastructure resources allocated for an environment (e.g., web server environment, worker environment).
+- Environment: Represents a collection of AWS resources running an application version.
+- Configuration Template: Defines the settings for an environment, including instance types, scaling options, and more.
+
+## Elastic Beanstalk Environment
+There are two types of environments: web server and worker.
+
+- Web server environments are front-end facing, accessed directly by clients using a URL.
+
+- Worker environments support backend applications or micro apps.
+
+## Task-01
+Deploy the [2048-game](https://github.com/Simbaa815/2048-game) using AWS Elastic Beanstalk.
+
+If you ever find yourself facing a challenge, feel free to refer to this helpful [blog](https://devxblog.hashnode.dev/aws-elastic-beanstalk-deploying-the-2048-game) post for guidance and support.
+
+---
+
+# Additional work
+
+## Test Knowledge on aws 💻 📈
+Today, we will test your knowledge of AWS services, as part of the 90 Days of DevOps Challenge.
+
+## Task-01
+
+- Launch an EC2 instance using the AWS Management Console and connect to it using SSH.
+- Install a web server on the EC2 instance and deploy a simple web application.
+- Monitor the EC2 instance using Amazon CloudWatch and troubleshoot any issues that arise.
+
+## Task-02
+- Create an Auto Scaling group using the AWS Management Console and configure it to launch EC2 instances in response to changes in demand.
+- Use Amazon CloudWatch to monitor the performance of the Auto Scaling group and the EC2 instances and troubleshoot any issues that arise.
+- Use the AWS CLI to view the state of the Auto Scaling group and the EC2 instances and verify that the correct number of instances are running.
+
+We hope that these tasks will give you hands-on experience with AWS services and help you understand how these services work together. If you have any questions or face any issues while doing the tasks, please let us know.
+
+Happy Learning :)
+
+[← Previous Day](../day46/README.md) | [Next Day →](../day48/README.md)
diff --git a/2023/day47/tasks.md b/2023/day47/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day48/README.md b/2023/day48/README.md
new file mode 100644
index 0000000000..01836eac4e
--- /dev/null
+++ b/2023/day48/README.md
@@ -0,0 +1,40 @@
+# Day-48 - ECS
+
+Today will be a great learning day for sure. I know many of you may not know about the term "ECS". As you know, the 90 Days Of DevOps Challenge is mostly about 'learning something new', so let's learn then ;)
+
+## What is ECS ?
+
+- ECS (Elastic Container Service) is a fully-managed container orchestration service provided by Amazon Web Services (AWS). It allows you to run and manage Docker containers on a cluster of virtual machines (EC2 instances) without having to manage the underlying infrastructure.
With ECS, you can easily deploy, manage, and scale your containerized applications using the AWS Management Console, the AWS CLI, or the API. ECS supports both "Fargate" and "EC2 launch types", which means you can run your containers on AWS-managed infrastructure or your own EC2 instances.
+
+ECS also integrates with other AWS services, such as Elastic Load Balancing, Auto Scaling, and Amazon VPC, allowing you to build scalable and highly available applications. Additionally, ECS has support for Docker Compose (via the ECS CLI), making it easy to adopt existing container workflows; for Kubernetes itself, AWS offers the separate EKS service.
+
+Overall, ECS is a powerful and flexible container orchestration service that can help simplify the deployment and management of containerized applications in AWS.
+
+## Difference between EKS and ECS ?
+
+- EKS (Elastic Kubernetes Service) and ECS (Elastic Container Service) are both container orchestration platforms provided by Amazon Web Services (AWS). While both platforms allow you to run containerized applications in the AWS cloud, there are some differences between the two.
+
+**Architecture**:
+ECS is based on a centralized architecture, where there is a control plane that manages the scheduling of containers on EC2 instances. On the other hand, EKS is based on a distributed architecture, where the Kubernetes control plane is distributed across multiple EC2 instances.
+
+**Kubernetes Support**:
+EKS is a fully managed Kubernetes service, meaning that it supports Kubernetes natively and allows you to run your Kubernetes workloads on AWS without having to manage the Kubernetes control plane. ECS, on the other hand, has its own orchestration engine and does not support Kubernetes natively.
+
+**Scaling**:
+EKS is designed to automatically scale your Kubernetes cluster based on demand, whereas ECS requires you to configure scaling policies for your tasks and services.
+
+**Flexibility**:
+EKS provides more flexibility than ECS in terms of container orchestration, as it allows you to customize and configure Kubernetes to meet your specific requirements. ECS is more restrictive in terms of the options available for container orchestration.
+
+**Community**:
+Kubernetes has a large and active open-source community, which means that EKS benefits from a wide range of community-driven development and support. ECS, on the other hand, has a smaller community and is largely driven by AWS itself.
+
+In summary, EKS is a good choice if you want to use Kubernetes to manage your containerized workloads on AWS, while ECS is a good choice if you want a simpler, more managed platform for running your containerized applications.
+
+# Task :
+
+Set up ECS (Elastic Container Service) by setting up Nginx on ECS.
+
+[← Previous Day](../day47/README.md) | [Next Day →](../day49/README.md)
diff --git a/2023/day48/tasks.md b/2023/day48/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day49/README.md b/2023/day49/README.md
new file mode 100644
index 0000000000..ecc603177a
--- /dev/null
+++ b/2023/day49/README.md
@@ -0,0 +1,25 @@
+# Day 49 - INTERVIEW QUESTIONS ON AWS
+
+Hey people, we have listened to your suggestions and we are looking forward to getting more!
+As you have asked us to put more interview-based questions as part of the daily tasks, here it is :)
+
+## INTERVIEW QUESTIONS:
+
+- Name 5 AWS services you have used and what are their use cases?
+- What are the tools used to send logs to the cloud environment?
+- What are IAM Roles? How do you create/manage them?
+- How do you upgrade or downgrade a system with zero downtime?
+- What is infrastructure as code and how do you use it?
+- What is a load balancer? Give scenarios of each kind of balancer based on your experience.
+- What is CloudFormation and what is it used for?
+- What is the difference between AWS CloudFormation and AWS Elastic Beanstalk?
+- What are the kinds of security attacks that can occur on the cloud? And how can we minimize them?
+- Can we recover an EC2 instance when we have lost the key?
+- What is a gateway?
+- What is the difference between Amazon RDS, DynamoDB, and Redshift?
+- Do you prefer to host a website on S3? What's the reason if your answer is either yes or no?
+
+Share your answers on LinkedIn in the best possible way, as if you were at an interview table.
+Happy Learning !! :)
+
+[← Previous Day](../day48/README.md) | [Next Day →](../day50/README.md)
diff --git a/2023/day49/tasks.md b/2023/day49/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day50/README.md b/2023/day50/README.md
new file mode 100644
index 0000000000..0340a36b09
--- /dev/null
+++ b/2023/day50/README.md
@@ -0,0 +1,30 @@
+# Day 50: Your CI/CD pipeline on AWS - Part-1 🚀 ☁
+
+What if I tell you that, in the next 4 days, you'll be making a CI/CD pipeline on AWS with these tools?
+
+- CodeCommit
+- CodeBuild
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeCommit ?
+
+- CodeCommit is a managed source control service by AWS that allows users to store, manage, and version their source code and artifacts securely and at scale. It supports Git, integrates with other AWS services, enables collaboration through branch and merge workflows, and provides audit logs and compliance reports to meet regulatory requirements and track changes. Overall, CodeCommit provides developers with a reliable and efficient way to manage their codebase and set up a CI/CD pipeline for their software development projects.
+
+# Task-01 :
+
+- Set up a code repository on CodeCommit and clone it on your local machine.
+- You need to set up Git credentials in your AWS IAM.
+- Use those credentials on your local machine and then clone the repository from CodeCommit.
+
+# Task-02 :
+
+- Add a new file locally and commit it to your local branch.
+- Push the local changes to the CodeCommit repository.
+
+For more details watch [this](https://youtu.be/p5i3cMCQ760) video.
+
+Happy Learning :)
+
+[← Previous Day](../day49/README.md) | [Next Day →](../day51/README.md)
diff --git a/2023/day50/tasks.md b/2023/day50/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day51/README.md b/2023/day51/README.md
new file mode 100644
index 0000000000..01f0b70262
--- /dev/null
+++ b/2023/day51/README.md
@@ -0,0 +1,30 @@
+# Day 51: Your CI/CD pipeline on AWS - Part 2 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit.
+
+Over the next few days you'll learn these tools/services:
+
+- CodeBuild
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeBuild ?
+
+- AWS CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers.
+
+# Task-01 :
+
+- Read about the Buildspec file for CodeBuild.
+- Create a simple index.html file in the CodeCommit repository.
+- Build and serve the index.html using an Nginx server.
+
+# Task-02 :
+
+- Add a buildspec.yaml file to the CodeCommit repository and complete the build process (a minimal sketch follows below).
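+
+For reference, a minimal `buildspec.yaml` for this task could look like the sketch below. This assumes an Ubuntu-based CodeBuild image where packages can be installed with apt; adjust commands and paths to your repository layout:
+
+```
+version: 0.2
+
+phases:
+  install:
+    commands:
+      - apt-get update -y
+      - apt-get install -y nginx
+  build:
+    commands:
+      - cp index.html /var/www/html/index.html
+
+artifacts:
+  files:
+    - '**/*'
+```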
+
+For more details watch [this](https://youtu.be/p5i3cMCQ760) video.
+
+Happy Learning :)
+
+[← Previous Day](../day50/README.md) | [Next Day →](../day52/README.md)
diff --git a/2023/day51/tasks.md b/2023/day51/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day52/README.md b/2023/day52/README.md
new file mode 100644
index 0000000000..52dffd62ae
--- /dev/null
+++ b/2023/day52/README.md
@@ -0,0 +1,31 @@
+# Day 52: Your CI/CD pipeline on AWS - Part 3 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit & CodeBuild.
+
+Over the next few days you'll learn these tools/services:
+
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeDeploy ?
+
+- AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.
+
+CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy.
+
+# Task-01 :
+
+- Read about the appspec.yaml file for CodeDeploy.
+- Deploy an index.html file on an EC2 machine using Nginx.
+- You have to set up a CodeDeploy agent in order to deploy code on EC2.
+
+# Task-02 :
+
+- Add an appspec.yaml file to the CodeCommit repository and complete the deployment process.
+
+For more details watch [this](https://youtu.be/IUF-pfbYGvg) video.
+
+Happy Learning :)
+
+[← Previous Day](../day51/README.md) | [Next Day →](../day53/README.md)
diff --git a/2023/day52/tasks.md b/2023/day52/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day53/README.md b/2023/day53/README.md
new file mode 100644
index 0000000000..2139f0cb5d
--- /dev/null
+++ b/2023/day53/README.md
@@ -0,0 +1,21 @@
+# Day 53: Your CI/CD pipeline on AWS - Part 4 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit, CodeBuild & CodeDeploy.
+
+Finish off in style with AWS CodePipeline.
+
+## What is CodePipeline ?
+
+- CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
+  Think of it as a CI/CD pipeline service.
+
+# Task-01 :
+
+- Create a deployment group of EC2 instances.
+- Create a CodePipeline that gets the code from CodeCommit, builds the code using CodeBuild, and deploys it to the deployment group.
+
+For more details watch [this](https://youtu.be/IUF-pfbYGvg) video.
+
+Happy Learning :)
+
+[← Previous Day](../day52/README.md) | [Next Day →](../day54/README.md)
diff --git a/2023/day53/tasks.md b/2023/day53/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day54/README.md b/2023/day54/README.md
new file mode 100644
index 0000000000..f134a32bf1
--- /dev/null
+++ b/2023/day54/README.md
@@ -0,0 +1,19 @@
+# Day 54: Understanding Infrastructure as Code and Configuration Management
+
+## What's the difference bhaiyya?
+
+When it comes to the cloud, Infrastructure as Code (IaC) and Configuration Management (CM) are inseparable. With IaC, a descriptive model is used for infrastructure management. To name a few examples of infrastructure: networks, virtual computers, and load balancers. Applying an IaC model always produces the same environment.
Throughout the lifecycle of a product, Configuration Management (CM) ensures that the performance, functional and physical inputs, requirements, design, and operations of that product remain consistent.
+
+# Task-01
+
+- Read more about IaC and Configuration Management tools
+- Give the differences between the two with suitable examples
+- What are the most common IaC and configuration management tools?
+
+Write a blog on this topic in the most creative way and post it on LinkedIn :)
+
+happy learning...
+
+[← Previous Day](../day53/README.md) | [Next Day →](../day55/README.md)
diff --git a/2023/day54/tasks.md b/2023/day54/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day55/README.md b/2023/day55/README.md
new file mode 100644
index 0000000000..5df87b107a
--- /dev/null
+++ b/2023/day55/README.md
@@ -0,0 +1,28 @@
+# Day 55: Understanding Configuration Management with Ansible
+
+## What's this Ansible?
+
+Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning.
+
+# Task-01
+
+- Installation of Ansible on AWS EC2 (Master Node):
+  `sudo apt-add-repository ppa:ansible/ansible`
+  `sudo apt update`
+  `sudo apt install ansible`
+
+# Task-02
+
+- Read more about the hosts file:
+  `sudo nano /etc/ansible/hosts`
+  `ansible-inventory --list -y`
+
+# Task-03
+
+- Set up 2 more EC2 instances with the same private key as the previous instance (Nodes).
+- Copy the private key to the master server where Ansible is set up.
+- Try a ping command using Ansible to the Nodes.
+
+Write a blog on this topic with screenshots in the most creative way and post it on LinkedIn :)
+
+happy learning...
+
+[← Previous Day](../day54/README.md) | [Next Day →](../day56/README.md)
diff --git a/2023/day55/tasks.md b/2023/day55/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day56/README.md b/2023/day56/README.md
new file mode 100644
index 0000000000..853372bae2
--- /dev/null
+++ b/2023/day56/README.md
@@ -0,0 +1,18 @@
+# Day 56: Understanding Ad-hoc commands in Ansible
+
+Ansible ad hoc commands are one-liners designed to achieve a very specific task; they are like quick snippets and your compact Swiss Army knife when you want to do a quick task across multiple machines.
+
+To put it simply, Ansible ad hoc commands are one-liner Linux shell commands, and playbooks are like a shell script: a collection of many commands with logic.
+
+Ansible ad hoc commands come in handy when you want to perform a quick task.
+
+# Task-01
+
+- Write an Ansible ad hoc ping command to ping 3 servers from the inventory file
+- Write an Ansible ad hoc command to check uptime
+
+- You can refer to [this](https://www.middlewareinventory.com/blog/ansible-ad-hoc-commands/) blog to understand the different examples of ad-hoc commands, try them out, and post the screenshots in a blog with an explanation.
+
+happy Learning :)
+
+[← Previous Day](../day55/README.md) | [Next Day →](../day57/README.md)
diff --git a/2023/day56/tasks.md b/2023/day56/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day57/README.md b/2023/day57/README.md
new file mode 100644
index 0000000000..4866eecf58
--- /dev/null
+++ b/2023/day57/README.md
@@ -0,0 +1,13 @@
+# Day 57: Ansible Hands-on with video
+
+Ansible is fun; you saw in the last few days how easy it is.
+
+Let's make it fun now, by using a video explanation for Ansible.
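+
+As a quick recap of the last few days before you watch, ad hoc one-liners like these are the kind of thing the video builds on (the `servers` group name is a placeholder from your own inventory file):
+
+```
+ansible servers -m ping        # ping every host in the 'servers' inventory group
+ansible servers -a "uptime"    # run the uptime command on every host in the group
+```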
+
+# Task-01
+
+- Write a Blog explanation for the [ansible video](https://youtu.be/SGB7EdiP39E)
+
+happy Learning :)
+
+[← Previous Day](../day56/README.md) | [Next Day →](../day58/README.md)
diff --git a/2023/day57/tasks.md b/2023/day57/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day58/README.md b/2023/day58/README.md
new file mode 100644
index 0000000000..f8facae4b7
--- /dev/null
+++ b/2023/day58/README.md
@@ -0,0 +1,23 @@
+# Day 58: Ansible Playbooks
+
+Ansible playbooks run multiple tasks, assign roles, and define configurations, deployment steps, and variables. If you’re using multiple servers, Ansible playbooks organize the steps between the assembled machines or servers and get them organized and running in the way the users need them to. Consider playbooks as the equivalent of instruction manuals.
+
+# Task-01
+
+- Write an Ansible playbook to create a file on a different server.
+
+- Write an Ansible playbook to create a new user.
+
+- Write an Ansible playbook to install Docker on a group of servers.
+
+Watch [this](https://youtu.be/089mRKoJTzo) video to learn about Ansible playbooks.
+
+# Task-02
+
+- Write a blog about writing Ansible playbooks with the best practices.
+
+Let me or anyone in the community know if you face any challenges.
+
+happy Learning :)
+
+[← Previous Day](../day57/README.md) | [Next Day →](../day59/README.md)
diff --git a/2023/day58/tasks.md b/2023/day58/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day59/README.md b/2023/day59/README.md
new file mode 100644
index 0000000000..f8bf4d0908
--- /dev/null
+++ b/2023/day59/README.md
@@ -0,0 +1,26 @@
+# Day 59: Ansible Project 🔥
+
+Ansible playbooks are amazing, as you learned yesterday.
+What if you deploy a simple web app using Ansible? Sounds like a good project, right?
+
+# Task-01
+
+- Create 3 EC2 instances. Make sure all three are created with the same key pair.
+
+- Install Ansible on the host server.
+
+- Copy the private key from your local machine to the host server (Ansible_host) at /home/ubuntu/.ssh.
+
+- Access the inventory file using `sudo vim /etc/ansible/hosts`.
+
+- Create a playbook to install Nginx.
+
+- Deploy a sample webpage using the Ansible playbook.
+
+Read [this](https://medium.com/@sandeep010498/learn-ansible-with-real-time-project-cf6a0a512d45) Blog by [Sandeep Singh](https://medium.com/@sandeep010498) to clear all your doubts.
+
+Let me or anyone in the community know if you face any challenges.
+
+happy Learning :)
+
+[← Previous Day](../day58/README.md) | [Next Day →](../day60/README.md)
diff --git a/2023/day59/tasks.md b/2023/day59/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day60/README.md b/2023/day60/README.md
new file mode 100644
index 0000000000..ecae296195
--- /dev/null
+++ b/2023/day60/README.md
@@ -0,0 +1,31 @@
+# Day 60 - Terraform🔥
+
+Hello Learners, you have been doing every task by creating an EC2 instance (mostly). Today let’s automate this process. How to do it? Well, Terraform is the solution.
+
+## What is Terraform?
+
+Terraform is an infrastructure as code (IaC) tool that allows you to create, manage, and update infrastructure
+resources such as virtual machines, networks, and storage in a repeatable, scalable, and automated way.
+
+## Task 1:
+
+Install Terraform on your system.
+Refer to this [link](https://phoenixnap.com/kb/how-to-install-terraform) for installation.
+
+## Task 2: Answer the below questions
+
+- Why do we use Terraform?
+- What is Infrastructure as Code (IaC)?
+- What is a Resource?
+- What is a Provider?
+- What is a State file in Terraform? What’s the importance of it?
+- What are the Desired and Current States?
+
+You can prepare for tomorrow's task from [here](https://www.youtube.com/live/965CaSveIEI?feature=share)🚀🚀
+
+We hope these tasks will help you understand how to write a basic Terraform configuration file and basic commands in Terraform.
+
+Don’t forget to post it on LinkedIn.
+Happy Learning:)
+
+[← Previous Day](../day59/README.md) | [Next Day →](../day61/README.md)
diff --git a/2023/day60/tasks.md b/2023/day60/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day61/README.md b/2023/day61/README.md
new file mode 100644
index 0000000000..9d518b70db
--- /dev/null
+++ b/2023/day61/README.md
@@ -0,0 +1,37 @@
+# Day 61- Terraform🔥
+
+Hope you've already got the gist of what working with Terraform would be like. Let's begin
+with day 2 of Terraform!
+
+## Task 1:
+
+Find the purpose of the basic Terraform commands which you'll use often:
+
+1. `terraform init`
+
+2. `terraform init -upgrade`
+
+3. `terraform plan`
+
+4. `terraform apply`
+
+5. `terraform validate`
+
+6. `terraform fmt`
+
+7. `terraform destroy`
+
+Also, along with these tasks, it's important to know about Terraform in general:
+who are Terraform's main competitors?
+The main competitors are:
+
+Ansible
+Packer
+Cloud Foundry
+Kubernetes
+
+Want a Free video Course for terraform? Click [here](https://bit.ly/tws-terraform)
+
+Don't forget to share your learnings on Linkedin! Happy Learning :)
+
+[← Previous Day](../day60/README.md) | [Next Day →](../day62/README.md)
diff --git a/2023/day61/tasks.md b/2023/day61/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day62/README.md b/2023/day62/README.md
new file mode 100644
index 0000000000..76f61b708a
--- /dev/null
+++ b/2023/day62/README.md
@@ -0,0 +1,79 @@
+# Day 62 - Terraform and Docker 🔥
+
+Terraform needs to be told which provider is to be used in the automation, hence we need to give the provider name with source and version.
+For Docker, we can use this block of code in your main.tf:
+
+## Blocks and Resources in Terraform
+
+## Terraform block
+
+## Task-01
+
+- Create a Terraform script with Blocks and Resources
+
+```
+terraform {
+  required_providers {
+    docker = {
+      source  = "kreuzwerker/docker"
+      version = "~> 2.21.0"
+    }
+  }
+}
+```
+
+### Note: kreuzwerker/docker is shorthand for registry.terraform.io/kreuzwerker/docker.
+
+## Provider Block
+
+The provider block configures the specified provider, in this case, docker. A provider is a plugin that Terraform uses to create and manage your resources.
+
+```
+provider "docker" {}
+```
+
+## Resource
+
+Use resource blocks to define components of your infrastructure. A resource might be a physical or virtual component such as a Docker container, or it can be a logical resource such as a Heroku application.
+
+Resource blocks have two strings before the block: the resource type and the resource name. In this example, the first resource type is docker_image and the name is nginx.
+
+## Task-02
+
+- Create a resource block for an nginx docker image
+
+Hint:
+
+```
+resource "docker_image" "nginx" {
+  name         = "nginx:latest"
+  keep_locally = false
+}
+```
+
+- Create a resource block for running a docker container for nginx
+
+```
+resource "docker_container" "nginx" {
+  image = docker_image.nginx.latest
+  name  = "tutorial"
+  ports {
+    internal = 80
+    external = 80
+  }
+}
+```
+
+Note: In case Docker is not installed
+
+`sudo apt-get install docker.io`
+`sudo docker ps`
+`sudo chown $USER /var/run/docker.sock`
+
+# Video Course
+
+I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform)
+
+Happy Learning :)
+
+[← Previous Day](../day61/README.md) | [Next Day →](../day63/README.md)
diff --git a/2023/day62/tasks.md b/2023/day62/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day63/README.md b/2023/day63/README.md
new file mode 100644
index 0000000000..e4338fb906
--- /dev/null
+++ b/2023/day63/README.md
@@ -0,0 +1,62 @@
+# Day 63 - Terraform Variables
+
+Variables in Terraform are quite important, as you need to hold values such as instance names, configs, etc.
+
+We can create a variables.tf file which will hold all the variables.
+
+```
+variable "filename" {
+  default = "/home/ubuntu/terrform-tutorials/terraform-variables/demo-var.txt"
+}
+```
+
+```
+variable "content" {
+  default = "This is coming from a variable which was updated"
+}
+```
+
+These variables can be accessed via the var object in main.tf.
+
+## Task-01
+
+- Create a local file using Terraform
+  Hint:
+
+```
+resource "local_file" "devops" {
+  filename = var.filename
+  content  = var.content
+}
+```
+
+## Data Types in Terraform
+
+## Map
+
+```
+variable "file_contents" {
+  type = map
+  default = {
+    "statement1" = "this is cool"
+    "statement2" = "this is cooler"
+  }
+}
+```
+
+## Task-02
+
+- Use Terraform to demonstrate usage of the List, Set and Object data types
+- Put proper screenshots of the outputs
+
+Use `terraform refresh`
+
+to reconcile the state file with the real infrastructure; it also reloads the variables.
+
+# Video Course
+
+I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform)
+
+Happy Learning :)
+
+[← Previous Day](../day62/README.md) | [Next Day →](../day64/README.md)
diff --git a/2023/day63/tasks.md b/2023/day63/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day64/README.md b/2023/day64/README.md
new file mode 100644
index 0000000000..d30e1048d9
--- /dev/null
+++ b/2023/day64/README.md
@@ -0,0 +1,67 @@
+# Day 64 - Terraform with AWS
+
+Provisioning on AWS is quite easy and straightforward with Terraform.
+
+## Prerequisites
+
+### AWS CLI installed
+
+The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+
+### AWS IAM user
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+
+_In order to connect your AWS account and Terraform, you need the access keys and secret access keys exported to your machine._
+
+```
+export AWS_ACCESS_KEY_ID=
+export AWS_SECRET_ACCESS_KEY=
+```
+
+### Install required providers
+
+```
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 4.16"
+    }
+  }
+  required_version = ">= 1.2.0"
+}
+```
+
+Add the region where you want your instances to be:
+
+```
+provider "aws" {
+  region = "us-east-1"
+}
+```
+
+## Task-01
+
+- Provision an AWS EC2 instance using Terraform
+
+Hint:
+
+```
+resource "aws_instance" "aws_ec2_test" {
+  count         = 4
+  ami           = "ami-08c40ec9ead489470"
+  instance_type = "t2.micro"
+  tags = {
+    Name = "TerraformTestServerInstance"
+  }
+}
+```
+
+# Video Course
+
+I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform [here](https://bit.ly/tws-terraform)
+
+Happy Learning :)
+
+[← Previous Day](../day63/README.md) | [Next Day →](../day65/README.md)
diff --git a/2023/day64/tasks.md b/2023/day64/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day65/README.md b/2023/day65/README.md
new file mode 100644
index 0000000000..904c6c1158
--- /dev/null
+++ b/2023/day65/README.md
@@ -0,0 +1,67 @@
+# Day 65 - Working with Terraform Resources 🚀
+
+Yesterday, we saw how to create a Terraform script with Blocks and Resources. Today, we will dive deeper into Terraform resources.
+
+## Understanding Terraform Resources
+
+A resource in Terraform represents a component of your infrastructure, such as a physical server, a virtual machine, a DNS record, or an S3 bucket. Resources have attributes that define their properties and behaviors, such as the size and location of a virtual machine or the domain name of a DNS record.
+
+When you define a resource in Terraform, you specify the type of resource, a unique name for the resource, and the attributes that define the resource. Terraform uses the resource block to define resources in your Terraform configuration.
+
+## Task 1: Create a security group
+
+To allow traffic to the EC2 instance, you need to create a security group. Follow these steps:
+
+In your main.tf file, add the following code to create a security group:
+
+```
+resource "aws_security_group" "web_server" {
+  name_prefix = "web-server-sg"
+
+  ingress {
+    from_port   = 80
+    to_port     = 80
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+```
+
+- Run terraform init to initialize the Terraform project.
+
+- Run terraform apply to create the security group.
+
+## Task 2: Create an EC2 instance
+
+- Now you can create an EC2 instance with Terraform. Follow these steps:
+
+- In your main.tf file, add the following code to create an EC2 instance:
+
+```
+resource "aws_instance" "web_server" {
+  ami           = "ami-0557a15b87f6559cf"
+  instance_type = "t2.micro"
+  key_name      = "my-key-pair"
+  security_groups = [
+    aws_security_group.web_server.name
+  ]
+
+  user_data = <<-EOF
+              #!/bin/bash
+              echo "
<h1>Welcome to my website!</h1>
" > index.html + nohup python -m SimpleHTTPServer 80 & + EOF +} +``` + +Note: Replace the ami and key_name values with your own. You can find a list of available AMIs in the AWS documentation. + +Run terraform apply to create the EC2 instance. + +## Task 3: Access your website + +- Now that your EC2 instance is up and running, you can access the website you just hosted on it. Follow these steps: + +Happy Terraforming! + +[← Previous Day](../day64/README.md) | [Next Day →](../day66/README.md) diff --git a/2023/day65/tasks.md b/2023/day65/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day66/README.md b/2023/day66/README.md new file mode 100644 index 0000000000..630837a5ff --- /dev/null +++ b/2023/day66/README.md @@ -0,0 +1,26 @@ +# Day 66 - Terraform Hands-on Project - Build Your Own AWS Infrastructure with Ease using Infrastructure as Code (IaC) Techniques(Interview Questions) ☁️ + +Welcome back to your Terraform journey. + +In the previous tasks, you have learned about the basics of Terraform, its configuration file, and creating an EC2 instance using Terraform. Today, we will explore more about Terraform and create multiple resources. + +## Task: + +- Create a VPC (Virtual Private Cloud) with CIDR block 10.0.0.0/16 +- Create a public subnet with CIDR block 10.0.1.0/24 in the above VPC. +- Create a private subnet with CIDR block 10.0.2.0/24 in the above VPC. +- Create an Internet Gateway (IGW) and attach it to the VPC. +- Create a route table for the public subnet and associate it with the public subnet. This route table should have a route to the Internet Gateway. +- Launch an EC2 instance in the public subnet with the following details: +- AMI: ami-0557a15b87f6559cf +- Instance type: t2.micro +- Security group: Allow SSH access from anywhere +- User data: Use a shell script to install Apache and host a simple website +- Create an Elastic IP and associate it with the EC2 instance. +- Open the website URL in a browser to verify that the website is hosted successfully. + +#### This Terraform hands-on task is designed to test your proficiency in using Terraform for Infrastructure as Code (IaC) on AWS. You will be tasked with creating a VPC, subnets, an internet gateway, and launching an EC2 instance with a web server running on it. This task will showcase your skills in automating infrastructure deployment using Terraform. It's a popular interview question for companies looking for candidates with hands-on experience in Terraform. That's it for today. + +Happy Terraforming:) + +[← Previous Day](../day65/README.md) | [Next Day →](../day67/README.md) diff --git a/2023/day66/tasks.md b/2023/day66/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day67/README.md b/2023/day67/README.md new file mode 100644 index 0000000000..62e6f35476 --- /dev/null +++ b/2023/day67/README.md @@ -0,0 +1,22 @@ +# Day 67: AWS S3 Bucket Creation and Management + +## AWS S3 Bucket + +Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It can be used for a variety of use cases, such as storing and retrieving data, hosting static websites, and more. + +In this task, you will learn how to create and manage S3 buckets in AWS. + +## Task + +- Create an S3 bucket using Terraform. +- Configure the bucket to allow public read access. +- Create an S3 bucket policy that allows read-only access to a specific IAM user or role. +- Enable versioning on the S3 bucket. 
+
+## Resources
+
+[Terraform S3 bucket resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket)
+
+Good luck and happy learning!
+
+[← Previous Day](../day66/README.md) | [Next Day →](../day68/README.md)
diff --git a/2023/day67/tasks.md b/2023/day67/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day68/README.md b/2023/day68/README.md
new file mode 100644
index 0000000000..4185d8a5dd
--- /dev/null
+++ b/2023/day68/README.md
@@ -0,0 +1,66 @@
+# Day 68 - Scaling with Terraform 🚀
+
+Yesterday, we learned how to create an AWS S3 bucket with Terraform. Today, we will see how to scale our infrastructure with Terraform.
+
+## Understanding Scaling
+
+Scaling is the process of adding or removing resources to match the changing demands of your application. As your application grows, you will need to add more resources to handle the increased load. And as the load decreases, you can remove the extra resources to save costs.
+
+Terraform makes it easy to scale your infrastructure by providing a declarative way to define your resources. You can define the number of resources you need and Terraform will automatically create or destroy the resources as needed.
+
+## Task 1: Create an Auto Scaling Group
+
+Auto Scaling Groups are used to automatically add or remove EC2 instances based on the current demand. Follow these steps to create an Auto Scaling Group:
+
+- In your main.tf file, add the following code to create an Auto Scaling Group:
+
+```
+resource "aws_launch_configuration" "web_server_as" {
+  image_id        = "ami-005f9685cb30f234b"
+  instance_type   = "t2.micro"
+  security_groups = [aws_security_group.web_server.name]
+
+  user_data = <<-EOF
+              #!/bin/bash
+              echo "
<h1>You're doing really Great</h1>
" > index.html + nohup python -m SimpleHTTPServer 80 & + EOF +} + +resource "aws_autoscaling_group" "web_server_asg" { + name = "web-server-asg" + launch_configuration = aws_launch_configuration.web_server_lc.name + min_size = 1 + max_size = 3 + desired_capacity = 2 + health_check_type = "EC2" + load_balancers = [aws_elb.web_server_lb.name] + vpc_zone_identifier = [aws_subnet.public_subnet_1a.id, aws_subnet.public_subnet_1b.id] +} + + +``` + +- Run terraform apply to create the Auto Scaling Group. + +## Task 2: Test Scaling + +- Go to the AWS Management Console and select the Auto Scaling Groups service. + +- Select the Auto Scaling Group you just created and click on the "Edit" button. + +- Increase the "Desired Capacity" to 3 and click on the "Save" button. + +- Wait a few minutes for the new instances to be launched. + +- Go to the EC2 Instances service and verify that the new instances have been launched. + +- Decrease the "Desired Capacity" to 1 and wait a few minutes for the extra instances to be terminated. + +- Go to the EC2 Instances service and verify that the extra instances have been terminated. + +Congratulations🎊🎉 You have successfully scaled your infrastructure with Terraform. + +Happy Learning :) + +[← Previous Day](../day67/README.md) | [Next Day →](../day69/README.md) diff --git a/2023/day68/tasks.md b/2023/day68/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day69/README.md b/2023/day69/README.md new file mode 100644 index 0000000000..570803dbdd --- /dev/null +++ b/2023/day69/README.md @@ -0,0 +1,182 @@ +# Day 69 - Meta-Arguments in Terraform + +When you define a resource block in Terraform, by default, this specifies one resource that will be created. To manage several of the same resources, you can use either count or for_each, which removes the need to write a separate block of code for each one. Using these options reduces overhead and makes your code neater. + +count is what is known as a ‘meta-argument’ defined by the Terraform language. Meta-arguments help achieve certain requirements within the resource block. + +## Count + +The count meta-argument accepts a whole number and creates the number of instances of the resource specified. + +When each instance is created, it has its own distinct infrastructure object associated with it, so each can be managed separately. When the configuration is applied, each object can be created, destroyed, or updated as appropriate. + +eg. + +``` + +terraform { + +required_providers { + +aws = { + +source = "hashicorp/aws" + +version = "~> 4.16" + +} + +} + +required_version = ">= 1.2.0" + +} + + + +provider "aws" { + +region = "us-east-1" + +} + + + +resource "aws_instance" "server" { + +count = 4 + + + +ami = "ami-08c40ec9ead489470" + +instance_type = "t2.micro" + + + +tags = { + +Name = "Server ${count.index}" + +} + +} + + + +``` + +## for_each + +Like the count argument, the for_each meta-argument creates multiple instances of a module or resource block. However, instead of specifying the number of resources, the for_each meta-argument accepts a map or a set of strings. This is useful when multiple resources are required that have different values. Consider our Active directory groups example, with each group requiring a different owner. 
+
+```
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 4.16"
+    }
+  }
+  required_version = ">= 1.2.0"
+}
+
+provider "aws" {
+  region = "us-east-1"
+}
+
+locals {
+  ami_ids = toset([
+    "ami-0b0dcb5067f052a63",
+    "ami-08c40ec9ead489470",
+  ])
+}
+
+resource "aws_instance" "server" {
+  for_each = local.ami_ids
+
+  ami           = each.key
+  instance_type = "t2.micro"
+  tags = {
+    Name = "Server ${each.key}"
+  }
+}
+```
+
+For iteration over multiple key/value pairs:
+
+```
+locals {
+  ami_ids = {
+    "linux"  : "ami-0b0dcb5067f052a63",
+    "ubuntu" : "ami-08c40ec9ead489470",
+  }
+}
+
+resource "aws_instance" "server" {
+  for_each = local.ami_ids
+
+  ami           = each.value
+  instance_type = "t2.micro"
+
+  tags = {
+    Name = "Server ${each.key}"
+  }
+}
+```
+
+## Task-01
+
+- Create the above infrastructure as code and demonstrate the use of count and for_each.
+- Write about meta-arguments and their use in Terraform.
+
+Happy learning :)
+
+[← Previous Day](../day68/README.md) | [Next Day →](../day70/README.md)
diff --git a/2023/day69/tasks.md b/2023/day69/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day70/README.md b/2023/day70/README.md
new file mode 100644
index 0000000000..4a42230590
--- /dev/null
+++ b/2023/day70/README.md
@@ -0,0 +1,80 @@
+# Day 70 - Terraform Modules
+
+- Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.
+- A module can call other modules, which lets you include the child module's resources in the configuration in a concise way.
+- Modules can also be called multiple times, either within the same configuration or in separate configurations, allowing resource configurations to be packaged and re-used.
+
+### Below is the format on how to use modules:
+
+```
+# Creating an AWS EC2 Instance
+resource "aws_instance" "server-instance" {
+  # Define number of instances (count is the meta-argument for this)
+  count = var.number_of_instances
+
+  # Instance Configuration
+  ami                    = var.ami
+  instance_type          = var.instance_type
+  subnet_id              = var.subnet_id
+  vpc_security_group_ids = var.security_group
+
+  # Instance Tags
+  tags = {
+    Name = "${var.instance_name}"
+  }
+}
+```
+
+```
+# Server Module Variables
+variable "number_of_instances" {
+  description = "Number of Instances to Create"
+  type        = number
+  default     = 1
+}
+
+variable "instance_name" {
+  description = "Instance Name"
+}
+
+variable "ami" {
+  description = "AMI ID"
+  default     = "ami-xxxx"
+}
+
+variable "instance_type" {
+  description = "Instance Type"
+}
+
+variable "subnet_id" {
+  description = "Subnet ID"
+}
+
+variable "security_group" {
+  description = "Security Group"
+  type        = list(any)
+}
+```
+
+```
+# Server Module Output
+output "server_id" {
+  description = "Server ID"
+  # splat expression, since the resource uses count
+  value       = aws_instance.server-instance[*].id
+}
+```
+
+## Task-01
+
+Explain the below in your own words; it shouldn't be copied from the Internet 😉
+
+- Write about the different types of modules in Terraform.
+- What is the difference between the Root Module and a Child Module?
+- Are modules and namespaces the same? Justify your answer for both Yes/No.
+
+You all are doing great, and you have come so far.
+
+Just a little more hard work is needed, so keep at it till then..... Happy learning :)
+
+[← Previous Day](../day69/README.md) | [Next Day →](../day71/README.md)
diff --git a/2023/day70/tasks.md b/2023/day70/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day71/README.md b/2023/day71/README.md
new file mode 100644
index 0000000000..7bcb7bb3e1
--- /dev/null
+++ b/2023/day71/README.md
@@ -0,0 +1,41 @@
+# Day 71 - Let's prepare for some interview questions of Terraform 🔥
+
+### 1. What is Terraform and how is it different from other IaC tools?
+
+### 2. How do you call a main.tf module?
+
+### 3. What exactly is Sentinel? Can you provide a few examples of where Sentinel policies can be used?
+
+### 4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?
+
+### 5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (\*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
+
+A. Set the environment variable TF_LOG=TRACE
+
+B. Set verbose logging for each provider in your Terraform configuration
+
+C. Set the environment variable TF_VAR_log=TRACE
+
+D. Set the environment variable TF_LOG_PATH
+
+### 6. The command below will destroy everything that has been created in the infrastructure. Tell us how you would save any particular resource while destroying the complete infrastructure.
+
+```
+terraform destroy
+```
+
+### 7. Which module is used to store the .tfstate file in S3?
+
+### 8. How do you manage sensitive data in Terraform, such as API keys or passwords?
+
+### 9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
+
+### 10. Who maintains Terraform providers?
+
+### 11. How can we export data from one module to another?
+
+#
+
+Waiting for your responses😉.....Till then Happy learning :)
+
+[← Previous Day](../day70/README.md) | [Next Day →](../day72/README.md)
diff --git a/2023/day71/tasks.md b/2023/day71/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day72/README.md b/2023/day72/README.md
new file mode 100644
index 0000000000..a283b10e39
--- /dev/null
+++ b/2023/day72/README.md
@@ -0,0 +1,16 @@
+# Day 72 - Grafana🔥
+
+Hello Learners, you are doing a really good job. You will not be there 24\*7 to monitor your resources, so today let’s monitor the resources in a smart way with Grafana 🎉
+
+## Task 1:
+
+> What is Grafana? What are the features of Grafana?
+> Why Grafana?
+> What type of monitoring can be done via Grafana?
+> What databases work with Grafana?
+> What are metrics and visualizations in Grafana?
+> What is the difference between Grafana and Prometheus?
+
+---
+
+[← Previous Day](../day71/README.md) | [Next Day →](../day73/README.md)
diff --git a/2023/day72/tasks.md b/2023/day72/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day73/README.md b/2023/day73/README.md
new file mode 100644
index 0000000000..a1af9d7dc9
--- /dev/null
+++ b/2023/day73/README.md
@@ -0,0 +1,16 @@
+# Day 73 - Grafana 🔥
+
+Hope you are now clear on the basics of Grafana: why we use it, where we use it, and what we can do with it.
+
+Now, let's do some practical stuff.
+
+---
+
+Task:
+
+> Set up Grafana in your local environment on an AWS EC2 instance.
+
+---
+
+Ref: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7042518379030556672-ZZA-?utm_source=share&utm_medium=member_desktop
+
+[← Previous Day](../day72/README.md) | [Next Day →](../day74/README.md)
diff --git a/2023/day73/tasks.md b/2023/day73/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day74/README.md b/2023/day74/README.md
new file mode 100644
index 0000000000..2877eeebd4
--- /dev/null
+++ b/2023/day74/README.md
@@ -0,0 +1,19 @@
+# Day 74 - Connecting EC2 with Grafana
+
+You did an amazing job last day setting up Grafana locally 🔥.
+
+Now, let's go one step ahead.
+
+---
+
+Task:
+
+Connect one Linux and one Windows EC2 instance with Grafana and monitor the different components of the servers.
+
+---
+
+Don't forget to share this amazing work over LinkedIn and tag us.
+
+## Happy Learning :)
+
+[← Previous Day](../day73/README.md) | [Next Day →](../day75/README.md)
diff --git a/2023/day74/tasks.md b/2023/day74/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day75/README.md b/2023/day75/README.md
new file mode 100644
index 0000000000..3c75d41caa
--- /dev/null
+++ b/2023/day75/README.md
@@ -0,0 +1,30 @@
+# Day 75 - Sending Docker Logs to Grafana
+
+We have monitored 😉 that you are understanding and doing amazing work with the monitoring tool. 👌
+
+Today, let's make it a little more complex but interesting 😍 and add one more **Project** 🔥 to your resume.
+
+---
+
+## Task:
+
+- Install _Docker_ and start the Docker service on a Linux EC2 through [USER DATA](../day39/README.md).
+- Create 2 Docker containers and run any basic application on those containers (a simple todo app will work).
+- Now integrate the Docker containers and share the real-time logs with Grafana (your instance should be connected to Grafana, and the Docker plugin should be enabled on Grafana).
+- Check the logs or Docker container names on the Grafana UI.
+
+---
+
+You can use [this video](https://youtu.be/y3SGHbixmJw) for your reference. But it's always better to find your own way of doing it. 😊
+
+## Bonus:
+
+- As you have done this amazing task, here is one bonus link.❤️
+
+You can use this [reference video](https://youtu.be/CCi957AnSfc) to integrate _Prometheus_ with _Grafana_ and monitor Docker containers. Seems interesting?
+
+Don't forget to share this amazing work over LinkedIn and tag us.
+
+## Happy Learning :)
+
+[← Previous Day](../day74/README.md) | [Next Day →](../day76/README.md)
diff --git a/2023/day75/tasks.md b/2023/day75/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day76/README.md b/2023/day76/README.md
new file mode 100644
index 0000000000..7c3fbb0bd1
--- /dev/null
+++ b/2023/day76/README.md
@@ -0,0 +1,33 @@
+# Day 76 Build a Grafana dashboard
+
+A dashboard gives you an at-a-glance view of your data and lets you track metrics through different visualizations.
+
+Dashboards consist of panels, each representing a part of the story you want your dashboard to tell.
+
+Every panel consists of a query and a visualization. The query defines what data you want to display, whereas the visualization defines how the data is displayed.
+
+## Task 01
+
+- In the sidebar, hover your cursor over the Create (plus sign) icon and then click Dashboard.
+
+- Click Add a new panel.
+
+- In the Query editor below the graph, enter the following query and then press Shift + Enter:
+
+`sum(rate(tns_request_duration_seconds_count[5m])) by(route)`
+
+- In the Legend field, enter {{route}} to rename the time series in the legend. The graph legend updates when you click outside the field.
+
+- In the Panel editor on the right, under Settings, change the panel title to “Traffic”.
+
+- Click Apply in the top-right corner to save the panel and go back to the dashboard view.
+
+- Click the Save dashboard (disk) icon at the top of the dashboard to save your dashboard.
+
+- Enter a name in the Dashboard name field and then click Save.
+
+Read [this](https://grafana.com/tutorials/grafana-fundamentals/) in case you have any questions.
+
+Do share some amazing dashboards with the community.
+
+[← Previous Day](../day75/README.md) | [Next Day →](../day77/README.md)
diff --git a/2023/day76/tasks.md b/2023/day76/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day77/README.md b/2023/day77/README.md
new file mode 100644
index 0000000000..7acf545be9
--- /dev/null
+++ b/2023/day77/README.md
@@ -0,0 +1,14 @@
+# Day 77 Alerting
+
+Grafana Alerting allows you to learn about problems in your systems moments after they occur. Create, manage, and take action on your alerts in a single, consolidated view, and improve your team’s ability to identify and resolve issues quickly.
+
+Grafana Alerting is available for Grafana OSS, Grafana Enterprise, or Grafana Cloud. With Mimir and Loki alert rules you can run alert expressions closer to your data and at massive scale, all managed by the Grafana UI you are already familiar with.
+
+## Task-01
+
+- Set up [Grafana Cloud](https://grafana.com/products/cloud/)
+- Set up sample alerting
+
+Check out [this blog](https://grafana.com/docs/grafana/latest/alerting/) for more details.
+
+[← Previous Day](../day76/README.md) | [Next Day →](../day78/README.md)
diff --git a/2023/day77/tasks.md b/2023/day77/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day78/README.md b/2023/day78/README.md
new file mode 100644
index 0000000000..631894de55
--- /dev/null
+++ b/2023/day78/README.md
@@ -0,0 +1,14 @@
+# Day 78 - Grafana Cloud
+
+---
+
+## Task-01
+
+1. Set up alerts for an EC2 instance.
+2. Set up AWS Billing alerts.
+
+---
+
+For reference: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7044695663913148416-LfvD?utm_source=share&utm_medium=member_desktop
+
+[← Previous Day](../day77/README.md) | [Next Day →](../day79/README.md)
diff --git a/2023/day78/tasks.md b/2023/day78/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day79/README.md b/2023/day79/README.md
new file mode 100644
index 0000000000..4eb87c4c49
--- /dev/null
+++ b/2023/day79/README.md
@@ -0,0 +1,20 @@
+# Day 79 - Prometheus 🔥
+
+Now, the next step is to learn about Prometheus.
+It's an open-source system for monitoring services and alerting, based on a time series data model. Prometheus collects data and metrics from different services and stores them according to a unique identifier (the metric name) and a time stamp.
+
+Tasks:
+
+---
+
+1. What is the Architecture of Prometheus Monitoring?
+2. What are the Features of Prometheus?
+3. What are the Components of Prometheus?
+4. What database is used by Prometheus?
+5. What is the default data retention period in Prometheus? (See the sketch below for a hint.)
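+
+For question 5, a minimal sketch may help: retention is controlled by a startup flag, and if you leave it unset, Prometheus keeps data for 15 days by default. The binary path and config file name below are assumptions for a local setup.
+
+```bash
+# Run Prometheus with an explicit retention period; without the
+# --storage.tsdb.retention.time flag, the default is 15 days.
+./prometheus \
+  --config.file=prometheus.yml \
+  --storage.tsdb.retention.time=30d
+```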
+ +--- + +Ref: https://www.devopsschool.com/blog/top-50-prometheus-interview-questions-and-answers/ + +[← Previous Day](../day78/README.md) | [Next Day →](../day80/README.md) diff --git a/2023/day79/tasks.md b/2023/day79/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day80/README.md b/2023/day80/README.md new file mode 100644 index 0000000000..edbc3ec561 --- /dev/null +++ b/2023/day80/README.md @@ -0,0 +1,15 @@ +# Project-1 + +========= + +# Project Description + +The project aims to automate the building, testing, and deployment process of a web application using Jenkins and GitHub. The Jenkins pipeline will be triggered automatically by GitHub webhook integration when changes are made to the code repository. The pipeline will include stages such as building, testing, and deploying the application, with notifications and alerts for failed builds or deployments. + +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7011367641952993281-DHn5?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day79/README.md) | [Next Day →](../day81/README.md) diff --git a/2023/day80/tasks.md b/2023/day80/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day81/README.md b/2023/day81/README.md new file mode 100644 index 0000000000..a10675fa1c --- /dev/null +++ b/2023/day81/README.md @@ -0,0 +1,15 @@ +# Project-2 + +========= + +# Project Description + +The project is about automating the deployment process of a web application using Jenkins and its declarative syntax. The pipeline includes stages like building, testing, and deploying to a staging environment. It also includes running acceptance tests and deploying to production if all tests pass. + +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7014971330496212992-6Q2m?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day80/README.md) | [Next Day →](../day82/README.md) diff --git a/2023/day81/tasks.md b/2023/day81/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day82/README.md b/2023/day82/README.md new file mode 100644 index 0000000000..a17acccd92 --- /dev/null +++ b/2023/day82/README.md @@ -0,0 +1,15 @@ +# Project-3 + +========= + +# Project Description + +The project involves hosting a static website using an AWS S3 bucket. Amazon S3 is an object storage service that provides a simple web services interface to store and retrieve any amount of data. The website files will be uploaded to an S3 bucket and configured to function as a static website. The bucket will be configured with the appropriate permissions and a unique domain name, making the website publicly accessible. Overall, the project aims to leverage the benefits of AWS S3 to host and scale a static website in a cost-effective and scalable manner. 
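+
+Before jumping into the hands-on task, here is a rough sketch of the core AWS CLI steps involved. The bucket name and the local `./website` directory are assumptions; on newer accounts you may also need to relax the bucket's Block Public Access settings before the policy below is accepted.
+
+```bash
+BUCKET=my-static-site-demo-12345   # hypothetical; bucket names must be globally unique
+
+# Create the bucket and upload the site files.
+aws s3 mb s3://$BUCKET
+aws s3 sync ./website s3://$BUCKET
+
+# Enable static website hosting on the bucket.
+aws s3 website s3://$BUCKET --index-document index.html --error-document error.html
+
+# Allow public read access so the site is reachable.
+aws s3api put-bucket-policy --bucket $BUCKET --policy '{
+  "Version": "2012-10-17",
+  "Statement": [{
+    "Sid": "PublicReadGetObject",
+    "Effect": "Allow",
+    "Principal": "*",
+    "Action": "s3:GetObject",
+    "Resource": "arn:aws:s3:::my-static-site-demo-12345/*"
+  }]
+}'
+```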
+ +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_aws-project-devopsjobs-activity-7016427742300663808-JAQd?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day81/README.md) | [Next Day →](../day83/README.md) diff --git a/2023/day82/tasks.md b/2023/day82/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day83/README.md b/2023/day83/README.md new file mode 100644 index 0000000000..dc80aefc33 --- /dev/null +++ b/2023/day83/README.md @@ -0,0 +1,15 @@ +# Project-4 + +========= + +# Project Description + +The project aims to deploy a web application using Docker Swarm, a container orchestration tool that allows for easy management and scaling of containerized applications. The project will utilize Docker Swarm's production-ready features such as load balancing, rolling updates, and service discovery to ensure high availability and reliability of the web application. The project will involve creating a Dockerfile to package the application into a container and then deploying it onto a Swarm cluster. The Swarm cluster will be configured to provide automated failover, load balancing, and horizontal scaling to the application. The goal of the project is to demonstrate the benefits of Docker Swarm for deploying and managing containerized applications in production environments. + +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day82/README.md) | [Next Day →](../day84/README.md) diff --git a/2023/day83/tasks.md b/2023/day83/tasks.md deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/2023/day84/README.md b/2023/day84/README.md new file mode 100644 index 0000000000..be78b29c8b --- /dev/null +++ b/2023/day84/README.md @@ -0,0 +1,15 @@ +# Project-5 + +========= + +# Project Description + +The project involves deploying a Netflix clone web application on a Kubernetes cluster, a popular container orchestration platform that simplifies the deployment and management of containerized applications. The project will require creating Docker images of the web application and its dependencies and deploying them onto the Kubernetes cluster using Kubernetes manifests. The Kubernetes cluster will provide benefits such as high availability, scalability, and automatic failover of the application. Additionally, the project will utilize Kubernetes tools such as Kubernetes Dashboard and kubectl to monitor and manage the deployed application. Overall, the project aims to demonstrate the power and benefits of Kubernetes for deploying and managing containerized applications at scale. 
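+
+As a rough orientation before the task: once the image is built and pushed, a deployment of this kind usually boils down to a few kubectl commands. The image path below is a placeholder for your own registry.
+
+```bash
+# Create a Deployment running two replicas of the clone's image.
+kubectl create deployment netflix-clone \
+  --image=<your-registry>/netflix-clone:latest --replicas=2
+
+# Expose it outside the cluster via a NodePort Service.
+kubectl expose deployment netflix-clone --type=NodePort --port=80
+
+# Watch the rollout and inspect the pods.
+kubectl rollout status deployment/netflix-clone
+kubectl get pods -o wide
+```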
+
+## Task-01
+
+- Get a Netflix clone from [GitHub](https://github.com/devandres-tech/Netflix-Clone), read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop) and follow the Reddit clone steps to similarly deploy a Netflix clone.
+
+Happy Learning :)
+
+[← Previous Day](../day83/README.md) | [Next Day →](../day85/README.md)
diff --git a/2023/day84/tasks.md b/2023/day84/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day85/README.md b/2023/day85/README.md
new file mode 100644
index 0000000000..0cd64c996b
--- /dev/null
+++ b/2023/day85/README.md
@@ -0,0 +1,26 @@
+# Project-6
+
+=========
+
+# Project Description
+
+The project involves deploying a Node.js app on AWS ECS Fargate and AWS ECR.
+Read more about the tech stack [here](https://faun.pub/what-is-amazon-ecs-and-ecr-how-does-they-work-with-an-example-4acbf9be8415)
+
+## Task-01
+
+- Get a Node.js application from [GitHub](https://github.com/LondheShubham153/node-todo-cicd).
+
+- Build the Dockerfile present in the repo
+
+- Set up AWS CLI and AWS login in order to tag and push to ECR
+
+- Set up an ECS cluster
+
+- Create a Task Definition for the Node.js project with the ECR image
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day84/README.md) | [Next Day →](../day86/README.md)
diff --git a/2023/day85/tasks.md b/2023/day85/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day86/README.md b/2023/day86/README.md
new file mode 100644
index 0000000000..c8f809df7d
--- /dev/null
+++ b/2023/day86/README.md
@@ -0,0 +1,24 @@
+# Project-7
+
+=========
+
+# Project Description
+
+The project involves deploying a Portfolio app on AWS S3 using GitHub Actions.
+GitHub Actions allows you to perform CI/CD with your GitHub repository integrated.
+
+## Task-01
+
+- Get a Portfolio application from [GitHub](https://github.com/LondheShubham153/tws-portfolio).
+
+- Build the GitHub Actions Workflow
+
+- Set up AWS CLI and AWS login in order to sync the website to S3 (to be done as part of the YAML)
+
+- Follow this [video]() to understand it better
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day85/README.md) | [Next Day →](../day87/README.md)
diff --git a/2023/day86/tasks.md b/2023/day86/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day87/README.md b/2023/day87/README.md
new file mode 100644
index 0000000000..fa123ea638
--- /dev/null
+++ b/2023/day87/README.md
@@ -0,0 +1,24 @@
+# Project-8
+
+=========
+
+# Project Description
+
+The project involves deploying a React application on AWS Elastic Beanstalk using GitHub Actions.
+GitHub Actions allows you to perform CI/CD with your GitHub repository integrated.
+
+## Task-01
+
+- Get source code from [GitHub](https://github.com/sitchatt/AWS_Elastic_BeanStalk_On_EC2.git).
+
+- Set up AWS Elastic Beanstalk
+
+- Build the GitHub Actions Workflow
+
+- Follow this [blog](https://www.linkedin.com/posts/sitabja-chatterjee_effortless-deployment-of-react-app-to-aws-activity-7053579065487687680-wZI8?utm_source=share&utm_medium=member_desktop) to understand it better
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day86/README.md) | [Next Day →](../day88/README.md)
diff --git a/2023/day87/tasks.md b/2023/day87/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day88/README.md b/2023/day88/README.md
new file mode 100644
index 0000000000..3668934da1
--- /dev/null
+++ b/2023/day88/README.md
@@ -0,0 +1,23 @@
+# Project-9
+
+=========
+
+# Project Description
+
+The project involves deploying a Django Todo app on AWS EC2 using a kubeadm Kubernetes cluster.
+
+A Kubernetes cluster helps with auto-scaling and auto-healing of your application.
+
+## Task-01
+
+- Get a Django full-stack application from [GitHub](https://github.com/LondheShubham153/django-todo-cicd).
+
+- Set up the Kubernetes cluster using [this script](https://github.com/RishikeshOps/Scripts/blob/main/k8sss.sh)
+
+- Set up a Deployment and Service for Kubernetes.
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day87/README.md) | [Next Day →](../day89/README.md)
diff --git a/2023/day88/tasks.md b/2023/day88/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day89/README.md b/2023/day89/README.md
new file mode 100644
index 0000000000..45ee46628d
--- /dev/null
+++ b/2023/day89/README.md
@@ -0,0 +1,19 @@
+# Project-10
+
+=========
+
+# Project Description
+
+The project involves mounting an AWS S3 bucket on Amazon EC2 Linux using S3FS.
+
+This is an AWS mini project that will teach you AWS, S3, EC2, and S3FS.
+
+## Task-01
+
+- Create an IAM user and set policies for the project resources using this [blog](https://medium.com/@chetxn/project-8-devops-implementation-8300b9ed1f2).
+- Utilize and make the best use of aws-cli
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day88/README.md) | [Next Day →](../day90/README.md)
diff --git a/2023/day89/tasks.md b/2023/day89/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2023/day90/README.md b/2023/day90/README.md
new file mode 100644
index 0000000000..d28985c060
--- /dev/null
+++ b/2023/day90/README.md
@@ -0,0 +1,29 @@
+# Day 90: The Awesome Finale! 🎉 🎉
+
+🚀 Can you believe it? You've hit the jackpot – Day 90, the grand finale of our DevOps bonanza. Time to give yourself a virtual high-five!
+
+### What's Next?
+
+While this marks the end of the official 90-day journey, remember that your learning journey in DevOps is far from over. There's always something new to explore, tools to master, and techniques to refine. We're continuing to curate more content, challenges, and resources to help you advance your DevOps expertise.
+
+### Share Your Achievement
+
+Share your journey with the world! Post about your accomplishments on social media using the hashtag #90DaysOfDevOps. Inspire others to join the DevOps movement and take charge of their learning path.
+
+### Keep the Momentum Going!
+
+The knowledge and skills you've gained during these 90 days are just the beginning. Keep practicing, experimenting, and collaborating. DevOps is a continuous journey of improvement and innovation.
+
+### Star the Repository
+
+If you've found value in this repository and the DevOps content we've curated, consider showing your appreciation by starring this repository. Your support motivates us to keep creating high-quality content and resources for the community.
+
+**[🌟 Star this repository](https://github.com/LondheShubham153/90DaysOfDevOps)**
+
+Thank you for being part of the "90 Days of DevOps" adventure.
+Keep coding, automating, deploying, and innovating! 🎈
+
+With gratitude,
+@TrainWithShubham
+
+[← Previous Day](../day89/README.md)
diff --git a/2023/day90/tasks.md b/2023/day90/tasks.md
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/2024/day01/README.md b/2024/day01/README.md
new file mode 100644
index 0000000000..33caf1a759
--- /dev/null
+++ b/2024/day01/README.md
@@ -0,0 +1,27 @@
+# Introduction - Day 1
+
+Welcome to the #90DaysOfDevOps Challenge with the #TrainWithShubham Community! Today, we begin our journey into the world of DevOps. Here’s what you need to do:
+
+1. **Fork this Repository:**
+   - Go to the repository on GitHub and fork it to your own account. This will allow you to track your progress and contribute.
+
+2. **Start with a DevOps Roadmap:**
+   - Watch the introductory video on DevOps: [DevOps Roadmap](https://youtu.be/g_QHuGq3E2Y?si=fR9K56-JevZTfrBK)
+
+3. **Write a LinkedIn Post or a Small Article:**
+   - Share your understanding of DevOps based on the video and your research. Cover the following points:
+
+   - **What is DevOps:**
+
+   - **What is Automation, Scaling, and Infrastructure:**
+
+   - **Why DevOps is Important:**
+
+4. **Engage with the Community:**
+   - Share your LinkedIn post or article link in the community forum or on social media using the hashtags #90DaysOfDevOps and #TrainWithShubham.
+   - Read and comment on posts from other participants to foster a collaborative learning environment.
+
diff --git a/2024/day02/readme.md b/2024/day02/readme.md
new file mode 100644
index 0000000000..24bb7fe1e3
--- /dev/null
+++ b/2024/day02/readme.md
@@ -0,0 +1,41 @@
+## Basic Linux commands
+
+### Listing commands
+```ls option_flag arguments ```--> list the subdirectories and files available in the present directory
+
+Examples:
+
+- ``` ls -l ```--> list the files and directories in long list format with extra information
+- ```ls -a ```--> list all, including hidden files and directories
+- ```ls *.sh``` --> list all the files having .sh extension.
+
+- ```ls -i ``` --> list the files and directories with index numbers (inodes)
+- ``` ls -d */``` --> list only directories (we can also specify a pattern)
+
+### Directory commands
+- ```pwd``` --> print working directory. Gives the present working directory.
+
+- ```cd path_to_directory``` --> change directory to the provided path
+
+- ```cd ~ ``` or just ```cd ``` --> change directory to the home directory
+
+- ``` cd - ``` --> Go to the last working directory.
+
+- ``` cd ..``` --> change directory to one step back.
+
+- ``` cd ../..``` --> Change directory to 2 levels back.
+
+- ``` mkdir directoryName``` --> to make a directory in a specific location
+
+Examples:
+```
+mkdir newFolder # make a new folder 'newFolder'
+
+mkdir .NewFolder # make a hidden directory (also .
before a file to make it hidden) + +mkdir A B C D #make multiple directories at the same time + +mkdir /home/user/Mydirectory # make a new folder in a specific location + +mkdir -p A/B/C/D # make a nested directory +``` diff --git a/2024/day03/README.md b/2024/day03/README.md new file mode 100644 index 0000000000..3fc984d91b --- /dev/null +++ b/2024/day03/README.md @@ -0,0 +1,20 @@ +# Day 3 Task: Basic Linux Commands with a Twist + +Task: What are the Linux commands to + +1. View the content of a file and display line numbers. +2. Change the access permissions of files to make them readable, writable, and executable by the owner only. +3. Check the last 10 commands you have run. +4. Remove a directory and all its contents. +5. Create a `fruits.txt` file, add content (one fruit per line), and display the content. +6. Add content in `devops.txt` (one in each line) - Apple, Mango, Banana, Cherry, Kiwi, Orange, Guava. Then, append "Pineapple" to the end of the file. +7. Show the first three fruits from the file in reverse order. +8. Show the bottom three fruits from the file, and then sort them alphabetically. +9. Create another file `Colors.txt`, add content (one color per line), and display the content. +10. Add content in `Colors.txt` (one in each line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey. Then, prepend "Yellow" to the beginning of the file. +11. Find and display the lines that are common between `fruits.txt` and `Colors.txt`. +12. Count the number of lines, words, and characters in both `fruits.txt` and `Colors.txt`. + +Reference: [Linux Commands for DevOps Used Day-to-Day](https://www.linkedin.com/pulse/linux-commands-devops-used-day-to-day-activit-chetan-/) + +[← Previous Day](../day02/README.md) | [Next Day →](../day04/README.md) diff --git a/2024/day03/image/task 1.png b/2024/day03/image/task 1.png new file mode 100644 index 0000000000..6d43acbead Binary files /dev/null and b/2024/day03/image/task 1.png differ diff --git a/2024/day03/image/task 10.png b/2024/day03/image/task 10.png new file mode 100644 index 0000000000..bd1ad3ce03 Binary files /dev/null and b/2024/day03/image/task 10.png differ diff --git a/2024/day03/image/task 11.png b/2024/day03/image/task 11.png new file mode 100644 index 0000000000..92f1a020bf Binary files /dev/null and b/2024/day03/image/task 11.png differ diff --git a/2024/day03/image/task 12.png b/2024/day03/image/task 12.png new file mode 100644 index 0000000000..40cf2f5d66 Binary files /dev/null and b/2024/day03/image/task 12.png differ diff --git a/2024/day03/image/task 2.png b/2024/day03/image/task 2.png new file mode 100644 index 0000000000..321719e413 Binary files /dev/null and b/2024/day03/image/task 2.png differ diff --git a/2024/day03/image/task 3.png b/2024/day03/image/task 3.png new file mode 100644 index 0000000000..8264548702 Binary files /dev/null and b/2024/day03/image/task 3.png differ diff --git a/2024/day03/image/task 4.png b/2024/day03/image/task 4.png new file mode 100644 index 0000000000..f5f90b8a58 Binary files /dev/null and b/2024/day03/image/task 4.png differ diff --git a/2024/day03/image/task 5.png b/2024/day03/image/task 5.png new file mode 100644 index 0000000000..68966372f2 Binary files /dev/null and b/2024/day03/image/task 5.png differ diff --git a/2024/day03/image/task 6.png b/2024/day03/image/task 6.png new file mode 100644 index 0000000000..2ddfdbab26 Binary files /dev/null and b/2024/day03/image/task 6.png differ diff --git a/2024/day03/image/task 66.png b/2024/day03/image/task 66.png new file mode 100644 index 
0000000000..5360649b4f Binary files /dev/null and b/2024/day03/image/task 66.png differ diff --git a/2024/day03/image/task 7.png b/2024/day03/image/task 7.png new file mode 100644 index 0000000000..e16aa39374 Binary files /dev/null and b/2024/day03/image/task 7.png differ diff --git a/2024/day03/image/task 8.png b/2024/day03/image/task 8.png new file mode 100644 index 0000000000..48cd782dfb Binary files /dev/null and b/2024/day03/image/task 8.png differ diff --git a/2024/day03/image/task 9.png b/2024/day03/image/task 9.png new file mode 100644 index 0000000000..8013d510c7 Binary files /dev/null and b/2024/day03/image/task 9.png differ diff --git a/2024/day03/solution.md b/2024/day03/solution.md new file mode 100644 index 0000000000..3f094c9649 --- /dev/null +++ b/2024/day03/solution.md @@ -0,0 +1,51 @@ + +# Basic Linux Commands - Day 3 + +Task 1: View the content of a file and display line numbers. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%201.png) + +Task 2: Change the access permissions of files to make them readable, writable, and executable by the owner only. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%202.png) + +Task 3: Check the last 10 commands you have run. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%203.png) + +Task 4: Remove a directory and all its contents. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%204.png) + +Task 5: Create a `fruits.txt` file, add content (one fruit per line), and display the content. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%205.png) + +Task 6: Add content in `devops.txt` (one in each line) - Apple, Mango, Banana, Cherry, Kiwi, Orange, Guava. Then, append "Pineapple" to the end of the file. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%206.png) +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2066.png) + +Task 7: Show the first three fruits from the file in reverse order. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%207.png) + +Task 8: Show the bottom three fruits from the file, and then sort them alphabetically. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%208.png) + +Task 9: Create another file `Colors.txt`, add content (one color per line), and display the content. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%209.png) + +Task 10: Add content in `Colors.txt` (one in each line) - Red, Pink, White, Black, Blue, Orange, Purple, Grey. Then, prepend "Yellow" to the beginning of the file. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2010.png) + +Task 11: Find and display the lines that are common between `fruits.txt` and `Colors.txt`. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2011.png) + +Task 12: Count the number of lines, words, and characters in both `fruits.txt` and `Colors.txt`. 
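+
+For reference, a one-line sketch of this task (the screenshot below shows the actual run): `wc` prints line, word, and byte counts for each file, plus a total row.
+
+```bash
+wc fruits.txt Colors.txt
+```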
+ +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day03/image/task%2012.png) diff --git a/2024/day04/README.md b/2024/day04/README.md new file mode 100644 index 0000000000..1eca473867 --- /dev/null +++ b/2024/day04/README.md @@ -0,0 +1,31 @@ +# Day 4 Task: Basic Linux Shell Scripting for DevOps Engineers + +## What is Kernel? + +The kernel is a computer program that is the core of a computer’s operating system, with complete control over everything in the system. + +## What is Shell? + +A shell is a special user program that provides an interface for users to interact with operating system services. It accepts human-readable commands from users and converts them into instructions that the kernel can understand. The shell is a command language interpreter that executes commands read from input devices such as keyboards or from files. It starts when the user logs in or opens a terminal. + +## What is Linux Shell Scripting? + +Linux shell scripting involves writing programs (scripts) that can be run by a Linux shell, such as bash (Bourne Again Shell). These scripts automate tasks, perform system administration tasks, and facilitate the interaction between users and the operating system. + +**Tasks:** + +- Explain in your own words and with examples what Shell Scripting means for DevOps. +- What is `#!/bin/bash`? Can we write `#!/bin/sh` as well? +- Write a Shell Script that prints `I will complete #90DaysOfDevOps challenge`. +- Write a Shell Script that takes user input, input from arguments, and prints the variables. +- Provide an example of an If-Else statement in Shell Scripting by comparing two numbers. + +**Were the tasks challenging?** + +These tasks are designed to introduce you to basic concepts of Linux shell scripting for DevOps. Share your experience and solutions on LinkedIn and let me know how it went! 
:) + +**Article Reference:** [Click here to read basic Linux Shell Scripting](https://devopscube.com/linux-shell-scripting-for-devops/) + +**YouTube Video:** [EASIEST Shell Scripting Tutorial for DevOps Engineers](https://www.youtube.com/watch?v=_-D6gkRj7xc&list=PLlfy9GnSVerQr-Se9JRE_tZJk3OUoHCkh&index=3) + +[← Previous Day](../day03/README.md) | [Next Day →](../day05/README.md) diff --git a/2024/day04/image/task 1.png b/2024/day04/image/task 1.png new file mode 100644 index 0000000000..ffc9913f6e Binary files /dev/null and b/2024/day04/image/task 1.png differ diff --git a/2024/day04/image/task 11.png b/2024/day04/image/task 11.png new file mode 100644 index 0000000000..d4402482e1 Binary files /dev/null and b/2024/day04/image/task 11.png differ diff --git a/2024/day04/image/task 2.png b/2024/day04/image/task 2.png new file mode 100644 index 0000000000..4f7a735bc3 Binary files /dev/null and b/2024/day04/image/task 2.png differ diff --git a/2024/day04/image/task 3.png b/2024/day04/image/task 3.png new file mode 100644 index 0000000000..5baeb479a0 Binary files /dev/null and b/2024/day04/image/task 3.png differ diff --git a/2024/day04/image/task 4.png b/2024/day04/image/task 4.png new file mode 100644 index 0000000000..ea366a253a Binary files /dev/null and b/2024/day04/image/task 4.png differ diff --git a/2024/day04/image/task 5.png b/2024/day04/image/task 5.png new file mode 100644 index 0000000000..9ab2dc3eef Binary files /dev/null and b/2024/day04/image/task 5.png differ diff --git a/2024/day04/solution.md b/2024/day04/solution.md new file mode 100644 index 0000000000..b9020734eb --- /dev/null +++ b/2024/day04/solution.md @@ -0,0 +1,28 @@ + +# Day 4 Answers: Basic Linux Shell Scripting for DevOps Engineers + +Task 1: Explain in your own words and with examples what Shell Scripting means for DevOps. +- 'Shell Scripting is writing a series of commands in a script file to automate tasks in the Unix/Linux shell. For DevOps, shell scripting is crucial for automating repetitive tasks, managing system configurations, deploying applications, and integrating various tools and processes in a CI/CD pipeline. It enhances efficiency, reduces errors, and saves time.' + +Example: Automating server setup +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%201.png) +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%2011.png) + +Task 2: What is `#!/bin/bash`? Can we write `#!/bin/sh` as well? +- `#!/bin/bash` is called a "shebang" line. It indicates that the script should be run using the Bash shell. + - `#!/bin/bash`: Uses Bash as the interpreter. It supports advanced features like arrays, associative arrays, and functions. + - `#!/bin/sh`: Uses the Bourne shell. It’s more POSIX-compliant and is generally compatible with different Unix shells. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%202.png) + +Task 3: Write a Shell Script that prints `I will complete #90DaysOfDevOps challenge`. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%203.png) + +Task 4: Write a Shell Script that takes user input, input from arguments, and prints the variables. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%204.png) + +Task 5: Provide an example of an If-Else statement in Shell Scripting by comparing two numbers. 
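+
+For comparison, a minimal version of such a script might look like this (the file name `compare.sh` is an assumption; the screenshot below shows the actual solution):
+
+```bash
+#!/bin/bash
+# Compare two numbers passed as arguments, e.g. ./compare.sh 10 20
+
+a=$1
+b=$2
+
+if [ "$a" -gt "$b" ]; then
+    echo "$a is greater than $b"
+elif [ "$a" -lt "$b" ]; then
+    echo "$a is less than $b"
+else
+    echo "$a is equal to $b"
+fi
+```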
+ +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day04/image/task%205.png) diff --git a/2024/day05/README.md b/2024/day05/README.md new file mode 100644 index 0000000000..471ab91986 --- /dev/null +++ b/2024/day05/README.md @@ -0,0 +1,40 @@ +# Day 5 Task: Advanced Linux Shell Scripting for DevOps Engineers with User Management + +If you noticed that there are a total of 90 sub-directories in the directory '2023' of this repository, what did you think? How did I create 90 directories? Manually one by one, using a script, or a command? + +All 90 directories were created within seconds using a simple command: + +`mkdir day{1..90}` + +### Tasks + +1. **Create Directories Using Shell Script:** + - Write a bash script `createDirectories.sh` that, when executed with three arguments (directory name, start number of directories, and end number of directories), creates a specified number of directories with a dynamic directory name. + - Example 1: When executed as `./createDirectories.sh day 1 90`, it creates 90 directories as `day1 day2 day3 ... day90`. + - Example 2: When executed as `./createDirectories.sh Movie 20 50`, it creates 31 directories as `Movie20 Movie21 Movie22 ... Movie50`. + + Notes: You may need to use loops or commands (or both), based on your preference. [Check out this reference: Bash Scripting For Loop](https://www.geeksforgeeks.org/bash-scripting-for-loop/) + +2. **Create a Script to Backup All Your Work:** + - Backups are an important part of a DevOps Engineer's day-to-day activities. The video in the references will help you understand how a DevOps Engineer takes backups (it can feel a bit difficult but keep trying, nothing is impossible). + - Watch [this video](https://youtu.be/aolKiws4Joc) for guidance. + + In case of doubts, post them in the [Discord Channel for #90DaysOfDevOps](https://discord.gg/hs3Pmc5F). + +3. **Read About Cron and Crontab to Automate the Backup Script:** + - Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit, or delete entries to cron. A crontab file is a user file that holds the scheduling information. + - Watch this video for reference: [Cron and Crontab](https://youtu.be/aolKiws4Joc). + +4. **Read About User Management:** + - A user is an entity in a Linux operating system that can manipulate files and perform several other operations. Each user is assigned an ID that is unique within the system. IDs 0 to 999 are assigned to system users, and local user IDs start from 1000 onwards. + - Create 2 users and display their usernames. + - [Check out this reference: User Management in Linux](https://www.geeksforgeeks.org/user-management-in-linux/). + +5. **Post Your Progress:** + - Post your daily work on LinkedIn and let me know how it went! Writing an article about your experience is highly encouraged. + +**Were the tasks challenging?** + +These tasks are designed to push your skills and introduce you to advanced concepts in Linux shell scripting and user management. Share your experience and solutions on LinkedIn and let me know how it went! 
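+
+For reference, one possible shape of the `createDirectories.sh` script from Task 1 (a sketch; whether you use a loop or brace expansion is up to you):
+
+```bash
+#!/bin/bash
+# Usage: ./createDirectories.sh <name> <start> <end>
+# e.g. ./createDirectories.sh day 1 90  -> day1 day2 ... day90
+
+name=$1
+start=$2
+end=$3
+
+for i in $(seq "$start" "$end"); do
+    mkdir -p "${name}${i}"
+done
+```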
+ +[← Previous Day](../day04/README.md) | [Next Day →](../day06/README.md) diff --git a/2024/day05/image/task 1-2.png b/2024/day05/image/task 1-2.png new file mode 100644 index 0000000000..66d467cf1d Binary files /dev/null and b/2024/day05/image/task 1-2.png differ diff --git a/2024/day05/image/task 1-3.png b/2024/day05/image/task 1-3.png new file mode 100644 index 0000000000..d5b2699043 Binary files /dev/null and b/2024/day05/image/task 1-3.png differ diff --git a/2024/day05/image/task 1.png b/2024/day05/image/task 1.png new file mode 100644 index 0000000000..1ac28abb7c Binary files /dev/null and b/2024/day05/image/task 1.png differ diff --git a/2024/day05/image/task 2-1.png b/2024/day05/image/task 2-1.png new file mode 100644 index 0000000000..f62a8a053a Binary files /dev/null and b/2024/day05/image/task 2-1.png differ diff --git a/2024/day05/image/task 2.png b/2024/day05/image/task 2.png new file mode 100644 index 0000000000..32fa6a6a33 Binary files /dev/null and b/2024/day05/image/task 2.png differ diff --git a/2024/day05/image/task 3-1.png b/2024/day05/image/task 3-1.png new file mode 100644 index 0000000000..14086027a7 Binary files /dev/null and b/2024/day05/image/task 3-1.png differ diff --git a/2024/day05/image/task 3.png b/2024/day05/image/task 3.png new file mode 100644 index 0000000000..0c6bf33d1d Binary files /dev/null and b/2024/day05/image/task 3.png differ diff --git a/2024/day05/image/task 4.png b/2024/day05/image/task 4.png new file mode 100644 index 0000000000..8145a7ab0a Binary files /dev/null and b/2024/day05/image/task 4.png differ diff --git a/2024/day05/solution.md b/2024/day05/solution.md new file mode 100644 index 0000000000..aea6029950 --- /dev/null +++ b/2024/day05/solution.md @@ -0,0 +1,41 @@ + +# Day 5 Answers: Advanced Linux Shell Scripting for DevOps Engineers with User Management + +### Tasks + +1. **Create Directories Using Shell Script:** + - Write a bash script `createDirectories.sh` that, when executed with three arguments (directory name, start number of directories, and end number of directories), creates a specified number of directories with a dynamic directory name. + - Example 1: When executed as `./createDirectories.sh day 1 90`, it creates 90 directories as `day1 day2 day3 ... day90`. + - Example 2: When executed as `./createDirectories.sh Movie 20 50`, it creates 31 directories as `Movie20 Movie21 Movie22 ... Movie50`. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%201.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%201-2.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%201-3.png) + +2. **Create a Script to Backup All Your Work:** + - Backups are an important part of a DevOps Engineer's day-to-day activities. The video in the references will help you understand how a DevOps Engineer takes backups (it can feel a bit difficult but keep trying, nothing is impossible). + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%202.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%202-1.png) + +3. **Read About Cron and Crontab to Automate the Backup Script:** + - Cron is the system's main scheduler for running jobs or tasks unattended. A command called crontab allows the user to submit, edit, or delete entries to cron. A crontab file is a user file that holds the scheduling information. 
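+
+   A compact way to wire the backup script into cron (the script path `/home/ubuntu/backup.sh` and the 2 AM schedule are assumptions; the screenshots below show the hands-on run):
+
+   ```bash
+   # Append a crontab entry that runs the backup script daily at 2 AM.
+   (crontab -l 2>/dev/null; echo "0 2 * * * /home/ubuntu/backup.sh") | crontab -
+
+   # Verify the entry was added.
+   crontab -l
+   ```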
+ + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%203.png) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%203-1.png) + +4. **Read About User Management:** + - A user is an entity in a Linux operating system that can manipulate files and perform several other operations. Each user is assigned an ID that is unique within the system. IDs 0 to 999 are assigned to system users, and local user IDs start from 1000 onwards. + - Create 2 users and display their usernames. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day05/image/task%204.png) + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/). diff --git a/2024/day06/README.md b/2024/day06/README.md new file mode 100644 index 0000000000..f6e64a178d --- /dev/null +++ b/2024/day06/README.md @@ -0,0 +1,43 @@ +# Day 6 Task: File Permissions and Access Control Lists + +### Today is more on Reading, Learning, and Implementing File Permissions + +The concept of Linux file permission and ownership is important in Linux. Today, we will work on Linux permissions and ownership, and perform tasks related to both. + +## Tasks + +1. **Understanding File Permissions:** + - Create a simple file and run `ls -ltr` to see the details of the files. [Refer to Notes](https://github.com/LondheShubham153/90DaysOfDevOps/tree/master/2023/day06/notes) + - Each of the three permissions are assigned to three defined categories of users. The categories are: + - **Owner:** The owner of the file or application. + - Use `chown` to change the ownership permission of a file or directory. + - **Group:** The group that owns the file or application. + - Use `chgrp` to change the group permission of a file or directory. + - **Others:** All users with access to the system (outside the users in a group). + - Use `chmod` to change the other users' permissions of a file or directory. + - Task: Change the user permissions of the file and note the changes after running `ls -ltr`. + +2. **Writing an Article:** + - Write an article about file permissions based on your understanding from the notes. + +3. **Access Control Lists (ACL):** + - Read about ACL and try out the commands `getfacl` and `setfacl`. + - Task: Create a directory and set specific ACL permissions for different users and groups. Verify the permissions using `getfacl`. + +4. **Additional Tasks:** + - **Task:** Create a script that changes the permissions of multiple files in a directory based on user input. + - **Task:** Write a script that sets ACL permissions for a user on a given file, based on user input. + +5. **Understanding Sticky Bit, SUID, and SGID:** + - Read about sticky bit, SUID, and SGID. + - Task: Create examples demonstrating the use of sticky bit, SUID, and SGID, and explain their significance. + +6. **Backup and Restore Permissions:** + - Task: Create a script that backs up the current permissions of files in a directory to a file. + - Task: Create another script that restores the permissions from the backup file. + +In case of any doubts, post them on the [Discord Community](https://discord.gg/hs3Pmc5F). 
+ +**Happy Learning!** + +[← Previous Day](../day05/README.md) | [Next Day →](../day07/README.md) diff --git a/2024/day06/image/task1.png b/2024/day06/image/task1.png new file mode 100644 index 0000000000..9c1d5b2bb6 Binary files /dev/null and b/2024/day06/image/task1.png differ diff --git a/2024/day06/image/task3.png b/2024/day06/image/task3.png new file mode 100644 index 0000000000..0e49d81490 Binary files /dev/null and b/2024/day06/image/task3.png differ diff --git a/2024/day06/image/task4-1.png b/2024/day06/image/task4-1.png new file mode 100644 index 0000000000..36cc2d3eec Binary files /dev/null and b/2024/day06/image/task4-1.png differ diff --git a/2024/day06/image/task4.png b/2024/day06/image/task4.png new file mode 100644 index 0000000000..cc4c72d08d Binary files /dev/null and b/2024/day06/image/task4.png differ diff --git a/2024/day06/image/task5-1.png b/2024/day06/image/task5-1.png new file mode 100644 index 0000000000..57e7b02381 Binary files /dev/null and b/2024/day06/image/task5-1.png differ diff --git a/2024/day06/image/task5-2.png b/2024/day06/image/task5-2.png new file mode 100644 index 0000000000..4a8805dc46 Binary files /dev/null and b/2024/day06/image/task5-2.png differ diff --git a/2024/day06/image/task5.png b/2024/day06/image/task5.png new file mode 100644 index 0000000000..238145ecda Binary files /dev/null and b/2024/day06/image/task5.png differ diff --git a/2024/day06/image/task6-1.png b/2024/day06/image/task6-1.png new file mode 100644 index 0000000000..2669695bfe Binary files /dev/null and b/2024/day06/image/task6-1.png differ diff --git a/2024/day06/image/task6.png b/2024/day06/image/task6.png new file mode 100644 index 0000000000..f4c5cfc449 Binary files /dev/null and b/2024/day06/image/task6.png differ diff --git a/2024/day06/solution.md b/2024/day06/solution.md new file mode 100644 index 0000000000..2a6dea82c8 --- /dev/null +++ b/2024/day06/solution.md @@ -0,0 +1,94 @@ +# Day 6 Answers: File Permissions and Access Control Lists + +### Tasks + +1. **Understanding File Permissions:** + - Create a simple file and run `ls -ltr` to see the details of the files. + - Each of the three permissions are assigned to three defined categories of users. The categories are: + - **Owner:** The owner of the file or application. + - Use `chown` to change the ownership permission of a file or directory. + - **Group:** The group that owns the file or application. + - Use `chgrp` to change the group permission of a file or directory. + - **Others:** All users with access to the system (outside the users in a group). + - Use `chmod` to change the other users' permissions of a file or directory. + - Task: Change the user permissions of the file and note the changes after running `ls -ltr`. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task1.png) + +2. **Writing an Article:** + - Write an article about file permissions based on your understanding from the notes. + + **Answer** + + - **Understanding File Permissions in Linux** + - File permissions in Linux are critical for maintaining security and proper access control. They define who can read, write, and execute a file or directory. Here, we explore the concepts and commands related to file permissions. + + - **Basic Permissions** + - Permissions in Linux are represented by a three-digit number, where each digit represents a different set of users: owner, group, and others. 
+
+     - **Highest permission value per digit:** `7` (4+2+1: read + write + execute)
+     - **Maximum permission:** `777`; however, newly created files are effectively capped at `666` for security reasons, so no user gets execute permission by default.
+     - **Effective default permission for directories:** `755`
+     - **Lowest permission:** `000` (not recommended)
+     - **Effective default permission for files:** `644` (with the default umask value of `022`)
+     - **Default directory permission:** includes execute permission so the directory can be entered and navigated.
+
+   - **Categories of Users**
+     - Each of the three permissions is assigned to three defined categories of users:
+
+     - **Owner**: The owner of the file or application.
+       - Command: `chown` is used to change the ownership of a file or directory.
+     - **Group**: The group that owns the file or application.
+       - Command: `chgrp` is used to change the group permission of a file or directory.
+     - **Others**: All users with access to the system.
+       - Command: `chmod` is used to change the permissions for other users.
+
+   - **Special Permissions**
+     - **SUID (Set User ID)**: If SUID is set on an executable file and a normal user executes it, the process will have the same rights as the owner of the file being executed instead of the normal user (e.g., the `passwd` command).
+     - **SGID (Set Group ID)**: If SGID is set on any directory, all subdirectories and files created inside will inherit the group ownership of the main directory, regardless of who creates them.
+     - **Sticky Bit**: Used on folders to prevent deletion of a folder and its contents by other users even though they have write permissions. Only the owner and the root user can delete other users' data in a folder where the sticky bit is set.
+
+3. **Access Control Lists (ACL):**
+   - Read about ACL and try out the commands `getfacl` and `setfacl`.
+   - Task: Create a directory and set specific ACL permissions for different users and groups. Verify the permissions using `getfacl`.
+
+   **Answer**
+   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task3.png)
+
+4. **Additional Tasks:**
+   - **Task:** Create a script that changes the permissions of multiple files in a directory based on user input.
+
+   **Answer**
+   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task4.png)
+
+   - **Task:** Write a script that sets ACL permissions for a user on a given file, based on user input.
+
+   **Answer**
+   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task4-1.png)
+
+5. **Understanding Sticky Bit, SUID, and SGID:**
+   - Read about sticky bit, SUID, and SGID.
+     - Sticky bit: Used on directories to prevent users from deleting files they do not own.
+     - SUID (Set User ID): Allows users to run an executable with the permissions of the executable's owner.
+     - SGID (Set Group ID): Allows users to run an executable with the permissions of the executable's group.
+   - Task: Create examples demonstrating the use of sticky bit, SUID, and SGID, and explain their significance.
+
+   **Answer**
+   - Sticky bit:
+   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task5.png)
+   - SUID:
+   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task5-1.png)
+   - SGID:
+   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task5-2.png)
+
+6. **Backup and Restore Permissions:**
+   - Task: Create a script that backs up the current permissions of files in a directory to a file (see the sketch below).
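+
+   Both halves of this task can be sketched with `getfacl`/`setfacl`, which record and re-apply owner, group, mode, and ACLs in one go (the directory name `mydir` is an assumption; the screenshots below show the full scripted solution):
+
+   ```bash
+   # Back up permissions (including ACLs) of everything under mydir.
+   getfacl -R mydir > permissions_backup.acl
+
+   # Later, restore all recorded permissions from the backup file.
+   setfacl --restore=permissions_backup.acl
+   ```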
+ + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task6.png) + + - Task: Create another script that restores the permissions from the backup file. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day06/image/task6-1.png) diff --git a/2024/day07/README.md b/2024/day07/README.md new file mode 100644 index 0000000000..9e9a2f36c1 --- /dev/null +++ b/2024/day07/README.md @@ -0,0 +1,58 @@ +# Day 7 Task: Understanding Package Manager and Systemctl + +### What is a Package Manager in Linux? + +In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure, and manage software packages on an operating system. The package manager can be a graphical application like a software center or a command line tool like apt-get or pacman. + +You’ll often find me using the term ‘package’ in tutorials and articles. To understand a package manager, you must understand what a package is. + +### What is a Package? + +A package is usually referred to as an application but it could be a GUI application, command line tool, or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration file, and sometimes information about the dependencies. + +### Different Kinds of Package Managers + +Package managers differ based on the packaging system but the same packaging system may have more than one package manager. + +For example, RPM has Yum and DNF package managers. For DEB, you have apt-get, aptitude command line-based package managers. + +## Tasks + +1. **Install Docker and Jenkins:** + - Install Docker and Jenkins on your system from your terminal using package managers. + +2. **Write a Blog or Article:** + - Write a small blog or article on how to install these tools using package managers on Ubuntu and CentOS. + +### Systemctl and Systemd + +Systemctl is used to examine and control the state of the “systemd” system and service manager. Systemd is a system and service manager for Unix-like operating systems (most distributions, but not all). + +## Tasks + +1. **Check Docker Service Status:** + - Check the status of the Docker service on your system (ensure you have completed the installation tasks above). + +2. **Manage Jenkins Service:** + - Stop the Jenkins service and post before and after screenshots. + +3. **Read About Systemctl vs. Service:** + - Read about the differences between the `systemctl` and `service` commands. + - Example: `systemctl status docker` vs. `service docker status`. + + For reference, read [this article](https://www.howtogeek.com/devops/how-to-check-if-the-docker-daemon-or-a-container-is-running/#:~:text=Checking%20With%20Systemctl&text=Check%20what%27s%20displayed%20under%20%E2%80%9CActive,running%20sudo%20systemctl%20start%20docker%20). + +### Additional Tasks + +4. **Automate Service Management:** + - Write a script to automate the starting and stopping of Docker and Jenkins services. + +5. **Enable and Disable Services:** + - Use systemctl to enable Docker to start on boot and disable Jenkins from starting on boot. + +6. **Analyze Logs:** + - Use journalctl to analyze the logs of the Docker and Jenkins services. Post your findings. + +#### Post about your progress and invite your friends to join the #90DaysOfDevOps challenge. 
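+
+For Task 4 above (automating service management), a minimal sketch of such a script might look like the following; the file name `services.sh` is an assumption:
+
+```bash
+#!/bin/bash
+# Start or stop Docker and Jenkins together: ./services.sh start|stop
+
+action=$1
+
+if [ "$action" != "start" ] && [ "$action" != "stop" ]; then
+    echo "Usage: $0 start|stop"
+    exit 1
+fi
+
+for svc in docker jenkins; do
+    sudo systemctl "$action" "$svc"
+    systemctl status "$svc" --no-pager | head -n 3
+done
+```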
+ +[← Previous Day](../day06/README.md) | [Next Day →](../day08/README.md) diff --git a/2024/day07/image/task1-2.png b/2024/day07/image/task1-2.png new file mode 100644 index 0000000000..973ed75f93 Binary files /dev/null and b/2024/day07/image/task1-2.png differ diff --git a/2024/day07/image/task1.png b/2024/day07/image/task1.png new file mode 100644 index 0000000000..36c1c2e283 Binary files /dev/null and b/2024/day07/image/task1.png differ diff --git a/2024/day07/image/task4.png b/2024/day07/image/task4.png new file mode 100644 index 0000000000..2d0936df22 Binary files /dev/null and b/2024/day07/image/task4.png differ diff --git a/2024/day07/image/task5-1.png b/2024/day07/image/task5-1.png new file mode 100644 index 0000000000..7381edc4bd Binary files /dev/null and b/2024/day07/image/task5-1.png differ diff --git a/2024/day07/image/task5.png b/2024/day07/image/task5.png new file mode 100644 index 0000000000..eec8472230 Binary files /dev/null and b/2024/day07/image/task5.png differ diff --git a/2024/day07/image/task6-1.png b/2024/day07/image/task6-1.png new file mode 100644 index 0000000000..dec83cdce9 Binary files /dev/null and b/2024/day07/image/task6-1.png differ diff --git a/2024/day07/image/task6.png b/2024/day07/image/task6.png new file mode 100644 index 0000000000..9290cfc617 Binary files /dev/null and b/2024/day07/image/task6.png differ diff --git a/2024/day07/image/taskj2.png b/2024/day07/image/taskj2.png new file mode 100644 index 0000000000..3cfd509f89 Binary files /dev/null and b/2024/day07/image/taskj2.png differ diff --git a/2024/day07/solution.md b/2024/day07/solution.md new file mode 100644 index 0000000000..6ef7028b70 --- /dev/null +++ b/2024/day07/solution.md @@ -0,0 +1,167 @@ +# Day 7 Answers: Understanding Package Manager and Systemctl + +## Tasks + +1. **Install Docker and Jenkins:** + - Install Docker and Jenkins on your system from your terminal using package managers. + + **Answer** + - **First-Installing Docker** + - Update the package list and install required packages: + ```bash + sudo apt update + sudo apt install apt-transport-https ca-certificates curl software-properties-common + - Add Docker’s official GPG key: + ```bash + curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - + - Add the Docker APT repository: + ```bash + sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" + - Update the package list again: + ```bash + sudo apt update + - Install Docker: + ```bash + sudo apt install docker-ce + - Check Docker installation: + ```bash + sudo systemctl status docker + + - **Installing Jenkins** + - Add the Jenkins repository key to the system: + ```bash + curl -fsSL https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add - + - Add the Jenkins repository: + ```bash + sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list' + - Update the package list: + ```bash + sudo apt update + - Install Jenkins: + ```bash + sudo apt install jenkins + - Start Jenkins: + ```bash + sudo systemctl start jenkins + - Note: + - First, check whether JAVA is installed or not. + ```bash + java -version + - If you have not installed + ```bash + sudo apt install default-jre + + Output + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task1.png) + + Output (Jenkins-UI) + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task1-2.png) + +2. 
**Write a Blog or Article:** + - Write a small blog or article on how to install these tools using package managers on Ubuntu and CentOS. + + **Answer** + 1. Introduction: + - Briefly introduce Docker and Jenkins. + - Mention the operating systems (Ubuntu and CentOS) covered. + 2. Installing Docker on Ubuntu: + - List the steps as detailed above. + 3. Installing Docker on CentOS: + - Provide similar steps adjusted for CentOS. + 4. Installing Jenkins on Ubuntu: + - List the steps as detailed above. + 5. Installing Jenkins on CentOS: + - Provide similar steps adjusted for CentOS. + +### Systemctl and Systemd + +Systemctl is used to examine and control the state of the “systemd” system and service manager. Systemd is a system and service manager for Unix-like operating systems (most distributions, but not all). + +## Tasks + +1. **Check Docker Service Status:** + - Check the status of the Docker service on your system (ensure you have completed the installation tasks above). + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task5.png) + +2. **Manage Jenkins Service:** + - Stop the Jenkins service and post before and after screenshots. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/taskj2.png) + +3. **Read About Systemctl vs. Service:** + - Read about the differences between the `systemctl` and `service` commands. + - Example: `systemctl status docker` vs. `service docker status`. + + **Answer** + - Understanding the `systemctl` and `service` Commands + - Both `systemctl` and `service` commands are used to manage system services in Linux, but they differ in terms of usage, functionality, and the system architectures they support. + - **`systemctl` Command** + - `systemctl` is a command used to introspect and control the state of the `systemd` system and service manager. It is more modern and is used in systems that use `systemd` as their init system, which is common in many contemporary Linux distributions. + - Examples: + - Check the status of the Docker service: + ```bash + sudo systemctl status docker + - Start the Jenkins service: + ```bash + sudo systemctl start jenkins + - Stop the Docker service: + ```bash + sudo systemctl stop docker + - Enable the Jenkins service to start at boot: + ```bash + sudo systemctl enable jenkins + + - **`service` Command** + - 'service' is a command that works with the older 'init' systems (like SysVinit). It provides a way to start, stop, and check the status of services. While it is still available on systems using 'systemd' for backward compatibility, its usage is generally discouraged in favor of 'systemctl'. + - Examples: + - Check the status of the Docker service: + ```bash + sudo service docker status + - Start the Jenkins service: + ```bash + sudo service jenkins start + - Stop the Docker service: + ```bash + sudo service docker stop + + - **Key Differences** + - 1 System Architecture: + - `systemctl` works with `systemd`. + - `service` works with SysVinit and is compatible with `systemd` for backward compatibility. + - 2 Functionality: + - `systemctl` offers more functionality and control over services, including management of the service's state (start, stop, restart, reload), enabling/disabling services at boot, and querying detailed service status. + - `service` provides basic functionality for managing services, such as starting, stopping, and checking the status of services. 
+ - 3 Syntax and Usage: + - `systemctl` uses a more unified syntax for managing services. + - `service` has a simpler and more traditional syntax. + +### Additional Tasks + +4. **Automate Service Management:** + - Write a script to automate the starting and stopping of Docker and Jenkins services. + + **Answer** + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task4.png) + +5. **Enable and Disable Services:** + - Use systemctl to enable Docker to start on boot and disable Jenkins from starting on boot. + + **Answer** + - Enable Docker to start on boot: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task5.png) + + - Disable Jenkins from starting on boot: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task5-1.png) + +6. **Analyze Logs:** + - Use journalctl to analyze the logs of the Docker and Jenkins services. Post your findings. + + **Answer** + - Docker Logs: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task6.png) + + - Jenkins Logs: + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day07/image/task6-1.png) \ No newline at end of file diff --git a/2024/day08/README.md b/2024/day08/README.md new file mode 100644 index 0000000000..0f7c48f506 --- /dev/null +++ b/2024/day08/README.md @@ -0,0 +1,29 @@ +# Day 8 Task: Shell Scripting Challenge + +### Task 1: Comments +In bash scripts, comments are used to add explanatory notes or disable certain lines of code. Your task is to create a bash script with comments explaining what the script does. + +### Task 2: Echo +The echo command is used to display messages on the terminal. Your task is to create a bash script that uses echo to print a message of your choice. + +### Task 3: Variables +Variables in bash are used to store data and can be referenced by their name. Your task is to create a bash script that declares variables and assigns values to them. + +### Task 4: Using Variables +Now that you have declared variables, let's use them to perform a simple task. Create a bash script that takes two variables (numbers) as input and prints their sum using those variables. + +### Task 5: Using Built-in Variables +Bash provides several built-in variables that hold useful information. Your task is to create a bash script that utilizes at least three different built-in variables to display relevant information. + +### Task 6: Wildcards +Wildcards are special characters used to perform pattern matching when working with files. Your task is to create a bash script that utilizes wildcards to list all the files with a specific extension in a directory. + +## Submission Instructions: +- Create a single bash script that completes all the tasks mentioned above. +- Add comments at appropriate places to explain what each part of the script does. +- Ensure that your script is well-documented and easy to understand. +- To submit your entry, create a GitHub repository and commit your script to it. + +**Good luck with Day 8 of the Bash Scripting Challenge! Tomorrow, the difficulty will increase as we move on to more advanced concepts. 
Happy scripting!** + +[← Previous Day](../day07/README.md) | [Next Day →](../day09/README.md) diff --git a/2024/day08/image/task1.png b/2024/day08/image/task1.png new file mode 100644 index 0000000000..c5bbcb006b Binary files /dev/null and b/2024/day08/image/task1.png differ diff --git a/2024/day08/image/task2.png b/2024/day08/image/task2.png new file mode 100644 index 0000000000..a2b9968c52 Binary files /dev/null and b/2024/day08/image/task2.png differ diff --git a/2024/day08/image/task3.png b/2024/day08/image/task3.png new file mode 100644 index 0000000000..b3ca5d7638 Binary files /dev/null and b/2024/day08/image/task3.png differ diff --git a/2024/day08/image/task4.png b/2024/day08/image/task4.png new file mode 100644 index 0000000000..451315a0b4 Binary files /dev/null and b/2024/day08/image/task4.png differ diff --git a/2024/day08/image/task5.png b/2024/day08/image/task5.png new file mode 100644 index 0000000000..6e27850692 Binary files /dev/null and b/2024/day08/image/task5.png differ diff --git a/2024/day08/image/task6.png b/2024/day08/image/task6.png new file mode 100644 index 0000000000..2c987608db Binary files /dev/null and b/2024/day08/image/task6.png differ diff --git a/2024/day08/solution.md b/2024/day08/solution.md new file mode 100644 index 0000000000..3890e5a171 --- /dev/null +++ b/2024/day08/solution.md @@ -0,0 +1,47 @@ +# Day 8 Answers: Shell Scripting Challenge + +## Tasks + +1. **Comments** + - In bash scripts, comments are used to add explanatory notes or disable certain lines of code. Your task is to create a bash script with comments explaining what the script does. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task1.png) + +2. **Echo** + - The echo command is used to display messages on the terminal. Your task is to create a bash script that uses echo to print a message of your choice. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task2.png) + +3. **Variables** + - Variables in bash are used to store data and can be referenced by their name. Your task is to create a bash script that declares variables and assigns values to them. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task3.png) + +4. **Using Variables** + - Now that you have declared variables, let's use them to perform a simple task. Create a bash script that takes two variables (numbers) as input and prints their sum using those variables. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task4.png) + +5. **Using Built-in Variables** + - Bash provides several built-in variables that hold useful information. Your task is to create a bash script that utilizes at least three different built-in variables to display relevant information. + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task5.png) + +6. **Wildcards** + - Wildcards are special characters used to perform pattern matching when working with files. Your task is to create a bash script that utilizes wildcards to list all the files with a specific extension in a directory. 
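   One minimal sketch of such a script (the `txt` default and the directory argument are illustrative); the author's version follows in the screenshot below:

   ```bash
   #!/bin/bash
   # list_by_ext.sh - list files matching *.<ext> in a directory (sketch)
   dir="${1:-.}"     # directory to search, defaults to the current one
   ext="${2:-txt}"   # extension to match, defaults to txt

   # The * wildcard expands to every matching filename; 2>/dev/null hides
   # the error ls prints when nothing matches
   ls "$dir"/*."$ext" 2>/dev/null || echo "No *.$ext files found in $dir"
   ```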
   **Answer**

   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day08/image/task6.png)

[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/)
diff --git a/2024/day09/README.md b/2024/day09/README.md
new file mode 100644
index 0000000000..33140c0997
--- /dev/null
+++ b/2024/day09/README.md
@@ -0,0 +1,80 @@
# Day 9 Task: Shell Scripting Challenge - Directory Backup with Rotation

## Challenge Description

Your task is to create a bash script that takes a directory path as a command-line argument and performs a backup of the directory. The script should create timestamped backup folders and copy all the files from the specified directory into the backup folder.

Additionally, the script should implement a rotation mechanism to keep only the last 3 backups. This means that if there are more than 3 backup folders, the oldest ones should be removed so that only the most recent backups are retained.

> The script will create a timestamped backup folder inside the specified directory and copy all the files into it. It will also check for existing backup folders and remove the oldest backups to keep only the last 3 backups.

## Example Usage

Assume the script is named `backup_with_rotation.sh`. Here is an example of how it behaves, assuming it is executed with the following commands on different dates:

1. First Execution (2023-07-30):

```
$ ./backup_with_rotation.sh /home/user/documents
```

Output:

```
Backup created: /home/user/documents/backup_2023-07-30_12-30-45
Backup created: /home/user/documents/backup_2023-07-30_15-20-10
Backup created: /home/user/documents/backup_2023-07-30_18-40-55
```

After this execution, the /home/user/documents directory will contain the following items:

```
backup_2023-07-30_12-30-45
backup_2023-07-30_15-20-10
backup_2023-07-30_18-40-55
file1.txt
file2.txt
...
```

2. Second Execution (2023-08-01):

```
$ ./backup_with_rotation.sh /home/user/documents
```

Output:

```
Backup created: /home/user/documents/backup_2023-08-01_09-15-30
```

After this execution, the /home/user/documents directory will contain the following items:

```
backup_2023-07-30_15-20-10
backup_2023-07-30_18-40-55
backup_2023-08-01_09-15-30
file1.txt
file2.txt
...
```

In this example, the script creates backup folders with timestamped names and retains only the last 3 backups while removing the older ones.

## Submission Instructions

- Create a bash script named `backup_with_rotation.sh` that implements the Directory Backup with Rotation as described in the challenge (a sketch of one possible shape follows below).
- Add comments in the script to explain the purpose and logic of each part.
- Submit your entry by pushing the script to your GitHub repository.

Congratulations on completing Day 9 of the Bash Scripting Challenge! The challenge focuses on creating a backup script with rotation capabilities to manage multiple backups efficiently. Happy scripting and backing up!

Happy Learning

[← Previous Day](../day08/README.md) | [Next Day →](../day10/README.md)
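For orientation, one possible shape for `backup_with_rotation.sh` (a sketch, not the reference solution; it assumes GNU `ls`/`xargs` and paths without spaces):

```bash
#!/bin/bash
# backup_with_rotation.sh - sketch: timestamped backup + keep only the 3 newest
src="${1:?Usage: $0 <directory>}"

backup_dir="$src/backup_$(date +%F_%H-%M-%S)"   # e.g. backup_2023-07-30_12-30-45
mkdir -p "$backup_dir"

# Copy only the regular files at the top level, so older backup_* folders
# are not copied into the new backup
find "$src" -maxdepth 1 -type f -exec cp {} "$backup_dir" \;
echo "Backup created: $backup_dir"

# Rotation: list backup_* folders newest-first, delete everything after the 3rd
ls -1dt "$src"/backup_*/ 2>/dev/null | tail -n +4 | xargs -r rm -rf
```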
diff --git a/2024/day09/image/bash1.png b/2024/day09/image/bash1.png
new file mode 100644
index 0000000000..480cb95551
Binary files /dev/null and b/2024/day09/image/bash1.png differ
diff --git a/2024/day09/image/task1-2.png b/2024/day09/image/task1-2.png
new file mode 100644
index 0000000000..7dec90a14d
Binary files /dev/null and b/2024/day09/image/task1-2.png differ
diff --git a/2024/day09/image/task11.png b/2024/day09/image/task11.png
new file mode 100644
index 0000000000..1cc450b1b5
Binary files /dev/null and b/2024/day09/image/task11.png differ
diff --git a/2024/day09/image/task2.png b/2024/day09/image/task2.png
new file mode 100644
index 0000000000..e0c46c457f
Binary files /dev/null and b/2024/day09/image/task2.png differ
diff --git a/2024/day09/image/task3.png b/2024/day09/image/task3.png
new file mode 100644
index 0000000000..2100b8ad70
Binary files /dev/null and b/2024/day09/image/task3.png differ
diff --git a/2024/day09/solution.md b/2024/day09/solution.md
new file mode 100644
index 0000000000..d03651b86c
--- /dev/null
+++ b/2024/day09/solution.md
@@ -0,0 +1,45 @@
# Day 9 Answers: Shell Scripting Challenge - Directory Backup with Rotation

## Tasks

1. **Challenge Description**

   Your task is to create a bash script that takes a directory path as a command-line argument and performs a backup of the directory. The script should create timestamped backup folders and copy all the files from the specified directory into the backup folder.

   Additionally, the script should implement a rotation mechanism to keep only the last 3 backups. This means that if there are more than 3 backup folders, the oldest backup folders should be removed to ensure only the most recent backups are retained.

   > The script will create a timestamped backup folder inside the specified directory and copy all the files into it. It will also check for existing backup folders and remove the oldest backups to keep only the last 3 backups.

   **Answer**

   **Create a Folder And Make Some Files**

   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/task11.png)

   - Note:
     - First, check whether `zip` is installed:
       ```bash
       zip
       ```
     - If it is not installed:
       ```bash
       sudo apt install zip
       ```

   **Crontab Job Scheduling:**

   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/task2.png)

   - Auto-scheduling through a `crontab` job (runs at the start of every hour):
     ```bash
     0 * * * * bash /root/backup.sh /root/datafile /root/backup
     ```

   **It takes a backup every hour, and the oldest backups are deleted so that only the latest three backups remain:**

   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/task3.png)

   **Bash Script:**

   ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day09/image/bash1.png)

   **Reference**
   [TrainWithShubham - Production Backup Rotation | Shell Scripting For DevOps Engineer](https://youtu.be/PZYJ33bMXAw?si=Zb50P67x_F32ikeO)

[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/)
diff --git a/2024/day10/README.md b/2024/day10/README.md
new file mode 100644
index 0000000000..36a0b90a5e
--- /dev/null
+++ b/2024/day10/README.md
@@ -0,0 +1,55 @@
# Day 10 Task: Log Analyzer and Report Generator

## Challenge Title: Log Analyzer and Report Generator

## Scenario

You are a system administrator responsible for managing a network of servers. Every day, a log file is generated on each server containing important system events and error messages.
As part of your daily tasks, you need to analyze these log files, identify specific events, and generate a summary report. + +## Task + +Write a Bash script that automates the process of analyzing log files and generating a daily summary report. The script should perform the following steps: + +1. **Input:** The script should take the path to the log file as a command-line argument. + +2. **Error Count:** Analyze the log file and count the number of error messages. An error message can be identified by a specific keyword (e.g., "ERROR" or "Failed"). Print the total error count. + +3. **Critical Events:** Search for lines containing the keyword "CRITICAL" and print those lines along with the line number. + +4. **Top Error Messages:** Identify the top 5 most common error messages and display them along with their occurrence count. + +5. **Summary Report:** Generate a summary report in a separate text file. The report should include: + - Date of analysis + - Log file name + - Total lines processed + - Total error count + - Top 5 error messages with their occurrence count + - List of critical events with line numbers + +6. **Optional Enhancement:** Add a feature to automatically archive or move processed log files to a designated directory after analysis. + +## Tips + +- Use `grep`, `awk`, and other command-line tools to process the log file. +- Utilize arrays or associative arrays to keep track of error messages and their counts. +- Use appropriate error handling to handle cases where the log file doesn't exist or other issues arise. + +## Sample Log File + +A sample log file named `sample_log.log` has been provided in the same directory as this challenge file. You can use this file to test your script or use [this](https://github.com/logpai/loghub/blob/master/Zookeeper/Zookeeper_2k.log) + +## How to Participate + +1. Clone this repository or download the challenge file from the provided link. +2. Write your Bash script to complete the log analyzer and report generator task. +3. Use the provided `sample_log.log` or create your own log files for testing. +4. Test your script with various log files and scenarios to ensure accuracy. +5. Submit your completed script by the end of Day 10 of the 90-day DevOps challenge. + +## Submission + +Submit your completed script by [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request) or sending the script file to the challenge organizer. + +Good luck and happy scripting! + +[← Previous Day](../day09/README.md) | [Next Day →](../day11/README.md) diff --git a/2024/day10/image/output.png b/2024/day10/image/output.png new file mode 100644 index 0000000000..9cc079f6ab Binary files /dev/null and b/2024/day10/image/output.png differ diff --git a/2024/day10/image/task1.png b/2024/day10/image/task1.png new file mode 100644 index 0000000000..c4e888729e Binary files /dev/null and b/2024/day10/image/task1.png differ diff --git a/2024/day10/image/task2.png b/2024/day10/image/task2.png new file mode 100644 index 0000000000..24d646220b Binary files /dev/null and b/2024/day10/image/task2.png differ diff --git a/2024/day10/solution.md b/2024/day10/solution.md new file mode 100644 index 0000000000..803b46c7d7 --- /dev/null +++ b/2024/day10/solution.md @@ -0,0 +1,53 @@ +# Day 10 Answers: Log Analyzer and Report Generator + +## Scenario + +You are a system administrator responsible for managing a network of servers. 
Every day, a log file is generated on each server containing important system events and error messages. As part of your daily tasks, you need to analyze these log files, identify specific events, and generate a summary report. + +## Task + +Write a Bash script that automates the process of analyzing log files and generating a daily summary report. The script should perform the following steps: + +1. **Input:** The script should take the path to the log file as a command-line argument. + +2. **Error Count:** Analyze the log file and count the number of error messages. An error message can be identified by a specific keyword (e.g., "ERROR" or "Failed"). Print the total error count. + +3. **Critical Events:** Search for lines containing the keyword "CRITICAL" and print those lines along with the line number. + +4. **Top Error Messages:** Identify the top 5 most common error messages and display them along with their occurrence count. + +5. **Summary Report:** Generate a summary report in a separate text file. The report should include: + - Date of analysis + - Log file name + - Total lines processed + - Total error count + - Top 5 error messages with their occurrence count + - List of critical events with line numbers + +
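A condensed sketch of the counting logic behind the screenshots below (the keywords follow the task; the report file name is illustrative):

```bash
#!/bin/bash
# log_analyzer.sh - sketch of the core steps (report name is illustrative)
log="${1:?Usage: $0 <logfile>}"
[ -f "$log" ] || { echo "No such file: $log" >&2; exit 1; }

report="summary_$(date +%F).txt"
{
  echo "Date of analysis : $(date)"
  echo "Log file         : $log"
  echo "Lines processed  : $(wc -l < "$log")"
  echo "Error count      : $(grep -cE 'ERROR|Failed' "$log")"
  echo "Top 5 error messages:"
  grep -E 'ERROR|Failed' "$log" | sort | uniq -c | sort -rn | head -5
  echo "Critical events (with line numbers):"
  grep -n 'CRITICAL' "$log"
} | tee "$report"   # print to the terminal and save the summary report
```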

**Answer**

- **First created a folder and then a log file.**

  ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day10/image/task1.png)

- **Bash Code for Reference.**

  ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day10/image/task2.png)

- **Output**
+ + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day10/image/output.png) + +6. **Optional Enhancement:** Add a feature to automatically archive or move processed log files to a designated directory after analysis. + +## Tips + +- Use `grep`, `awk`, and other command-line tools to process the log file. +- Utilize arrays or associative arrays to keep track of error messages and their counts. +- Use appropriate error handling to handle cases where the log file doesn't exist or other issues arise. + +## Sample Log File + +A sample log file named `sample_log.log` has been provided in the same directory as this challenge file. You can use this file to test your script or use [this](https://github.com/logpai/loghub/blob/master/Zookeeper/Zookeeper_2k.log) + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day11/README.md b/2024/day11/README.md new file mode 100644 index 0000000000..192cbbb6c8 --- /dev/null +++ b/2024/day11/README.md @@ -0,0 +1,68 @@ +# Day 11 Task: Error Handling in Shell Scripting + +## Learning Objectives +Understanding how to handle errors in shell scripts is crucial for creating robust and reliable scripts. Today, you'll learn how to use various techniques to handle errors effectively in your bash scripts. + +## Topics to Cover +1. **Understanding Exit Status**: Every command returns an exit status (0 for success and non-zero for failure). Learn how to check and use exit statuses. +2. **Using `if` Statements for Error Checking**: Learn how to use `if` statements to handle errors. +3. **Using `trap` for Cleanup**: Understand how to use the `trap` command to handle unexpected errors and perform cleanup. +4. **Redirecting Errors**: Learn how to redirect errors to a file or `/dev/null`. +5. **Creating Custom Error Messages**: Understand how to create meaningful error messages for debugging and user information. + +## Tasks + +### Task 1: Checking Exit Status +- Write a script that attempts to create a directory and checks if the command was successful. If not, print an error message. + +### Task 2: Using `if` Statements for Error Checking +- Modify the script from Task 1 to include more commands (e.g., creating a file inside the directory) and use `if` statements to handle errors at each step. + +### Task 3: Using `trap` for Cleanup +- Write a script that creates a temporary file and sets a `trap` to delete the file if the script exits unexpectedly. + +### Task 4: Redirecting Errors +- Write a script that tries to read a non-existent file and redirects the error message to a file called `error.log`. + +### Task 5: Creating Custom Error Messages +- Modify one of the previous scripts to include custom error messages that provide more context about what went wrong. + +## Example Scripts + +### Example 1: Checking Exit Status +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Failed to create directory /tmp/mydir" +fi +``` + +### Example 2: Trap +```bash +#!/bin/bash +tempfile=$(mktemp) +trap "rm -f $tempfile" EXIT + +echo "This is a temporary file." > $tempfile +cat $tempfile +# Simulate an error +exit 1 +``` + +### Example 3: Redirecting Errors +```bash +#!/bin/bash +cat non_existent_file.txt 2> error.log +``` + +### Example 4: Custom Error Messages +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Error: Directory /tmp/mydir could not be created. Check if you have the necessary permissions." 
+fi +``` + +[← Previous Day](../day10/README.md) | [Next Day →](../day12/README.md) diff --git a/2024/day11/image/task1.png b/2024/day11/image/task1.png new file mode 100644 index 0000000000..5b22fef75d Binary files /dev/null and b/2024/day11/image/task1.png differ diff --git a/2024/day11/image/task2.png b/2024/day11/image/task2.png new file mode 100644 index 0000000000..568e60540a Binary files /dev/null and b/2024/day11/image/task2.png differ diff --git a/2024/day11/image/task3.png b/2024/day11/image/task3.png new file mode 100644 index 0000000000..a79ed5cdb4 Binary files /dev/null and b/2024/day11/image/task3.png differ diff --git a/2024/day11/image/task4.png b/2024/day11/image/task4.png new file mode 100644 index 0000000000..60f6fbe28b Binary files /dev/null and b/2024/day11/image/task4.png differ diff --git a/2024/day11/image/task5.png b/2024/day11/image/task5.png new file mode 100644 index 0000000000..8a1b45091d Binary files /dev/null and b/2024/day11/image/task5.png differ diff --git a/2024/day11/image/task5ka1.png b/2024/day11/image/task5ka1.png new file mode 100644 index 0000000000..ea7a3b5d21 Binary files /dev/null and b/2024/day11/image/task5ka1.png differ diff --git a/2024/day11/solution.md b/2024/day11/solution.md new file mode 100644 index 0000000000..55dd5dfd94 --- /dev/null +++ b/2024/day11/solution.md @@ -0,0 +1,92 @@ +# Day 11 Answers: Error Handling in Shell Scripting + +## Learning Objectives +Understanding how to handle errors in shell scripts is crucial for creating robust and reliable scripts. Today, you'll learn how to use various techniques to handle errors effectively in your bash scripts. + +## Topics to Cover +1. **Understanding Exit Status**: Every command returns an exit status (0 for success and non-zero for failure). Learn how to check and use exit statuses. +2. **Using `if` Statements for Error Checking**: Learn how to use `if` statements to handle errors. +3. **Using `trap` for Cleanup**: Understand how to use the `trap` command to handle unexpected errors and perform cleanup. +4. **Redirecting Errors**: Learn how to redirect errors to a file or `/dev/null`. +5. **Creating Custom Error Messages**: Understand how to create meaningful error messages for debugging and user information. + +## Tasks with Answers + +### Task 1: Checking Exit Status +- Write a script that attempts to create a directory and checks if the command was successful. If not, print an error message. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task1.png) + +### Task 2: Using `if` Statements for Error Checking +- Modify the script from Task 1 to include more commands (e.g., creating a file inside the directory) and use `if` statements to handle errors at each step. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task2.png) + +### Task 3: Using `trap` for Cleanup +- Write a script that creates a temporary file and sets a `trap` to delete the file if the script exits unexpectedly. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task3.png) + +### Task 4: Redirecting Errors +- Write a script that tries to read a non-existent file and redirects the error message to a file called `error.log`. 
+ +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task4.png) + +### Task 5: Creating Custom Error Messages +- Modify one of the previous scripts to include custom error messages that provide more context about what went wrong. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task5.png) + + - **I also intentionally created an error by not creating the file, so it showed me this error. I did this for reference.** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day11/image/task5ka1.png) + +## Example Scripts + +### Example 1: Checking Exit Status +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Failed to create directory /tmp/mydir" +fi +``` + +### Example 2: Trap +```bash +#!/bin/bash +tempfile=$(mktemp) +trap "rm -f $tempfile" EXIT + +echo "This is a temporary file." > $tempfile +cat $tempfile +# Simulate an error +exit 1 +``` + +### Example 3: Redirecting Errors +```bash +#!/bin/bash +cat non_existent_file.txt 2> error.log +``` + +### Example 4: Custom Error Messages +```bash +#!/bin/bash +mkdir /tmp/mydir +if [ $? -ne 0 ]; then + echo "Error: Directory /tmp/mydir could not be created. Check if you have the necessary permissions." +fi +``` + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day12/README.md b/2024/day12/README.md new file mode 100644 index 0000000000..342e048218 --- /dev/null +++ b/2024/day12/README.md @@ -0,0 +1,26 @@ +# Day 12 Task: Deep Dive in Git & GitHub for DevOps Engineers + +## Find the answers by your understandings (Shouldn't be copied from the internet & use hand-made diagrams) of the questions below and write a blog on it. + +1. What is Git and why is it important? +2. What is the difference between Main Branch and Master Branch? +3. Can you explain the difference between Git and GitHub? +4. How do you create a new repository on GitHub? +5. What is the difference between a local & remote repository? How to connect local to remote? + +## Tasks + +### Task 1: +- Set your user name and email address, which will be associated with your commits. + +### Task 2: +- Create a repository named "DevOps" on GitHub. +- Connect your local repository to the repository on GitHub. +- Create a new file in Devops/Git/Day-02.txt & add some content to it. +- Push your local commits to the repository on GitHub. + +Reference: [YouTube Video](https://youtu.be/AT1uxOLsCdk) + +Note: These steps assume that you have already installed Git on your computer and have created a GitHub account. If you need help with these prerequisites, you can refer to the [guide](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). 
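The two tasks condense to a handful of commands; a sketch with placeholder identity values and repository URL:

```bash
# Task 1: identity that will be associated with your commits (placeholder values)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Task 2: local repo -> GitHub (the URL is a placeholder)
git init DevOps && cd DevOps
mkdir -p Devops/Git
echo "Hello #90DaysOfDevOps" > Devops/Git/Day-02.txt
git add Devops/Git/Day-02.txt
git commit -m "Add Day-02.txt"
git remote add origin https://github.com/<your-username>/DevOps.git
git push -u origin main   # or master, depending on your default branch
```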
+ +[← Previous Day](../day11/README.md) | [Next Day →](../day13/README.md) diff --git a/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png b/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png new file mode 100644 index 0000000000..a1718a646f Binary files /dev/null and b/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png differ diff --git a/2024/day12/image/create_a_new_file.png b/2024/day12/image/create_a_new_file.png new file mode 100644 index 0000000000..41f14eeebc Binary files /dev/null and b/2024/day12/image/create_a_new_file.png differ diff --git a/2024/day12/image/create_a_new_repository.png b/2024/day12/image/create_a_new_repository.png new file mode 100644 index 0000000000..922ce166a9 Binary files /dev/null and b/2024/day12/image/create_a_new_repository.png differ diff --git a/2024/day12/image/gitui.png b/2024/day12/image/gitui.png new file mode 100644 index 0000000000..26479aee99 Binary files /dev/null and b/2024/day12/image/gitui.png differ diff --git a/2024/day12/image/gitui1.png b/2024/day12/image/gitui1.png new file mode 100644 index 0000000000..6c7b26718a Binary files /dev/null and b/2024/day12/image/gitui1.png differ diff --git a/2024/day12/image/gitui2.png b/2024/day12/image/gitui2.png new file mode 100644 index 0000000000..ab87db3a81 Binary files /dev/null and b/2024/day12/image/gitui2.png differ diff --git a/2024/day12/image/push_repository.png b/2024/day12/image/push_repository.png new file mode 100644 index 0000000000..070391b49a Binary files /dev/null and b/2024/day12/image/push_repository.png differ diff --git a/2024/day12/image/set_user_name_and_email_address.png b/2024/day12/image/set_user_name_and_email_address.png new file mode 100644 index 0000000000..e01f65ad95 Binary files /dev/null and b/2024/day12/image/set_user_name_and_email_address.png differ diff --git a/2024/day12/solution.md b/2024/day12/solution.md new file mode 100644 index 0000000000..5d6c2884df --- /dev/null +++ b/2024/day12/solution.md @@ -0,0 +1,94 @@ +# Day 12 Answers: Deep Dive in Git & GitHub for DevOps Engineers + +## Find the answers by your understandings (Shouldn't be copied from the internet & use hand-made diagrams) of the questions below and write a blog on it. + +1. What is Git and why is it important? + - **Git** is a distributed version control system that allows multiple developers to work on a project simultaneously without overwriting each other's changes. It helps track changes in source code during software development, enabling collaboration, version control, and efficient management of code changes. + + **Importance of Git:** + - **Version Control:** Keeps track of changes, allowing you to revert to previous versions if needed. + - **Collaboration:** Multiple developers can work on the same project simultaneously. + - **Branching:** Allows you to work on different features or fixes in isolation. + - **Backup::** Acts as a backup of your codebase. + +2. What is the difference between Main Branch and Master Branch? + - Traditionally, **master** was the default branch name in Git repositories. However, many communities have moved to using **main** as the default branch name to be more inclusive and avoid potentially offensive terminology. + + - Main Branch vs. Master Branch: + - **Main Branch:** The new default branch name used in many modern repositories. + - **Master Branch:** The traditional default branch name used in older repositories. 
3. Can you explain the difference between Git and GitHub?
   - **Git** is a version control system, while **GitHub** is a web-based platform that uses Git for version control and adds collaboration features like pull requests, issue tracking, and project management.
   - Git:
     - Command-line tool.
     - Manages local repositories.
   - GitHub:
     - Hosting service for Git repositories.
     - Adds collaboration tools and user interfaces.

4. How do you create a new repository on GitHub?
   1. Go to GitHub.
   2. Click on the **+** icon in the top right corner.
   3. Select **New repository**.
   4. Enter a repository name (e.g., "DevOps").
   5. Click **Create repository**.

5. What is the difference between a local & remote repository? How to connect local to remote?
   - Local Repository:
     - Stored on your local machine.
     - Contains your working directory and Git database.
   - Remote Repository:
     - Hosted on a server (e.g., GitHub).
     - Allows collaboration with other developers.
   - Connecting Local to Remote:
     1. Initialize a local repository: `git init`
     2. Add a remote: `git remote add origin <remote-url>`

## Tasks with Answers

### Task 1:
- Set your user name and email address, which will be associated with your commits.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/set_user_name_and_email_address.png)

### Task 2:
- Create a repository named "DevOps" on GitHub.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/create_a_new_repository.png)

- Connect your local repository to the repository on GitHub.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/connect_your_local_repository_to_the_repository_on_github.png)

- Create a new file in Devops/Git/Day-12.txt & add some content to it.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/create_a_new_file.png)

- Push your local commits to the repository on GitHub.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/push_repository.png)

**After that, if you check it on GitHub, the output will look like this:**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/gitui.png)

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/gitui1.png)

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day12/image/gitui2.png)


[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/)
diff --git a/2024/day13/README.md b/2024/day13/README.md
new file mode 100644
index 0000000000..01595118d0
--- /dev/null
+++ b/2024/day13/README.md
@@ -0,0 +1,99 @@
# Day 13 Task: Advance Git & GitHub for DevOps Engineers

## Git Branching

Branches are a core concept in Git that allow you to isolate development work without affecting other parts of your repository. Each repository has one default branch and can have multiple other branches. You can merge a branch into another branch using a pull request.

Branches let you develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository.

## Git Revert and Reset

Git reset and git revert are two commonly used commands that allow you to remove or edit changes you’ve made in the code in previous commits. Both commands can be very useful in different scenarios.
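Since the two are easy to mix up, here is the contrast in command form (illustrative refs; note that `--hard` discards local changes):

```bash
# Undo the most recent commit in two different ways:
git revert HEAD          # safe on shared branches: adds a new commit that undoes HEAD
git reset --hard HEAD~1  # rewrites history: moves the branch pointer back one commit
```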
+ +## Git Rebase and Merge + +### What Is Git Rebase? + +Git rebase is a command that lets users integrate changes from one branch to another, and the commit history is modified once the action is complete. Git rebase helps keep a clean project history. + +### What Is Git Merge? + +Git merge is a command that allows developers to merge Git branches while keeping the logs of commits on branches intact. Even though merging and rebasing do similar things, they handle commit logs differently. + +For a better understanding of Git Rebase and Merge, check out this [article](https://www.simplilearn.com/git-rebase-vs-merge-article). + +## Tasks + +### Task 1: Feature Development with Branches + +1. **Create a Branch and Add a Feature:** + - Add a text file called `version01.txt` inside the `Devops/Git/` directory with “This is the first feature of our application” written inside. + - Create a new branch from `master`. + ```bash + git checkout -b dev + ``` + - Commit your changes with a message reflecting the added feature. + ```bash + git add Devops/Git/version01.txt + git commit -m "Added new feature" + ``` + +2. **Push Changes to GitHub:** + - Push your local commits to the repository on GitHub. + ```bash + git push origin dev + ``` + +3. **Add More Features with Separate Commits:** + - Update `version01.txt` with the following lines, committing after each change: + - 1st line: `This is the bug fix in development branch` + ```bash + echo "This is the bug fix in development branch" >> Devops/Git/version01.txt + git commit -am "Added feature2 in development branch" + ``` + - 2nd line: `This is gadbad code` + ```bash + echo "This is gadbad code" >> Devops/Git/version01.txt + git commit -am "Added feature3 in development branch" + ``` + - 3rd line: `This feature will gadbad everything from now` + ```bash + echo "This feature will gadbad everything from now" >> Devops/Git/version01.txt + git commit -am "Added feature4 in development branch" + ``` + +4. **Restore the File to a Previous Version:** + - Revert or reset the file to where the content should be “This is the bug fix in development branch”. + ```bash + git revert HEAD~2 + ``` + +### Task 2: Working with Branches + +1. **Demonstrate Branches:** + - Create 2 or more branches and take screenshots to show the branch structure. + +2. **Merge Changes into Master:** + - Make some changes to the `dev` branch and merge it into `master`. + ```bash + git checkout master + git merge dev + ``` + +3. **Practice Rebase:** + - Try rebasing and observe the differences. + ```bash + git rebase master + ``` + +## Note: + +Following best practices for branching is important. Check out these [best practices](https://www.flagship.io/git-branching-strategies/) that the industry follows. + +Simple Reference on branching: [video](https://youtu.be/NzjK9beT_CY) + +Advanced Reference on branching: [video](https://youtu.be/7xhkEQS3dXw) + +Share your learnings from this task on LinkedIn using #90DaysOfDevOps Challenge. Happy Learning! 
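For Task 2, a convenient one-liner for producing the branch-structure screenshots (works in any repository):

```bash
# ASCII graph of all branches, one commit per line, with branch/tag labels
git log --oneline --graph --all --decorate
```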
+ +[← Previous Day](../day12/README.md) | [Next Day →](../day14/README.md) diff --git a/2024/day13/image/1 Create a Branch and Add a Feature.png b/2024/day13/image/1 Create a Branch and Add a Feature.png new file mode 100644 index 0000000000..e8eb507afa Binary files /dev/null and b/2024/day13/image/1 Create a Branch and Add a Feature.png differ diff --git a/2024/day13/image/10 Screenshot of branch structure.png b/2024/day13/image/10 Screenshot of branch structure.png new file mode 100644 index 0000000000..f5977c5c15 Binary files /dev/null and b/2024/day13/image/10 Screenshot of branch structure.png differ diff --git a/2024/day13/image/11 Merge Changes into Master_main.png b/2024/day13/image/11 Merge Changes into Master_main.png new file mode 100644 index 0000000000..7f55293ba6 Binary files /dev/null and b/2024/day13/image/11 Merge Changes into Master_main.png differ diff --git a/2024/day13/image/12 Practice Rebase.png b/2024/day13/image/12 Practice Rebase.png new file mode 100644 index 0000000000..58ba6de2e3 Binary files /dev/null and b/2024/day13/image/12 Practice Rebase.png differ diff --git a/2024/day13/image/2 Create a new branch.png b/2024/day13/image/2 Create a new branch.png new file mode 100644 index 0000000000..ea4441ec7e Binary files /dev/null and b/2024/day13/image/2 Create a new branch.png differ diff --git a/2024/day13/image/3 Commit your changes with a message reflecting.png b/2024/day13/image/3 Commit your changes with a message reflecting.png new file mode 100644 index 0000000000..9f6cdcffda Binary files /dev/null and b/2024/day13/image/3 Commit your changes with a message reflecting.png differ diff --git a/2024/day13/image/4 Push your local commits to the repository on GitHub.png b/2024/day13/image/4 Push your local commits to the repository on GitHub.png new file mode 100644 index 0000000000..510598ce73 Binary files /dev/null and b/2024/day13/image/4 Push your local commits to the repository on GitHub.png differ diff --git a/2024/day13/image/5 This is the bug fix in development branch.png b/2024/day13/image/5 This is the bug fix in development branch.png new file mode 100644 index 0000000000..642b6da7d0 Binary files /dev/null and b/2024/day13/image/5 This is the bug fix in development branch.png differ diff --git a/2024/day13/image/6 This is gadbad code.png b/2024/day13/image/6 This is gadbad code.png new file mode 100644 index 0000000000..c0727a3fa1 Binary files /dev/null and b/2024/day13/image/6 This is gadbad code.png differ diff --git a/2024/day13/image/7 This feature will gadbad everything from now.png b/2024/day13/image/7 This feature will gadbad everything from now.png new file mode 100644 index 0000000000..ee362b2ab5 Binary files /dev/null and b/2024/day13/image/7 This feature will gadbad everything from now.png differ diff --git a/2024/day13/image/8 Restore the File to a Previous Version.png b/2024/day13/image/8 Restore the File to a Previous Version.png new file mode 100644 index 0000000000..cf13f6b475 Binary files /dev/null and b/2024/day13/image/8 Restore the File to a Previous Version.png differ diff --git a/2024/day13/image/9 Create 2 or more branches.png b/2024/day13/image/9 Create 2 or more branches.png new file mode 100644 index 0000000000..b3e0ef69b3 Binary files /dev/null and b/2024/day13/image/9 Create 2 or more branches.png differ diff --git a/2024/day13/solution.md b/2024/day13/solution.md new file mode 100644 index 0000000000..ab95d67568 --- /dev/null +++ b/2024/day13/solution.md @@ -0,0 +1,140 @@ +# Day 13 Answers: Advance Git & GitHub for DevOps 
Engineers + +## Git Branching + +Branches are a core concept in Git that allow you to isolate development work without affecting other parts of your repository. Each repository has one default branch, and can have multiple other branches. You can merge a branch into another branch using a pull request. + +Branches let you develop features, fix bugs, or safely experiment with new ideas in a contained area of your repository. + +## Git Revert and Reset + +Git reset and git revert are two commonly used commands that allow you to remove or edit changes you’ve made in the code in previous commits. Both commands can be very useful in different scenarios. + +## Git Rebase and Merge + +### What Is Git Rebase? + +Git rebase is a command that lets users integrate changes from one branch to another, and the commit history is modified once the action is complete. Git rebase helps keep a clean project history. + +### What Is Git Merge? + +Git merge is a command that allows developers to merge Git branches while keeping the logs of commits on branches intact. Even though merging and rebasing do similar things, they handle commit logs differently. + +For a better understanding of Git Rebase and Merge, check out this [article](https://www.simplilearn.com/git-rebase-vs-merge-article). + +## Tasks with Answers + +### Task 1: Feature Development with Branches + +1. **Create a Branch and Add a Feature:** + - Add a text file called `version01.txt` inside the `Devops/Git/` directory with “This is the first feature of our application” written inside. + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/1%20Create%20a%20Branch%20and%20Add%20a%20Feature.png) + + - Create a new branch from `master`. + ```bash + git checkout -b dev + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/2%20Create%20a%20new%20branch.png) + + - Commit your changes with a message reflecting the added feature. + ```bash + git add Devops/Git/version01.txt + git commit -m "Added new feature" + ``` + +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/3%20Commit%20your%20changes%20with%20a%20message%20reflecting.png) + +2. **Push Changes to GitHub:** + - Push your local commits to the repository on GitHub. + ```bash + git push origin dev + ``` +**Answer** + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/4%20Push%20your%20local%20commits%20to%20the%20repository%20on%20GitHub.png) + +3. 
**Add More Features with Separate Commits:**
   - Update `version01.txt` with the following lines, committing after each change:
     - 1st line: `This is the bug fix in development branch`
       ```bash
       echo "This is the bug fix in development branch" >> Devops/Git/version01.txt
       git commit -am "Added feature2 in development branch"
       ```
**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/5%20This%20is%20the%20bug%20fix%20in%20development%20branch.png)

     - 2nd line: `This is gadbad code`
       ```bash
       echo "This is gadbad code" >> Devops/Git/version01.txt
       git commit -am "Added feature3 in development branch"
       ```
**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/6%20This%20is%20gadbad%20code.png)

     - 3rd line: `This feature will gadbad everything from now`
       ```bash
       echo "This feature will gadbad everything from now" >> Devops/Git/version01.txt
       git commit -am "Added feature4 in development branch"
       ```
**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/7%20This%20feature%20will%20gadbad%20everything%20from%20now.png)

4. **Restore the File to a Previous Version:**
   - Revert or reset the file to where the content should be “This is the bug fix in development branch”.
     ```bash
     git revert HEAD~2..HEAD
     ```
**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/8%20Restore%20the%20File%20to%20a%20Previous%20Version.png)

This reverts the last two commits, removing the "gadbad code" and "gadbad everything" lines. (Note that `git revert HEAD~2` on its own would revert only that single commit; the range form `HEAD~2..HEAD` is what undoes the last two.)

### Task 2: Working with Branches

1. **Demonstrate Branches:**
   - Create 2 or more branches and take screenshots to show the branch structure.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/9%20Create%202%20or%20more%20branches.png)

2. **Merge Changes into Master:**
   - Make some changes to the `dev` branch and merge it into `master`.
     ```bash
     git checkout master
     git merge dev
     ```
**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/11%20Merge%20Changes%20into%20Master_main.png)

   - Screenshot of branch structure:
     - To visualize the branch structure, you can use `git log` with graph options or a graphical tool like GitKraken.

**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/10%20Screenshot%20of%20branch%20structure.png)

3. **Practice Rebase:**
   - Try rebasing and observe the differences.
     ```bash
     git rebase master
     ```
**Answer**

![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day13/image/12%20Practice%20Rebase.png)

   - During a rebase, Git re-applies the commits from the current branch (here, `dev`) on top of the target branch (`master`), which results in a linear commit history.
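In command form, the usual rebase flow looks roughly like this (`<file>` is a placeholder):

```bash
git checkout dev
git rebase master            # replay dev's commits on top of master
# if a conflict stops the rebase:
#   fix the file, then: git add <file> && git rebase --continue
#   or abandon with:    git rebase --abort
```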
+ +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day14/Git_cheat_sheet_rajat.pdf b/2024/day14/Git_cheat_sheet_rajat.pdf new file mode 100644 index 0000000000..db74f1d531 Binary files /dev/null and b/2024/day14/Git_cheat_sheet_rajat.pdf differ diff --git a/2024/day14/Linux_cheat_sheet_rajat.pdf b/2024/day14/Linux_cheat_sheet_rajat.pdf new file mode 100644 index 0000000000..3a9dd4008a Binary files /dev/null and b/2024/day14/Linux_cheat_sheet_rajat.pdf differ diff --git a/2024/day14/README.md b/2024/day14/README.md new file mode 100644 index 0000000000..597a8ed666 --- /dev/null +++ b/2024/day14/README.md @@ -0,0 +1,32 @@ +# Day 14 Task: Create a Linux & Git-GitHub Cheat Sheet + +## Finally!! 🎉 + +You have completed the Linux & Git-GitHub hands-on tasks, and I hope you have learned something interesting from it. 🙌 + +Now, let's create an interesting 😉 assignment that will not only help you in the future but also benefit the DevOps community! + +## Task: Create a Cheat Sheet + +Let’s make a well-articulated and documented **cheat sheet** with all the commands you learned so far in Linux and Git-GitHub, along with a brief description of their usage. + +Show us your knowledge mixed with your creativity 😎. + +### Guidelines + +- The cheat sheet should be unique and reflect your understanding. +- Include all the important commands you have learned. +- Provide a brief description of each command's usage. +- Make it visually appealing and easy to understand. + +### Reference + +For your reference, check out this [cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf). However, ensure that your cheat sheet is unique. + +### Share Your Work + +Post your cheat sheet on LinkedIn and spread the knowledge. 😃 + +**Happy Learning! :)** + +[← Previous Day](../day13/README.md) | [Next Day →](../day15/README.md) diff --git a/2024/day14/solution.md b/2024/day14/solution.md new file mode 100644 index 0000000000..ab0ff2c138 --- /dev/null +++ b/2024/day14/solution.md @@ -0,0 +1,81 @@ +# Day 14 Answers: Create a Linux & Git-GitHub Cheat Sheet + +## Finally!! 🎉 + +You have completed the Linux & Git-GitHub hands-on tasks, and I hope you have learned something interesting from it. 🙌 + +Now, let's create an interesting 😉 assignment that will not only help you in the future but also benefit the DevOps community! + +## Tasks with Answers: Create a Cheat Sheet + +Let’s make a well-articulated and documented **cheat sheet** with all the commands you learned so far in Linux and Git-GitHub, along with a brief description of their usage. + +Show us your knowledge mixed with your creativity 😎. + +### Guidelines + +- The cheat sheet should be unique and reflect your understanding. +- Include all the important commands you have learned. +- Provide a brief description of each command's usage. +- Make it visually appealing and easy to understand. + +## Linux Commands / Git Commands + +### File and Directory Management +- `ls` - Lists files and directories. +- `cd ` - Changes the directory. +- `pwd` - Prints current directory. +- `mkdir ` - Creates a new directory. +- `rm ` - Removes a file. +- `rm -r ` - Removes a directory and its contents. +- `cp ` - Copies files or directories. +- `mv ` - Moves or renames files or directories. +- `touch ` - Creates or updates a file. + +### Viewing and Editing Files +- `cat ` - Displays file content. +- `less ` - Views file content one screen at a time. +- `nano ` - Edits files using nano editor. +- `vim ` - Edits files using vim editor. 
+
+### System Information
+- `uname -a` - Displays system information.
+- `top` - Shows real-time system processes.
+- `df -h` - Displays disk usage.
+- `free -h` - Displays memory usage.
+
+### Permissions
+- `chmod <permissions> <file>` - Changes file permissions.
+- `chown <owner>:<group> <file>` - Changes file owner and group.
+
+### Networking
+- `ping <host>` - Sends ICMP echo requests.
+- `ifconfig` - Displays or configures network interfaces.
+
+## Git Commands
+
+### Configuration
+- `git config --global user.name "Your Name"` - Sets global user name.
+- `git config --global user.email "your.email@example.com"` - Sets global user email.
+
+### Repository Management
+- `git init` - Initializes a new repository.
+- `git clone <repository-url>` - Clones a repository.
+
+### Basic Operations
+- `git status` - Shows working tree status.
+- `git add <file>` - Stages changes.
+- `git commit -m "message"` - Commits changes.
+- `git push` - Pushes changes to remote repository.
+- `git checkout -b dev` - Creates a new branch `dev` from `master` and switches to it.
+- `git checkout <branch>` - Switches to another branch and checks it out into your working directory.
+- `git log --oneline --graph --all` - Visualizes the branch structure.
+- `git push origin dev` - Pushes the `dev` branch to GitHub.
+- `git merge dev` - Merges `dev` into the current branch (e.g., `master`/`main`).
+- `git log` - Shows all commits in the current branch’s history.
+
+### Reference
+
+For your reference, check out this [cheat sheet](https://education.github.com/git-cheat-sheet-education.pdf). However, ensure that your cheat sheet is unique.
+
+[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/)
\ No newline at end of file
diff --git a/2024/day15/README.md b/2024/day15/README.md
new file mode 100644
index 0000000000..51132d5ef0
--- /dev/null
+++ b/2024/day15/README.md
@@ -0,0 +1,33 @@
+# Day 15 Task: Basics of Python for DevOps Engineers
+
+## Hello Dosto (Hello Friends)
+
+Let's start with the basics of Python, as this is also important for DevOps Engineers to build logic and programs.
+
+### What is Python?
+
+- Python is an open-source, general-purpose, high-level, and object-oriented programming language.
+- It was created by **Guido van Rossum**.
+- Python offers vast libraries and various frameworks like Django, TensorFlow, Flask, Pandas, Keras, etc.
+
+### How to Install Python?
+
+You can install Python on your system, whether it is Windows, macOS, Ubuntu, CentOS, etc. Below are the links for the installation:
+
+- [Windows Installation](https://www.python.org/downloads/)
+- Ubuntu: `apt-get install python3.6` (on recent Ubuntu releases, `sudo apt-get install python3` installs the current Python 3)
+
+## Tasks
+
+### Task 1:
+
+1. Install Python on your respective OS, and check the version.
+2. Read about different data types in Python.
+
+You can get the complete playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM) 🙌
+
+Don't forget to share your journey over LinkedIn. Let the community know that you have started another chapter of your journey.
+
+**Happy Learning, Ruko Mat Phod do! ("Don't stop, crush it!") 😃**
+
+[← Previous Day](../day14/README.md) | [Next Day →](../day16/README.md)
diff --git a/2024/day15/image/Installation_Python.png b/2024/day15/image/Installation_Python.png
new file mode 100644
index 0000000000..b6813fdb25
Binary files /dev/null and b/2024/day15/image/Installation_Python.png differ
diff --git a/2024/day15/solution.md b/2024/day15/solution.md
new file mode 100644
index 0000000000..b7cc4fc986
--- /dev/null
+++ b/2024/day15/solution.md
@@ -0,0 +1,85 @@
+# Day 15 Answers: Basics of Python for DevOps Engineers
+
+## What is Python?
+
+Python is an open-source, general-purpose, high-level, and object-oriented programming language created by Guido van Rossum. It has a vast ecosystem of libraries and frameworks, such as Django, TensorFlow, Flask, Pandas, Keras, and many more.
+
+## How to Install Python
+
+### Windows Installation
+
+1. Go to the [Python website](https://www.python.org/downloads/).
+2. Download the latest version of Python.
+3. Run the installer and follow the instructions.
+4. Check the installation by opening a command prompt and typing:
+   ```bash
+   python --version
+   ```
+
+### Ubuntu Installation
+   - `sudo apt-get update`
+   - `sudo apt-get install python3.6`
+
+### macOS Installation
+
+1. Download the installer from the [Python website](https://www.python.org/downloads/macos/).
+2. Follow the installation instructions.
+3. Check the installation by opening a terminal and typing:
+   - `python3 --version`
+
+## Tasks with Answers
+
+### Task 1:
+
+1. Install Python on your respective OS, and check the version.
+
+**Answer**
+
+![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day15/image/Installation_Python.png)
+
+### 2. Read about different data types in Python.
+   - Python supports several data types, which can be categorized as follows:
+   - **Numeric Types:**
+     - **int:** Integer values
+       - `x = 10`
+
+     - **float:** Floating-point values
+       - `y = 10.5`
+
+     - **complex:** Complex numbers
+       - `z = 3 + 5j`
+
+   - **Sequence Types:**
+     - **str:** String values
+       - `name = "bhavin"`
+
+     - **list:** Ordered collection of items
+       - `fruits = ["apple", "banana", "cherry"]`
+
+     - **tuple:** Ordered, immutable collection of items
+       - `coordinates = (10.0, 20.0)`
+
+   - **Mapping Types:**
+     - **dict:** Key-value pairs
+       - `person = {"name": "bhavin", "age": 24}`
+
+   - **Set Types:**
+     - **set:** Unordered collection of unique items
+       - `unique_numbers = {1, 2, 3, 4, 5}`
+
+     - **frozenset:** Immutable set
+       - `frozen_numbers = frozenset([1, 2, 3, 4, 5])`
+
+   - **Boolean Type:**
+     - **bool:** Boolean values
+       - `is_active = True`
+
+   - **None Type:**
+     - **NoneType:** Represents the absence of a value
+       - `data = None`
+
+
+You can get the complete playlist [here](https://www.youtube.com/watch?v=abPgj_3hzVY&list=PLlfy9GnSVerS_L5z0COaF7rsbgWmJXTOM) 🙌
+
+**Happy Learning, Ruko Mat Phod do! ("Don't stop, crush it!") 😃**
+
+[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/)
diff --git a/2024/day16/README.md b/2024/day16/README.md
new file mode 100644
index 0000000000..1c353ede6d
--- /dev/null
+++ b/2024/day16/README.md
@@ -0,0 +1,32 @@
+# Day 16 Task: Docker for DevOps Engineers
+
+### Docker
+
+Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
+
+## Tasks
+
+As you have already installed Docker in previous tasks, now is the time to run Docker commands.
+
+- Use the `docker run` command to start a new container and interact with it through the command line. [Hint: `docker run hello-world`]
+
+- Use the `docker inspect` command to view detailed information about a container or image.
+
+- Use the `docker port` command to list the port mappings for a container.
+
+- Use the `docker stats` command to view resource usage statistics for one or more containers.
+ +- Use the `docker top` command to view the processes running inside a container. + +- Use the `docker save` command to save an image to a tar archive. + +- Use the `docker load` command to load an image from a tar archive. + +These tasks involve simple operations that can be used to manage images and containers. + +For reference, you can watch this video: +https://youtu.be/Tevxhn6Odc8 + +You can post on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. Happy Learning :) + +[← Previous Day](../day15/README.md) | [Next Day →](../day17/README.md) diff --git a/2024/day16/image/1_Start_a_New_Container.png b/2024/day16/image/1_Start_a_New_Container.png new file mode 100644 index 0000000000..0e94004c2e Binary files /dev/null and b/2024/day16/image/1_Start_a_New_Container.png differ diff --git a/2024/day16/image/2_docker_inspect.png b/2024/day16/image/2_docker_inspect.png new file mode 100644 index 0000000000..727afa83c8 Binary files /dev/null and b/2024/day16/image/2_docker_inspect.png differ diff --git a/2024/day16/image/3_docker_port.png b/2024/day16/image/3_docker_port.png new file mode 100644 index 0000000000..ab925e2ba4 Binary files /dev/null and b/2024/day16/image/3_docker_port.png differ diff --git a/2024/day16/image/4_docker_stats.png b/2024/day16/image/4_docker_stats.png new file mode 100644 index 0000000000..80658299fe Binary files /dev/null and b/2024/day16/image/4_docker_stats.png differ diff --git a/2024/day16/image/5_docker_top.png b/2024/day16/image/5_docker_top.png new file mode 100644 index 0000000000..ed4a28ca2a Binary files /dev/null and b/2024/day16/image/5_docker_top.png differ diff --git a/2024/day16/image/6_docker_save.png b/2024/day16/image/6_docker_save.png new file mode 100644 index 0000000000..8d05ed92ee Binary files /dev/null and b/2024/day16/image/6_docker_save.png differ diff --git a/2024/day16/image/7_docker_load.png b/2024/day16/image/7_docker_load.png new file mode 100644 index 0000000000..6a2ab833f4 Binary files /dev/null and b/2024/day16/image/7_docker_load.png differ diff --git a/2024/day16/solution.md b/2024/day16/solution.md new file mode 100644 index 0000000000..82723386d6 --- /dev/null +++ b/2024/day16/solution.md @@ -0,0 +1,63 @@ +# Day 16 Answers: Docker for DevOps Engineers + +### Docker + +Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run. + +## Tasks with Answers + +As you have already installed Docker in previous tasks, now is the time to run Docker commands. + +### 1. Use the `docker run` command to start a new container and interact with it through the command line. [Hint: `docker run hello-world`] + +**Answer** + - This command runs the `hello-world` image, which prints a message confirming that Docker is working correctly. +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/1_Start_a_New_Container.png) + +### 2. Use the `docker inspect` command to view detailed information about a container or image. + +**Answer** + - View Detailed Information About a Container or Image: + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/2_docker_inspect.png) + +### 3. 
Use the `docker port` command to list the port mappings for a container. + +**Answer** + - This command maps port 8181 on the host to port 82 in the container and lists the port mappings. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/3_docker_port.png) + +### 4. Use the `docker stats` command to view resource usage statistics for one or more containers. + +**Answer** + - This command provides a live stream of resource usage statistics for all running containers. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/4_docker_stats.png) + +### 5. Use the `docker top` command to view the processes running inside a container. + +**Answer** + - This command lists the processes running inside the `my_container2` container. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/5_docker_top.png) + +### 6. Use the `docker save` command to save an image to a tar archive. + +**Answer** + - This command saves the `nginx` image to a tar archive named `my_image.tar`. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/6_docker_save.png) + +### 7. Use the `docker load` command to load an image from a tar archive. + +**Answer** + - This command loads the image from the `my_image.tar` archive into Docker. + +![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day16/image/7_docker_load.png) + +These tasks involve simple operations that can be used to manage images and containers. + +For reference, you can watch this video: [Docker Tutorial on AWS EC2 as DevOps Engineer // DevOps Project Bootcamp Day 2](https://youtu.be/Tevxhn6Odc8). + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day17/README.md b/2024/day17/README.md new file mode 100644 index 0000000000..fcb70606dc --- /dev/null +++ b/2024/day17/README.md @@ -0,0 +1,28 @@ +## Day 17 Task: Docker Project for DevOps Engineers + +### You people are doing just amazing in **#90daysofdevops**. Today's challenge is so special because you are going to do a DevOps project with Docker. Are you excited? 😍 + +# Dockerfile + +Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile. + +A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts. + +For more about Dockerfile, visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes). + +## Task + +- Create a Dockerfile for a simple web application (e.g. a Node.js or Python app) +- Build the image using the Dockerfile and run the container +- Verify that the application is working as expected by accessing it in a web browser +- Push the image to a public or private repository (e.g. Docker Hub) + +For a reference project, visit [here](https://youtu.be/Tevxhn6Odc8). + +If you want to dive further, watch this [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u). + +You can share your learning with everyone over LinkedIn and tag us along. 
😃 + +Happy Learning :) + +[← Previous Day](../day16/README.md) | [Next Day →](../day18/README.md) diff --git a/2024/day17/code.txt b/2024/day17/code.txt new file mode 100644 index 0000000000..0abb085a27 --- /dev/null +++ b/2024/day17/code.txt @@ -0,0 +1,49 @@ +root@Bhavin-Savaliya:~/flask-app# history + + 1 clear + 2 ls + 3 docker ps + 4 docker + 5 docker --version + 6 systemctl status docker + 7 clear + 8 ls + 9 mkdir flask-app + 10 ls + 11 cd flask-app + 12 vim app.py + 13 ls + 14 cat app.py + 15 clear + 16 ls + 17 vim requirements.txt + 18 ls + 19 cat requirements.txt + 20 clear + 21 ls + 22 vim Dockerfile + 23 ls + 24 cat Dockerfile + 25 clear + 26 ls + 27 docker build -t flask-app . + 28 RUN pip install + 29 apt pip install + 30 pip + 31 apt install python3-pip + 32 pip + 33 python --version + 34 python3 --version + 35 docker build -t flask-app . + 36 pip install -r requirements.txt + 37 ls + 38 vim requirements.txt + 39 cat requirements.txt + 40 clear + 41 docker build -t flask-app . + 42 docker run -d -p 5000:5000 flask-app + 43 docker tag flask-app bhavin1998/flask-app + 44 docker push bhavin1998/flask-app + 45 docker login + 46 docker push bhavin1998/flask-app + 47 history diff --git a/2024/day17/image/1_Create_a_new_directory.png b/2024/day17/image/1_Create_a_new_directory.png new file mode 100644 index 0000000000..d362313e63 Binary files /dev/null and b/2024/day17/image/1_Create_a_new_directory.png differ diff --git a/2024/day17/image/2_app_py.png b/2024/day17/image/2_app_py.png new file mode 100644 index 0000000000..972f4781ea Binary files /dev/null and b/2024/day17/image/2_app_py.png differ diff --git a/2024/day17/image/3_Create_a_requirements_file.png b/2024/day17/image/3_Create_a_requirements_file.png new file mode 100644 index 0000000000..1de8f30ace Binary files /dev/null and b/2024/day17/image/3_Create_a_requirements_file.png differ diff --git a/2024/day17/image/4_Create_a_Dockerfile.png b/2024/day17/image/4_Create_a_Dockerfile.png new file mode 100644 index 0000000000..1b3e55ceb8 Binary files /dev/null and b/2024/day17/image/4_Create_a_Dockerfile.png differ diff --git a/2024/day17/image/5_build_the_docker_image.png b/2024/day17/image/5_build_the_docker_image.png new file mode 100644 index 0000000000..385925cd99 Binary files /dev/null and b/2024/day17/image/5_build_the_docker_image.png differ diff --git a/2024/day17/image/6_Run_the_Container.png b/2024/day17/image/6_Run_the_Container.png new file mode 100644 index 0000000000..ce1021f569 Binary files /dev/null and b/2024/day17/image/6_Run_the_Container.png differ diff --git a/2024/day17/image/7_Verify_the_Application.png b/2024/day17/image/7_Verify_the_Application.png new file mode 100644 index 0000000000..399638cf61 Binary files /dev/null and b/2024/day17/image/7_Verify_the_Application.png differ diff --git a/2024/day17/image/8_Tag_the_Image.png b/2024/day17/image/8_Tag_the_Image.png new file mode 100644 index 0000000000..a74fc1826f Binary files /dev/null and b/2024/day17/image/8_Tag_the_Image.png differ diff --git a/2024/day17/image/9_Push_the_Image.png b/2024/day17/image/9_Push_the_Image.png new file mode 100644 index 0000000000..afdd53c17e Binary files /dev/null and b/2024/day17/image/9_Push_the_Image.png differ diff --git a/2024/day17/solution.md b/2024/day17/solution.md new file mode 100644 index 0000000000..37d3cf61bf --- /dev/null +++ b/2024/day17/solution.md @@ -0,0 +1,87 @@ +# Day 17 Answers: Docker Project for DevOps Engineers + +### You people are doing just amazing in **#90daysofdevops**. 
Today's challenge is so special because you are going to do a DevOps project with Docker. Are you excited? 😍 + +# Dockerfile + +Docker is a tool that makes it easy to run applications in containers. Containers are like small packages that hold everything an application needs to run. To create these containers, developers use something called a Dockerfile. + +A Dockerfile is like a set of instructions for making a container. It tells Docker what base image to use, what commands to run, and what files to include. For example, if you were making a container for a website, the Dockerfile might tell Docker to use an official web server image, copy the files for your website into the container, and start the web server when the container starts. + +For more about Dockerfile, visit [here](https://rushikesh-mashidkar.hashnode.dev/dockerfile-docker-compose-swarm-and-volumes). + +## Tasks with Answers + +**1. Create a Dockerfile for a simple web application (e.g. a Node.js or Python app)** + - **1. Create a Simple Flask Application** + - Create a new directory for your project and navigate into it: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/1_Create_a_new_directory.png) + + - Create a new file named `app.py` and add the following content: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/2_app_py.png) + + - Create a requirements file named `requirements.txt` and add the following content: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/3_Create_a_requirements_file.png) + + - **2. Create a Dockerfile** + - Create a file named `Dockerfile` in the same directory and add the following content: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/4_Create_a_Dockerfile.png) + +**2. Build the image using the Dockerfile and run the container** + - To build the Docker image, run the following command in the directory containing the Dockerfile: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/5_build_the_docker_image.png) + + - Run the Container + - To run the container, use the following command: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/6_Run_the_Container.png) + +**3. Verify that the application is working as expected by accessing it in a web browser** + - Open your web browser and navigate to `http://localhost:5000`. You should see the message "Hello, World!". + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/7_Verify_the_Application.png) + +**4. Push the image to a public or private repository (e.g. Docker Hub)** + - To push the image to Docker Hub, you need to tag it with your Docker Hub username and repository name, then push it. + - **1. Tag the Image** + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/8_Tag_the_Image.png) + + - **2. Push the Image** + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day17/image/9_Push_the_Image.png) + +For a reference project, visit [here](https://youtu.be/Tevxhn6Odc8). + +If you want to dive further, watch this [bootcamp](https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u). + +You can share your learning with everyone over LinkedIn and tag us along. 
😃 + +Happy Learning :) + +[Code for Reference](https://raw.githubusercontent.com/Bhavin213/90DaysOfDevOps/master/2024/day17/code.txt) + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day18/README.md b/2024/day18/README.md new file mode 100644 index 0000000000..94b0a0c850 --- /dev/null +++ b/2024/day18/README.md @@ -0,0 +1,42 @@ +# Day 18 Task: Docker for DevOps Engineers + +Till now you have created a Dockerfile and pushed it to the repository. Let's move forward and dig deeper into other Docker concepts. Today, let's study Docker Compose! 😃 + +## Docker Compose + +- Docker Compose is a tool that was developed to help define and share multi-container applications. +- With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down. +- Learn more about Docker Compose [here](https://tecadmin.net/tutorial/docker/docker-compose/). + +## What is YAML? + +- YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for "Yet Another Markup Language" or "YAML Ain’t Markup Language" (a recursive acronym), which emphasizes that YAML is for data, not documents. +- YAML is a popular programming language because it is human-readable and easy to understand. +- YAML files use a .yml or .yaml extension. +- Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml). + +## Task 1 + +Learn how to use the docker-compose.yml file to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file. + +[Sample docker-compose.yml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml) + +## Task 2 + +- Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint: Use the `usermod` command to give the user permission to Docker). Make sure you reboot the instance after giving permission to the user. +- Inspect the container's running processes and exposed ports using the `docker inspect` command. +- Use the `docker logs` command to view the container's log output. +- Use the `docker stop` and `docker start` commands to stop and start the container. +- Use the `docker rm` command to remove the container when you're done. + +## How to Run Docker Commands Without Sudo? + +- Make sure Docker is installed and the system is updated (This was already completed as part of previous tasks): +- `sudo usermod -a -G docker $USER` +- Reboot the machine. + +For reference, you can watch this [video](https://youtu.be/Tevxhn6Odc8). + +You can post on LinkedIn and let us know what you have learned from this task by using #90DaysOfDevOps Challenge. Happy Learning! 
:) + +[← Previous Day](../day17/README.md) | [Next Day →](../day19/README.md) diff --git a/2024/day18/image/10_Remove_the_container.png b/2024/day18/image/10_Remove_the_container.png new file mode 100644 index 0000000000..d12da96fde Binary files /dev/null and b/2024/day18/image/10_Remove_the_container.png differ diff --git a/2024/day18/image/1_docker_compose_yml_file.png b/2024/day18/image/1_docker_compose_yml_file.png new file mode 100644 index 0000000000..360cec6f4b Binary files /dev/null and b/2024/day18/image/1_docker_compose_yml_file.png differ diff --git a/2024/day18/image/2_Pull_the_Docker_image.png b/2024/day18/image/2_Pull_the_Docker_image.png new file mode 100644 index 0000000000..9b89256509 Binary files /dev/null and b/2024/day18/image/2_Pull_the_Docker_image.png differ diff --git a/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png b/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png new file mode 100644 index 0000000000..4913b0619a Binary files /dev/null and b/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png differ diff --git a/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png b/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png new file mode 100644 index 0000000000..413557ff26 Binary files /dev/null and b/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png differ diff --git a/2024/day18/image/5_Run_the_Docker_container.png b/2024/day18/image/5_Run_the_Docker_container.png new file mode 100644 index 0000000000..d336fb981f Binary files /dev/null and b/2024/day18/image/5_Run_the_Docker_container.png differ diff --git a/2024/day18/image/6_Inspect_the_container.png b/2024/day18/image/6_Inspect_the_container.png new file mode 100644 index 0000000000..83314b3bc4 Binary files /dev/null and b/2024/day18/image/6_Inspect_the_container.png differ diff --git a/2024/day18/image/7_View_the_logs.png b/2024/day18/image/7_View_the_logs.png new file mode 100644 index 0000000000..517655a0d4 Binary files /dev/null and b/2024/day18/image/7_View_the_logs.png differ diff --git a/2024/day18/image/8_Stop_the_container.png b/2024/day18/image/8_Stop_the_container.png new file mode 100644 index 0000000000..79b905025a Binary files /dev/null and b/2024/day18/image/8_Stop_the_container.png differ diff --git a/2024/day18/image/9_Start_the_container.png b/2024/day18/image/9_Start_the_container.png new file mode 100644 index 0000000000..05c4d0cd00 Binary files /dev/null and b/2024/day18/image/9_Start_the_container.png differ diff --git a/2024/day18/image/task1.png b/2024/day18/image/task1.png new file mode 100644 index 0000000000..8b13789179 --- /dev/null +++ b/2024/day18/image/task1.png @@ -0,0 +1 @@ + diff --git a/2024/day18/solution.md b/2024/day18/solution.md new file mode 100644 index 0000000000..dbbe507b09 --- /dev/null +++ b/2024/day18/solution.md @@ -0,0 +1,108 @@ +# Day 18 Answers: Docker for DevOps Engineers + +Till now you have created a Dockerfile and pushed it to the repository. Let's move forward and dig deeper into other Docker concepts. Today, let's study Docker Compose! 😃 + +## Docker Compose + +- Docker Compose is a tool that was developed to help define and share multi-container applications. +- With Compose, we can create a YAML file to define the services and, with a single command, spin everything up or tear it all down. +- Learn more about Docker Compose [here](https://tecadmin.net/tutorial/docker/docker-compose/). + +## What is YAML? 
+ +- YAML is a data serialization language that is often used for writing configuration files. Depending on whom you ask, YAML stands for "Yet Another Markup Language" or "YAML Ain’t Markup Language" (a recursive acronym), which emphasizes that YAML is for data, not documents. +- YAML is a popular programming language because it is human-readable and easy to understand. +- YAML files use a .yml or .yaml extension. +- Read more about it [here](https://www.redhat.com/en/topics/automation/what-is-yaml). + +## Tasks with Answers + +## Task 1 + +Learn how to use the docker-compose.yml file to set up the environment, configure the services and links between different containers, and also to use environment variables in the docker-compose.yml file. + +[Sample docker-compose.yml file](https://github.com/LondheShubham153/90DaysOfDevOps/blob/master/2023/day18/docker-compose.yaml) + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/1_docker_compose_yml_file.png) + +## Task 2 + + - **1. Pull a pre-existing Docker image from a public repository (e.g. Docker Hub) and run it on your local machine. Run the container as a non-root user (Hint: Use the `usermod` command to give the user permission to Docker). Make sure you reboot the instance after giving permission to the user.** + - Pull the Docker image: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/2_Pull_the_Docker_image.png) + + - Add the current user to the Docker group: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png) + + - Reboot the machine to apply the changes: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png) + + - Run the Docker container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/5_Run_the_Docker_container.png) + + - **2. Inspect the container's running processes and exposed ports using the `docker inspect` command.** + - Inspect the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/6_Inspect_the_container.png) + + - **3. Use the `docker logs` command to view the container's log output.** + - View the logs: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/7_View_the_logs.png) + + - **4. Use the `docker stop` and `docker start` commands to stop and start the container.** + - Stop the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/8_Stop_the_container.png) + + - Start the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/9_Start_the_container.png) + + - **5. Use the `docker rm` command to remove the container when you're done.** + - Remove the container: + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/10_Remove_the_container.png) + +## How to Run Docker Commands Without Sudo? + +- Make sure Docker is installed and the system is updated (This was already completed as part of previous tasks): + - `sudo usermod -a -G docker $USER` + + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/3_Add_the_current_user_to_the_Docker_group.png) + + - Reboot the machine. 
+ + **Answer** + + ![image](https://github.com/Bhavin213/90DaysOfDevOps/blob/master/2024/day18/image/4_Reboot_the_machine_to_apply_the_changes.png) + +For reference, you can watch this [video](https://youtu.be/Tevxhn6Odc8). + +[LinkedIn](https://www.linkedin.com/in/bhavin-savaliya/) diff --git a/2024/day19/README.md b/2024/day19/README.md new file mode 100644 index 0000000000..2f6e8a3fad --- /dev/null +++ b/2024/day19/README.md @@ -0,0 +1,37 @@ +# Day 19 Task: Docker for DevOps Engineers + +**So far, you've learned how to create a docker-compose.yml file and push it to the repository. Let's move forward and explore more Docker Compose concepts. Today, let's study Docker Volume and Docker Network!** 😃 + +## Docker Volume + +Docker allows you to create volumes, which are like separate storage areas that can be accessed by containers. They enable you to store data, like a database, outside the container, so it doesn't get deleted when the container is removed. You can also mount the same volume to multiple containers, allowing them to share data. For more details, check out this [reference](https://docs.docker.com/storage/volumes/). + +## Docker Network + +Docker allows you to create virtual networks, where you can connect multiple containers together. This way, the containers can communicate with each other and with the host machine. Each container has its own storage space, but if we want to share storage between containers, we need to use volumes. For more details, check out this [reference](https://docs.docker.com/network/). + +## Task 1 + +Create a multi-container docker-compose file that will bring up and bring down containers in a single shot (e.g., create application and database containers). + +### Hints: + +- Use the `docker-compose up` command with the `-d` flag to start a multi-container application in detached mode. +- Use the `docker-compose scale` command to increase or decrease the number of replicas for a specific service. You can also add [`replicas`](https://stackoverflow.com/questions/63408708/how-to-scale-from-within-docker-compose-file) in the deployment file for auto-scaling. +- Use the `docker-compose ps` command to view the status of all containers, and `docker-compose logs` to view the logs of a specific service. +- Use the `docker-compose down` command to stop and remove all containers, networks, and volumes associated with the application. + +## Task 2 + +- Learn how to use Docker Volumes and Named Volumes to share files and directories between multiple containers. +- Create two or more containers that read and write data to the same volume using the `docker run --mount` command. +- Verify that the data is the same in all containers by using the `docker exec` command to run commands inside each container. +- Use the `docker volume ls` command to list all volumes and the `docker volume rm` command to remove the volume when you're done. + +## Project Opportunity + +You can use this task as a project to add to your resume. + +You can post on LinkedIn and let us know what you have learned from this task by using #90DaysOfDevOps Challenge. Happy Learning! 
🙂 + +[← Previous Day](../day18/README.md) | [Next Day →](../day20/README.md) diff --git a/2024/day19/images/Screenshot (113).png b/2024/day19/images/Screenshot (113).png new file mode 100644 index 0000000000..300e47838f Binary files /dev/null and b/2024/day19/images/Screenshot (113).png differ diff --git a/2024/day19/images/Screenshot (114).png b/2024/day19/images/Screenshot (114).png new file mode 100644 index 0000000000..07c61f9b00 Binary files /dev/null and b/2024/day19/images/Screenshot (114).png differ diff --git a/2024/day19/images/Screenshot (116).png b/2024/day19/images/Screenshot (116).png new file mode 100644 index 0000000000..e759cd57cb Binary files /dev/null and b/2024/day19/images/Screenshot (116).png differ diff --git a/2024/day19/images/Screenshot (117).png b/2024/day19/images/Screenshot (117).png new file mode 100644 index 0000000000..ac701b1583 Binary files /dev/null and b/2024/day19/images/Screenshot (117).png differ diff --git a/2024/day19/images/Screenshot (118).png b/2024/day19/images/Screenshot (118).png new file mode 100644 index 0000000000..de7e89538a Binary files /dev/null and b/2024/day19/images/Screenshot (118).png differ diff --git a/2024/day19/images/Screenshot (119).png b/2024/day19/images/Screenshot (119).png new file mode 100644 index 0000000000..eaf96af7f1 Binary files /dev/null and b/2024/day19/images/Screenshot (119).png differ diff --git a/2024/day19/images/Screenshot (120).png b/2024/day19/images/Screenshot (120).png new file mode 100644 index 0000000000..b9d7ee8ca2 Binary files /dev/null and b/2024/day19/images/Screenshot (120).png differ diff --git a/2024/day20/Docker_cheat_sheet.pdf b/2024/day20/Docker_cheat_sheet.pdf new file mode 100644 index 0000000000..230a961288 Binary files /dev/null and b/2024/day20/Docker_cheat_sheet.pdf differ diff --git a/2024/day20/README.md b/2024/day20/README.md new file mode 100644 index 0000000000..045045a634 --- /dev/null +++ b/2024/day20/README.md @@ -0,0 +1,17 @@ +# Day 20 Task: Docker for DevOps Engineers + +## Finally!! 🎉 + +You have completed ✅ the Docker hands-on sessions, and I hope you have learned something valuable from it. 🙌 + +Now it's time to take your Docker skills to the next level by creating a comprehensive cheat-sheet of all the commands you've learned so far. This cheat-sheet should include commands for both Docker and Docker Compose, along with brief explanations of their usage. Not only will this cheat-sheet help you in the future, but it will also serve as a valuable resource for the DevOps community. 😊🙌 + +So, put your knowledge and creativity to the test and create a cheat-sheet that truly stands out! 🚀 + +For reference, I have added a [cheatsheet](https://cdn.hashnode.com/res/hashnode/image/upload/v1670863735841/r6xdXpsap.png?auto=compress,format&format=webp). Make sure your cheat-sheet is UNIQUE. + +Post it on LinkedIn and share your knowledge with the community. 😃 + +**Happy Learning :)** + +[← Previous Day](../day19/README.md) | [Next Day →](../day21/README.md) diff --git a/2024/day21/README.md b/2024/day21/README.md new file mode 100644 index 0000000000..304185270a --- /dev/null +++ b/2024/day21/README.md @@ -0,0 +1,44 @@ +# Day 21 Task: Important Docker Interview Questions + +## Docker Interview + +Docker is a crucial topic for DevOps Engineer interviews, especially for freshers. Here are some essential questions to help you prepare and ace your Docker interviews: + +## Questions + +- What is the difference between an Image, Container, and Engine? 
+
+- What is the difference between the Docker command COPY vs ADD?
+- What is the difference between the Docker command CMD vs RUN?
+- How will you reduce the size of a Docker image?
+- Why and when should you use Docker?
+- Explain the Docker components and how they interact with each other.
+- Explain the terminology: Docker Compose, Dockerfile, Docker Image, Docker Container.
+- In what real scenarios have you used Docker?
+- Docker vs Hypervisor?
+- What are the advantages and disadvantages of using Docker?
+- What is a Docker namespace?
+- What is a Docker registry?
+- What is an entry point?
+- How to implement CI/CD in Docker?
+- Will data on the container be lost when the Docker container exits?
+- What is a Docker swarm?
+- What are the Docker commands for the following:
+  - Viewing running containers
+  - Running a container under a specific name
+  - Exporting a Docker image
+  - Importing an existing Docker image
+  - Deleting a container
+  - Removing all stopped containers, unused networks, build caches, and dangling images?
+- What are the common Docker practices to reduce the size of Docker images?
+- How do you troubleshoot a Docker container that is not starting?
+- Can you explain the Docker networking model?
+- How do you manage persistent storage in Docker?
+- How do you secure a Docker container?
+- What is Docker overlay networking?
+- How do you handle environment variables in Docker?
+
+These questions will help you in your next DevOps interview. Write a blog and share it on LinkedIn to showcase your knowledge.
+
+**Happy Learning :)**
+
+[← Previous Day](../day20/README.md) | [Next Day →](../day22/README.md)
diff --git a/2024/day22/README.md b/2024/day22/README.md
new file mode 100644
index 0000000000..304185270a
--- /dev/null
+++ b/2024/day22/README.md
@@ -0,0 +1,38 @@
+# Day-22 : Getting Started with Jenkins 😃
+**Linux, Git, GitHub, and Docker are done; now let's learn the CI/CD tool used to deploy all of it:**
+
+## What is Jenkins?
+- Jenkins is an open-source continuous integration and continuous delivery/deployment (CI/CD) automation tool for DevOps, written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.
+
+- Jenkins is an open-source automation server that allows developers to build, test, and deploy software. It runs on Java, as it is written in Java. Using Jenkins, we can set up continuous integration for projects (jobs) and end-to-end automation.
+
+- Jenkins achieves Continuous Integration with the help of plugins. Plugins allow the integration of various DevOps stages. If you want to integrate a particular tool, you need to install the plugin for that tool, for example Git, Maven 2 project, Amazon EC2, HTML publisher, etc.
+
+### Let's discuss why this tool is needed before moving on to the installation steps:
+
+- Nowadays, even with digital screens and one-click buttons in front of us, we still look for more automation 😴.
+
+- Here, I'm referring to the kind of automation where we don't have to watch over a process (here called a job) until it completes and then start the next job ourselves. For that, we have Jenkins.
+ +Note: By now Jenkins should be installed on your machine(as it was a part of previous tasks, if not follow [Installation Guide](https://youtu.be/OkVtBKqMt7I)) + +## Tasks: + +### Task 1: Write a small article in your own words about +- what Jenkins is and why it is used. Avoid copying directly from the internet. +- Reflect on how Jenkins integrates into the DevOps lifecycle and its benefits. +- Discuss the role of Jenkins in automating the build, test, and deployment processes. + +### Task 2: Create a Freestyle Pipeline to Print "Hello World" + +Create a freestyle pipeline in Jenkins that: +- Prints "Hello World" +- Prints the current date and time +- Clones a GitHub repository and lists its contents +- Configure the pipeline to run periodically (e.g., every hour). + +### Share Your Progress + +Don't forget to post your progress on LinkedIn to share your learning journey with others. Happy learning and good luck with your DevOps challenge! + +[← Previous Day](../day21/README.md) | [Next Day →](../day23/README.md) diff --git a/2024/day23/README.md b/2024/day23/README.md new file mode 100644 index 0000000000..ae0717014a --- /dev/null +++ b/2024/day23/README.md @@ -0,0 +1,39 @@ +# Day 23 Task: Jenkins Freestyle Project for DevOps Engineers + +The community is absolutely crushing it in the #90daysofdevops journey. Today's challenge is particularly exciting as it involves creating a Jenkins Freestyle Project, an excellent opportunity for DevOps engineers to showcase their skills and push their limits. Who's ready to dive in and make it happen? 😍 + +## What is CI/CD? + +- **CI (Continuous Integration)** is the practice of automating the integration of code changes from multiple developers into a single codebase. It involves developers frequently committing their work into a central code repository (such as GitHub or Stash). Automated tools then build the newly committed code and perform tasks like code review, ensuring that the code is integrated smoothly. The key goals of Continuous Integration are to find and address bugs quickly, make the integration process easier across a team of developers, improve software quality, and reduce the time it takes to release new features. + +- **CD (Continuous Delivery)** follows Continuous Integration and ensures that new changes can be released to customers quickly and without errors. This includes running integration and regression tests in a staging environment (similar to production) to ensure the final release is stable. Continuous Delivery automates the release process, ensuring a release-ready product at all times and allowing deployment at any moment. + +## What Is a Build Job? + +A Jenkins build job contains the configuration for automating specific tasks or steps in the application building process. These tasks include gathering dependencies, compiling, archiving, transforming code, testing, and deploying code in different environments. + +Jenkins supports several types of build jobs, such as freestyle projects, pipelines, multi-configuration projects, folders, multibranch pipelines, and organization folders. + +## What is a Freestyle Project? 🤔 + +A freestyle project in Jenkins is a type of project that allows you to build, test, and deploy software using various options and configurations. Here are a few tasks you could complete with a freestyle project in Jenkins: + +### Task 1 + +- Create an agent for your app (which you deployed using Docker in a previous task). +- Create a new Jenkins freestyle project for your app. 
+- In the "Build" section of the project, add a build step to run the `docker build` command to build the image for the container. +- Add a second step to run the `docker run` command to start a container using the image created in the previous step. + +### Task 2 + +- Create a Jenkins project to run the `docker-compose up -d` command to start multiple containers defined in the compose file (Hint: use the application and database docker-compose file from Day 19). +- Set up a cleanup step in the Jenkins project to run the `docker-compose down` command to stop and remove the containers defined in the compose file. + +For reference on Jenkins Freestyle Projects, visit [here](https://youtu.be/wwNWgG5htxs). + +You can post on LinkedIn and let us know what you have learned from this task as part of the #90DaysOfDevOps Challenge. + +**Happy Learning :)** + +[← Previous Day](../day22/README.md) | [Next Day →](../day24/README.md) diff --git a/2024/day24/README.md b/2024/day24/README.md new file mode 100644 index 0000000000..50a518e7de --- /dev/null +++ b/2024/day24/README.md @@ -0,0 +1,29 @@ +# Day 24 Task: Complete Jenkins CI/CD Project + +Let's create a comprehensive CI/CD pipeline for your Node.js application! 😍 + +## Did you finish Day 23? + +- Day 23 focused on Jenkins CI/CD, ensuring you understood the basics. Today, you'll take it a step further by completing a full project from start to finish, which you can proudly add to your resume. +- As you've already worked with Docker and Docker Compose, you'll be integrating these tools into a live project. + +## Task 1 + +1. Fork [this repository](https://github.com/LondheShubham153/node-todo-cicd.git). +2. Set up a connection between your Jenkins job and your GitHub repository through GitHub Integration. +3. Learn about [GitHub WebHooks](https://betterprogramming.pub/how-too-add-github-webhook-to-a-jenkins-pipeline-62b0be84e006) and ensure you have the CI/CD setup configured. +4. Refer to [this video](https://youtu.be/nplH3BzKHPk) for a step-by-step guide on the entire project. + +## Task 2 + +1. In the "Execute Shell" section of your Jenkins job, run the application using Docker Compose. +2. Create a Docker Compose file for this project (a valuable open-source contribution). +3. Run the project and celebrate your accomplishment! 🎉 + +For a detailed walkthrough and hands-on experience with the project, visit [this video](https://youtu.be/nplH3BzKHPk). + +You can post on LinkedIn and share your experiences and learnings from this task using the #90DaysOfDevOps Challenge. + +**Happy Learning :)** + +[← Previous Day](../day23/README.md) | [Next Day →](../day25/README.md) diff --git a/2024/day25/README.md b/2024/day25/README.md new file mode 100644 index 0000000000..6b0d0f32ad --- /dev/null +++ b/2024/day25/README.md @@ -0,0 +1,31 @@ +# Day 25 Task: Complete Jenkins CI/CD Project - Continued with Documentation + +You've been making amazing progress, so let's take a moment to catch up and refine our work. Today's focus is on completing the Jenkins CI/CD project from Day 24 and creating thorough documentation for it. + +## Did you finish Day 24? + +- Day 24 provided an end-to-end project experience, and adding this to your resume will be a significant achievement. + +- Take your time to finish the project, create comprehensive documentation, and make sure to highlight it in your resume and share your experience. + +## Task 1 + +- Document the entire process from cloning the repository to adding webhooks, deployment, and more. 
Create a detailed README file for your project. You can refer to [this example](https://github.com/LondheShubham153/fynd-my-movie/blob/master/README.md) for inspiration. + +- A well-written README file will not only help others understand your project but also make it easier for you to revisit and use the project in the future. + +## Task 2 + +- As it's a lighter day, set a small goal for yourself. Consider something you've been meaning to accomplish and use this time to focus on it. + +- Share your goal and how you plan to achieve it using [this template](https://www.linkedin.com/posts/shubhamlondhe1996_taking-resolutions-and-having-goals-for-an-activity-7023858409762373632-s2J8?utm_source=share&utm_medium=member_desktop). + +- Having small, achievable goals and strategies for reaching them is essential. Don't forget to reward yourself for your efforts! + +For a detailed walkthrough and project guidance, visit [this video](https://youtu.be/nplH3BzKHPk). + +You can post on LinkedIn and let us know what you have learned from this task using the #90DaysOfDevOps Challenge. + +**Happy Learning :)** + +[← Previous Day](../day24/README.md) | [Next Day →](../day26/README.md) diff --git a/2024/day26/README.md b/2024/day26/README.md new file mode 100644 index 0000000000..b0d65accb6 --- /dev/null +++ b/2024/day26/README.md @@ -0,0 +1,59 @@ +# Day 26 Task: Jenkins Declarative Pipeline + +One of the most important parts of your DevOps and CICD journey is a Declarative Pipeline Syntax of Jenkins + +## Some terms for your Knowledge + +**What is Pipeline -** A pipeline is a collection of steps or jobs interlinked in a sequence. + +**Declarative:** Declarative is a more recent and advanced implementation of a pipeline as a code. + +**Scripted:** Scripted was the first and most traditional implementation of the pipeline as a code in Jenkins. It was designed as a general-purpose DSL (Domain Specific Language) built with Groovy. + +# Why you should have a Pipeline + +The definition of a Jenkins Pipeline is written into a text file (called a [`Jenkinsfile`](https://www.jenkins.io/doc/book/pipeline/jenkinsfile)) which in turn can be committed to a project’s source control repository. +This is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code. + +**Creating a `Jenkinsfile` and committing it to source control provides a number of immediate benefits:** + +- Automatically creates a Pipeline build process for all branches and pull requests. +- Code review/iteration on the Pipeline (along with the remaining source code). + +# Pipeline syntax + +```groovy +pipeline { + agent any + stages { + stage('Build') { + steps { + // + } + } + stage('Test') { + steps { + // + } + } + stage('Deploy') { + steps { + // + } + } + } +} +``` + +# Task-01 + +- Create a New Job, this time select Pipeline instead of Freestyle Project. +- Follow the Official Jenkins [Hello world example](https://www.jenkins.io/doc/pipeline/tour/hello-world/) +- Complete the example using the Declarative pipeline +- In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham) + +You can post your progress on LinkedIn and let us know what you have learned from this task by #90DaysOfDevOps Challenge. 
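+
+If you get stuck on Task-01, here is a minimal Declarative `Jenkinsfile` in the spirit of the official Hello World example (the stage name is just illustrative):
+
+```groovy
+pipeline {
+    // run on any available agent
+    agent any
+    stages {
+        stage('Hello') {
+            steps {
+                echo 'Hello World'
+            }
+        }
+    }
+}
+```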
+
+Happy Learning:)
+
+[← Previous Day](../day25/README.md) | [Next Day →](../day27/README.md)
diff --git a/2024/day27/README.md b/2024/day27/README.md
new file mode 100644
index 0000000000..277a2db069
--- /dev/null
+++ b/2024/day27/README.md
@@ -0,0 +1,43 @@
+# Day 27 Task: Jenkins Declarative Pipeline with Docker
+
+Day 26 was all about the Declarative pipeline; now it's time to level things up and integrate Docker with your Jenkins declarative pipeline.
+
+## Use your Docker Build and Run Knowledge
+
+**docker build -** you can use `sh 'docker build . -t <image-name>'` in your pipeline stage block to run the docker build command. (Make sure you have Docker installed with the correct permissions.)
+
+**docker run:** you can use `sh 'docker run -d <image-name>'` in your pipeline stage block to run the container.
+
+**How will the stages look**
+
+```groovy
+stages {
+    stage('Build') {
+        steps {
+            sh 'docker build . -t trainwithshubham/django-app:latest'
+        }
+    }
+}
+```
+
+# Task-01
+
+- Create a docker-integrated Jenkins declarative pipeline
+- Use the above-given syntax using `sh` inside the stage block
+- You will face errors if you run the job twice, as the Docker container will already be created; to handle that, do Task 2
+
+# Task-02
+
+- Create a docker-integrated Jenkins declarative pipeline using the `docker` groovy syntax inside the stage block.
+- You won't face errors; you can follow [this documentation](https://tempora-mutantur.github.io/jenkins.io/github_pages_test/doc/book/pipeline/docker/)
+
+- Complete your previous projects using this Declarative pipeline approach
+
+- In case of any issues feel free to post on any Groups, [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham)
+
+Are you enjoying the #90DaysOfDevOps Challenge?
+Let me know how you are feeling after 4 weeks of DevOps learnings.
+
+Happy Learning:)
+
+[← Previous Day](../day26/README.md) | [Next Day →](../day28/README.md)
diff --git a/2024/day28/README.md b/2024/day28/README.md
new file mode 100644
index 0000000000..fc4bdc9f6c
--- /dev/null
+++ b/2024/day28/README.md
@@ -0,0 +1,52 @@
+# Day 28 Task: Jenkins Agents
+
+## Jenkins Master (Server)
+
+The Jenkins master server is the central control unit that manages the overall orchestration of workflows defined in pipelines. It handles tasks such as scheduling jobs, monitoring job status, and managing configurations. The master serves the Jenkins UI and acts as the control node, delegating job execution to agents.
+
+## Jenkins Agent
+
+A Jenkins agent is a separate machine or container that executes the tasks defined in Jenkins jobs. When a job is triggered on the master, the actual execution occurs on the assigned agent. Each agent is identified by a unique label, allowing the master to delegate jobs to the appropriate agent.
+
+For small teams or projects, a single Jenkins installation may suffice. However, as the number of projects grows, it becomes necessary to scale. Jenkins supports this by allowing a master to connect with multiple agents, enabling distributed job execution.
+
+
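+Once an agent is connected, you can pin a pipeline (or a single stage) to it by label. A small sketch, assuming a hypothetical agent label `docker-agent`:
+
+```groovy
+pipeline {
+    // run this pipeline only on nodes carrying the 'docker-agent' label
+    agent { label 'docker-agent' }
+    stages {
+        stage('Check') {
+            steps {
+                sh 'docker --version'   // verify Docker is available on the agent
+            }
+        }
+    }
+}
+```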

+ +## Pre-requisites + +To set up an agent, you'll need a fresh Ubuntu 22.04 Linux installation. Ensure Java (the same version as on the Jenkins master server) and Docker are installed on the agent machine. + +*Note: While creating an agent, ensure that permissions, rights, and ownership are appropriately set for Jenkins users.* + +## Task 01 + +1. **Create an Agent:** + - Set up a new node in Jenkins by creating an agent. + +2. **AWS EC2 Instance Setup:** + - Create a new AWS EC2 instance and connect it to the master (where Jenkins is installed). + +3. **Master-Agent Connection:** + - Establish a connection between the master and agent using SSH and a public-private key pair exchange. + - Verify the agent's status in the "Nodes" section. + + You can follow [this article](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7017885886461698048-os5f?utm_source=share&utm_medium=member_android) for detailed instructions. + +## Task 02 + +1. **Run Previous Jobs on the New Agent:** + - Use the agent to run the Jenkins jobs you built on Day 26 and Day 27. + +2. **Labeling:** + - Assign labels to the agent and configure your master server to trigger builds on the appropriate agent based on these labels. + +3. **Support:** + - If you encounter any issues, feel free to seek help on [Discord](https://discord.gg/Q6ntmMtH) or [Telegram](https://t.me/trainwithshubham). + +## Reflection + +Are you enjoying the #90DaysOfDevOps Challenge? Share your thoughts and experiences after four weeks of learning DevOps. + +**Happy Learning! :)** + +[← Previous Day](../day27/README.md) | [Next Day →](../day29/README.md) diff --git a/2024/day29/README.md b/2024/day29/README.md new file mode 100644 index 0000000000..87ea08aae8 --- /dev/null +++ b/2024/day29/README.md @@ -0,0 +1,43 @@ +## Day 29 Task: Jenkins Important Interview Questions + +

+ +## Jenkins Interview + +Here are some Jenkins-specific questions related to Docker and other DevOps concepts that can be useful during a DevOps Engineer interview: + +### General Questions + +1. **What’s the difference between continuous integration, continuous delivery, and continuous deployment?** +2. **Benefits of CI/CD.** +3. **What is meant by CI-CD?** +4. **What is Jenkins Pipeline?** +5. **How do you configure a job in Jenkins?** +6. **Where do you find errors in Jenkins?** +7. **In Jenkins, how can you find log files?** +8. **Jenkins workflow and write a script for this workflow?** +9. **How to create continuous deployment in Jenkins?** +10. **How to build a job in Jenkins?** +11. **Why do we use pipelines in Jenkins?** +12. **Is Jenkins alone sufficient for automation?** +13. **How will you handle secrets in Jenkins?** +14. **Explain the different stages in a CI-CD setup.** +15. **Name some of the plugins in Jenkins.** + +### Scenario-Based Questions + +1. **You have a Jenkins pipeline that deploys to a staging environment. Suddenly, the deployment failed due to a missing configuration file. How would you troubleshoot and resolve this issue?** +2. **Imagine you have a Jenkins job that is taking significantly longer to complete than expected. What steps would you take to identify and mitigate the issue?** +3. **You need to implement a secure method to manage environment-specific secrets for different stages (development, staging, production) in your Jenkins pipeline. How would you approach this?** +4. **Suppose your Jenkins master node is under heavy load and build times are increasing. What strategies can you use to distribute the load and ensure efficient build processing?** +5. **A developer commits a code change that breaks the build. How would you set up Jenkins to automatically handle such scenarios and notify the relevant team members?** +6. **You are tasked with setting up a Jenkins pipeline for a multi-branch project. How would you handle different configurations and build steps for different branches?** +7. **How would you implement a rollback strategy in a Jenkins pipeline to revert to a previous stable version if the deployment fails?** +8. **In a scenario where you have multiple teams working on different projects, how would you structure Jenkins jobs and pipelines to ensure efficient resource utilization and manage permissions?** +9. **Your Jenkins agents are running in a cloud environment, and you notice that build times fluctuate due to varying resource availability. How would you optimize the performance and cost of these agents?** + +These questions will help you prepare for your next DevOps interview. Consider writing a blog and sharing your experiences and knowledge on LinkedIn. + +**Happy Learning! :)** + +[← Previous Day](../day28/README.md) | [Next Day →](../day30/README.md) diff --git a/2024/day30/README.md b/2024/day30/README.md new file mode 100644 index 0000000000..af4d37aa2f --- /dev/null +++ b/2024/day30/README.md @@ -0,0 +1,29 @@ +## Day 30 Task: Kubernetes Architecture + +

+
+## Kubernetes Overview
+
+With the widespread adoption of [containers](https://cloud.google.com/containers) among organizations, Kubernetes, the container-centric management software, has become a standard to deploy and operate containerized applications and is one of the most important parts of DevOps.
+
+Originally developed at Google and released as open source in 2014, Kubernetes builds on 15 years of Google's experience running containerized workloads, along with valuable contributions from the open-source community. It was inspired by Google's internal cluster management system, [Borg](https://research.google.com/pubs/pub43438.html).
+
+## Tasks
+
+1. What is Kubernetes? Write it in your own words, and explain why we call it k8s.
+
+2. What are the benefits of using k8s?
+
+3. Explain the architecture of Kubernetes; refer to [this video](https://youtu.be/FqfoDUhzyDo)
+
+4. What is the Control Plane?
+
+5. Write the difference between kubectl and kubelet.
+
+6. Explain the role of the API server.
+
+Kubernetes architecture is important, so make sure you spend a day understanding it. [This video](https://youtu.be/FqfoDUhzyDo) will surely help you.
+
+_Happy Learning :)_
+
+[← Previous Day](../day29/README.md) | [Next Day →](../day31/README.md)
diff --git a/2024/day31/README.md b/2024/day31/README.md
new file mode 100644
index 0000000000..5b2a6b79e5
--- /dev/null
+++ b/2024/day31/README.md
@@ -0,0 +1,65 @@
+## Day 31 Task: Launching your First Kubernetes Cluster with Nginx running
+
+### Awesome! You learned the architecture of one of the most important tools, "Kubernetes", in your previous task.
+
+## What about doing some hands-on now?
+
+Let's read about minikube and run _k8s_ on our local machine.
+
+1. **What is minikube?**
+
+_Ans_:- Minikube is a tool that quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. It can deploy as a VM, a container, or on bare metal.
+
+Minikube is a pared-down version of Kubernetes that gives you all the benefits of Kubernetes with a lot less effort.
+
+This makes it an interesting option for users who are new to containers, and also for projects in the world of edge computing and the Internet of Things.
+
+2. **Features of minikube**
+
+_Ans_:-
+
+(a) Supports the latest Kubernetes release (+6 previous minor versions)
+
+(b) Cross-platform (Linux, macOS, Windows)
+
+(c) Deploy as a VM, a container, or on bare-metal
+
+(d) Multiple container runtimes (CRI-O, containerd, docker)
+
+(e) Direct API endpoint for blazing fast image load and build
+
+(f) Advanced features such as LoadBalancer, filesystem mounts, FeatureGates, and network policy
+
+(g) Addons for easily installed Kubernetes applications
+
+(h) Supports common CI environments
+
+## Task-01:
+
+## Install minikube on your local machine
+
+For installation, you can visit [this page](https://minikube.sigs.k8s.io/docs/start/).
+
+If you want to try an alternative way, you can check [this](https://k8s-docs.netlify.app/en/docs/tasks/tools/install-minikube/).
+
+## Let's understand the concept of a **pod**
+
+_Ans:-_
+
+Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
+
+A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.
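+
+For a quick feel of this, once your minikube cluster is up, you can create and inspect a pod imperatively (a minimal sketch; the declarative YAML route is what Task-02 below asks for):
+
+```
+# start a local cluster
+minikube start
+
+# run a single-container nginx pod and inspect it
+kubectl run nginx --image=nginx
+kubectl get pods            # wait for STATUS to show Running
+kubectl describe pod nginx  # shows events, the pod IP and the container inside
+```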
+
+You can read more about pods [here](https://kubernetes.io/docs/concepts/workloads/pods/).
+
+## Task-02:
+
+## Create your first pod on Kubernetes through minikube.
+
+We are suggesting you make an nginx pod, but you can always show your creativity and do it on your own.
+
+**Having an issue? Don't worry, a sample YAML file for pod creation is included; you can always refer to it.**
+
+_Happy Learning :)_
+
+[← Previous Day](../day30/README.md) | [Next Day →](../day32/README.md)
diff --git a/2024/day31/pod.yml b/2024/day31/pod.yml
new file mode 100644
index 0000000000..cfc02a372d
--- /dev/null
+++ b/2024/day31/pod.yml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+spec:
+  containers:
+    - name: nginx
+      image: nginx:1.14.2
+      ports:
+        - containerPort: 80
+
+
+# After creating this file, run the command below:
+# kubectl apply -f pod.yml
diff --git a/2024/day32/Deployment.yml b/2024/day32/Deployment.yml
new file mode 100644
index 0000000000..8f3814196b
--- /dev/null
+++ b/2024/day32/Deployment.yml
@@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: todo-app
+  labels:
+    app: todo
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: todo
+  template:
+    metadata:
+      labels:
+        app: todo
+    spec:
+      containers:
+        - name: todo
+          image: rishikeshops/todo-app
+          ports:
+            - containerPort: 3000
diff --git a/2024/day32/README.md b/2024/day32/README.md
new file mode 100644
index 0000000000..eb2ee9c304
--- /dev/null
+++ b/2024/day32/README.md
@@ -0,0 +1,27 @@
+## Day 32 Task: Launching your Kubernetes Cluster with Deployment
+
+### Congratulations on your K8s learning on Day-31!
+
+## What is a Deployment in k8s
+
+A Deployment provides declarative updates for Pods and ReplicaSets.
+
+You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new replicas for scaling, or to remove existing Deployments and adopt all their resources with new Deployments.
+
+## Today's task: let's keep it very simple.
+
+## Task-1:
+
+**Create one Deployment file to deploy a sample todo-app on K8s using the "auto-healing" and "auto-scaling" features**
+
+- Add a deployment.yml file (a sample is kept in the folder for your reference)
+- Apply the deployment to your k8s (minikube) cluster with the command
+  `kubectl apply -f deployment.yml`
+
+Let's make your resume shine with one more project ;)
+
+**Having an issue? Don't worry, a sample deployment file is included; you can always refer to it or watch [this video](https://youtu.be/ONrbWFJXLLk)**
+
+Happy Learning :)
+
+[← Previous Day](../day31/README.md) | [Next Day →](../day33/README.md)
diff --git a/2024/day33/README.md b/2024/day33/README.md
new file mode 100644
index 0000000000..984842c527
--- /dev/null
+++ b/2024/day33/README.md
@@ -0,0 +1,34 @@
+# Day 33 Task: Working with Namespaces and Services in Kubernetes
+
+### Congrats🎊🎉 on updating your Deployment yesterday💥🙌
+
+## What are Namespaces and Services in k8s
+
+In Kubernetes, Namespaces are used to create isolated environments for resources. Each Namespace is like a separate cluster within the same physical cluster. Services are used to expose your Pods and Deployments to the network.
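+
+As a rough sketch of how the two fit together (names are examples; the selector and target port follow the sample todo-app Deployment from Day-32), a Namespace is declared like any other object, and resources are placed into it via `metadata.namespace`:
+
+```
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: todo-namespace
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: todo-service
+  namespace: todo-namespace
+spec:
+  selector:
+    app: todo
+  ports:
+    - port: 80
+      targetPort: 3000
+```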
+Read more about Namespaces [here](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/).
+
+# Today's task:
+
+## Task 1:
+
+- Create a Namespace for your Deployment
+
+- Use the command `kubectl create namespace <namespace-name>` to create a Namespace
+
+- Update the deployment.yml file to include the Namespace
+
+- Apply the updated deployment using the command:
+  `kubectl apply -f deployment.yml -n <namespace-name>`
+
+- Verify that the Namespace has been created by checking the status of the Namespaces in your cluster.
+
+## Task 2:
+
+- Read about Services, Load Balancing, and Networking in Kubernetes. Refer to the official Kubernetes documentation: [Link](https://kubernetes.io/docs/concepts/services-networking/)
+
+Need help with Namespaces? Check out this [video](https://youtu.be/K3jNo4z5Jx8) for assistance.
+
+Keep growing your Kubernetes knowledge💥🙌
+
+Happy Learning! :)
+
+[← Previous Day](../day32/README.md) | [Next Day →](../day34/README.md)
diff --git a/2024/day34/README.md b/2024/day34/README.md
new file mode 100644
index 0000000000..9753f7ff1f
--- /dev/null
+++ b/2024/day34/README.md
@@ -0,0 +1,36 @@
+# Day 34 Task: Working with Services in Kubernetes
+
+### Congratulations🎊 on your Day-33 learning in K8s
+
+## What are Services in K8s
+
+In Kubernetes, Services are objects that provide stable network identities to Pods and abstract away the details of Pod IP addresses. Services allow Pods to receive traffic from other Pods, Services, and external clients.
+
+## Task-1:
+
+- Create a Service for your todo-app Deployment from Day-32
+- Create a Service definition for your todo-app Deployment in a YAML file.
+- Apply the Service definition to your K8s (minikube) cluster using the `kubectl apply -f service.yml -n <namespace-name>` command.
+- Verify that the Service is working by accessing the todo-app using the Service's IP and port in your Namespace.
+
+## Task-2:
+
+- Create a ClusterIP Service for accessing the todo-app from within the cluster
+- Create a ClusterIP Service definition for your todo-app Deployment in a YAML file.
+- Apply the ClusterIP Service definition to your K8s (minikube) cluster using the `kubectl apply -f cluster-ip-service.yml -n <namespace-name>` command.
+- Verify that the ClusterIP Service is working by accessing the todo-app from another Pod in the cluster in your Namespace.
+
+## Task-3:
+
+- Create a LoadBalancer Service for accessing the todo-app from outside the cluster
+- Create a LoadBalancer Service definition for your todo-app Deployment in a YAML file.
+- Apply the LoadBalancer Service definition to your K8s (minikube) cluster using the `kubectl apply -f load-balancer-service.yml -n <namespace-name>` command.
+- Verify that the LoadBalancer Service is working by accessing the todo-app from outside the cluster in your Namespace.
+
+Struggling with Services? Take a look at this video for a step-by-step [guide](https://youtu.be/OJths_RojFA).
+
+Need help with Services in Kubernetes? Check out the Kubernetes [documentation](https://kubernetes.io/docs/concepts/services-networking/service/) for assistance.
+
+Happy Learning :)
+
+[← Previous Day](../day33/README.md) | [Next Day →](../day35/README.md)
diff --git a/2024/day35/README.md b/2024/day35/README.md
new file mode 100644
index 0000000000..160e0030b2
--- /dev/null
+++ b/2024/day35/README.md
@@ -0,0 +1,37 @@
+# Day 35: Mastering ConfigMaps and Secrets in Kubernetes🔒🔑🛡️
+
+### 👏🎉 Yay! Yesterday we conquered Namespaces and Services 💪💻🔗🚀
+
+## What are ConfigMaps and Secrets in k8s
+
+In Kubernetes, ConfigMaps and Secrets are used to store configuration data and secrets, respectively. ConfigMaps store configuration data as key-value pairs, while Secrets store sensitive data in a base64-encoded form (and can additionally be encrypted at rest).
+
+- _Example:- Imagine you're in charge of a big spaceship (Kubernetes cluster) with lots of different parts (containers) that need information to function properly.
+  ConfigMaps are like a file cabinet where you store all the information each part needs in simple, labeled folders (key-value pairs).
+  Secrets, on the other hand, are like a safe where you keep the important, sensitive information that shouldn't be accessible to just anyone (sensitive data).
+  So, using ConfigMaps and Secrets, you can ensure each part of your spaceship (Kubernetes cluster) has the information it needs to work properly and keep sensitive information secure! 🚀_
+- Read more about [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) & [Secret](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+## Today's task:
+
+## Task 1:
+
+- Create a ConfigMap for your Deployment
+- Create a ConfigMap for your Deployment using a file or the command line
+- Update the deployment.yml file to include the ConfigMap
+- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n <namespace-name>`
+- Verify that the ConfigMap has been created by checking the status of the ConfigMaps in your Namespace.
+
+## Task 2:
+
+- Create a Secret for your Deployment
+- Create a Secret for your Deployment using a file or the command line
+- Update the deployment.yml file to include the Secret
+- Apply the updated deployment using the command: `kubectl apply -f deployment.yml -n <namespace-name>`
+- Verify that the Secret has been created by checking the status of the Secrets in your Namespace.
+
+Need help with ConfigMaps and Secrets? Check out this [video](https://youtu.be/FAnQTgr04mU) for assistance.
+
+Keep learning and expanding your knowledge of Kubernetes💥🙌
+
+[← Previous Day](../day34/README.md) | [Next Day →](../day36/README.md)
diff --git a/2024/day36/Deployment.yml b/2024/day36/Deployment.yml
new file mode 100644
index 0000000000..3c9c1c7cbc
--- /dev/null
+++ b/2024/day36/Deployment.yml
@@ -0,0 +1,26 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: todo-app-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: todo-app
+  template:
+    metadata:
+      labels:
+        app: todo-app
+    spec:
+      containers:
+        - name: todo-app
+          image: rishikeshops/todo-app
+          ports:
+            - containerPort: 8000
+          volumeMounts:
+            - name: todo-app-data
+              mountPath: /app
+      volumes:
+        - name: todo-app-data
+          persistentVolumeClaim:
+            claimName: pvc-todo-app
diff --git a/2024/day36/README.md b/2024/day36/README.md
new file mode 100644
index 0000000000..2079e66d65
--- /dev/null
+++ b/2024/day36/README.md
@@ -0,0 +1,51 @@
+# Day 36 Task: Managing Persistent Volumes in Your Deployment 💥
+
+🙌 Kudos to you for conquering ConfigMaps and Secrets in Kubernetes yesterday.
+
+🔥 You're on fire! 🔥
+
+## What are Persistent Volumes in k8s
+
+In Kubernetes, a Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. A Persistent Volume Claim (PVC) is a request for storage by a user; Kubernetes matches the claim to a suitable PV and binds the two together. Read the official documentation on [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
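+
+To see the flow end to end: the PV offers storage, the PVC claims it, and Kubernetes binds the two. A quick sketch using the template files referenced in Task 1 below:
+
+```
+kubectl apply -f pv.yml
+kubectl apply -f pvc.yml
+kubectl get pv,pvc   # STATUS should move from Available to Bound
+```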
+
+⏰ Wait, wait, wait! 📣 Attention all #90daysofDevOps Challengers. 💪
+
+Before diving into today's task, don't forget to share your thoughts on the #90daysofDevOps challenge 💪 Fill out our feedback form (https://lnkd.in/gcgvrq8b) to help us improve and provide the best experience 🌟 Your participation and support are greatly appreciated 🙏 Let's continue to grow together 🌱
+
+## Today's tasks:
+
+### Task 1:
+
+Add a Persistent Volume to your todo-app Deployment.
+
+- Create a Persistent Volume using a file on your node. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pv.yml)
+
+- Create a Persistent Volume Claim that references the Persistent Volume. [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/pvc.yml)
+
+- Update your deployment.yml file to include the Persistent Volume Claim. After applying pv.yml and pvc.yml, your deployment file should look like this [Template](https://github.com/LondheShubham153/90DaysOfDevOps/blob/94e3970819e097a5b8edea40fe565d583419f912/2023/day36/Deployment.yml)
+
+- Apply the updated deployment using the command: `kubectl apply -f deployment.yml`
+
+- Verify that the Persistent Volume has been added to your Deployment by checking the status of the Pods and Persistent Volumes in your cluster. Use these commands: `kubectl get pods`,
+  `kubectl get pv`
+
+⚠️ Don't forget: To apply changes or create files in your Kubernetes deployments, each file must be applied separately. ⚠️
+
+### Task 2:
+
+Access the data in the Persistent Volume:
+
+- Connect to a Pod in your Deployment using the command: `kubectl exec -it <pod-name> -- /bin/bash`
+
+- Verify that you can access the data stored in the Persistent Volume from within the Pod
+
+Need help with Persistent Volumes? Check out this [video](https://youtu.be/U0_N3v7vJys) for assistance.
+
+Keep up the excellent work🙌💥
+
+Happy Learning :)
+
+[← Previous Day](../day35/README.md) | [Next Day →](../day37/README.md)
diff --git a/2024/day36/pv.yml b/2024/day36/pv.yml
new file mode 100644
index 0000000000..9546aba56a
--- /dev/null
+++ b/2024/day36/pv.yml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-todo-app
+spec:
+  capacity:
+    storage: 1Gi
+  accessModes:
+    - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Retain
+  hostPath:
+    path: "/tmp/data"
diff --git a/2024/day36/pvc.yml b/2024/day36/pvc.yml
new file mode 100644
index 0000000000..3d9dce14d8
--- /dev/null
+++ b/2024/day36/pvc.yml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-todo-app
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 500Mi
diff --git a/2024/day37/README.md b/2024/day37/README.md
new file mode 100644
index 0000000000..1300e335ae
--- /dev/null
+++ b/2024/day37/README.md
@@ -0,0 +1,43 @@
+## Day 37 Task: Kubernetes Important Interview Questions
+
+## Questions
+
+1. What is Kubernetes and why is it important?
+
+2. What is the difference between Docker Swarm and Kubernetes?
+
+3. How does Kubernetes handle network communication between containers?
+
+4. How does Kubernetes handle scaling of applications?
+
+5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
+
+6. Can you explain the concept of rolling updates in Kubernetes?
+
+7. How does Kubernetes handle network security and access control?
+
+8. Can you give an example of how Kubernetes can be used to deploy a highly available application?
+
+9. What is a namespace in Kubernetes?
+Which namespace does a Pod use if we don't specify one?
+
+10. How does Ingress help in Kubernetes?
+
+11. Explain the different types of Services in Kubernetes.
+
+12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
+
+13. How does Kubernetes handle storage management for containers?
+
+14. How does the NodePort service work?
+
+15. What are a multi-node cluster and a single-node cluster in Kubernetes?
+
+16. What is the difference between `create` and `apply` in Kubernetes?
+
+## These questions will help you in your next DevOps interview.
+
+_Write a blog and share it on LinkedIn._
+
+**_Happy Learning :)_**
+
+[← Previous Day](../day36/README.md) | [Next Day →](../day38/README.md)
diff --git a/2024/day38/README.md b/2024/day38/README.md
new file mode 100644
index 0000000000..8f51187e87
--- /dev/null
+++ b/2024/day38/README.md
@@ -0,0 +1,30 @@
+# Day 38 Getting Started with AWS Basics☁
+
+![AWS](https://user-images.githubusercontent.com/115981550/217238286-6c6bc6e7-a1ac-4d12-98f3-f95ff5bf53fc.png)
+
+Congratulations!!!! You have come so far. Don't let your excuses break your consistency. Let's begin our new journey with the Cloud☁. By this time you have created multiple EC2 instances; if not, let's begin the journey:
+
+## AWS:
+
+Amazon Web Services is one of the most popular cloud providers, and it has a free tier for students and cloud enthusiasts to get hands-on experience while learning (create your free account today to explore more of it).
+
+Read from [here](https://aws.amazon.com/what-is-aws/)
+
+## IAM:
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)
+
+Get to know IAM more deeply [Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)
+
+### Task1:
+
+Create an IAM user with a username of your choice and grant it EC2 access. Launch your Linux instance through the IAM user that you just created, and install Jenkins and Docker on the machine via a single shell script.
+
+### Task2:
+
+In this task you need to prepare a DevOps team of Avengers. Create 3 IAM users of Avengers and assign them to a devops group with an IAM policy.
+
+Post your progress on LinkedIn. Till then, Happy Learning :)
+
+[← Previous Day](../day37/README.md) | [Next Day →](../day39/README.md)
diff --git a/2024/day39/README.md b/2024/day39/README.md
new file mode 100644
index 0000000000..9a7e3e934f
--- /dev/null
+++ b/2024/day39/README.md
@@ -0,0 +1,41 @@
+# Day 39 AWS and IAM Basics☁
+
+![AWS](https://miro.medium.com/max/1400/0*dIzXLQn6aBClm1TJ.png)
+
+By this time you have created multiple EC2 instances, and after launching them you manually installed applications like Jenkins, Docker, etc.
+Now let's switch to a little automation. Sounds interesting??🤯
+
+## AWS:
+
+Amazon Web Services is one of the most popular cloud providers, and it has a free tier for students and cloud enthusiasts to get hands-on experience while learning (create your free account today to explore more of it).
+
+Read from [here](https://aws.amazon.com/what-is-aws/)
+
+## User Data in AWS:
+
+- When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
+- You can also pass this data into the launch instance wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
+- This will save time and manual effort every time you launch an instance and want to install an application on it, like Apache, Docker, Jenkins, etc.
+
+Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html)
+
+## IAM:
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
+Read from [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)
+
+Get to know IAM more deeply🏊[Click Here!!](https://www.youtube.com/watch?v=ORB4eY8EydA)
+
+### Task1:
+
+- Launch an EC2 instance with Jenkins installed through user data. Once the server shows up in the console, hit the IP address in a browser and your Jenkins page should be visible.
+- Take screenshots of the user data and the Jenkins page; this will verify task completion.
+
+### Task2:
+
+- Read more on IAM Roles and explain IAM Users, Groups, and Roles in your own terms.
+- Create three roles named: DevOps-User, Test-User and Admin.
+
+Post your progress on LinkedIn. Till then, Happy Learning :)
+
+[← Previous Day](../day38/README.md) | [Next Day →](../day40/README.md)
diff --git a/2024/day40/README.md b/2024/day40/README.md
new file mode 100644
index 0000000000..ce2dbcfda3
--- /dev/null
+++ b/2024/day40/README.md
@@ -0,0 +1,49 @@
+# Day 40 AWS EC2 Automation ☁
+
+![AWS](https://www.eginnovations.com/blog/wp-content/uploads/2021/09/Amazon-AWS-Cloud-Topimage-1.jpg)
+
+I hope your journey with AWS cloud and automation is going well 😍
+
+## Automation in EC2:
+
+Amazon EC2, or Amazon Elastic Compute Cloud, can give you secure, reliable, high-performance, and cost-effective computing infrastructure to meet demanding business needs.
+
+Also, if you know a few things, you can automate many things.
+
+Read from [here](https://aws.amazon.com/ec2/)
+
+## Launch template in AWS EC2:
+
+- You can make a launch template with the configuration information you need to start an instance. You can save launch parameters in launch templates so you don't have to type them in every time you start a new instance.
+- For example, a launch template can have the AMI ID, instance type, and network settings that you usually use to launch instances.
+- You can tell the Amazon EC2 console to use a certain launch template when you start an instance.
+
+Read more from [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html)
+
+## Instance Types:
+
+Amazon EC2 has a large number of instance types that are optimized for different uses. The different combinations of CPU, memory, storage and networking capacity in instance types give you the freedom to choose the right mix of resources for your apps.
+Each instance type comes with one or more instance sizes, so you can adjust your resources to meet the needs of the workload you want to run.
+
+Read from [here](https://aws.amazon.com/ec2/instance-types/?trk=32f4fbd0-ffda-4695-a60c-8857fab7d0dd&sc_channel=ps&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types&ef_id=CjwKCAiA0JKfBhBIEiwAPhZXD_O1-3qZkRa-KScynbwjvHd3l4UHSTfKuigd5ZPukXoDXu-v3MtC7hoCafEQAvD_BwE:G:s&s_kwcid=AL!4422!3!536392685920!e!!g!!ec2%20instance%20types)
+
+## AMI:
+
+An Amazon Machine Image (AMI) is an image that AWS supports and keeps up to date. It contains the information needed to start an instance. When you launch an instance, you must choose an AMI. When you need multiple instances with the same configuration, you can launch them from a single AMI.
+
+### Task1:
+
+- Create a launch template with the Amazon Linux 2 AMI and t2.micro instance type, with Jenkins and Docker set up (you can use the Day 39 user data script for installing the required tools).
+
+- Create 3 instances using the launch template; there must be an option that sets the number of instances to be launched. Can you find it? :)
+
+- You can go one step further and create an Auto Scaling group. Sounds tough?
+
+Check [this](https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-launch-template.html#create-launch-template-for-auto-scaling) out
+
+Post your progress on LinkedIn.
+
+Happy Learning :)
+
+[← Previous Day](../day39/README.md) | [Next Day →](../day41/README.md)
diff --git a/2024/day41/README.md b/2024/day41/README.md
new file mode 100644
index 0000000000..0a1488f068
--- /dev/null
+++ b/2024/day41/README.md
@@ -0,0 +1,53 @@
+# Day 41: Setting up an Application Load Balancer with AWS EC2 🚀 ☁
+
+![LB2](https://user-images.githubusercontent.com/115981550/218145297-d55fe812-32b7-4242-a4f8-eb66312caa2c.png)
+
+### Hi, I hope you had a great day yesterday learning about the launch template and instances in EC2. Today, we are going to dive into one of the most important concepts in EC2: Load Balancing.
+
+## What is Load Balancing?
+
+Load balancing is the distribution of workloads across multiple servers to ensure consistent and optimal resource utilization. It is an essential aspect of any large-scale and scalable computing system, as it helps you to improve the reliability and performance of your applications.
+
+## Elastic Load Balancing:
+
+**Elastic Load Balancing (ELB)** is a service provided by Amazon Web Services (AWS) that automatically distributes incoming traffic across multiple EC2 instances. ELB provides three types of load balancers:
+
+Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html)
+
+1. **Application Load Balancer (ALB)** - _operates at layer 7 of the OSI model and is ideal for applications that require advanced routing and microservices._
+
+- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html)
+
+2. **Network Load Balancer (NLB)** - _operates at layer 4 of the OSI model and is ideal for applications that require high throughput and low latency._
+
+- Read more from [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html)
+
+3. **Classic Load Balancer (CLB)** - _operates at both layer 4 and layer 7 of the OSI model and is ideal for applications that require only basic load balancing features._
+
+- Read more [here](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html)
+
+## 🎯 Today's Tasks:
+
+### Task 1:
+
+- Launch 2 EC2 instances with an Ubuntu AMI and use user data to install the Apache web server.
+- Modify the index.html file to include your name, so that when your Apache server is hosted it displays your name. Do the same for the 2nd instance, which should include "TrainWithShubham Community is Super Awesome :)".
+- Copy the public IP address of your EC2 instances.
+- Open a web browser and paste the public IP address into the address bar.
+- You should see the webpage served by your Apache installation.
+
+### Task 2:
+
+- Create an Application Load Balancer (ALB) in EC2 using the AWS Management Console.
+- Add the EC2 instances you launched in Task 1 to the ALB as a target group.
+- Verify that the ALB is working properly by checking the health status of the target instances and testing the load balancing capabilities.
+
+![LoadBalancer](https://user-images.githubusercontent.com/115981550/218143557-26ec33ce-99a7-4db6-a46f-1cf48ed77ae0.png)
+
+Need help with the task? Check out this [Blog for assistance](https://rushikesh-mashidkar.hashnode.dev/create-an-application-load-balancer-elastic-load-balancing-using-aws-ec2-instance).
+
+Don't forget to share your progress on LinkedIn and have a great day🙌💥
+
+Happy Learning! 😃
+
+[← Previous Day](../day40/README.md) | [Next Day →](../day42/README.md)
diff --git a/2024/day42/README.md b/2024/day42/README.md
new file mode 100644
index 0000000000..5f8a37ff09
--- /dev/null
+++ b/2024/day42/README.md
@@ -0,0 +1,28 @@
+# Day 42: IAM Programmatic access and AWS CLI 🚀 ☁
+
+Today is more of a reading exercise, plus getting some programmatic access to your AWS account.
+
+## IAM Programmatic access
+
+In order to access your AWS account from a terminal or system, you can use AWS access keys and AWS secret access keys.
+Watch [this video](https://youtu.be/XYKqL5GFI-I) for more details.
+
+## AWS CLI
+
+The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+
+The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS IAM Identity Center (successor to AWS SSO), and various interactive features.
+
+## Task-01
+
+- Create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the AWS Console.
+
+## Task-02
+
+- Set up and install the AWS CLI and configure your account credentials.
+
+Let me know if you have any issues while doing the task.
+
+Happy Learning :)
+
+[← Previous Day](../day41/README.md) | [Next Day →](../day43/README.md)
diff --git a/2024/day43/README.md b/2024/day43/README.md
new file mode 100644
index 0000000000..b838d01544
--- /dev/null
+++ b/2024/day43/README.md
@@ -0,0 +1,32 @@
+# Day 43: S3 Programmatic access with AWS-CLI 💻 📁
+
+Hi, I hope you had a great day yesterday. Today, as part of the #90DaysofDevOps Challenge, we will be exploring one of the most commonly used services in AWS, i.e., S3.
+
+![s3](https://user-images.githubusercontent.com/115981550/218308379-a2e841cf-6b77-4d02-bfbe-20d1bae09b20.png)
+
+# S3
+
+Amazon Simple Storage Service (Amazon S3) is an object storage service that provides a secure and scalable way to store and access data on the cloud. It is designed for storing any kind of data, such as text files, images, videos, backups, and more.
+Read more [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
+
+## Task-01
+
+- Launch an EC2 instance using the AWS Management Console and connect to it using Secure Shell (SSH).
+- Create an S3 bucket and upload a file to it using the AWS Management Console.
+- Access the file from the EC2 instance using the AWS Command Line Interface (AWS CLI).
+
+Read more about S3 using the aws-cli [here](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html)
+
+## Task-02
+
+- Create a snapshot of the EC2 instance and use it to launch a new EC2 instance.
+- Download a file from the S3 bucket using the AWS CLI.
+- Verify that the contents of the file are the same on both EC2 instances.
+
+We've added some useful commands to complete the task. [Click here for commands](https://github.com/LondheShubham153/90DaysOfDevOps/blob/833a67ac4ec17b992934cd6878875dccc4274f56/2023/day43/aws-cli.md)
+
+Let me know if you have any questions or face any issues while doing the tasks.🚀
+
+Happy Learning :)
+
+[← Previous Day](../day42/README.md) | [Next Day →](../day44/README.md)
diff --git a/2024/day43/aws-cli.md b/2024/day43/aws-cli.md
new file mode 100644
index 0000000000..8c0f23fe2f
--- /dev/null
+++ b/2024/day43/aws-cli.md
@@ -0,0 +1,21 @@
+Here are some commonly used AWS CLI commands for Amazon S3:
+
+`aws s3 ls` - This command lists all of the S3 buckets in your AWS account.
+
+`aws s3 mb s3://bucket-name` - This command creates a new S3 bucket with the specified name.
+
+`aws s3 rb s3://bucket-name` - This command deletes the specified S3 bucket.
+
+`aws s3 cp file.txt s3://bucket-name` - This command uploads a file to an S3 bucket.
+
+`aws s3 cp s3://bucket-name/file.txt .` - This command downloads a file from an S3 bucket to your local file system.
+
+`aws s3 sync local-folder s3://bucket-name` - This command syncs the contents of a local folder with an S3 bucket.
+
+`aws s3 ls s3://bucket-name` - This command lists the objects in an S3 bucket.
+
+`aws s3 rm s3://bucket-name/file.txt` - This command deletes an object from an S3 bucket.
+
+`aws s3 presign s3://bucket-name/file.txt` - This command generates a pre-signed URL for an S3 object, which can be used to grant temporary access to the object.
+
+`aws s3api list-buckets` - This command retrieves a list of all S3 buckets in your AWS account, using the S3 API.
diff --git a/2024/day44/README.md b/2024/day44/README.md
new file mode 100644
index 0000000000..c836c86b29
--- /dev/null
+++ b/2024/day44/README.md
@@ -0,0 +1,23 @@
+# Day 44: Relational Database Service in AWS
+
+Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
+
+## Task-01
+
+- Create a free-tier RDS instance of MySQL
+- Create an EC2 instance
+- Create an IAM role with RDS access
+- Assign the role to EC2 so that your EC2 instance can connect with RDS
+- Once the RDS instance is up and running, get the credentials and connect to it from your EC2 instance using a MySQL client.
+
+Hint:
+
+You should install the MySQL client on the EC2 instance, and connect to the RDS host and port with this client.
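+
+A minimal sketch of that connection from the EC2 instance (the endpoint and user are placeholders; take the real values from your RDS console):
+
+```
+sudo apt update && sudo apt install -y mysql-client
+mysql -h <your-rds-endpoint>.rds.amazonaws.com -P 3306 -u admin -p
+```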
+
+Post the screenshots once your EC2 instance can connect to the MySQL server; that will be a small win for you.
+
+Watch [this video](https://youtu.be/MrA6Rk1Y82E) for reference.
+
+Happy Learning
+
+[← Previous Day](../day43/README.md) | [Next Day →](../day45/README.md)
diff --git a/2024/day45/README.md b/2024/day45/README.md
new file mode 100644
index 0000000000..c2c11a93b2
--- /dev/null
+++ b/2024/day45/README.md
@@ -0,0 +1,18 @@
+# Day 45: Deploy a WordPress website on AWS
+
+Over 30% of all websites on the internet use WordPress as their content management system (CMS). It is most often used to run blogs, but it can also be used to run e-commerce sites, message boards, and many other popular things. This guide will show you how to set up a WordPress blog site.
+
+## Task-01
+
+- As WordPress requires a MySQL database to store its data, create an RDS instance as you did in Day 44
+
+To configure this WordPress site, you will create the following resources in AWS:
+
+- An Amazon EC2 instance to install and host the WordPress application.
+- An Amazon RDS for MySQL database to store your WordPress data.
+- Set up the server and post about your new WordPress app.
+
+Read [this](https://aws.amazon.com/getting-started/hands-on/deploy-wordpress-with-amazon-rds/) for a detailed explanation.
+Happy Learning :)
+
+[← Previous Day](../day44/README.md) | [Next Day →](../day46/README.md)
diff --git a/2024/day46/README.md b/2024/day46/README.md
new file mode 100644
index 0000000000..a44ae2f101
--- /dev/null
+++ b/2024/day46/README.md
@@ -0,0 +1,35 @@
+# Day-46: Set up CloudWatch alarms and SNS topic in AWS
+
+Hey learners, you have been using AWS services for at least the last 45 days. Have you ever wondered what happens if a service keeps charging you continuously and you don't notice until you have lost all your pocket money?
+
+Hahahaha😁 Well! We, as a responsible community, always try to keep things within the free tier, but it's good to know about, and set up, something that will inform you whenever your bill touches a threshold.
+
+## What is Amazon CloudWatch?
+
+Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.
+
+Read more about CloudWatch in the official documentation [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)
+
+## What is Amazon SNS?
+
+Amazon Simple Notification Service is a notification service provided as part of Amazon Web Services since 2010. It provides a low-cost infrastructure for mass delivery of messages, predominantly to mobile users.
+
+Read more about it [here](https://docs.aws.amazon.com/sns/latest/dg/welcome.html)
+
+## Task :
+
+- Create a CloudWatch alarm that monitors your billing and sends you an email when it reaches $2.
+
+(You can keep it for your future use)
+
+- Delete the billing alarm that you just created.
+
+(Now you also know how to delete one as well.)
+
+Need help with CloudWatch? Check out this [official documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html) for assistance.
+
+Keep growing your AWS knowledge💥🙌
+
+Happy Learning! :)
+
+[← Previous Day](../day45/README.md) | [Next Day →](../day47/README.md)
diff --git a/2024/day47/README.md b/2024/day47/README.md
new file mode 100644
index 0000000000..7d3dc37e37
--- /dev/null
+++ b/2024/day47/README.md
@@ -0,0 +1,64 @@
+# Day 47: AWS Elastic Beanstalk
+
+Today, we explore a new AWS service: Elastic Beanstalk. We'll also cover deploying a small web application (game) on this platform.
+
+## What is AWS Elastic Beanstalk?
+![image](https://github.com/Simbaa815/90DaysOfDevOps/assets/112085387/75f69087-d769-4586-b4a7-99a87feaec92)
+
+- AWS Elastic Beanstalk is a service used to deploy and scale web applications.
+- It supports multiple programming languages and runtime environments such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.
+
+## Why do we need AWS Elastic Beanstalk?
+- Previously, developers faced challenges in sharing software modules across geographically separated teams.
+- AWS Elastic Beanstalk solves this problem by providing a service to easily share applications across different devices.
+
+## Advantages of AWS Elastic Beanstalk
+- Highly scalable
+- Fast and simple to begin
+- Quick deployment
+- Supports multi-tenant architecture
+- Simplifies operations
+- Cost efficient
+
+## Components of AWS Elastic Beanstalk
+- Application Version: Represents a specific iteration or release of an application's codebase.
+- Environment Tier: Defines the infrastructure resources allocated for an environment (e.g., web server environment, worker environment).
+- Environment: Represents a collection of AWS resources running an application version.
+- Configuration Template: Defines the settings for an environment, including instance types, scaling options, and more.
+
+## Elastic Beanstalk Environment
+There are two types of environments: web server and worker.
+
+- Web server environments are front-end facing, accessed directly by clients using a URL.
+
+- Worker environments support backend applications or micro apps.
+
+## Task-01
+Deploy the [2048-game](https://github.com/Simbaa815/2048-game) using AWS Elastic Beanstalk.
+
+If you ever find yourself facing a challenge, feel free to refer to this helpful [blog](https://devxblog.hashnode.dev/aws-elastic-beanstalk-deploying-the-2048-game) post for guidance and support.
+
+---
+
+# Additional work
+
+## Test your knowledge of AWS 💻 📈
+Today, we will test our AWS knowledge of services, as part of the 90 Days of DevOps Challenge.
+
+## Task-01
+
+- Launch an EC2 instance using the AWS Management Console and connect to it using SSH.
+- Install a web server on the EC2 instance and deploy a simple web application.
+- Monitor the EC2 instance using Amazon CloudWatch and troubleshoot any issues that arise.
+
+## Task-02
+- Create an Auto Scaling group using the AWS Management Console and configure it to launch EC2 instances in response to changes in demand.
+- Use Amazon CloudWatch to monitor the performance of the Auto Scaling group and the EC2 instances and troubleshoot any issues that arise.
+- Use the AWS CLI to view the state of the Auto Scaling group and the EC2 instances and verify that the correct number of instances are running, as sketched below.
+
+We hope that these tasks will give you hands-on experience with AWS services and help you understand how these services work together. If you have any questions or face any issues while doing the tasks, please let us know.
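+
+For the AWS CLI check in Task-02, a sketch of the commands you might use (assuming your Auto Scaling group is named `my-asg`):
+
+```
+# desired/min/max capacity and the instances attached to the group
+aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg
+
+# lifecycle state of each instance managed by Auto Scaling
+aws autoscaling describe-auto-scaling-instances
+```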
+
+Happy Learning :)
+
+[← Previous Day](../day46/README.md) | [Next Day →](../day48/README.md)
diff --git a/2024/day48/README.md b/2024/day48/README.md
new file mode 100644
index 0000000000..01836eac4e
--- /dev/null
+++ b/2024/day48/README.md
@@ -0,0 +1,40 @@
+# Day-48 - ECS
+
+Today will be a great learning day for sure. I know many of you may not know about the term "ECS". As you know, the 90 Days Of DevOps Challenge is mostly about learning something new, so let's learn then ;)
+
+## What is ECS?
+
+- ECS (Elastic Container Service) is a fully-managed container orchestration service provided by Amazon Web Services (AWS). It allows you to run and manage Docker containers on a cluster of virtual machines (EC2 instances) without having to manage the underlying infrastructure.
+
+With ECS, you can easily deploy, manage, and scale your containerized applications using the AWS Management Console, the AWS CLI, or the API. ECS supports both the "Fargate" and "EC2" launch types, which means you can run your containers on AWS-managed infrastructure or your own EC2 instances.
+
+ECS also integrates with other AWS services, such as Elastic Load Balancing, Auto Scaling, and Amazon VPC, allowing you to build scalable and highly available applications. Additionally, ECS has support for Docker Compose workflows, making it easy to adopt existing container workflows.
+
+Overall, ECS is a powerful and flexible container orchestration service that can help simplify the deployment and management of containerized applications in AWS.
+
+## Difference between EKS and ECS
+
+- EKS (Elastic Kubernetes Service) and ECS (Elastic Container Service) are both container orchestration platforms provided by Amazon Web Services (AWS). While both platforms allow you to run containerized applications in the AWS cloud, there are some differences between the two.
+
+**Architecture**:
+ECS is based on a centralized architecture, where there is a control plane that manages the scheduling of containers on EC2 instances. On the other hand, EKS is based on a distributed architecture, where the Kubernetes control plane is distributed across multiple EC2 instances.
+
+**Kubernetes Support**:
+EKS is a fully managed Kubernetes service, meaning that it supports Kubernetes natively and allows you to run your Kubernetes workloads on AWS without having to manage the Kubernetes control plane. ECS, on the other hand, has its own orchestration engine and does not support Kubernetes natively.
+
+**Scaling**:
+EKS is designed to automatically scale your Kubernetes cluster based on demand, whereas ECS requires you to configure scaling policies for your tasks and services.
+
+**Flexibility**:
+EKS provides more flexibility than ECS in terms of container orchestration, as it allows you to customize and configure Kubernetes to meet your specific requirements. ECS is more restrictive in terms of the options available for container orchestration.
+
+**Community**:
+Kubernetes has a large and active open-source community, which means that EKS benefits from a wide range of community-driven development and support. ECS, on the other hand, has a smaller community and is largely driven by AWS itself.
+
+In summary, EKS is a good choice if you want to use Kubernetes to manage your containerized workloads on AWS, while ECS is a good choice if you want a simpler, more managed platform for running your containerized applications.
+
+# Task :
+
+Set up ECS (Elastic Container Service) by setting up Nginx on ECS.
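+
+If you take the Fargate route, a rough AWS CLI sketch of the moving parts (the cluster and service names are examples, `nginx-task.json` is a task definition file you would write with a single nginx container, and the subnet and security-group IDs are placeholders):
+
+```
+aws ecs create-cluster --cluster-name nginx-cluster
+aws ecs register-task-definition --cli-input-json file://nginx-task.json
+aws ecs create-service --cluster nginx-cluster --service-name nginx-svc \
+  --task-definition nginx-task --desired-count 1 --launch-type FARGATE \
+  --network-configuration "awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}"
+```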
+
+[← Previous Day](../day47/README.md) | [Next Day →](../day49/README.md)
diff --git a/2024/day49/README.md b/2024/day49/README.md
new file mode 100644
index 0000000000..ecc603177a
--- /dev/null
+++ b/2024/day49/README.md
@@ -0,0 +1,25 @@
+# Day 49 - INTERVIEW QUESTIONS ON AWS
+
+Hey people, we have listened to your suggestions and we are looking forward to getting more!
+As you have asked us to put more interview-based questions as part of the daily tasks, here it is :)
+
+## INTERVIEW QUESTIONS:
+
+- Name 5 AWS services you have used and what are their use cases?
+- What are the tools used to send logs to the cloud environment?
+- What are IAM Roles? How do you create/manage them?
+- How to upgrade or downgrade a system with zero downtime?
+- What is infrastructure as code and how do you use it?
+- What is a load balancer? Give scenarios of each kind of balancer based on your experience.
+- What is CloudFormation and what is it used for?
+- Difference between AWS CloudFormation and AWS Elastic Beanstalk?
+- What are the kinds of security attacks that can occur on the cloud? And how can we minimize them?
+- Can we recover the EC2 instance when we have lost the key?
+- What is a gateway?
+- What is the difference between Amazon RDS, DynamoDB, and Redshift?
+- Do you prefer to host a website on S3? What's the reason if your answer is either yes or no?
+
+Share your answers on LinkedIn in the best possible way, imagining you are at an interview table.
+Happy Learning !! :)
+
+[← Previous Day](../day48/README.md) | [Next Day →](../day50/README.md)
diff --git a/2024/day50/README.md b/2024/day50/README.md
new file mode 100644
index 0000000000..0340a36b09
--- /dev/null
+++ b/2024/day50/README.md
@@ -0,0 +1,30 @@
+# Day 50: Your CI/CD pipeline on AWS - Part-1 🚀 ☁
+
+What if I told you that in the next 4 days, you'll be making a CI/CD pipeline on AWS with these tools?
+
+- CodeCommit
+- CodeBuild
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeCommit?
+
+- CodeCommit is a managed source control service by AWS that allows users to store, manage, and version their source code and artifacts securely and at scale. It supports Git, integrates with other AWS services, enables collaboration through branch and merge workflows, and provides audit logs and compliance reports to meet regulatory requirements and track changes. Overall, CodeCommit provides developers with a reliable and efficient way to manage their codebase and set up a CI/CD pipeline for their software development projects.
+
+# Task-01 :
+
+- Set up a code repository on CodeCommit and clone it on your local machine.
+- You need to set up Git credentials in your AWS IAM.
+- Use those credentials on your local machine and then clone the repository from CodeCommit.
+
+# Task-02 :
+
+- Add a new file locally and commit it to your local branch.
+- Push the local changes to the CodeCommit repository.
+
+For more details watch [this](https://youtu.be/p5i3cMCQ760) video.
+
+Happy Learning :)
+
+[← Previous Day](../day49/README.md) | [Next Day →](../day51/README.md)
diff --git a/2024/day51/README.md b/2024/day51/README.md
new file mode 100644
index 0000000000..01f0b70262
--- /dev/null
+++ b/2024/day51/README.md
@@ -0,0 +1,30 @@
+# Day 51: Your CI/CD pipeline on AWS - Part 2 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit.
+
+Over the next few days you'll learn these tools/services:
+
+- CodeBuild
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeBuild?
+
+- AWS CodeBuild is a fully managed build service in the cloud.
+CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers.
+
+# Task-01 :
+
+- Read about the buildspec file for CodeBuild.
+- Create a simple index.html file in the CodeCommit repository.
+- Build and serve the index.html using an Nginx server.
+
+# Task-02 :
+
+- Add a buildspec.yaml file to the CodeCommit repository and complete the build process.
+
+For more details watch [this](https://youtu.be/p5i3cMCQ760) video.
+
+Happy Learning :)
+
+[← Previous Day](../day50/README.md) | [Next Day →](../day52/README.md)
diff --git a/2024/day52/README.md b/2024/day52/README.md
new file mode 100644
index 0000000000..52dffd62ae
--- /dev/null
+++ b/2024/day52/README.md
@@ -0,0 +1,31 @@
+# Day 52: Your CI/CD pipeline on AWS - Part 3 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit & CodeBuild.
+
+Over the next few days you'll learn these tools/services:
+
+- CodeDeploy
+- CodePipeline
+- S3
+
+## What is CodeDeploy?
+
+- AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.
+
+CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy.
+
+# Task-01 :
+
+- Read about the appspec.yaml file for CodeDeploy.
+- Deploy an index.html file on an EC2 machine using Nginx.
+- You have to set up a CodeDeploy agent in order to deploy code on EC2.
+
+# Task-02 :
+
+- Add an appspec.yaml file to the CodeCommit repository and complete the deployment process.
+
+For more details watch [this](https://youtu.be/IUF-pfbYGvg) video.
+
+Happy Learning :)
+
+[← Previous Day](../day51/README.md) | [Next Day →](../day53/README.md)
diff --git a/2024/day53/README.md b/2024/day53/README.md
new file mode 100644
index 0000000000..2139f0cb5d
--- /dev/null
+++ b/2024/day53/README.md
@@ -0,0 +1,21 @@
+# Day 53: Your CI/CD pipeline on AWS - Part 4 🚀 ☁
+
+On your journey of making a CI/CD pipeline on AWS with these tools, you completed AWS CodeCommit, CodeBuild & CodeDeploy.
+
+Finish off in style with AWS CodePipeline.
+
+## What is CodePipeline?
+
+- CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define.
+  Think of it as a CI/CD pipeline service.
+
+# Task-01 :
+
+- Create a deployment group of EC2 instances.
+- Create a CodePipeline that gets the code from CodeCommit, builds it using CodeBuild, and deploys it to the deployment group.
+
+For more details watch [this](https://youtu.be/IUF-pfbYGvg) video.
+
+Happy Learning :)
+
+[← Previous Day](../day52/README.md) | [Next Day →](../day54/README.md)
diff --git a/2024/day54/README.md b/2024/day54/README.md
new file mode 100644
index 0000000000..f134a32bf1
--- /dev/null
+++ b/2024/day54/README.md
@@ -0,0 +1,19 @@
+# Day 54: Understanding Infrastructure as Code and Configuration Management
+
+## What's the difference, brother?
+
+When it comes to the cloud, Infrastructure as Code (IaC) and Configuration Management (CM) are inseparable. With IaC, a descriptive model is used for infrastructure management. To name a few examples of infrastructure: networks, virtual computers, and load balancers.
+Applying the same IaC model always produces the same environment.
+
+Throughout the lifecycle of a product, Configuration Management (CM) ensures that the performance, functional and physical inputs, requirements, design, and operations of that product remain consistent.
+
+# Task-01
+
+- Read more about IaC and configuration management tools
+- Give the differences between the two with suitable examples
+- What are the most common IaC and configuration management tools?
+
+Write a blog on this topic in the most creative way and post it on LinkedIn :)
+
+happy learning...
+
+[← Previous Day](../day53/README.md) | [Next Day →](../day55/README.md)
diff --git a/2024/day55/README.md b/2024/day55/README.md
new file mode 100644
index 0000000000..5df87b107a
--- /dev/null
+++ b/2024/day55/README.md
@@ -0,0 +1,28 @@
+# Day 55: Understanding Configuration Management with Ansible
+
+## What's this Ansible?
+
+Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning.
+
+# Task-01
+
+- Installation of Ansible on AWS EC2 (Master Node)
+  `sudo apt-add-repository ppa:ansible/ansible`
+  `sudo apt update`
+  `sudo apt install ansible`
+
+# Task-02
+
+- Read more about the hosts file
+  `sudo nano /etc/ansible/hosts`
+  `ansible-inventory --list -y`
+
+# Task-03
+
+- Set up 2 more EC2 instances with the same private key as the previous instance (nodes)
+- Copy the private key to the master server where Ansible is set up
+- Try a ping command using Ansible to the nodes.
+
+Write a blog on this topic with screenshots in the most creative way and post it on LinkedIn :)
+
+happy learning...
+
+[← Previous Day](../day54/README.md) | [Next Day →](../day56/README.md)
diff --git a/2024/day56/README.md b/2024/day56/README.md
new file mode 100644
index 0000000000..853372bae2
--- /dev/null
+++ b/2024/day56/README.md
@@ -0,0 +1,18 @@
+# Day 56: Understanding Ad-hoc commands in Ansible
+
+Ansible ad hoc commands are one-liners designed to achieve a very specific task. They are like quick snippets and your compact Swiss Army knife when you want to do a quick task across multiple machines.
+
+To put it simply, Ansible ad hoc commands are one-liner Linux shell commands, while playbooks are like a shell script: a collection of many commands with logic.
+
+Ansible ad hoc commands come in handy when you want to perform a quick task.
+
+# Task-01
+
+- Write an Ansible ad hoc ping command to ping 3 servers from the inventory file
+- Write an Ansible ad hoc command to check uptime
+
+- You can refer to [this](https://www.middlewareinventory.com/blog/ansible-ad-hoc-commands/) blog to understand the different examples of ad-hoc commands; try them out and post the screenshots in a blog with an explanation.
+
+happy Learning :)
+
+[← Previous Day](../day55/README.md) | [Next Day →](../day57/README.md)
diff --git a/2024/day57/README.md b/2024/day57/README.md
new file mode 100644
index 0000000000..4866eecf58
--- /dev/null
+++ b/2024/day57/README.md
@@ -0,0 +1,13 @@
+# Day 57: Ansible Hands-on with video
+
+Ansible is fun; you saw in the last few days how easy it is.
+
+Let's make it more fun now, with a video explanation of Ansible.
+
+# Task-01
+
+- Write a blog explanation for the [Ansible video](https://youtu.be/SGB7EdiP39E)
+
+happy Learning :)
+
+[← Previous Day](../day56/README.md) | [Next Day →](../day58/README.md)
diff --git a/2024/day58/README.md b/2024/day58/README.md
new file mode 100644
index 0000000000..f8facae4b7
--- /dev/null
+++ b/2024/day58/README.md
@@ -0,0 +1,23 @@
+# Day 58: Ansible Playbooks
+
+Ansible playbooks run multiple tasks, assign roles, and define configurations, deployment steps, and variables. If you're using multiple servers, Ansible playbooks organize the steps between the assembled machines or servers and get them organized and running in the way the users need them to. Consider playbooks the equivalent of instruction manuals.
+
+# Task-01
+
+- Write an Ansible playbook to create a file on a different server
+
+- Write an Ansible playbook to create a new user.
+
+- Write an Ansible playbook to install Docker on a group of servers
+
+Watch [this](https://youtu.be/089mRKoJTzo) video to learn about Ansible playbooks
+
+# Task-02
+
+- Write a blog about writing Ansible playbooks with the best practices.
+
+Let me or anyone in the community know if you face any challenges
+
+happy Learning :)
+
+[← Previous Day](../day57/README.md) | [Next Day →](../day59/README.md)
diff --git a/2024/day59/README.md b/2024/day59/README.md
new file mode 100644
index 0000000000..f8bf4d0908
--- /dev/null
+++ b/2024/day59/README.md
@@ -0,0 +1,26 @@
+# Day 59: Ansible Project 🔥
+
+Ansible playbooks are amazing, as you learned yesterday.
+What if you deploy a simple web app using Ansible? Sounds like a good project, right?
+
+# Task-01
+
+- Create 3 EC2 instances. Make sure all three are created with the same key pair.
+
+- Install Ansible on the host server
+
+- Copy the private key from your local machine to the host server (Ansible_host) at /home/ubuntu/.ssh
+
+- Access the inventory file using `sudo vim /etc/ansible/hosts`
+
+- Create a playbook to install Nginx
+
+- Deploy a sample webpage using the Ansible playbook
+
+Read [this](https://medium.com/@sandeep010498/learn-ansible-with-real-time-project-cf6a0a512d45) blog by [Sandeep Singh](https://medium.com/@sandeep010498) to clear all your doubts
+
+Let me or anyone in the community know if you face any challenges
+
+happy Learning :)
+
+[← Previous Day](../day58/README.md) | [Next Day →](../day60/README.md)
diff --git a/2024/day60/README.md b/2024/day60/README.md
new file mode 100644
index 0000000000..ecae296195
--- /dev/null
+++ b/2024/day60/README.md
@@ -0,0 +1,31 @@
+# Day 60 - Terraform🔥
+
+Hello learners, you have been doing every task by creating an EC2 instance (mostly). Today let's automate this process. How do we do it? Well, Terraform is the solution.
+
+## What is Terraform?
+
+Terraform is an infrastructure as code (IaC) tool that allows you to create, manage, and update infrastructure
+resources such as virtual machines, networks, and storage in a repeatable, scalable, and automated way.
+
+## Task 1:
+
+Install Terraform on your system.
+Refer to this [link](https://phoenixnap.com/kb/how-to-install-terraform) for installation.
+
+## Task 2: Answer the questions below (a minimal sketch follows this list)
+
+- Why do we use Terraform?
+- What is Infrastructure as Code (IaC)?
+- What is a Resource?
+- What is a Provider?
+- What is a state file in Terraform? What's its importance?
+- What are the Desired and Current States?
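+
+To ground these questions, a minimal sketch of a configuration (it uses the `local_file` resource, so nothing is created in the cloud; the state file that `terraform apply` writes, `terraform.tfstate`, is exactly the state the questions ask about):
+
+```
+terraform {
+  required_providers {
+    local = {
+      source = "hashicorp/local"
+    }
+  }
+}
+
+# a resource: the desired state is "this file exists with this content"
+resource "local_file" "hello" {
+  filename = "hello.txt"
+  content  = "Hello from Terraform"
+}
+```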
+
+You can prepare for tomorrow's task from [here](https://www.youtube.com/live/965CaSveIEI?feature=share)🚀🚀
+
+We hope these tasks will help you understand how to write a basic Terraform configuration file and the basic Terraform commands.
+
+Don’t forget to post it on LinkedIn.
+Happy Learning:)
+
+[← Previous Day](../day59/README.md) | [Next Day →](../day61/README.md)
diff --git a/2024/day61/README.md b/2024/day61/README.md
new file mode 100644
index 0000000000..9d518b70db
--- /dev/null
+++ b/2024/day61/README.md
@@ -0,0 +1,37 @@
+# Day 61 - Terraform🔥
+
+Hope you've already got the gist of what working with Terraform is like. Let's begin with day 2 of Terraform!
+
+## Task 1:
+
+Find the purpose of the basic Terraform commands you'll use often:
+
+1. `terraform init`
+
+2. `terraform init -upgrade`
+
+3. `terraform plan`
+
+4. `terraform apply`
+
+5. `terraform validate`
+
+6. `terraform fmt`
+
+7. `terraform destroy`
+
+Along with these tasks, it is also important to know about Terraform in general.
+Who are Terraform's main competitors?
+The main competitors are:
+
+- Ansible
+- Packer
+- Cloud Foundry
+- Kubernetes
+
+Want a free video course for Terraform? Click [here](https://bit.ly/tws-terraform)
+
+Don't forget to share your learnings on LinkedIn! Happy Learning :)
+
+[← Previous Day](../day60/README.md) | [Next Day →](../day62/README.md)
diff --git a/2024/day62/README.md b/2024/day62/README.md
new file mode 100644
index 0000000000..76f61b708a
--- /dev/null
+++ b/2024/day62/README.md
@@ -0,0 +1,79 @@
+# Day 62 - Terraform and Docker 🔥
+
+Terraform needs to be told which provider to use in the automation, hence we need to give the provider name with its source and version.
+For Docker, we can use this block of code in your main.tf:
+
+## Blocks and Resources in Terraform
+
+## Terraform block
+
+## Task-01
+
+- Create a Terraform script with Blocks and Resources
+
+```
+terraform {
+  required_providers {
+    docker = {
+      source  = "kreuzwerker/docker"
+      version = "~> 2.21.0"
+    }
+  }
+}
+```
+
+### Note: kreuzwerker/docker is shorthand for registry.terraform.io/kreuzwerker/docker.
+
+## Provider Block
+
+The provider block configures the specified provider, in this case, docker. A provider is a plugin that Terraform uses to create and manage your resources.
+
+```
+provider "docker" {}
+```
+
+## Resource
+
+Use resource blocks to define components of your infrastructure. A resource might be a physical or virtual component such as a Docker container, or it can be a logical resource such as a Heroku application.
+
+Resource blocks have two strings before the block: the resource type and the resource name. In this example, the first resource type is docker_image and the name is nginx.
+
+## Task-02
+
+- Create a resource block for an nginx Docker image
+
+Hint:
+
+```
+resource "docker_image" "nginx" {
+  name         = "nginx:latest"
+  keep_locally = false
+}
+```
+
+- Create a resource block for running a Docker container for nginx
+
+```
+resource "docker_container" "nginx" {
+  image = docker_image.nginx.latest
+  name  = "tutorial"
+  ports {
+    internal = 80
+    external = 80
+  }
+}
+```
+
+Note: In case Docker is not installed:
+
+`sudo apt-get install docker.io`
+`sudo docker ps`
+`sudo chown $USER /var/run/docker.sock`
+
+# Video Course
+
+I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform, available [here](https://bit.ly/tws-terraform)
+
+Happy Learning :)
+
+[← Previous Day](../day61/README.md) | [Next Day →](../day63/README.md)
diff --git a/2024/day63/README.md b/2024/day63/README.md
new file mode 100644
index 0000000000..e4338fb906
--- /dev/null
+++ b/2024/day63/README.md
@@ -0,0 +1,62 @@
+# Day 63 - Terraform Variables
+
+Variables in Terraform are quite important, as you need to hold values such as instance names, configs, etc.
+
+We can create a variables.tf file which will hold all the variables.
+
+```
+variable "filename" {
+  default = "/home/ubuntu/terraform-tutorials/terraform-variables/demo-var.txt"
+}
+```
+
+```
+variable "content" {
+  default = "This is coming from a variable which was updated"
+}
+```
+
+These variables can be accessed via the `var` object in main.tf.
+
+## Task-01
+
+- Create a local file using Terraform
+  Hint:
+
+```
+resource "local_file" "devops" {
+  filename = var.filename
+  content  = var.content
+}
+```
+
+## Data Types in Terraform
+
+## Map
+
+```
+variable "file_contents" {
+  type = map
+  default = {
+    "statement1" = "this is cool"
+    "statement2" = "this is cooler"
+  }
+}
+```
+
+## Task-02
+
+- Use Terraform to demonstrate usage of the List, Set and Object data types
+- Put proper screenshots of the outputs
+
+Use `terraform refresh` to refresh the state against your configuration file; it reloads the variables.
+
+# Video Course
+
+I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform, available [here](https://bit.ly/tws-terraform)
+
+Happy Learning :)
+
+[← Previous Day](../day62/README.md) | [Next Day →](../day64/README.md)
diff --git a/2024/day64/README.md b/2024/day64/README.md
new file mode 100644
index 0000000000..d30e1048d9
--- /dev/null
+++ b/2024/day64/README.md
@@ -0,0 +1,67 @@
+# Day 64 - Terraform with AWS
+
+Provisioning on AWS is quite easy and straightforward with Terraform.
+
+## Prerequisites
+
+### AWS CLI installed
+
+The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
+
+### AWS IAM user
+
+AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
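+
+Once the IAM user exists, generate an access key pair for it. Besides exporting environment variables (shown below), a common alternative is the AWS CLI's interactive setup, which stores the credentials in `~/.aws/credentials`:
+
+```
+# Prompts for the access key ID, secret access key, default region, and output format
+aws configure
+```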
+
+_In order to connect your AWS account and Terraform, you need the access key and secret access key exported to your machine._
+
+```
+export AWS_ACCESS_KEY_ID=
+export AWS_SECRET_ACCESS_KEY=
+```
+
+### Install required providers
+
+```
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 4.16"
+    }
+  }
+  required_version = ">= 1.2.0"
+}
+```
+
+Add the region where you want your instances to be:
+
+```
+provider "aws" {
+  region = "us-east-1"
+}
+```
+
+## Task-01
+
+- Provision an AWS EC2 instance using Terraform
+
+Hint:
+
+```
+resource "aws_instance" "aws_ec2_test" {
+  count         = 4
+  ami           = "ami-08c40ec9ead489470"
+  instance_type = "t2.micro"
+  tags = {
+    Name = "TerraformTestServerInstance"
+  }
+}
+```
+
+# Video Course
+
+I can imagine Terraform can be tricky, so it's best to use a free video course for Terraform, available [here](https://bit.ly/tws-terraform)
+
+Happy Learning :)
+
+[← Previous Day](../day63/README.md) | [Next Day →](../day65/README.md)
diff --git a/2024/day65/README.md b/2024/day65/README.md
new file mode 100644
index 0000000000..904c6c1158
--- /dev/null
+++ b/2024/day65/README.md
@@ -0,0 +1,67 @@
+# Day 65 - Working with Terraform Resources 🚀
+
+Yesterday, we saw how to create a Terraform script with Blocks and Resources. Today, we will dive deeper into Terraform resources.
+
+## Understanding Terraform Resources
+
+A resource in Terraform represents a component of your infrastructure, such as a physical server, a virtual machine, a DNS record, or an S3 bucket. Resources have attributes that define their properties and behaviors, such as the size and location of a virtual machine or the domain name of a DNS record.
+
+When you define a resource in Terraform, you specify the type of resource, a unique name for the resource, and the attributes that define the resource. Terraform uses the resource block to define resources in your Terraform configuration.
+
+## Task 1: Create a security group
+
+To allow traffic to the EC2 instance, you need to create a security group. Follow these steps:
+
+In your main.tf file, add the following code to create a security group:
+
+```
+resource "aws_security_group" "web_server" {
+  name_prefix = "web-server-sg"
+
+  ingress {
+    from_port   = 80
+    to_port     = 80
+    protocol    = "tcp"
+    cidr_blocks = ["0.0.0.0/0"]
+  }
+}
+```
+
+- Run terraform init to initialize the Terraform project.
+
+- Run terraform apply to create the security group.
+
+## Task 2: Create an EC2 instance
+
+- Now you can create an EC2 instance with Terraform. Follow these steps:
+
+- In your main.tf file, add the following code to create an EC2 instance:
+
+```
+resource "aws_instance" "web_server" {
+  ami           = "ami-0557a15b87f6559cf"
+  instance_type = "t2.micro"
+  key_name      = "my-key-pair"
+  security_groups = [
+    aws_security_group.web_server.name
+  ]
+
+  user_data = <<-EOF
+    #!/bin/bash
+    echo "<h1>Welcome to my website!</h1>" > index.html
+    nohup python3 -m http.server 80 &
+    EOF
+}
+```
+
+Note: Replace the ami and key_name values with your own. You can find a list of available AMIs in the AWS documentation.
+
+Run terraform apply to create the EC2 instance.
+
+## Task 3: Access your website
+
+- Now that your EC2 instance is up and running, you can access the website you just hosted on it: find the instance's public IP address in the AWS console (or in the Terraform output) and open `http://<public-ip>` in your browser.
+
+Happy Terraforming!
+
+[← Previous Day](../day64/README.md) | [Next Day →](../day66/README.md)
diff --git a/2024/day66/README.md b/2024/day66/README.md
new file mode 100644
index 0000000000..630837a5ff
--- /dev/null
+++ b/2024/day66/README.md
@@ -0,0 +1,26 @@
+# Day 66 - Terraform Hands-on Project - Build Your Own AWS Infrastructure with Ease using Infrastructure as Code (IaC) Techniques (Interview Questions) ☁️
+
+Welcome back to your Terraform journey.
+
+In the previous tasks, you have learned about the basics of Terraform, its configuration file, and creating an EC2 instance using Terraform. Today, we will explore more about Terraform and create multiple resources.
+
+## Task:
+
+- Create a VPC (Virtual Private Cloud) with CIDR block 10.0.0.0/16
+- Create a public subnet with CIDR block 10.0.1.0/24 in the above VPC.
+- Create a private subnet with CIDR block 10.0.2.0/24 in the above VPC.
+- Create an Internet Gateway (IGW) and attach it to the VPC.
+- Create a route table for the public subnet and associate it with the public subnet. This route table should have a route to the Internet Gateway.
+- Launch an EC2 instance in the public subnet with the following details:
+  - AMI: ami-0557a15b87f6559cf
+  - Instance type: t2.micro
+  - Security group: Allow SSH access from anywhere
+  - User data: Use a shell script to install Apache and host a simple website
+- Create an Elastic IP and associate it with the EC2 instance.
+- Open the website URL in a browser to verify that the website is hosted successfully.
+
+#### This Terraform hands-on task is designed to test your proficiency in using Terraform for Infrastructure as Code (IaC) on AWS. You will be tasked with creating a VPC, subnets, an internet gateway, and launching an EC2 instance with a web server running on it. This task will showcase your skills in automating infrastructure deployment using Terraform. It's a popular interview question for companies looking for candidates with hands-on experience in Terraform. That's it for today.
+
+Happy Terraforming:)
+
+[← Previous Day](../day65/README.md) | [Next Day →](../day67/README.md)
diff --git a/2024/day67/README.md b/2024/day67/README.md
new file mode 100644
index 0000000000..62e6f35476
--- /dev/null
+++ b/2024/day67/README.md
@@ -0,0 +1,22 @@
+# Day 67: AWS S3 Bucket Creation and Management
+
+## AWS S3 Bucket
+
+Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It can be used for a variety of use cases, such as storing and retrieving data, hosting static websites, and more.
+
+In this task, you will learn how to create and manage S3 buckets in AWS.
+
+## Task
+
+- Create an S3 bucket using Terraform.
+- Configure the bucket to allow public read access.
+- Create an S3 bucket policy that allows read-only access to a specific IAM user or role.
+- Enable versioning on the S3 bucket.
+
+## Resources
+
+[Terraform S3 bucket resource](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket)
+
+Good luck and happy learning!
+
+[← Previous Day](../day66/README.md) | [Next Day →](../day68/README.md)
diff --git a/2024/day68/README.md b/2024/day68/README.md
new file mode 100644
index 0000000000..4185d8a5dd
--- /dev/null
+++ b/2024/day68/README.md
@@ -0,0 +1,66 @@
+# Day 68 - Scaling with Terraform 🚀
+
+Yesterday, we learned how to create an AWS S3 bucket with Terraform. Today, we will see how to scale our infrastructure with Terraform.
+
+## Understanding Scaling
+
+Scaling is the process of adding or removing resources to match the changing demands of your application. As your application grows, you will need to add more resources to handle the increased load. And as the load decreases, you can remove the extra resources to save costs.
+
+Terraform makes it easy to scale your infrastructure by providing a declarative way to define your resources. You can define the number of resources you need and Terraform will automatically create or destroy the resources as needed.
+
+## Task 1: Create an Auto Scaling Group
+
+Auto Scaling Groups are used to automatically add or remove EC2 instances based on the current demand. Follow these steps to create an Auto Scaling Group:
+
+- In your main.tf file, add the following code to create an Auto Scaling Group:
+
+```
+resource "aws_launch_configuration" "web_server_as" {
+  image_id        = "ami-005f9685cb30f234b"
+  instance_type   = "t2.micro"
+  security_groups = [aws_security_group.web_server.name]
+
+  user_data = <<-EOF
+    #!/bin/bash
+    echo "<h1>You're doing really Great</h1>" > index.html
+    nohup python3 -m http.server 80 &
+    EOF
+}
+
+resource "aws_autoscaling_group" "web_server_asg" {
+  name                 = "web-server-asg"
+  launch_configuration = aws_launch_configuration.web_server_as.name
+  min_size             = 1
+  max_size             = 3
+  desired_capacity     = 2
+  health_check_type    = "EC2"
+  load_balancers       = [aws_elb.web_server_lb.name]
+  vpc_zone_identifier  = [aws_subnet.public_subnet_1a.id, aws_subnet.public_subnet_1b.id]
+}
+```
+
+- Run terraform apply to create the Auto Scaling Group.
+
+## Task 2: Test Scaling
+
+- Go to the AWS Management Console and select the Auto Scaling Groups service.
+
+- Select the Auto Scaling Group you just created and click on the "Edit" button.
+
+- Increase the "Desired Capacity" to 3 and click on the "Save" button.
+
+- Wait a few minutes for the new instances to be launched.
+
+- Go to the EC2 Instances service and verify that the new instances have been launched.
+
+- Decrease the "Desired Capacity" to 1 and wait a few minutes for the extra instances to be terminated.
+
+- Go to the EC2 Instances service and verify that the extra instances have been terminated.
+
+Congratulations🎊🎉 You have successfully scaled your infrastructure with Terraform.
+
+Happy Learning :)
+
+[← Previous Day](../day67/README.md) | [Next Day →](../day69/README.md)
diff --git a/2024/day69/README.md b/2024/day69/README.md
new file mode 100644
index 0000000000..570803dbdd
--- /dev/null
+++ b/2024/day69/README.md
@@ -0,0 +1,182 @@
+# Day 69 - Meta-Arguments in Terraform
+
+When you define a resource block in Terraform, by default, this specifies one resource that will be created. To manage several of the same resources, you can use either count or for_each, which removes the need to write a separate block of code for each one. Using these options reduces overhead and makes your code neater.
+
+count is what is known as a ‘meta-argument’ defined by the Terraform language. Meta-arguments help achieve certain requirements within the resource block.
+
+## Count
+
+The count meta-argument accepts a whole number and creates that number of instances of the resource specified.
+
+When each instance is created, it has its own distinct infrastructure object associated with it, so each can be managed separately. When the configuration is applied, each object can be created, destroyed, or updated as appropriate.
+
+e.g.
+
+```
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 4.16"
+    }
+  }
+  required_version = ">= 1.2.0"
+}
+
+provider "aws" {
+  region = "us-east-1"
+}
+
+resource "aws_instance" "server" {
+  count = 4
+
+  ami           = "ami-08c40ec9ead489470"
+  instance_type = "t2.micro"
+
+  tags = {
+    Name = "Server ${count.index}"
+  }
+}
+```
+
+## for_each
+
+Like the count argument, the for_each meta-argument creates multiple instances of a module or resource block. However, instead of specifying the number of resources, the for_each meta-argument accepts a map or a set of strings. This is useful when multiple resources are required that have different values. Consider our Active Directory groups example, with each group requiring a different owner.
+ +``` + +terraform { + +required_providers { + +aws = { + +source = "hashicorp/aws" + +version = "~> 4.16" + +} + +} + +required_version = ">= 1.2.0" + +} + + + +provider "aws" { + +region = "us-east-1" + +} + + + +locals { + +ami_ids = toset([ + +"ami-0b0dcb5067f052a63", + +"ami-08c40ec9ead489470", + +]) + +} + + + +resource "aws_instance" "server" { + +for_each = local.ami_ids + + + +ami = each.key + +instance_type = "t2.micro" + +tags = { + +Name = "Server ${each.key}" + +} + +} + + + +Multiple key value iteration + +locals { + +ami_ids = { + +"linux" :"ami-0b0dcb5067f052a63", + +"ubuntu": "ami-08c40ec9ead489470", + +} + +} + + + +resource "aws_instance" "server" { + +for_each = local.ami_ids + + + +ami = each.value + +instance_type = "t2.micro" + + + +tags = { + +Name = "Server ${each.key}" + +} + +} + +``` + +## Task-01 + +- Create the above Infrastructure as code and demonstrate the use of Count and for_each. +- Write about meta-arguments and its use in Terraform. + +Happy learning :) + +[← Previous Day](../day68/README.md) | [Next Day →](../day70/README.md) diff --git a/2024/day70/README.md b/2024/day70/README.md new file mode 100644 index 0000000000..4a42230590 --- /dev/null +++ b/2024/day70/README.md @@ -0,0 +1,80 @@ +# Day 70 - Terraform Modules + +- Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory +- A module can call other modules, which lets you include the child module's resources into the configuration in a concise way. +- Modules can also be called multiple times, either within the same configuration or in separate configurations, allowing resource configurations to be packaged and re-used. + +### Below is the format on how to use modules: + +``` +# Creating a AWS EC2 Instance +resource "aws_instance" "server-instance" { + # Define number of instance + instance_count = var.number_of_instances + + # Instance Configuration + ami = var.ami + instance_type = var.instance_type + subnet_id = var.subnet_id + vpc_security_group_ids = var.security_group + + # Instance Tagsid + tags = { + Name = "${var.instance_name}" + } +} +``` + +``` +# Server Module Variables +variable "number_of_instances" { + description = "Number of Instances to Create" + type = number + default = 1 +} + +variable "instance_name" { + description = "Instance Name" +} + +variable "ami" { + description = "AMI ID" + default = "ami-xxxx" +} + +variable "instance_type" { + description = "Instance Type" +} + +variable "subnet_id" { + description = "Subnet ID" +} + +variable "security_group" { + description = "Security Group" + type = list(any) +} +``` + +``` +# Server Module Output +output "server_id" { + description = "Server ID" + value = aws_instance.server-instance.id +} + +``` + +## Task-01 + +Explain the below in your own words and it shouldnt be copied from Internet 😉 + +- Write about different modules Terraform. +- Difference between Root Module and Child Module. +- Is modules and Namespaces are same? Justify your answer for both Yes/No + +You all are doing great, and you have come so far. Well Done Everyone🎉 + +Thode mehnat aur krni hai bas to lge rho tab tak.....Happy learning :) + +[← Previous Day](../day69/README.md) | [Next Day →](../day71/README.md) diff --git a/2024/day71/README.md b/2024/day71/README.md new file mode 100644 index 0000000000..7bcb7bb3e1 --- /dev/null +++ b/2024/day71/README.md @@ -0,0 +1,41 @@ +# Day 71 - Let's prepare for some interview questions of Terraform 🔥 + +### 1. 
+
+### 2. How do you call a main.tf module?
+
+### 3. What exactly is Sentinel? Can you provide a few examples of where we can use Sentinel policies?
+
+### 4. You have a Terraform configuration file that defines an infrastructure deployment. However, there are multiple instances of the same resource that need to be created. How would you modify the configuration file to achieve this?
+
+### 5. You want to know from which paths Terraform is loading providers referenced in your Terraform configuration (\*.tf files). You need to enable debug messages to find this out. Which of the following would achieve this?
+
+A. Set the environment variable TF_LOG=TRACE
+
+B. Set verbose logging for each provider in your Terraform configuration
+
+C. Set the environment variable TF_VAR_log=TRACE
+
+D. Set the environment variable TF_LOG_PATH
+
+### 6. The command below will destroy everything that has been created in the infrastructure. Tell us how you would save any particular resource while destroying the complete infrastructure.
+
+```
+terraform destroy
+```
+
+### 7. Which module is used to store the .tfstate file in S3?
+
+### 8. How do you manage sensitive data in Terraform, such as API keys or passwords?
+
+### 9. You are working on a Terraform project that needs to provision an S3 bucket, and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
+
+### 10. Who maintains Terraform providers?
+
+### 11. How can we export data from one module to another?
+
+#
+
+Waiting for your responses😉.....Till then, Happy learning :)
+
+[← Previous Day](../day70/README.md) | [Next Day →](../day72/README.md)
diff --git a/2024/day72/README.md b/2024/day72/README.md
new file mode 100644
index 0000000000..a283b10e39
--- /dev/null
+++ b/2024/day72/README.md
@@ -0,0 +1,16 @@
+# Day 72 - Grafana🔥
+
+Hello Learners, you are doing a really good job. You will not be there 24\*7 to monitor your resources. So today, let’s monitor the resources in a smart way with Grafana 🎉
+
+## Task 1:
+
+> What is Grafana? What are the features of Grafana?
+> Why Grafana?
+> What type of monitoring can be done via Grafana?
+> What databases work with Grafana?
+> What are metrics and visualizations in Grafana?
+> What is the difference between Grafana and Prometheus?
+
+---
+
+[← Previous Day](../day71/README.md) | [Next Day →](../day73/README.md)
diff --git a/2024/day73/README.md b/2024/day73/README.md
new file mode 100644
index 0000000000..a1af9d7dc9
--- /dev/null
+++ b/2024/day73/README.md
@@ -0,0 +1,16 @@
+# Day 73 - Grafana 🔥
+Hope you are now clear on the basics of Grafana: why we use it, where we use it, what we can do with it, and so on.
+
+Now, let's do some practical stuff.
+
+---
+
+Task:
+
+> Set up Grafana on an AWS EC2 instance in your environment.
+
+---
+
+Ref: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7042518379030556672-ZZA-?utm_source=share&utm_medium=member_desktop
+
+[← Previous Day](../day72/README.md) | [Next Day →](../day74/README.md)
diff --git a/2024/day74/README.md b/2024/day74/README.md
new file mode 100644
index 0000000000..2877eeebd4
--- /dev/null
+++ b/2024/day74/README.md
@@ -0,0 +1,19 @@
+# Day 74 - Connecting EC2 with Grafana
+
+You did an amazing job last day setting up Grafana locally 🔥.
+
+Now, let's go one step further.
+
+---
+
+Task:
+
+Connect one Linux and one Windows EC2 instance to Grafana and monitor the different components of each server.
+
+---
+
+Don't forget to share this amazing work on LinkedIn and tag us.
+
+## Happy Learning :)
+
+[← Previous Day](../day73/README.md) | [Next Day →](../day75/README.md)
diff --git a/2024/day75/README.md b/2024/day75/README.md
new file mode 100644
index 0000000000..3c75d41caa
--- /dev/null
+++ b/2024/day75/README.md
@@ -0,0 +1,30 @@
+# Day 75 - Sending Docker Logs to Grafana
+
+We have monitored 😉 that you are understanding and doing amazing work with this monitoring tool. 👌
+
+Today, let's make it a little more complex but interesting 😍 and add one more **Project** 🔥 to your resume.
+
+---
+
+## Task:
+
+- Install _Docker_ and start the Docker service on a Linux EC2 instance through [USER DATA](../day39/README.md).
+- Create 2 Docker containers and run any basic application on those containers (a simple todo app will work).
+- Now integrate the Docker containers and share the real-time logs with Grafana (your instance should be connected to Grafana, and the Docker plugin should be enabled on Grafana).
+- Check the logs or Docker container names on the Grafana UI.
+
+---
+
+You can use [this video](https://youtu.be/y3SGHbixmJw) for your reference. But it's always better to find your own way of doing things. 😊
+
+## Bonus:
+
+- As you have done this amazing task, here is one bonus link.❤️
+
+## You can use this [reference video](https://youtu.be/CCi957AnSfc) to integrate _Prometheus_ with _Grafana_ and monitor Docker containers. Seems interesting?
+
+Don't forget to share this amazing work on LinkedIn and tag us.
+
+## Happy Learning :)
+
+[← Previous Day](../day74/README.md) | [Next Day →](../day76/README.md)
diff --git a/2024/day76/README.md b/2024/day76/README.md
new file mode 100644
index 0000000000..7c3fbb0bd1
--- /dev/null
+++ b/2024/day76/README.md
@@ -0,0 +1,33 @@
+# Day 76 Build a Grafana dashboard
+
+A dashboard gives you an at-a-glance view of your data and lets you track metrics through different visualizations.
+
+Dashboards consist of panels, each representing a part of the story you want your dashboard to tell.
+
+Every panel consists of a query and a visualization. The query defines what data you want to display, whereas the visualization defines how the data is displayed.
+
+## Task 01
+
+- In the sidebar, hover your cursor over the Create (plus sign) icon and then click Dashboard.
+
+- Click Add a new panel.
+
+- In the Query editor below the graph, enter the query from earlier and then press Shift + Enter:
+
+`sum(rate(tns_request_duration_seconds_count[5m])) by(route)`
+
+- In the Legend field, enter {{route}} to rename the time series in the legend. The graph legend updates when you click outside the field.
+
+- In the Panel editor on the right, under Settings, change the panel title to “Traffic”.
+
+- Click Apply in the top-right corner to save the panel and go back to the dashboard view.
+
+- Click the Save dashboard (disk) icon at the top of the dashboard to save your dashboard.
+
+- Enter a name in the Dashboard name field and then click Save.
+
+Read [this](https://grafana.com/tutorials/grafana-fundamentals/) in case you have any questions.
+
+Do share some amazing dashboards with the community.
+
+[← Previous Day](../day75/README.md) | [Next Day →](../day77/README.md)
diff --git a/2024/day77/README.md b/2024/day77/README.md
new file mode 100644
index 0000000000..7acf545be9
--- /dev/null
+++ b/2024/day77/README.md
@@ -0,0 +1,14 @@
+# Day 77 Alerting
+
+Grafana Alerting allows you to learn about problems in your systems moments after they occur. Create, manage, and take action on your alerts in a single, consolidated view, and improve your team’s ability to identify and resolve issues quickly.
+
+Grafana Alerting is available for Grafana OSS, Grafana Enterprise, or Grafana Cloud. With Mimir and Loki alert rules you can run alert expressions closer to your data and at massive scale, all managed by the Grafana UI you are already familiar with.
+
+## Task-01
+
+- Set up [Grafana Cloud](https://grafana.com/products/cloud/)
+- Set up a sample alert
+
+Check out [this blog](https://grafana.com/docs/grafana/latest/alerting/) for more details
+
+[← Previous Day](../day76/README.md) | [Next Day →](../day78/README.md)
diff --git a/2024/day78/README.md b/2024/day78/README.md
new file mode 100644
index 0000000000..631894de55
--- /dev/null
+++ b/2024/day78/README.md
@@ -0,0 +1,14 @@
+# Day 78 - Grafana Cloud
+
+---
+
+Task - 01
+
+1. Set up alerts for an EC2 instance.
+2. Set up AWS billing alerts.
+
+---
+
+For Reference: https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7044695663913148416-LfvD?utm_source=share&utm_medium=member_desktop
+
+[← Previous Day](../day77/README.md) | [Next Day →](../day79/README.md)
diff --git a/2024/day79/README.md b/2024/day79/README.md
new file mode 100644
index 0000000000..4eb87c4c49
--- /dev/null
+++ b/2024/day79/README.md
@@ -0,0 +1,20 @@
+# Day 79 - Prometheus 🔥
+
+Now, the next step is to learn about Prometheus.
+It's an open-source system for monitoring services and alerts based on a time series data model. Prometheus collects data and metrics from different services and stores them according to a unique identifier—the metric name—and a time stamp.
+
+Tasks:
+
+---
+
+1. What is the architecture of Prometheus monitoring?
+2. What are the features of Prometheus?
+3. What are the components of Prometheus?
+4. What database is used by Prometheus?
+5. What is the default data retention period in Prometheus?
+
+---
+
+Ref: https://www.devopsschool.com/blog/top-50-prometheus-interview-questions-and-answers/
+
+[← Previous Day](../day78/README.md) | [Next Day →](../day80/README.md)
diff --git a/2024/day80/README.md b/2024/day80/README.md
new file mode 100644
index 0000000000..edbc3ec561
--- /dev/null
+++ b/2024/day80/README.md
@@ -0,0 +1,15 @@
+# Project-1
+
+=========
+
+# Project Description
+
+The project aims to automate the building, testing, and deployment process of a web application using Jenkins and GitHub. The Jenkins pipeline will be triggered automatically by GitHub webhook integration when changes are made to the code repository. The pipeline will include stages such as building, testing, and deploying the application, with notifications and alerts for failed builds or deployments.
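+
+As a rough sketch of what such a pipeline could look like (the shell commands in the comments are placeholders/assumptions; the real steps come from the linked walkthrough):
+
+```groovy
+pipeline {
+    agent any
+    stages {
+        stage('Build') {
+            steps {
+                echo 'Building the application...'
+                // e.g. sh 'docker build -t my-app .'  (assumed build command)
+            }
+        }
+        stage('Test') {
+            steps {
+                echo 'Running tests...'
+            }
+        }
+        stage('Deploy') {
+            steps {
+                echo 'Deploying the application...'
+                // e.g. sh 'docker compose up -d'  (assumed deploy command)
+            }
+        }
+    }
+    post {
+        failure {
+            echo 'Build or deployment failed; send a notification here.'
+        }
+    }
+}
+```
+
+Triggering this automatically on pushes is then a matter of enabling the GitHub hook trigger on the Jenkins job and adding a webhook in the repository settings.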
+ +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7011367641952993281-DHn5?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day79/README.md) | [Next Day →](../day81/README.md) diff --git a/2024/day81/README.md b/2024/day81/README.md new file mode 100644 index 0000000000..a10675fa1c --- /dev/null +++ b/2024/day81/README.md @@ -0,0 +1,15 @@ +# Project-2 + +========= + +# Project Description + +The project is about automating the deployment process of a web application using Jenkins and its declarative syntax. The pipeline includes stages like building, testing, and deploying to a staging environment. It also includes running acceptance tests and deploying to production if all tests pass. + +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7014971330496212992-6Q2m?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day80/README.md) | [Next Day →](../day82/README.md) diff --git a/2024/day82/README.md b/2024/day82/README.md new file mode 100644 index 0000000000..a17acccd92 --- /dev/null +++ b/2024/day82/README.md @@ -0,0 +1,15 @@ +# Project-3 + +========= + +# Project Description + +The project involves hosting a static website using an AWS S3 bucket. Amazon S3 is an object storage service that provides a simple web services interface to store and retrieve any amount of data. The website files will be uploaded to an S3 bucket and configured to function as a static website. The bucket will be configured with the appropriate permissions and a unique domain name, making the website publicly accessible. Overall, the project aims to leverage the benefits of AWS S3 to host and scale a static website in a cost-effective and scalable manner. + +## Task-01 + +Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_aws-project-devopsjobs-activity-7016427742300663808-JAQd?utm_source=share&utm_medium=member_desktop) + +Happy Learning :) + +[← Previous Day](../day81/README.md) | [Next Day →](../day83/README.md) diff --git a/2024/day83/README.md b/2024/day83/README.md new file mode 100644 index 0000000000..dc80aefc33 --- /dev/null +++ b/2024/day83/README.md @@ -0,0 +1,15 @@ +# Project-4 + +========= + +# Project Description + +The project aims to deploy a web application using Docker Swarm, a container orchestration tool that allows for easy management and scaling of containerized applications. The project will utilize Docker Swarm's production-ready features such as load balancing, rolling updates, and service discovery to ensure high availability and reliability of the web application. The project will involve creating a Dockerfile to package the application into a container and then deploying it onto a Swarm cluster. The Swarm cluster will be configured to provide automated failover, load balancing, and horizontal scaling to the application. The goal of the project is to demonstrate the benefits of Docker Swarm for deploying and managing containerized applications in production environments. 
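+
+Before you open the walkthrough, it can help to see the core Swarm workflow at a glance (a rough sketch; the image name and service options below are assumptions for illustration):
+
+```
+# On the manager node: initialize the swarm
+docker swarm init
+
+# On each worker node: join using the token printed by the init command
+# docker swarm join --token <token> <manager-ip>:2377
+
+# Deploy the containerized app as a replicated service with a published port
+docker service create --name webapp --replicas 3 -p 80:80 your-image:latest
+
+# Rolling update to a new image version
+docker service update --image your-image:v2 webapp
+```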
+
+## Task-01
+
+Do the hands-on Project, read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop)
+
+Happy Learning :)
+
+[← Previous Day](../day82/README.md) | [Next Day →](../day84/README.md)
diff --git a/2024/day84/README.md b/2024/day84/README.md
new file mode 100644
index 0000000000..be78b29c8b
--- /dev/null
+++ b/2024/day84/README.md
@@ -0,0 +1,15 @@
+# Project-5
+
+=========
+
+# Project Description
+
+The project involves deploying a Netflix clone web application on a Kubernetes cluster, a popular container orchestration platform that simplifies the deployment and management of containerized applications. The project will require creating Docker images of the web application and its dependencies and deploying them onto the Kubernetes cluster using Kubernetes manifests. The Kubernetes cluster will provide benefits such as high availability, scalability, and automatic failover of the application. Additionally, the project will utilize Kubernetes tools such as Kubernetes Dashboard and kubectl to monitor and manage the deployed application. Overall, the project aims to demonstrate the power and benefits of Kubernetes for deploying and managing containerized applications at scale.
+
+## Task-01
+
+Get a Netflix clone from [GitHub](https://github.com/devandres-tech/Netflix-Clone), read [this](https://www.linkedin.com/posts/chetanrakhra_devops-project-share-activity-7034173810656296960-UjUw?utm_source=share&utm_medium=member_desktop) and follow the Reddit clone steps to deploy a Netflix clone in the same way
+
+Happy Learning :)
+
+[← Previous Day](../day83/README.md) | [Next Day →](../day85/README.md)
diff --git a/2024/day85/README.md b/2024/day85/README.md
new file mode 100644
index 0000000000..0cd64c996b
--- /dev/null
+++ b/2024/day85/README.md
@@ -0,0 +1,26 @@
+# Project-6
+
+=========
+
+# Project Description
+
+The project involves deploying a Node.js app on AWS ECS Fargate and AWS ECR.
+Read more about the tech stack [here](https://faun.pub/what-is-amazon-ecs-and-ecr-how-does-they-work-with-an-example-4acbf9be8415)
+
+## Task-01
+
+- Get a Node.js application from [GitHub](https://github.com/LondheShubham153/node-todo-cicd).
+
+- Build the Dockerfile present in the repo
+
+- Set up the AWS CLI and AWS login in order to tag and push to ECR
+
+- Set up an ECS cluster
+
+- Create a Task Definition for the Node.js project with the ECR image
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day84/README.md) | [Next Day →](../day86/README.md)
diff --git a/2024/day86/README.md b/2024/day86/README.md
new file mode 100644
index 0000000000..c8f809df7d
--- /dev/null
+++ b/2024/day86/README.md
@@ -0,0 +1,24 @@
+# Project-7
+
+=========
+
+# Project Description
+
+The project involves deploying a Portfolio app on AWS S3 using GitHub Actions.
+GitHub Actions allows you to perform CI/CD with your GitHub repository integrated.
+
+## Task-01
+
+- Get a Portfolio application from [GitHub](https://github.com/LondheShubham153/tws-portfolio).
+
+- Build the GitHub Actions Workflow
+
+- Set up the AWS CLI and AWS login in order to sync the website to S3 (to be done as a part of the YAML)
+
+- Follow this [video]() to understand it better
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day85/README.md) | [Next Day →](../day87/README.md)
diff --git a/2024/day87/README.md b/2024/day87/README.md
new file mode 100644
index 0000000000..fa123ea638
--- /dev/null
+++ b/2024/day87/README.md
@@ -0,0 +1,24 @@
+# Project-8
+
+=========
+
+# Project Description
+
+The project involves deploying a React application on AWS Elastic Beanstalk using GitHub Actions.
+GitHub Actions allows you to perform CI/CD with your GitHub repository integrated.
+
+## Task-01
+
+- Get the source code from [GitHub](https://github.com/sitchatt/AWS_Elastic_BeanStalk_On_EC2.git).
+
+- Set up AWS Elastic Beanstalk
+
+- Build the GitHub Actions Workflow
+
+- Follow this [blog](https://www.linkedin.com/posts/sitabja-chatterjee_effortless-deployment-of-react-app-to-aws-activity-7053579065487687680-wZI8?utm_source=share&utm_medium=member_desktop) to understand it better
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day86/README.md) | [Next Day →](../day88/README.md)
diff --git a/2024/day88/README.md b/2024/day88/README.md
new file mode 100644
index 0000000000..3668934da1
--- /dev/null
+++ b/2024/day88/README.md
@@ -0,0 +1,23 @@
+# Project-9
+
+=========
+
+# Project Description
+
+The project involves deploying a Django todo app on AWS EC2 using a kubeadm Kubernetes cluster.
+
+A Kubernetes cluster helps with auto-scaling and auto-healing of your application.
+
+## Task-01
+
+- Get a Django full-stack application from [GitHub](https://github.com/LondheShubham153/django-todo-cicd).
+
+- Set up the Kubernetes cluster using [this script](https://github.com/RishikeshOps/Scripts/blob/main/k8sss.sh)
+
+- Set up a Deployment and Service for Kubernetes.
+
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day87/README.md) | [Next Day →](../day89/README.md)
diff --git a/2024/day89/README.md b/2024/day89/README.md
new file mode 100644
index 0000000000..45ee46628d
--- /dev/null
+++ b/2024/day89/README.md
@@ -0,0 +1,19 @@
+# Project-10
+
+=========
+
+# Project Description
+
+The project involves mounting an AWS S3 bucket on Amazon EC2 Linux using S3FS.
+
+This is an AWS mini project that will teach you AWS, S3, EC2, and S3FS.
+
+## Task-01
+
+- Create an IAM user and set policies for the project resources using this [blog](https://medium.com/@chetxn/project-8-devops-implementation-8300b9ed1f2).
+- Utilize and make the best use of the AWS CLI
+- Run the Project and share it on LinkedIn :)
+
+Happy Learning :)
+
+[← Previous Day](../day88/README.md) | [Next Day →](../day90/README.md)
diff --git a/2024/day90/README.md b/2024/day90/README.md
new file mode 100644
index 0000000000..d28985c060
--- /dev/null
+++ b/2024/day90/README.md
@@ -0,0 +1,29 @@
+# Day 90: The Awesome Finale! 🎉 🎉
+
+🚀 Can you believe it? You've hit the jackpot – Day 90, the grand finale of our DevOps bonanza. Time to give yourself a virtual high-five!
+
+### What's Next?
+
+While this marks the end of the official 90-day journey, remember that your learning journey in DevOps is far from over. There's always something new to explore, tools to master, and techniques to refine. We're continuing to curate more content, challenges, and resources to help you advance your DevOps expertise.
+ +### Share Your Achievement + +Share your journey with the world! Post about your accomplishments on social media using the hashtag #90DaysOfDevOps. Inspire others to join the DevOps movement and take charge of their learning path. + +### Keep the Momentum Going! + +The knowledge and skills you've gained during these 90 days are just the beginning. Keep practicing, experimenting, and collaborating. DevOps is a continuous journey of improvement and innovation. + +### Star the Repository + +If you've found value in this repository and the DevOps content we've curated, consider showing your appreciation by starring this repository. Your support motivates us to keep creating high-quality content and resources for the community. + +**[🌟 Star this repository](https://github.com/LondheShubham153/90DaysOfDevOps)** + +Thank you for being part of the "90 Days of DevOps" adventure. +Keep coding, automating, deploying, and innovating! 🎈 + +With gratitude, +@TrainWithShubham + +[← Previous Day](../day89/README.md) diff --git a/2025/ansible/README.md b/2025/ansible/README.md new file mode 100644 index 0000000000..5721fffd01 --- /dev/null +++ b/2025/ansible/README.md @@ -0,0 +1,193 @@ +# Week 9: Ansible Automation Challenge + +This set of tasks is part of the 90DaysOfDevOps challenge and focuses on solving real-world automation problems using Ansible. By completing these tasks on your designated Ansible project repository, you'll work on scenarios that mirror production environments and industry practices. The tasks cover installation, dynamic inventory management, robust playbook development, role organization, secure secret management, and orchestration of multi-tier applications. Your work will help you build practical skills and prepare for technical interviews. + +**Important:** +1. Fork or create your designated Ansible project repository (or use your own) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Install Ansible and Configure a Dynamic Inventory + +**Real-World Scenario:** +In production, inventories change frequently. Set up Ansible with a dynamic inventory (using a script or AWS EC2 plugin) to automatically fetch and update target hosts. + +**Steps:** +1. **Install Ansible:** + - Follow the official installation guide to install Ansible on your local machine. +2. **Configure a Dynamic Inventory:** + - Set up a dynamic inventory using an inventory script or the AWS EC2 dynamic inventory plugin. +3. **Test Connectivity:** + - Run: + ```bash + ansible all -m ping -i dynamic_inventory.py + ``` + to ensure all servers are reachable. +4. **Document in `solution.md`:** + - Include your dynamic inventory configuration and test outputs. + - Explain how dynamic inventories adapt to a production environment. + +**Interview Questions:** +- How do dynamic inventories improve the management of production hosts? +- What challenges do dynamic inventory sources present and how can you mitigate them? + +--- + +## Task 2: Develop a Robust Playbook to Install and Configure Nginx + +**Real-World Scenario:** +Web servers like Nginx must be reliably deployed and configured in production. Create a playbook that installs Nginx, configures it using advanced Jinja2 templating (with loops, conditionals, and filters), and verifies that Nginx is running correctly. 
Incorporate asynchronous task execution with error handling for long-running operations. + +**Steps:** +1. **Create a Comprehensive Playbook:** + - Write a playbook (e.g., `nginx_setup.yml`) that: + - Installs Nginx. + - Deploys a templated Nginx configuration using a Jinja2 template (`nginx.conf.j2`) that includes loops and conditionals. + - Implements asynchronous execution (`async` and `poll`) with error handling. +2. **Test the Playbook:** + - Run the playbook against your dynamic inventory. +3. **Document in `solution.md`:** + - Include your playbook and Jinja2 template. + - Describe your strategies for asynchronous execution and error handling. + +**Interview Questions:** +- How do Jinja2 templates with loops and conditionals improve production configuration management? +- What are the challenges of managing long-running tasks with async in Ansible, and how do you handle errors? + +--- + +## Task 3: Organize Complex Playbooks Using Roles and Advanced Variables + +**Real-World Scenario:** +For large-scale production environments, organizing your playbooks into roles enhances maintainability and collaboration. Refactor your playbooks into roles (e.g., `nginx`, `app`, `db`) and use advanced variable files (with hierarchies and conditionals) to manage different configurations. + +**Steps:** +1. **Create Roles:** + - Develop roles for different components (e.g., `nginx`, `app`, `db`) with the standard directory structure (`tasks/`, `handlers/`, `templates/`, `vars/`). +2. **Utilize Advanced Variables:** + - Create hierarchical variable files with default values and override files for various scenarios. +3. **Refactor and Execute:** + - Update your composite playbook to include the roles. +4. **Document in `solution.md`:** + - Provide the role directory structure and sample variable files. + - Explain how this organization improves maintainability and flexibility. + +**Interview Questions:** +- How do roles improve scalability and collaboration in large-scale Ansible projects? +- What strategies do you use for variable precedence and hierarchy in complex environments? + +--- + +## Task 4: Secure Production Data with Advanced Ansible Vault Techniques + +**Real-World Scenario:** +In production, managing secrets securely is critical. Use Ansible Vault to encrypt sensitive data and explore advanced techniques like splitting secrets into multiple files and decrypting them at runtime. + +**Steps:** +1. **Create Encrypted Files:** + - Use `ansible-vault create` to encrypt multiple secret files. +2. **Integrate Vault in Your Playbooks:** + - Modify your playbooks to load encrypted variables from multiple files. +3. **Test Decryption:** + - Run your playbooks with the vault password to ensure proper decryption. +4. **Document in `solution.md`:** + - Outline your vault strategy and best practices (without exposing secrets). + - Explain the importance of secure secret management. + +**Interview Questions:** +- How does Ansible Vault secure sensitive data in production? +- What advanced techniques can you use for managing secrets at scale? + +--- + +## Task 5: Advanced Orchestration for Multi-Tier Deployments + +**Real-World Scenario:** +Deploy a multi-tier application (e.g., frontend, backend, and database) using Ansible roles to manage each tier. Use orchestration features (such as `serial`, `order`, and async execution) to ensure a smooth deployment process. + +**Steps:** +1. 
**Develop a Composite Playbook:** + - Write a playbook that calls multiple roles (e.g., `nginx` for frontend, `app` for backend, `db` for the database). +2. **Manage Execution Order and Async Tasks:** + - Use features like `serial` or `order` and implement asynchronous tasks with error handling where necessary. +3. **Document in `solution.md`:** + - Include your composite playbook and explain your orchestration strategy. + - Describe any asynchronous task handling and error management. + +**Interview Questions:** +- How do you orchestrate multi-tier deployments with Ansible? +- What are the challenges and solutions for asynchronous task execution in a multi-tier environment? + +--- + +## Bonus Task: Multi-Environment Setup with Terraform & Ansible + +**Real-World Scenario:** +Integrate Terraform and Ansible to provision and configure AWS infrastructure across multiple environments (dev, staging, prod). Use Terraform to provision resources using environment-specific variable files and use Ansible to configure them (e.g., install and configure Nginx). + +**Steps:** +1. **Provision with Terraform:** + - Create environment-specific variable files (e.g., `dev.tfvars`, `staging.tfvars`, `prod.tfvars`). + - Apply your Terraform configuration for each environment: + ```bash + terraform apply -var-file="dev.tfvars" + ``` +2. **Configure with Ansible:** + - Create separate inventory files or use a dynamic inventory based on Terraform outputs. + - Write a playbook (e.g., `nginx_setup.yml`) to install and configure Nginx. + - Execute the playbook for each environment. +3. **Document in `solution.md`:** + - Provide your environment-specific variable files, inventory files, and playbook. + - Summarize how Terraform outputs integrate with Ansible to manage multi-environment deployments. + +**Interview Questions:** +- How do you integrate Terraform outputs into Ansible inventories in a production workflow? +- What challenges might you face when managing multi-environment configurations, and how do you overcome them? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Fork or use your designated Ansible project repository and ensure all files (playbooks, roles, inventory files, `solution.md`, etc.) are committed and pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `ansible-challenge`) to the main repository. + - **Title:** + ``` + Week 9 Challenge - Ansible Automation Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Ansible challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., dynamic inventory, multi-tier orchestration, advanced Vault usage, and Terraform-Ansible integration). + - Use the hashtags: **#90DaysOfDevOps #Ansible #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. 
+
+---
+
+## TrainWithShubham Resources for Ansible
+
+- **[Ansible Short Notes](https://www.trainwithshubham.com/products/Ansible-Short-Notes-64ad5f72b308530823e2c036)**
+- **[Ansible One-Shot Video](https://youtu.be/4GwafiGsTUM?si=gqlIsNrfAv495WGj)**
+- **[Multi-env setup blog](https://trainwithshubham.blog/devops-project-multi-environment-infrastructure-with-terraform-and-ansible/)**
+
+---
+
+## Additional Resources
+
+- **[Ansible Official Documentation](https://docs.ansible.com/)**
+- **[Ansible Modules Documentation](https://docs.ansible.com/ansible/latest/modules/modules_by_category.html)**
+- **[Ansible Galaxy](https://galaxy.ansible.com/)**
+- **[Ansible Best Practices](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
diff --git a/2025/aws/README.md b/2025/aws/README.md
new file mode 100644
index 0000000000..8b13789179
--- /dev/null
+++ b/2025/aws/README.md
@@ -0,0 +1 @@
+
diff --git a/2025/cicd/README.md b/2025/cicd/README.md
new file mode 100644
index 0000000000..2d68a9b891
--- /dev/null
+++ b/2025/cicd/README.md
@@ -0,0 +1,288 @@
+# Week 6: Jenkins (CI/CD) Basics and an Advanced Real-World Challenge
+
+This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks, you'll gain practical experience with advanced Jenkins topics, including pipelines, distributed agents, RBAC, shared libraries, vulnerability scanning, and automated notifications.
+
+Complete each task and document all steps, commands, screenshots, and observations in a file named `solution.md`. This documentation will serve as both your preparation guide and a portfolio piece for interviews.
+
+---
+
+## Task 1: Create a Jenkins Pipeline Job for CI/CD
+
+**Scenario:**
+Create an end-to-end CI/CD pipeline for a sample application.
+
+**Steps:**
+1. **Set Up a Pipeline Job:**
+   - Create a new Pipeline job in Jenkins.
+   - Write a basic Jenkinsfile that automates the build, test, and deployment of a sample application (e.g., a simple web app).
+   - Suggested stages: **Build**, **Test**, **Deploy**.
+2. **Run and Verify the Pipeline:**
+   - Trigger the pipeline and ensure each stage runs successfully.
+   - Verify the execution by checking console logs and, if applicable, using `docker ps` to confirm container status.
+3. **Document in `solution.md`:**
+   - Include your Jenkinsfile code and explain the purpose of each stage.
+   - Note any issues you encountered and how you resolved them.
+
+**Interview Questions:**
+- How do declarative pipelines streamline the CI/CD process compared to scripted pipelines?
+- What are the benefits of breaking the pipeline into distinct stages?
+
+---
+
+## Task 2: Build a Multi-Branch Pipeline for a Microservices Application
+
+**Scenario:**
+You have a microservices-based application with multiple components stored in separate Git repositories. Your goal is to create a multi-branch pipeline that builds, tests, and deploys each service concurrently.
+
+**Steps:**
+1. **Set Up a Multi-Branch Pipeline Job:**
+   - Create a new multi-branch pipeline in Jenkins.
+   - Configure it to scan your Git repository (or repositories) for branches.
+2. 
**Develop a Jenkinsfile for Each Service:** + - Write a Jenkinsfile that includes stages for **Checkout**, **Build**, **Test**, and **Deploy**. + - Include parallel stages if applicable (e.g., running tests for different services concurrently). +3. **Simulate a Merge Scenario:** + - Create a feature branch and simulate a pull request workflow (using the Jenkins “Pipeline Multibranch” plugin with PR support if available). +4. **Document in `solution.md`:** + - List the Jenkinsfile(s) used, explain your pipeline design, and describe how multi-branch pipelines help manage microservices deployments in production. + +**Interview Questions:** +- How does a multi-branch pipeline improve continuous integration for microservices? +- What challenges might you face when merging feature branches in a multi-branch pipeline? + +--- + +## Task 3: Configure and Scale Jenkins Agents/Nodes + +**Scenario:** +Your build workload has increased, and you need to configure multiple agents (across different OS types) to distribute the load. + +**Steps:** +1. **Set Up Multiple Agents:** + - Configure at least two agents (e.g., one Linux-based and one Windows-based) in Jenkins. + - Use Docker containers or VMs to simulate different environments. +2. **Label Agents:** + - Assign labels (e.g., `linux`, `windows`) and modify your Jenkinsfile to run appropriate stages on the correct agent. +3. **Run Parallel Jobs:** + - Create jobs that run in parallel across these agents. +4. **Document in `solution.md`:** + - Explain how you configured and verified each agent. + - Describe the benefits of distributed builds in terms of speed and reliability. + +**Interview Questions:** +- What are the benefits and challenges of using distributed agents in Jenkins? +- How can you ensure that jobs are assigned to the correct agent in a multi-platform environment? + +--- + +## Task 4: Implement and Test RBAC in a Multi-Team Environment + +**Scenario:** +In a large organization, different teams (developers, testers, and operations) require different levels of access to Jenkins. You need to configure RBAC to secure your CI/CD pipeline. + +**Steps:** +1. **Configure RBAC:** + - Use Matrix-based security or the Role Strategy Plugin to create roles (e.g., Admin, Developer, Tester). + - Define permissions for each role. +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. + - Explain the importance of access control and provide a potential risk scenario that RBAC helps mitigate. + +**Interview Questions:** +- Why is RBAC essential in a CI/CD environment, and what are the consequences of weak access control? +- Can you describe a scenario where inadequate RBAC could lead to security issues? + +--- + +## Task 5: Develop and Integrate a Jenkins Shared Library + +**Scenario:** +You are working on multiple pipelines that share common tasks (like code quality checks or deployment steps). To avoid duplication and ensure consistency, you need to develop a Shared Library. + +**Steps:** +1. **Create a Shared Library Repository:** + - Set up a separate Git repository that hosts your shared library code. + - Develop reusable functions (e.g., a function for sending notifications or a common test stage). +2. **Integrate the Library:** + - Update your Jenkinsfile(s) from previous tasks to load and use the shared library. 
+   - Use syntax similar to:
+     ```groovy
+     @Library('my-shared-library') _
+     pipeline {
+         // pipeline code using shared functions
+     }
+     ```
+3. **Document in `solution.md`:**
+   - Provide code examples from your shared library.
+   - Explain how this approach improves maintainability and reduces errors.
+
+**Interview Questions:**
+- How do shared libraries contribute to code reuse and maintainability in large organizations?
+- Provide an example of a function that would be ideal for a shared library and explain its benefits.
+
+---
+
+## Task 6: Integrate Vulnerability Scanning with Trivy
+
+**Scenario:**
+Security is critical in CI/CD. You must ensure that the Docker images built in your pipeline are free from known vulnerabilities.
+
+**Steps:**
+1. **Add a Vulnerability Scan Stage:**
+   - Update your Jenkins pipeline to include a stage that runs Trivy on your Docker image:
+     ```groovy
+     stage('Vulnerability Scan') {
+         steps {
+             sh 'trivy image <your-username>/sample-app:v1.0'
+         }
+     }
+     ```
+2. **Configure Fail Criteria:**
+   - Optionally, set the stage to fail the build if critical vulnerabilities are detected.
+3. **Document in `solution.md`:**
+   - Summarize the scan output, note the vulnerabilities and severity, and describe any remediation steps.
+   - Reflect on the importance of automated security scanning in CI/CD pipelines.
+
+**Interview Questions:**
+- Why is integrating vulnerability scanning into a CI/CD pipeline important?
+- How does Trivy help improve the security of your Docker images?
+
+---
+
+## Task 7: Dynamic Pipeline Parameterization
+
+**Scenario:**
+In production environments, pipelines need to be flexible and configurable. Implement dynamic parameterization to allow the pipeline to accept runtime parameters (such as target environment, version numbers, or deployment options).
+
+**Steps:**
+1. **Modify Your Jenkinsfile:**
+   - Update your Jenkinsfile to accept parameters. For example:
+     ```groovy
+     pipeline {
+         agent any
+         parameters {
+             string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Deployment target environment')
+             string(name: 'APP_VERSION', defaultValue: '1.0.0', description: 'Application version to deploy')
+         }
+         stages {
+             stage('Build') {
+                 steps {
+                     echo "Building version ${params.APP_VERSION} for ${params.TARGET_ENV} environment..."
+                     // Build commands here
+                 }
+             }
+             // Add other stages as needed
+         }
+     }
+     ```
+2. **Run the Parameterized Pipeline:**
+   - Trigger the pipeline and provide different parameter values to observe how the pipeline behavior changes.
+3. **Document in `solution.md`:**
+   - Explain how parameterization makes the pipeline dynamic.
+   - Include sample outputs and discuss how this flexibility is useful in a production CI/CD environment.
+
+**Interview Questions:**
+- How does pipeline parameterization improve the flexibility of CI/CD workflows?
+- Provide an example of a scenario where dynamic parameters would be critical in a deployment pipeline.
+
+---
+
+## Task 8: Integrate Email Notifications for Build Events
+
+**Scenario:**
+Automated notifications keep teams informed about build statuses. Configure Jenkins to send email alerts upon build completion or failure.
+
+**Steps:**
+1. **Configure SMTP Settings:**
+   - Set up SMTP details in Jenkins under "Manage Jenkins" → "Configure System".
+2. 
**Update Your Jenkinsfile:** + - Add a stage that uses the `emailext` plugin to send notifications: + ```groovy + stage('Notify') { + steps { + emailext ( + subject: "Build Notification: ${env.JOB_NAME} - Build #${env.BUILD_NUMBER}", + body: "The build has completed successfully. Check details at: ${env.BUILD_URL}", + recipientProviders: [[$class: 'DevelopersRecipientProvider']] + ) + } + } + ``` +3. **Test the Notification:** + - Trigger the pipeline and verify that an email is sent. +4. **Document in `solution.md`:** + - Explain your configuration steps, note any challenges, and describe how you resolved them. + +**Interview Questions:** +- What are the advantages of automating email notifications in CI/CD? +- How would you troubleshoot issues if email notifications fail to send? + +--- + +## Task 9: Troubleshooting, Monitoring & Advanced Debugging + +**Scenario:** +Real-world CI/CD pipelines sometimes fail. Demonstrate how you would troubleshoot and monitor your Jenkins environment. + +**Steps:** +1. **Troubleshooting:** + - Simulate a pipeline failure (e.g., by introducing an error in the Jenkinsfile) and document your troubleshooting process. + - Use commands like `docker logs` and review Jenkins console output. +2. **Monitoring:** + - Describe methods for monitoring Jenkins, such as using system logs or monitoring plugins. +3. **Advanced Debugging:** + - Add debugging statements (e.g., `echo` commands) in your Jenkinsfile to output environment variables or intermediate results. + - Use Jenkins' "Replay" feature to test modifications without committing changes. +4. **Document in `solution.md`:** + - Provide a detailed account of your troubleshooting, monitoring, and debugging strategies. + - Reflect on how these practices help maintain a stable CI/CD environment. + +**Interview Questions:** +- How would you approach troubleshooting a failing Jenkins pipeline? +- What are some effective strategies for monitoring Jenkins in a production environment? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Ensure all files (e.g., Jenkinsfile, configuration scripts, `solution.md`, etc.) are committed and pushed to your repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `jenkins-challenge`) to the main repository. + - **Title:** + ``` + Week 6 Challenge - DevOps Batch 9: Jenkins CI/CD Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Jenkins challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., agent configuration, RBAC, shared libraries, vulnerability scanning, and troubleshooting). + - Use the hashtags: **#90DaysOfDevOps #Jenkins #CI/CD #DevOps #InterviewPrep** + - Optionally, provide links to your repository or blog posts detailing your journey. 
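+
+For reference, a minimal declarative Jenkinsfile of the kind Task 1 calls for might look like the sketch below. It assumes a Node.js sample app and an agent with Docker available — both assumptions; swap the stage steps for your own build, test, and deploy commands:
+
+```groovy
+pipeline {
+    agent any
+    stages {
+        stage('Build') {
+            steps {
+                // Install dependencies and build the sample app
+                sh 'npm install'
+            }
+        }
+        stage('Test') {
+            steps {
+                // Run the test suite; a failure here aborts the pipeline
+                sh 'npm test'
+            }
+        }
+        stage('Deploy') {
+            steps {
+                // Ship the app as a container (image name is illustrative)
+                sh 'docker build -t sample-app:latest .'
+                sh 'docker run -d -p 8080:80 --name sample-app sample-app:latest'
+            }
+        }
+    }
+}
+```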
+
+---
+
+## TrainWithShubham Resources for Jenkins CI/CD
+
+- **[Jenkins Short notes](https://www.trainwithshubham.com/products/64aac20780964e534608664d?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=p&dgps_uid=66c972da3795a9659545d71a)**
+- **[Jenkins One-Shot Video](https://youtu.be/XaSdKR2fOU4?si=eDmLQMSSh_eMPT_p)**
+- **[TWS blog on Jenkins CI/CD](https://trainwithshubham.blog/automate-cicd-spring-boot-banking-app-jenkins-docker-github/)**
+
+## Additional Resources
+
+- **[Jenkins Official Documentation](https://www.jenkins.io/doc/)**
+- **[Jenkins Pipeline Documentation](https://www.jenkins.io/doc/book/pipeline/)**
+- **[Jenkins Agents and Nodes](https://www.jenkins.io/doc/book/managing/nodes/)**
+- **[Jenkins RBAC & Role Strategy Plugin](https://plugins.jenkins.io/role-strategy/)**
+- **[Jenkins Shared Libraries](https://www.jenkins.io/doc/book/pipeline/shared-libraries/)**
+- **[Trivy Vulnerability Scanner](https://trivy.dev/latest/docs/scanner/vulnerability/)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
\ No newline at end of file
diff --git a/2025/docker/README.md b/2025/docker/README.md
new file mode 100644
index 0000000000..194a3ac090
--- /dev/null
+++ b/2025/docker/README.md
@@ -0,0 +1,235 @@
+# Week 5: Docker Basics & Advanced Challenge
+
+Welcome to the Week 5 Docker Challenge! In this task, you will work with Docker concepts and tools taught by Shubham Bhaiya. This challenge covers the following topics:
+
+- **Introduction and Purpose:** Understand Docker’s role in modern development.
+- **Virtualization vs. Containerization:** Learn the differences and benefits.
+- **What a Build Is (Build Kya Hota Hai):** Understand the Docker build process.
+- **Docker Terminologies:** Get familiar with key Docker terms.
+- **Docker Components:** Explore Docker Engine, images, containers, and more.
+- **Project Building Using Docker:** Containerize a sample project.
+- **Multi-stage Docker Builds / Distroless Images:** Optimize your images.
+- **Docker Hub (Push/Tag/Pull):** Manage and distribute your Docker images.
+- **Docker Volumes:** Persist data across container runs.
+- **Docker Networking:** Connect containers using networks.
+- **Docker Compose:** Orchestrate multi-container applications.
+- **Docker Scout:** Analyze your images for vulnerabilities and insights.
+
+Complete all the tasks below and document your steps, commands, and observations in a file named `solution.md`. Finally, share your experience on LinkedIn using the provided guidelines.
+
+---
+
+## Challenge Tasks
+
+### Task 1: Introduction and Conceptual Understanding
+1. **Write an Introduction:**
+   - In your `solution.md`, provide a brief explanation of Docker’s purpose in modern DevOps.
+   - Compare **Virtualization vs. Containerization** and explain why containerization is the preferred approach for microservices and CI/CD pipelines.
+
+---
+
+### Task 2: Create a Dockerfile for a Sample Project
+1. **Select or Create a Sample Application:**
+   - Choose a simple application (for example, a basic Node.js, Python, or Java app that prints “Hello, Docker!” or serves a simple web page).
+
+2. **Write a Dockerfile:**
+   - Create a `Dockerfile` that defines how to build an image for your application (a sketch follows this task).
+   - Include comments in your Dockerfile explaining each instruction.
+   - Build your image using:
+     ```bash
+     docker build -t <your-username>/sample-app:latest .
+     ```
+
+3. **Verify Your Build:**
+   - Run your container locally to ensure it works as expected:
+     ```bash
+     docker run -d -p 8080:80 <your-username>/sample-app:latest
+     ```
+   - Verify the container is running with:
+     ```bash
+     docker ps
+     ```
+   - Check logs using:
+     ```bash
+     docker logs <container-id>
+     ```
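+
+For reference, here is one possible `Dockerfile` for Task 2 — a minimal sketch assuming a Python app with an `app.py` entrypoint and a `requirements.txt` (both assumptions; swap the base image and commands for your own stack):
+
+```dockerfile
+# Start from a small official Python base image
+FROM python:3.12-slim
+
+# Set the working directory inside the image
+WORKDIR /app
+
+# Copy the dependency list first so this layer is cached between builds
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy the rest of the application source
+COPY . .
+
+# Document the port the app listens on
+EXPOSE 80
+
+# Run the application when the container starts
+CMD ["python", "app.py"]
+```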
+
+---
+
+### Task 3: Explore Docker Terminologies and Components
+1. **Document Key Terminologies:**
+   - In your `solution.md`, list and briefly describe key Docker terms such as image, container, Dockerfile, volume, and network.
+   - Explain the main Docker components (Docker Engine, Docker Hub, etc.) and how they interact.
+
+---
+
+### Task 4: Optimize Your Docker Image with Multi-Stage Builds
+1. **Implement a Multi-Stage Docker Build:**
+   - Modify your existing `Dockerfile` to include multi-stage builds.
+   - Aim to produce a lightweight, **distroless** (or minimal) final image.
+2. **Compare Image Sizes:**
+   - Build your image before and after the multi-stage build modification and compare their sizes using:
+     ```bash
+     docker images
+     ```
+3. **Document the Differences:**
+   - Explain in `solution.md` the benefits of multi-stage builds and the impact on image size.
+
+---
+
+### Task 5: Manage Your Image with Docker Hub
+1. **Tag Your Image:**
+   - Tag your image appropriately:
+     ```bash
+     docker tag <your-username>/sample-app:latest <your-username>/sample-app:v1.0
+     ```
+2. **Push Your Image to Docker Hub:**
+   - Log in to Docker Hub if necessary:
+     ```bash
+     docker login
+     ```
+   - Push the image:
+     ```bash
+     docker push <your-username>/sample-app:v1.0
+     ```
+3. **(Optional) Pull the Image:**
+   - Verify by pulling your image:
+     ```bash
+     docker pull <your-username>/sample-app:v1.0
+     ```
+
+---
+
+### Task 6: Persist Data with Docker Volumes
+1. **Create a Docker Volume:**
+   - Create a Docker volume:
+     ```bash
+     docker volume create my_volume
+     ```
+2. **Run a Container with the Volume:**
+   - Run a container using the volume to persist data:
+     ```bash
+     docker run -d -v my_volume:/app/data <your-username>/sample-app:v1.0
+     ```
+3. **Document the Process:**
+   - In `solution.md`, explain how Docker volumes help with data persistence and why they are useful.
+
+---
+
+### Task 7: Configure Docker Networking
+1. **Create a Custom Docker Network:**
+   - Create a custom Docker network:
+     ```bash
+     docker network create my_network
+     ```
+2. **Run Containers on the Same Network:**
+   - Run two containers (e.g., your sample app and a simple database like MySQL) on the same network to demonstrate inter-container communication:
+     ```bash
+     docker run -d --name sample-app --network my_network <your-username>/sample-app:v1.0
+     docker run -d --name my-db --network my_network -e MYSQL_ROOT_PASSWORD=root mysql:latest
+     ```
+3. **Document the Process:**
+   - In `solution.md`, describe how Docker networking enables container communication and its significance in multi-container applications.
+
+---
+
+### Task 8: Orchestrate with Docker Compose
+1. **Create a docker-compose.yml File:**
+   - Write a `docker-compose.yml` file that defines at least two services (e.g., your sample app and a database); a sketch is shown after this task.
+   - Include definitions for services, networks, and volumes.
+2. **Deploy Your Application:**
+   - Bring up your application using:
+     ```bash
+     docker-compose up -d
+     ```
+   - Test the setup, then shut it down using:
+     ```bash
+     docker-compose down
+     ```
+3. **Document the Process:**
+   - Explain each service and configuration in your `solution.md`.
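+
+As a starting point for Task 8, here is a minimal `docker-compose.yml` sketch with two services — it assumes your image is tagged `<your-username>/sample-app:v1.0` and reuses the MySQL example from Task 7; adjust names, ports, and credentials to your project:
+
+```yaml
+services:
+  app:
+    image: <your-username>/sample-app:v1.0
+    ports:
+      - "8080:80"                 # host:container
+    networks:
+      - my_network
+    depends_on:
+      - db
+
+  db:
+    image: mysql:latest
+    environment:
+      MYSQL_ROOT_PASSWORD: root   # demo value only; use secrets in real setups
+    volumes:
+      - my_volume:/var/lib/mysql  # persist database files across runs
+    networks:
+      - my_network
+
+networks:
+  my_network:
+
+volumes:
+  my_volume:
+```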
+
+---
+
+### Task 9: Analyze Your Image with Docker Scout
+1. **Run Docker Scout Analysis:**
+   - Execute Docker Scout on your image to generate a detailed report of vulnerabilities and insights:
+     ```bash
+     docker scout cves <your-username>/sample-app:v1.0
+     ```
+   - Alternatively, if available, run:
+     ```bash
+     docker scout quickview <your-username>/sample-app:v1.0
+     ```
+     to get a summarized view of the image’s security posture.
+   - **Optional:** Save the output to a file for further analysis:
+     ```bash
+     docker scout cves <your-username>/sample-app:v1.0 > scout_report.txt
+     ```
+
+2. **Review and Interpret the Report:**
+   - Carefully review the output and focus on:
+     - **List of CVEs:** Identify vulnerabilities along with their severity ratings (e.g., Critical, High, Medium, Low).
+     - **Affected Layers/Dependencies:** Determine which image layers or dependencies are responsible for the vulnerabilities.
+     - **Suggested Remediations:** Note any recommended fixes or mitigation strategies provided by Docker Scout.
+   - **Comparison Step:** If possible, compare this report with previous builds to assess improvements or regressions in your image's security posture.
+   - If Docker Scout is not available in your environment, document that fact and consider using an alternative vulnerability scanner (e.g., Trivy, Clair) for a comparative analysis.
+
+3. **Document Your Findings:**
+   - In your `solution.md`, provide a detailed summary of your analysis:
+     - List the identified vulnerabilities along with their severity levels.
+     - Specify which layers or dependencies contributed to these vulnerabilities.
+     - Outline any actionable recommendations or remediation steps.
+     - Reflect on how these insights might influence your image optimization or overall security strategy.
+   - **Optional:** Include screenshots or attach the saved report file (`scout_report.txt`) as evidence of your analysis.
+
+---
+
+### Task 10: Documentation and Critical Reflection
+1. **Update `solution.md`:**
+   - List all the commands and steps you executed.
+   - Provide explanations for each task and detail any improvements made (e.g., image optimization with multi-stage builds).
+2. **Reflect on Docker’s Impact:**
+   - Write a brief reflection on the importance of Docker in modern software development, discussing its benefits and potential challenges.
+
+---
+
+## 📢 How to Submit
+
+1. **Push Your Final Work:**
+   - Ensure that your complete project—including your `Dockerfile`, `docker-compose.yml`, `solution.md`, and any additional files (e.g., the Docker Scout report if saved)—is committed and pushed to your repository.
+   - Verify that all your changes are visible in your repository.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your working branch (e.g., `docker-challenge`) to the main repository.
+   - Use a clear and descriptive title, for example:
+     ```
+     Week 5 Challenge - DevOps Batch 9: Docker Basics & Advanced Challenge
+     ```
+   - In the PR description, include the following details:
+     - A brief summary of your approach and the tasks you completed.
+     - A list of the key Docker commands used during the challenge.
+     - Any insights or challenges you encountered (e.g., lessons learned from multi-stage builds or Docker Scout analysis).
+
+3. **Share Your Experience on LinkedIn:**
+   - Write a LinkedIn post summarizing your Week 5 Docker challenge experience.
+   - In your post, include:
+     - A brief description of the challenge and what you learned.
+     - Screenshots, logs, or excerpts from your `solution.md` that highlight key steps or interesting findings (e.g., Docker Scout reports).
+     - The hashtags: **#90DaysOfDevOps #Docker #DevOps**
+     - Optionally, links to any blog posts or related GitHub repositories that further explain your journey.
+
+---
+
+## Additional Resources
+
+- **[Docker Documentation](https://docs.docker.com/)**
+- **[Docker Hub](https://docs.docker.com/docker-hub/)**
+- **[Multi-stage Builds](https://docs.docker.com/develop/develop-images/multistage-build/)**
+- **[Docker Compose](https://docs.docker.com/compose/)**
+- **[Docker Scan (Vulnerability Scanning)](https://docs.docker.com/engine/scan/)**
+- **[Containerization vs. Virtualization](https://www.docker.com/resources/what-container)**
+
+---
+
+Happy coding and best of luck with this Docker challenge! Document your journey thoroughly in `solution.md` and refer to these resources for additional guidance.
diff --git a/2025/git/01_Git_and_Github_Basics/README.md b/2025/git/01_Git_and_Github_Basics/README.md
new file mode 100644
index 0000000000..589e08c57c
--- /dev/null
+++ b/2025/git/01_Git_and_Github_Basics/README.md
@@ -0,0 +1,212 @@
+# Week 4: Git and GitHub Challenge
+
+Welcome to the Week 4 Challenge! In this task you will practice the essential Git and GitHub commands and concepts taught by Shubham Bhaiya. This includes:
+
+- **Git Basics:** `git init`, `git add`, `git commit`
+- **Repository Management:** `git clone`, forking a repository, and understanding how a GitHub repo is made
+- **Branching:** Creating branches (`git branch`), switching between branches (`git switch` / `git checkout`), and viewing commit history (`git log`)
+- **Authentication:** Pushing and pulling using a Personal Access Token (PAT)
+- **Critical Thinking:** Explaining why branching strategies are important in collaborative development
+
+To make this challenge more difficult, additional steps have been added. You will also be required to explore SSH authentication as a bonus task. Complete all the tasks and document every step in `solution.md`. Finally, share your experience on LinkedIn (details provided at the end).
+
+---
+
+## Challenge Tasks
+
+### Task 1: Fork and Clone the Repository
+1. **Fork the Repository:**
+   - Visit [this repository](https://github.com/LondheShubham153/90DaysOfDevOps) and fork it to your own GitHub account, if you haven't already.
+
+2. **Clone Your Fork Locally:**
+   - Clone the forked repository using HTTPS:
+     ```bash
+     git clone https://github.com/<your-username>/90DaysOfDevOps.git
+     ```
+   - Change directory into the cloned repository:
+     ```bash
+     cd 90DaysOfDevOps/2025/git/01_Git_and_Github_Basics
+     ```
+
+---
+
+### Task 2: Initialize a Local Repository and Create a File
+1. **Set Up Your Challenge Directory:**
+   - Inside the cloned repository, create a new directory for this challenge:
+     ```bash
+     mkdir week-4-challenge
+     cd week-4-challenge
+     ```
+
+2. **Initialize a Git Repository:**
+   - Initialize the directory as a new Git repository:
+     ```bash
+     git init
+     ```
+
+3. **Create a File:**
+   - Create a file named `info.txt` and add some initial content (for example, your name and a brief introduction).
+
+4. **Stage and Commit Your File:**
+   - Stage the file:
+     ```bash
+     git add info.txt
+     ```
+   - Commit the file with a descriptive message:
+     ```bash
+     git commit -m "Initial commit: Add info.txt with introductory content"
+     ```
+
+---
+
+### Task 3: Configure Remote URL with PAT and Push/Pull
+
+1. **Configure Remote URL with Your PAT:**
+   To avoid entering your Personal Access Token (PAT) every time you push or pull, update your remote URL to include your credentials.
+
+   **⚠️ Note:** Embedding your PAT in the URL is only for this exercise.
It is not recommended for production use.
+
+   Replace `<username>` and `<PAT>` with your actual GitHub username and your PAT, and adjust the repository name (`90DaysOfDevOps`) if yours differs:
+
+   ```bash
+   git remote add origin https://<username>:<PAT>@github.com/<username>/90DaysOfDevOps.git
+   ```
+   If a remote named `origin` already exists, update it with:
+   ```bash
+   git remote set-url origin https://<username>:<PAT>@github.com/<username>/90DaysOfDevOps.git
+   ```
+2. **Push Your Commit to Remote:**
+   - Push your current branch (typically `main`) and set the upstream:
+     ```bash
+     git push -u origin main
+     ```
+3. **(Optional) Pull Remote Changes:**
+   - Verify your configuration by pulling changes:
+     ```bash
+     git pull origin main
+     ```
+
+---
+
+### Task 4: Explore Your Commit History
+1. **View the Git Log:**
+   - Check your commit history using:
+     ```bash
+     git log
+     ```
+   - Take note of the commit hash and details as you will reference these in your documentation.
+
+---
+
+### Task 5: Advanced Branching and Switching
+1. **Create a New Branch:**
+   - Create a branch called `feature-update`:
+     ```bash
+     git branch feature-update
+     ```
+
+2. **Switch to the New Branch:**
+   - Switch using `git switch`:
+     ```bash
+     git switch feature-update
+     ```
+   - Alternatively, you can use:
+     ```bash
+     git checkout feature-update
+     ```
+
+3. **Modify the File and Commit Changes:**
+   - Edit `info.txt` (for example, add more details or improvements).
+   - Stage and commit your changes:
+     ```bash
+     git add info.txt
+     git commit -m "Feature update: Enhance info.txt with additional details"
+     git push origin feature-update
+     ```
+   - Merge this branch to `main` via a Pull Request on GitHub.
+
+4. **(Advanced) Optional Extra Challenge:**
+   - If you feel confident, create another branch (e.g., `experimental`) from your main branch, make a conflicting change to `info.txt`, then switch back to `feature-update` and merge `experimental` to simulate a merge conflict. Resolve the conflict manually, then commit the resolution.
+   > *Note: This extra step is optional and intended for those looking for an additional challenge.*
+
+---
+
+### Task 6: Explain Branching Strategies
+1. **Document Your Process:**
+   - Create (or update) a file named `solution.md` in your repository.
+   - List all the Git commands you used in Tasks 1–5.
+   - **Explain:** Write a brief explanation of **why branching strategies are important** in collaborative development. Consider addressing:
+     - Isolating features and bug fixes
+     - Facilitating parallel development
+     - Reducing merge conflicts
+     - Enabling effective code reviews
+
+---
+
+### Bonus Task: Explore SSH Authentication
+1. **Generate an SSH Key (if not already set up):**
+   - Create an SSH key pair:
+     ```bash
+     ssh-keygen
+     ```
+   - Follow the prompts and then locate your public key (typically found at `~/.ssh/id_ed25519.pub`).
+
+2. **Add Your SSH Public Key to GitHub:**
+   - Copy the contents of your public key and add it to your GitHub account under **SSH and GPG keys**.
+     (See [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for help.)
+
+3. **Switch Your Remote URL to SSH:**
+   - Change the remote URL from HTTPS to SSH:
+     ```bash
+     git remote set-url origin git@github.com:<username>/90DaysOfDevOps.git
+     ```
+
+4. **Push Your Branch Using SSH:**
+   - Test the SSH connection by pushing your branch:
+     ```bash
+     git push origin feature-update
+     ```
+
+---
+
+## 📢 How to Submit
+
+1. **Push Your Final Work:**
+   - Ensure your branch (e.g., `feature-update`) with the updated `solution.md` file is pushed to your fork.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your branch to the main repository.
+   - Use a clear title such as:
+     ```
+     Week 4 Challenge - DevOps Batch 9: Git & GitHub Advanced Challenge
+     ```
+   - In the PR description, summarize your process and list the Git commands you used.
+
+3. **Share Your Experience on LinkedIn:**
+   - Write a LinkedIn post summarizing your Week 4 experience.
+   - Include screenshots or logs of your tasks.
+   - Use hashtags: **#90DaysOfDevOps #GitGithub #DevOps**
+   - Optionally, share any blog posts, GitHub repos, or articles you create about this challenge.
+
+---
+
+## Additional Resources
+
+- **Git Documentation:**
+  [https://git-scm.com/docs](https://git-scm.com/docs)
+
+- **Creating a Personal Access Token:**
+  [GitHub PAT Setup](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token)
+
+- **Forking and Cloning Repositories:**
+  [Fork a Repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) | [Cloning a Repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository)
+
+- **SSH Authentication with GitHub:**
+  [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
+
+- **Understanding Branching Strategies:**
+  [Git Branching Strategies](https://www.atlassian.com/git/tutorials/comparing-workflows)
+
+---
+
+Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck.
diff --git a/2025/git/02_Git_and_Github_Advanced/README.md b/2025/git/02_Git_and_Github_Advanced/README.md
new file mode 100644
index 0000000000..5b9e775252
--- /dev/null
+++ b/2025/git/02_Git_and_Github_Advanced/README.md
@@ -0,0 +1,208 @@
+# Week 4: Git & GitHub Advanced Challenge
+
+This challenge covers advanced Git concepts essential for real-world DevOps workflows. By the end of this challenge, you will:
+
+- Understand how to work with Pull Requests effectively.
+- Learn to undo changes using Reset & Revert.
+- Use Stashing to manage uncommitted work.
+- Apply Cherry-picking for selective commits.
+- Keep a clean commit history using Rebasing.
+- Learn industry-standard Branching Strategies.
+
+## **Topics Covered**
+1. Pull Requests – Collaborating in teams.
+2. Reset & Revert – Undo changes safely.
+3. Stashing – Saving work temporarily.
+4. Cherry-picking – Selecting specific commits.
+5. Rebasing – Maintaining a clean history.
+6. Branching Strategies – Industry best practices.
+
+## **Challenge Tasks**
+
+### **Task 1: Working with Pull Requests (PRs)**
+**Scenario:** You are working on a new feature and need to merge your changes into the main branch using a Pull Request.
+
+1. Fork a repository and clone it locally.
+   ```bash
+   git clone <your-fork-url>
+   cd <repository-name>
+   ```
+2. Create a feature branch and make changes.
+   ```bash
+   git checkout -b feature-branch
+   echo "New Feature" >> feature.txt
+   git add .
+   git commit -m "Added a new feature"
+   ```
+3. Push the changes and create a Pull Request.
+   ```bash
+   git push origin feature-branch
+   ```
+4. Open a PR on GitHub, request a review, and merge it once approved.
+
+**Document in `solution.md`**
+- Steps to create a PR.
+- Best practices for writing PR descriptions.
+- Handling review comments.
+
+---
+
+### **Task 2: Undoing Changes – Reset & Revert**
+**Scenario:** You accidentally committed incorrect changes and need to undo them.
+
+1. Create and modify a file.
+   ```bash
+   echo "Wrong code" >> wrong.txt
+   git add .
+   git commit -m "Committed by mistake"
+   ```
+2. Soft Reset (keeps changes staged).
+   ```bash
+   git reset --soft HEAD~1
+   ```
+3. Mixed Reset (unstages changes but keeps files).
+   ```bash
+   git reset --mixed HEAD~1
+   ```
+4. Hard Reset (removes all changes).
+   ```bash
+   git reset --hard HEAD~1
+   ```
+5. Revert a commit safely.
+   ```bash
+   git revert HEAD
+   ```
+
+**Document in `solution.md`**
+- Differences between `reset` and `revert`.
+- When to use each method.
+
+---
+
+### **Task 3: Stashing - Save Work Without Committing**
+**Scenario:** You need to switch branches but don’t want to commit incomplete work.
+
+1. Modify a file without committing.
+   ```bash
+   echo "Temporary Change" >> temp.txt
+   git add temp.txt
+   ```
+2. Stash the changes.
+   ```bash
+   git stash
+   ```
+3. Switch to another branch and apply the stash.
+   ```bash
+   git checkout main
+   git stash pop
+   ```
+
+**Document in `solution.md`**
+- When to use `git stash`.
+- Difference between `git stash pop` and `git stash apply`.
+
+---
+
+### **Task 4: Cherry-Picking - Selectively Apply Commits**
+**Scenario:** A bug fix exists in another branch, and you only want to apply that specific commit.
+
+1. Find the commit to cherry-pick.
+   ```bash
+   git log --oneline
+   ```
+2. Apply a specific commit to the current branch.
+   ```bash
+   git cherry-pick <commit-hash>
+   ```
+3. Resolve conflicts if any.
+   ```bash
+   git cherry-pick --continue
+   ```
+
+**Document in `solution.md`**
+- How cherry-picking is used in bug fixes.
+- Risks of cherry-picking.
+
+---
+
+### **Task 5: Rebasing - Keeping a Clean Commit History**
+**Scenario:** Your branch is behind the main branch and needs to be updated without extra merge commits.
+
+1. Fetch the latest changes.
+   ```bash
+   git fetch origin main
+   ```
+2. Rebase the feature branch onto main.
+   ```bash
+   git rebase origin/main
+   ```
+3. Resolve conflicts and continue.
+   ```bash
+   git rebase --continue
+   ```
+
+**Document in `solution.md`**
+- Difference between `merge` and `rebase`.
+- Best practices for rebasing.
+
+---
+
+### **Task 6: Branching Strategies Used in Companies**
+**Scenario:** Understand real-world branching strategies used in DevOps workflows.
+
+1. Research and explain Git workflows:
+   - Git Flow (Feature, Release, Hotfix branches).
+   - GitHub Flow (Main + Feature branches).
+   - Trunk-Based Development (Continuous Integration).
+
+2. Simulate a Git workflow using branches.
+   ```bash
+   git branch feature-1
+   git branch hotfix-1
+   git checkout feature-1
+   ```
+
+**Document in `solution.md`**
+- Which strategy is best for DevOps and CI/CD.
+- Pros and cons of different workflows.
+
+---
+
+## **How to Submit**
+
+1. **Push your work to GitHub.**
+   ```bash
+   git add .
+   git commit -m "Completed Git & GitHub Advanced Challenge"
+   git push origin main
+   ```
+
+2. **Create a Pull Request.**
+   - Title:
+     ```
+     Git & GitHub Advanced Challenge - Completed
+     ```
+   - PR Description:
+     - Steps followed for each task.
+     - Screenshots or logs (if applicable).
+
+3. **Share Your Experience on LinkedIn:**
+   - Write a LinkedIn post summarizing your Week 4 Git & GitHub challenge experience.
+   - In your post, include:
+     - A brief description of the challenge and what you learned.
+     - Screenshots or excerpts from your `solution.md` that highlight key steps or interesting findings.
+ - The hashtags: **#90DaysOfDevOps #Git #GitHub #VersionControl #DevOps** + - Optionally, links to any blog posts or related GitHub repositories that further explain your journey. + +--- + +## **Additional Resources** +- [Git Official Documentation](https://git-scm.com/doc) +- [Git Reset & Revert Guide](https://www.atlassian.com/git/tutorials/resetting-checking-out-and-reverting) +- [Git Stash Explained](https://git-scm.com/book/en/v2/Git-Tools-Stashing-and-Cleaning) +- [Cherry-Picking Best Practices](https://www.atlassian.com/git/tutorials/cherry-pick) +- [Branching Strategies for DevOps](https://www.atlassian.com/git/tutorials/comparing-workflows) + +--- + +Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck. diff --git a/2025/kubernetes/README.md b/2025/kubernetes/README.md new file mode 100644 index 0000000000..030d3fd81b --- /dev/null +++ b/2025/kubernetes/README.md @@ -0,0 +1,299 @@ +# Week 7 : Kubernetes Basics & Advanced Challenges + +This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp), you'll gain practical experience with advanced Kubernetes topics, including architecture, core objects, networking, storage management, configuration, autoscaling, security & access control, job scheduling, and bonus topics like Helm, Service Mesh, or AWS EKS. + +> [!IMPORTANT] +> +> 1. Fork the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp) and implement all tasks on your fork. +> 2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +> 3. Submit your `solution.md` file in the Week 7 (Kubernetes) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Understand Kubernetes Architecture & Deploy a Sample Pod + +**Scenario:** +Familiarize yourself with Kubernetes’ control plane and worker node components, then deploy a simple Pod manually. + +**Steps:** +1. **Study Kubernetes Architecture:** + - Review the roles of control plane components (API Server, Scheduler, Controller Manager, etcd, Cloud Controller) and worker node components (Kubelet, Container Runtime, Kube Proxy). +2. **Deploy a Sample Pod:** + - Create a YAML file (e.g., `pod.yaml`) to deploy a simple Pod (such as an NGINX container). + - Apply the YAML using: + ```bash + kubectl apply -f pod.yaml + ``` +3. **Document in `solution.md`:** + - Describe the Kubernetes architecture components. + - Include your Pod YAML and explain each section. + +> [!NOTE] +> +> **Interview Questions:** +> - Can you explain how the Kubernetes control plane components work together and the role of etcd in this architecture? +> - If a Pod fails to start, what steps would you take to diagnose the issue? + +--- + +## Task 2: Deploy and Manage Core Kubernetes Objects + +**Scenario:** +Deploy core Kubernetes objects for the SpringBoot BankApp application, including Deployments, ReplicaSets, StatefulSets, DaemonSets, and use Namespaces to isolate resources. + +**Steps:** +1. **Create a Namespace:** + - Write a YAML file to create a Namespace for the SpringBoot BankApp application. + - Apply the YAML: + ```bash + kubectl apply -f namespace.yaml + ``` +2. 
**Deploy a Deployment:** + - Create a YAML file for a Deployment (within your Namespace) that manages a set of Pods running a component of SpringBoot BankApp. + - Verify that a ReplicaSet is created automatically. +3. **Deploy a StatefulSet:** + - Write a YAML file for a StatefulSet (for example, for a database component) and apply it. +4. **Deploy a DaemonSet:** + - Create a YAML file for a DaemonSet to run a Pod on every node. +5. **Document in `solution.md`:** + - Include the YAML files for the Namespace, Deployment, StatefulSet, and DaemonSet. + - Explain the differences between these objects and when to use each. + +> [!NOTE] +> +> **Interview Questions:** +> - How does a Deployment ensure that the desired state of Pods is maintained in a cluster? +> - Can you explain the differences between a Deployment, StatefulSet, and DaemonSet, and provide an example scenario for each? + +--- + +## Task 3: Networking & Exposure – Create Services, Ingress, and Network Policies + +**Scenario:** +Expose your SpringBoot BankApp application to internal and external traffic by creating Services and configuring an Ingress, while using Network Policies to secure communication. + +**Steps:** +1. **Create a Service:** + - Write a YAML file for a Service of type ClusterIP. + - Modify the Service type to NodePort or LoadBalancer and apply the YAML. +2. **Configure an Ingress:** + - Create an Ingress resource to route external traffic to your application. +3. **Implement a Network Policy:** + - Write a YAML file for a Network Policy that restricts traffic to your application Pods. +4. **Document in `solution.md`:** + - Include the YAML files for your Service, Ingress, and Network Policy. + - Explain the differences between Service types and the roles of Ingress and Network Policies. + +> [!NOTE] +> +> **Interview Questions:** +> - How do NodePort and LoadBalancer Services differ in terms of exposure and use cases? +> - What is the role of a Network Policy in Kubernetes, and can you describe a scenario where it is essential? + +--- + +## Task 4: Storage Management – Use Persistent Volumes and Claims + +**Scenario:** +Deploy a component of the SpringBoot BankApp application that requires persistent storage by creating Persistent Volumes (PV), Persistent Volume Claims (PVC), and a StorageClass for dynamic provisioning. + +**Steps:** +1. **Create a Persistent Volume and Claim:** + - Write YAML files for a static PV and a corresponding PVC. +2. **Deploy an Application Using the PVC:** + - Modify a Pod or Deployment YAML to mount the PVC. +3. **Document in `solution.md`:** + - Include your PV, PVC, and application YAML. + - Explain how StorageClasses facilitate dynamic storage provisioning. + +> [!NOTE] +> +> **Interview Questions:** +> - What are the main differences between a Persistent Volume and a Persistent Volume Claim? +> - How does a StorageClass simplify storage management in Kubernetes? + +--- + +## Task 5: Configuration & Secrets Management with ConfigMaps and Secrets + +**Scenario:** +Deploy a component of the SpringBoot BankApp application that consumes external configuration and sensitive data using ConfigMaps and Secrets. + +**Steps:** +1. **Create a ConfigMap:** + - Write a YAML file for a ConfigMap containing configuration data. +2. **Create a Secret:** + - Write a YAML file for a Secret containing sensitive information. +3. **Deploy an Application:** + - Update your application YAML to mount the ConfigMap and Secret. +4. 
**Document in `solution.md`:** + - Include the YAML files and explain how the application uses these resources. + +> [!NOTE] +> +> **Interview Questions:** +> - How would you update a running application if a ConfigMap or Secret is modified? +> - What measures do you take to secure Secrets in Kubernetes? + +--- + +## Task 6: Autoscaling & Resource Management + +**Scenario:** +Implement autoscaling for a component of the SpringBoot BankApp application using the Horizontal Pod Autoscaler (HPA). Optionally, explore Vertical Pod Autoscaling (VPA) and ensure the Metrics Server is running. + +**Steps:** +1. **Deploy an Application with Resource Requests:** + - Deploy an application with defined resource requests and limits. +2. **Create an HPA Resource:** + - Write a YAML file for an HPA that scales the number of replicas based on CPU or memory usage. +3. **(Optional) Implement VPA & Metrics Server:** + - Optionally, deploy a VPA and verify that the Metrics Server is running. +4. **Document in `solution.md`:** + - Include the YAML files and explain how HPA (and optionally VPA) work. + - Discuss the benefits of autoscaling in production. + +> [!NOTE] +> +> **Interview Questions:** +> - What is the process by which the Horizontal Pod Autoscaler scales an application? +> - In what scenarios would vertical scaling (VPA) be more beneficial than horizontal scaling (HPA)? + +--- + +## Task 7: Security & Access Control + +**Scenario:** +Secure your Kubernetes cluster by implementing Role-Based Access Control (RBAC) and additional security measures. + +### Part A: RBAC Implementation +**Steps:** +1. **Configure RBAC:** + - Create roles and role bindings using YAML files for specific user groups (e.g., Admin, Developer, Tester). +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. **Optional Enhancement:** + - Simulate an unauthorized action (e.g., a Developer attempting to delete a critical resource) and document how RBAC prevents it. + - Analyze RBAC logs (if available) to verify that unauthorized access attempts are recorded. +4. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. + - Describe the roles, permissions, and potential risks mitigated by proper RBAC implementation. + +> [!NOTE] +> +> **Interview Questions:** +> - How do RBAC policies help secure a multi-team Kubernetes environment? +> - Can you provide an example of how improper RBAC could compromise a cluster? + +### Part B: Additional Security Controls +**Steps:** +1. **Set Up Taints & Tolerations:** + - Apply taints to nodes and specify tolerations in your Pod specifications. +2. **Define a Pod Disruption Budget (PDB):** + - Write a YAML file for a PDB to ensure a minimum number of Pods remain available during maintenance. +3. **Document in `solution.md`:** + - Include the YAML files and explain how taints, tolerations, and PDBs contribute to cluster stability and security. + +> [!NOTE] +> +> **Interview Questions:** +> - How do taints and tolerations ensure that critical workloads are isolated from interference? +> - Why are Pod Disruption Budgets important for maintaining application availability? + +--- + +## Task 8: Job Scheduling & Custom Resources + +**Scenario:** +Manage scheduled tasks and extend Kubernetes functionality by creating Jobs, CronJobs, and a Custom Resource Definition (CRD). + +**Steps:** +1. **Create a Job and CronJob:** + - Write YAML files for a Job (a one-time task) and a CronJob (a scheduled task). +2. 
**Create a Custom Resource Definition (CRD):** + - Write a YAML file for a CRD and use `kubectl` to create a custom resource. +3. **Document in `solution.md`:** + - Include the YAML files and explain the use cases for Jobs, CronJobs, and CRDs. + - Reflect on how CRDs extend Kubernetes capabilities. + +> [!NOTE] +> +> **Interview Questions:** +> - What factors would influence your decision to use a CronJob versus a Job? +> - How do CRDs enable custom extensions in Kubernetes? + +--- + +## Task 9: Bonus Task: Advanced Deployment with Helm, Service Mesh, or EKS + +**Scenario:** +For an added challenge, deploy a component of the SpringBoot BankApp application using Helm, implement a basic Service Mesh (e.g., Istio), or deploy your cluster on AWS EKS. + +**Steps:** +1. **Helm Deployment:** + - Create a Helm chart for your application. + - Deploy the application using Helm and perform an update. + - *OR* +2. **Service Mesh Implementation:** + - Deploy a basic Service Mesh (using Istio, Linkerd, or Consul) and demonstrate traffic management between services. + - *OR* +3. **Deploy on AWS EKS:** + - Set up an EKS cluster and deploy your application there. +4. **Document in `solution.md`:** + - Include your Helm chart files, Service Mesh configuration, or EKS deployment details. + - Explain the advantages of using Helm, a Service Mesh, or EKS in a production environment. + +> [!NOTE] +> +> **Interview Questions:** +> - How does Helm simplify application deployments in Kubernetes? +> - What are the benefits of using a Service Mesh in a microservices architecture? +> - How does deploying on AWS EKS compare with managing your own Kubernetes cluster? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Ensure all files (e.g., Manifest files, scripts, solution.md, etc.) are committed and pushed to your 90DaysOfDevOps repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `kubernetes-challenge`) to the main repository. + - **Title:** + ``` + Week 7 Challenge - DevOps Batch 9: Kubernetes Basics & Advanced Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Kubernetes challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., architecture, autoscaling, security, job scheduling, and advanced deployments). + - Use the hashtags: **#90DaysOfDevOps #Kubernetes #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. 
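+
+For the Helm option in Task 9, the basic release flow looks roughly like this — a sketch assuming a chart named `bankapp` and a namespace called `bankapp-namespace` (both placeholders; use the names from your earlier tasks):
+
+```bash
+# Scaffold a new chart (creates Chart.yaml, values.yaml, and starter templates)
+helm create bankapp
+
+# Install the chart into your application namespace
+helm install bankapp ./bankapp --namespace bankapp-namespace
+
+# After editing values.yaml or templates, roll out an update
+helm upgrade bankapp ./bankapp --namespace bankapp-namespace
+
+# Inspect release history and roll back if the update misbehaves
+helm history bankapp --namespace bankapp-namespace
+helm rollback bankapp 1 --namespace bankapp-namespace
+```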
+ +--- + +## TrainWithShubham Resources for Kubernetes + +- **[Kubernetes Short Notes](https://www.trainwithshubham.com/products/6515573bf42fc83942cd112e?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)** +- **[Kubernetes One-Shot Video](https://youtu.be/W04brGNgxN4?si=oPscVYz0VFzZig8Q)** +- **[TWS blog on Kubernetes](https://trainwithshubham.blog/)** + +--- + +## Additional Resources + +- **[Kubernetes Official Documentation](https://kubernetes.io/docs/)** +- **[Kubernetes Concepts](https://kubernetes.io/docs/concepts/)** +- **[Helm Documentation](https://helm.sh/docs/)** +- **[Istio Documentation](https://istio.io/latest/docs/)** +- **[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)** +- **[Kubernetes Networking](https://kubernetes.io/docs/concepts/services-networking/)** +- **[Kubernetes Storage](https://kubernetes.io/docs/concepts/storage/)** +- **[Kubernetes Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)** +- **[Kubernetes Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. diff --git a/2025/linux/README.md b/2025/linux/README.md new file mode 100644 index 0000000000..3add4b5e6a --- /dev/null +++ b/2025/linux/README.md @@ -0,0 +1,107 @@ +# Week 2: Linux System Administration & Automation + +Welcome to **Week 2** of the **90 Days of DevOps - 2025 Edition**! This week, we dive into **Linux system administration and automation**, covering essential topics such as **user management, file permissions, log analysis, process control, volume mounts, and shell scripting**. + +--- + +## 🚀 Project: DevOps Linux Server Monitoring & Automation +Imagine you're managing a **Linux-based production server** and need to ensure that **users, logs, and processes** are well-managed. You will perform real-world tasks such as **log analysis, volume management, and automation** to enhance your DevOps skills. + +--- + +## 📌 Tasks + +### **1️⃣ User & Group Management** +- Learn about Linux **users, groups, and permissions** (`/etc/passwd`, `/etc/group`). +- **Task:** + - Create a user `devops_user` and add them to a group `devops_team`. + - Set a password and grant **sudo** access. + - Restrict SSH login for certain users in `/etc/ssh/sshd_config`. + +--- + +### **2️⃣ File & Directory Permissions** +- **Task:** + - Create `/devops_workspace` and a file `project_notes.txt`. + - Set permissions: + - **Owner can edit**, **group can read**, **others have no access**. + - Use `ls -l` to verify permissions. + +--- + +### **3️⃣ Log File Analysis with AWK, Grep & Sed** +Logs are crucial in DevOps! You’ll analyze logs using the **Linux_2k.log** file from **LogHub** ([GitHub Repo](https://github.com/logpai/loghub/blob/master/Linux/Linux_2k.log)). + +- **Task:** + - **Download the log file** from the repository. + - **Extract insights using commands:** + - Use `grep` to find all occurrences of the word **"error"**. + - Use `awk` to extract **timestamps and log levels**. + - Use `sed` to replace all IP addresses with **[REDACTED]** for security. + - **Bonus:** Find the most frequent log entry using `awk` or `sort | uniq -c | sort -nr | head -10`. + +--- + +### **4️⃣ Volume Management & Disk Usage** +- **Task:** + - Create a directory `/mnt/devops_data`. 
+ - Mount a new volume (or loop device for local practice). + - Verify using `df -h` and `mount | grep devops_data`. + +--- + +### **5️⃣ Process Management & Monitoring** +- **Task:** + - Start a background process (`ping google.com > ping_test.log &`). + - Use `ps`, `top`, and `htop` to monitor it. + - Kill the process and verify it's gone. + +--- + +### **6️⃣ Automate Backups with Shell Scripting** +- **Task:** + - Write a shell script to back up `/devops_workspace` as `backup_$(date +%F).tar.gz`. + - Save it in `/backups` and schedule it using `cron`. + - Make the script display a success message in **green text** using `echo -e`. + +--- + +## 🎯 Bonus Tasks (Optional 🚀) +1. Find the **top 5 most common log messages** in `Linux_2k.log` using `awk` and `sort`. +2. Use `find` to list **all files modified in the last 7 days**. +3. Write a script that extracts and displays only **ERROR and WARNING logs** from `Linux_2k.log`. + +--- + +## 📢 How to Submit +- **Write a LinkedIn post** summarizing your Week 2 experience. +- Include screenshots or logs of your tasks. +- **Use hashtags**: `#90DaysOfDevOps` `#LinuxAdmin` `#DevOps` +- Share any blog posts, GitHub repos, or articles you create. + +--- + +## 📚 Resources to Get Started +- [Linux In One Shot](https://youtu.be/e01GGTKmtpc?si=FSVNFRwdNC0NZeba) +- [Linux_2k.log (LogHub)](https://github.com/logpai/loghub/blob/master/Linux/Linux_2k.log) + +--- + +## 📝 Example Submission Post +```markdown +Week 2 of #90DaysOfDevOps2025 done! 🏆 + +✅ Managed users & SSH access +✅ Set up permissions & volumes +✅ Analyzed logs using AWK & grep +✅ Automated backups with a shell script + +Check out my blog here: [Your Blog/GitHub Link] + +#Linux #SysAdmin #DevOps +``` + +--- + +Happy learning, and see you in **Week 3**! 🚀 + diff --git a/2025/networking/README.md b/2025/networking/README.md new file mode 100644 index 0000000000..2abf0e5cf0 --- /dev/null +++ b/2025/networking/README.md @@ -0,0 +1,64 @@ +# Week 1: Networking Challenge + +Welcome to Week 1 of the **90 Days of DevOps - 2025 Edition**! This week's focus is on **Networking**, a foundational skill for every DevOps professional. Let's dive into understanding key networking concepts, tools, and tasks essential for building a strong DevOps career. + +## Tasks + +### 1. **Understand OSI & TCP/IP Models** +- Learn about the OSI and TCP/IP models, including their layers and purposes. +- **Task:** Write examples of how each layer applies to real-world scenarios (e.g., HTTP at the Application Layer, TCP at the Transport Layer). + +### 2. **Protocols and Ports for DevOps** +- Study the most commonly used protocols (e.g., HTTP, HTTPS, FTP, SSH, DNS) and their port numbers. +- **Task:** Create a blog, article, GitHub page, or README listing these protocols and explaining their relevance to DevOps workflows. + +### 3. **AWS EC2 and Security Groups** +- Launch an AWS EC2 instance (free tier is fine). +- Learn about Security Groups, their rules, and their significance in securing cloud instances. +- **Task:** Write a step-by-step guide or blog on how to create and configure Security Groups. + +### 4. **Hands-On with Networking Commands** +- Practice essential networking commands like: + - `ping` (check connectivity) + - `traceroute` / `tracert` (trace packet routes) + - `netstat` (network statistics) + - `curl` (make HTTP requests) + - `dig` / `nslookup` (DNS lookup) +- **Task:** Create a cheat sheet or short guide explaining the purpose and usage of each command. 
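+
+To seed the cheat sheet, here are typical invocations of each command — illustrative examples only; replace `example.com` with hosts you actually want to test:
+
+```bash
+ping -c 4 example.com         # send 4 ICMP echo requests to check connectivity
+traceroute example.com        # show each network hop to the destination (tracert on Windows)
+netstat -tuln                 # list listening TCP/UDP ports
+curl -I https://example.com   # fetch only the HTTP response headers
+dig example.com               # query DNS records for a domain
+nslookup example.com          # simpler DNS lookup alternative
+```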
+ + +--- + +## How to Submit +- Create a LinkedIn post summarizing your Week 1 Networking Challenge experience. +- Include the link to your blog, GitHub page, or README in the comments of your post. +- **Tip:** Use an eye-catching image or flow diagram relevant to networking concepts for better reach and engagement. + +--- + +## Resources to Get Started +- [OSI Model Explained (GeeksforGeeks)](https://www.geeksforgeeks.org/layers-of-osi-model/) +- [Common Networking Protocols](https://en.wikipedia.org/wiki/List_of_network_protocols) +- [AWS Free Tier](https://aws.amazon.com/free/) +- [DNS Basics by Cloudflare](https://www.cloudflare.com/learning/dns/what-is-dns/) +- [Docker Networking](https://docs.docker.com/network/) + +Feel free to explore these resources and expand your learning! + +--- + +### Example Submission Post: +"Week 1 of #90DaysOfDevOps2025 completed! 🚀 + +✅ Learned OSI & TCP/IP models +✅ Explored AWS Security Groups +✅ Practiced networking commands +✅ Set up my first web server + +Check out my blog here: [Your Blog/GitHub Link] + +#Networking #DevOps #90DaysOfDevOps" + +--- + +Good luck, and happy networking! 🌐 diff --git a/2025/observability/README.md b/2025/observability/README.md new file mode 100644 index 0000000000..3363f243d3 --- /dev/null +++ b/2025/observability/README.md @@ -0,0 +1,185 @@ +# Week 10: Observability Challenge with Prometheus and Grafana on KIND/EKS + +This challenge is part of the 90DaysOfDevOps program and focuses on solving advanced, production-grade observability scenarios using Prometheus and Grafana. You will deploy, configure, and fine-tune monitoring and alerting systems on a KIND cluster, and as a bonus, monitor and log an AWS EKS cluster. This exercise is designed to push your skills with advanced configurations, custom queries, dynamic dashboards, and robust alerting mechanisms, while preparing you for technical interviews. + +**Important:** +1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Setup a KIND Cluster for Observability + +**Real-World Scenario:** +Simulate a production-like Kubernetes environment locally by creating a KIND cluster to serve as the foundation for your monitoring setup. + +**Steps:** +1. **Install KIND:** + - Follow the official KIND installation guide. +2. **Create a KIND Cluster:** + - Run: + ```bash + kind create cluster --name observability-cluster + ``` +3. **Verify the Cluster:** + - Run `kubectl get nodes` and capture the output. +4. **Document in `solution.md`:** + - Include installation steps, the commands used, and output from `kubectl get nodes`. + +**Interview Questions:** +- What are the benefits and limitations of using KIND for production-like testing? +- How can you simulate production scenarios using a local KIND cluster? + +--- + +## Task 2: Deploy Prometheus on KIND with Advanced Configurations + +**Real-World Scenario:** +Deploy Prometheus on your KIND cluster with a custom configuration that includes advanced scrape settings and relabeling rules to ensure high-quality metric collection. + +**Steps:** +1. 
**Create a Custom Prometheus Configuration:** + - Write a `prometheus.yml` with custom scrape configurations targeting cluster components (e.g., kube-state-metrics, Node Exporter) and advanced relabeling rules to clean up metric labels. +2. **Deploy Prometheus:** + - Deploy Prometheus using a Kubernetes Deployment or via a Helm chart. +3. **Verify and Tune:** + - Access the Prometheus UI to verify that metrics are being scraped as expected. + - Adjust relabeling rules and scrape intervals to optimize performance. +4. **Document in `solution.md`:** + - Include your `prometheus.yml` and screenshots of the Prometheus UI showing active targets and effective relabeling. + +**Interview Questions:** +- How do advanced relabeling rules refine metric collection in Prometheus? +- What performance issues might you encounter when scraping targets on a KIND cluster, and how would you address them? + +--- + +## Task 3: Deploy Grafana and Build Production-Grade Dashboards + +**Real-World Scenario:** +Deploy Grafana on your KIND cluster and configure it to use Prometheus as a data source. Then, create dashboards that reflect real production metrics, including custom queries and complex visualizations. + +**Steps:** +1. **Deploy Grafana:** + - Create a Kubernetes Deployment and Service for Grafana. +2. **Configure the Data Source:** + - In the Grafana UI, add Prometheus as a data source. +3. **Design Production Dashboards:** + - Create dashboards with panels that display key metrics (e.g., CPU, memory, disk I/O, network latency) using advanced PromQL queries. + - Customize panel visualizations (e.g., graphs, tables, heatmaps) to present data effectively. +4. **Document in `solution.md`:** + - Include configuration details, screenshots of dashboards, and an explanation of the queries and visualization choices. + +**Interview Questions:** +- What factors are critical when designing dashboards for production monitoring? +- How do you optimize PromQL queries for performance and clarity in Grafana? + +--- + +## Task 4: Configure Alerting and Notification Rules + +**Real-World Scenario:** +Establish robust alerting to detect critical issues (e.g., resource exhaustion, node failures) and notify the operations team immediately. + +**Steps:** +1. **Define Alerting Rules:** + - Add alerting rules in `prometheus.yml` or configure Prometheus Alertmanager for specific conditions. +2. **Configure Notification Channels:** + - Set up Grafana (or Alertmanager) to send notifications via email, Slack, or another channel. +3. **Test Alerts:** + - Simulate alert conditions (e.g., by temporarily reducing resources) to verify that notifications are sent. +4. **Document in `solution.md`:** + - Include your alerting configuration, screenshots of triggered alerts, and a brief rationale for chosen thresholds. + +**Interview Questions:** +- How do you design effective alerting rules to minimize false positives in production? +- What challenges do you face in configuring notifications for a dynamic environment? + +--- + +## Task 5: Deploy Node Exporter for Enhanced System Metrics + +**Real-World Scenario:** +Enhance system monitoring by deploying Node Exporter on your KIND cluster to collect detailed metrics such as CPU, memory, disk, and network usage, which are critical for troubleshooting production issues. + +**Steps:** +1. **Deploy Node Exporter:** + - Create a Deployment or DaemonSet to deploy Node Exporter across all nodes in your KIND cluster. +2. 
+
+---
+
+## Task 5: Deploy Node Exporter for Enhanced System Metrics
+
+**Real-World Scenario:**
+Enhance system monitoring by deploying Node Exporter on your KIND cluster to collect detailed metrics such as CPU, memory, disk, and network usage, which are critical for troubleshooting production issues.
+
+**Steps:**
+1. **Deploy Node Exporter:**
+   - Create a DaemonSet (preferred over a Deployment here, since it schedules one pod on every node) to run Node Exporter across all nodes in your KIND cluster.
+2. **Verify Metrics Collection:**
+   - Ensure Node Exporter endpoints are correctly scraped by Prometheus.
+3. **Document in `solution.md`:**
+   - Include your Node Exporter YAML configuration and screenshots showing metrics collected in Prometheus.
+   - Explain the importance of system-level metrics in production monitoring.
+
+**Interview Questions:**
+- What additional system metrics does Node Exporter provide that are crucial for production?
+- How would you integrate Node Exporter metrics into your existing Prometheus setup?
+
+---
+
+## Bonus Task: Monitor and Log an AWS EKS Cluster
+
+**Real-World Scenario:**
+For an added challenge, provision or use an existing AWS EKS cluster and set up Prometheus and Grafana to monitor and log its performance. This task simulates the observability of a production cloud environment.
+
+**Steps:**
+1. **Provision an EKS Cluster:**
+   - Use Terraform to deploy an EKS cluster (or leverage an existing one) and document key configuration settings.
+2. **Deploy Prometheus and Grafana on EKS:**
+   - Configure Prometheus with appropriate scrape targets for the EKS cluster.
+   - Deploy Grafana and integrate it with Prometheus.
+3. **Integrate Logging (Optional):**
+   - Configure a logging solution (e.g., Fluentd or CloudWatch) to capture EKS logs.
+4. **Document in `solution.md`:**
+   - Summarize your EKS provisioning steps, Prometheus and Grafana configurations, and any logging integration.
+   - Explain how monitoring and logging improve observability in a cloud environment.
+
+**Interview Questions:**
+- What are the key challenges of monitoring an EKS cluster versus a local KIND cluster?
+- How would you integrate logging with monitoring tools to ensure comprehensive observability?
+
+---
+
+## How to Submit
+
+1. **Push Your Final Work to GitHub:**
+   - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all files (Prometheus and Grafana configurations, Node Exporter YAML, Terraform files for the bonus task, `solution.md`, etc.) are committed and pushed to your fork.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your branch (e.g., `observability-challenge`) to the main repository.
+   - **Title:**
+     ```
+     Week 10 Challenge - Observability Challenge (Prometheus & Grafana on KIND/EKS)
+     ```
+   - **PR Description:**
+     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.
+
+3. **Submit Your Documentation:**
+   - **Important:** Place your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository.
+
+4. **Share Your Experience on LinkedIn:**
+   - Write a post summarizing your Observability challenge experience.
+   - Include key takeaways, challenges faced, and insights (e.g., KIND/EKS setup, advanced configurations, dashboard creation, alerting strategies, and Node Exporter integration).
+   - Use the hashtags: **#90DaysOfDevOps #Prometheus #Grafana #KIND #EKS #Observability #DevOps #InterviewPrep**
+   - Optionally, provide links to your repository or blog posts detailing your journey.
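+
+---
+
+As an extra reference for Task 5, the sketch below shows a minimal Node Exporter DaemonSet. The image tag and container port are assumptions based on the public `prom/node-exporter` image (check the current release yourself), and a production-grade manifest would also mount `/proc` and `/sys` from the host; treat this as a starting point, not a finished solution.
+
+```bash
+# Sketch only: run one Node Exporter pod per node via a DaemonSet.
+# The image tag below is an assumption -- verify against prom/node-exporter releases.
+cat > node-exporter.yaml <<'EOF'
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: node-exporter
+  labels:
+    app: node-exporter
+spec:
+  selector:
+    matchLabels:
+      app: node-exporter
+  template:
+    metadata:
+      labels:
+        app: node-exporter
+    spec:
+      containers:
+        - name: node-exporter
+          image: prom/node-exporter:v1.8.1
+          ports:
+            - containerPort: 9100
+              name: metrics
+EOF
+
+kubectl apply -f node-exporter.yaml
+kubectl get pods -l app=node-exporter -o wide   # expect one pod per node
+```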
+ +--- + +## TrainWithShubham Resources for Observability + +- **[Prometheus & Grafana One-Shot Video](https://youtu.be/DXZUunEeHqM?si=go1m-THyng7Ipyu6)** + +--- + +## Additional Resources + +- **[Prometheus Official Documentation](https://prometheus.io/docs/)** +- **[Grafana Official Documentation](https://grafana.com/docs/)** +- **[Alertmanager Documentation](https://prometheus.io/docs/alerting/latest/alertmanager/)** +- **[Kubernetes Monitoring with Prometheus](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)** +- **[Grafana Dashboards](https://grafana.com/grafana/dashboards/)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. diff --git a/2025/projects/README.md b/2025/projects/README.md new file mode 100644 index 0000000000..8b13789179 --- /dev/null +++ b/2025/projects/README.md @@ -0,0 +1 @@ + diff --git a/2025/shell_scripting/README.md b/2025/shell_scripting/README.md new file mode 100644 index 0000000000..e8792c3280 --- /dev/null +++ b/2025/shell_scripting/README.md @@ -0,0 +1,130 @@ +## Week 3 Challenge 1: User Account Management + +In this challenge, you will create a bash script that provides options for managing user accounts on the system. The script should allow users to perform various user account-related tasks based on command-line arguments. + +### Part 1: Account Creation + +1. Implement an option `-c` or `--create` that allows the script to create a new user account. The script should prompt the user to enter the new username and password. + +2. Ensure that the script checks whether the username is available before creating the account. If the username already exists, display an appropriate message and exit gracefully. + +3. After creating the account, display a success message with the newly created username. + +### Part 2: Account Deletion + +1. Implement an option `-d` or `--delete` that allows the script to delete an existing user account. The script should prompt the user to enter the username of the account to be deleted. + +2. Ensure that the script checks whether the username exists before attempting to delete the account. If the username does not exist, display an appropriate message and exit gracefully. + +3. After successfully deleting the account, display a confirmation message with the deleted username. + +### Part 3: Password Reset + +1. Implement an option `-r` or `--reset` that allows the script to reset the password of an existing user account. The script should prompt the user to enter the username and the new password. + +2. Ensure that the script checks whether the username exists before attempting to reset the password. If the username does not exist, display an appropriate message and exit gracefully. + +3. After resetting the password, display a success message with the username and the updated password. + +### Part 4: List User Accounts + +1. Implement an option `-l` or `--list` that allows the script to list all user accounts on the system. The script should display the usernames and their corresponding user IDs (UID). + +### Part 5: Help and Usage Information + +1. Implement an option `-h` or `--help` that displays usage information and the available command-line options for the script. 
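+
+Putting Parts 1-5 together, one possible skeleton is sketched below. It is a starting point rather than a model answer: it assumes a Linux host with the standard `useradd`, `userdel`, and `chpasswd` utilities, and it must be run as root.
+
+```bash
+#!/bin/bash
+# user_management.sh -- sketch of the option handling for Parts 1-5.
+# Assumes root privileges and standard Linux user-management tools.
+
+case "$1" in
+  -c|--create)
+    read -rp "New username: " username
+    if id "$username" &>/dev/null; then
+      echo "Error: user '$username' already exists." >&2
+      exit 1
+    fi
+    read -rsp "Password: " password; echo
+    useradd -m "$username"
+    echo "$username:$password" | chpasswd
+    echo "Success: user '$username' created."
+    ;;
+  -d|--delete)
+    read -rp "Username to delete: " username
+    if ! id "$username" &>/dev/null; then
+      echo "Error: user '$username' does not exist." >&2
+      exit 1
+    fi
+    userdel -r "$username"
+    echo "Success: user '$username' deleted."
+    ;;
+  -r|--reset)
+    read -rp "Username: " username
+    if ! id "$username" &>/dev/null; then
+      echo "Error: user '$username' does not exist." >&2
+      exit 1
+    fi
+    read -rsp "New password: " password; echo
+    echo "$username:$password" | chpasswd
+    echo "Success: password reset for '$username'."
+    ;;
+  -l|--list)
+    # Username and UID for every account known to the system
+    awk -F: '{ printf "%s (UID: %s)\n", $1, $3 }' /etc/passwd
+    ;;
+  -h|--help|*)
+    echo "Usage: $0 [-c|--create] [-d|--delete] [-r|--reset] [-l|--list] [-h|--help]"
+    ;;
+esac
+```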
+
+### Bonus Points (Optional)
+
+If you want to challenge yourself further, you can add additional features to the script, such as:
+
+- Displaying more detailed information about user accounts (e.g., home directory, shell, etc.).
+- Allowing the modification of user account properties (e.g., username, user ID, etc.).
+
+Remember to handle errors gracefully, provide appropriate user prompts, and add comments to explain the logic and purpose of each part of the script.
+
+## [Example Interaction: User Account Management Script](./example_interaction_with_usr_acc_mgmt.md)
+
+## Submission Instructions
+
+Create a bash script named `user_management.sh` that implements the User Account Management as described in the challenge.
+
+Add comments in the script to explain the purpose and logic of each part.
+
+## Week 3 Challenge 2: Automated Backup & Recovery using Cron
+
+This is the second challenge of the Week 3 Bash Scripting Challenge! In this challenge, you will create a bash script that performs a backup of a specified directory and implements a rotation mechanism to manage backups.
+
+## Challenge Description
+
+Your task is to create a bash script that takes a directory path as a command-line argument and performs a backup of that directory. The script should create timestamped backup folders and copy all the files from the specified directory into the backup folder.
+
+Additionally, the script should implement a rotation mechanism to keep only the last 3 backups. This means that if there are more than 3 backup folders, the oldest ones should be removed so that only the most recent backups are retained.
+
+> The script will create a timestamped backup folder inside the specified directory and copy all the files into it. It will also check for existing backup folders and remove the oldest backups to keep only the last 3 backups.
+
+## Example Usage
+
+Assume the script is named `backup_with_rotation.sh`. Here is how it behaves when run on different dates:
+
+1. First day (2023-07-30), the script is run three times:
+
+```
+$ ./backup_with_rotation.sh /home/user/documents
+```
+
+Combined output of the three runs:
+
+```
+Backup created: /home/user/documents/backup_2023-07-30_12-30-45
+Backup created: /home/user/documents/backup_2023-07-30_15-20-10
+Backup created: /home/user/documents/backup_2023-07-30_18-40-55
+```
+
+After these runs, the /home/user/documents directory will contain the following items:
+
+```
+backup_2023-07-30_12-30-45
+backup_2023-07-30_15-20-10
+backup_2023-07-30_18-40-55
+file1.txt
+file2.txt
+...
+```
+
+2. Second day (2023-08-01), the script is run once more:
+
+```
+$ ./backup_with_rotation.sh /home/user/documents
+```
+
+Output:
+
+```
+Backup created: /home/user/documents/backup_2023-08-01_09-15-30
+```
+
+After this execution, the /home/user/documents directory will contain the following items:
+
+```
+backup_2023-07-30_15-20-10
+backup_2023-07-30_18-40-55
+backup_2023-08-01_09-15-30
+file1.txt
+file2.txt
+...
+```
+
+In this example, the script creates backup folders with timestamped names and retains only the last 3 backups while removing the older ones.
+
+## Submission Instructions
+
+Create a bash script named `backup_with_rotation.sh` that implements the Directory Backup with Rotation as described in the challenge.
+
+Add comments in the script to explain the purpose and logic of each part.
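+
+One way to structure the script is sketched below; treat it as a hedged starting point and add your own error handling and comments. It assumes GNU `find`/`xargs` (for `xargs -r`) and backup folder names without spaces or newlines.
+
+```bash
+#!/bin/bash
+# backup_with_rotation.sh -- sketch of one possible approach, not the only solution.
+
+set -euo pipefail
+
+if [ $# -ne 1 ] || [ ! -d "$1" ]; then
+    echo "Usage: $0 <directory>" >&2
+    exit 1
+fi
+
+src="$1"
+timestamp=$(date +%F_%H-%M-%S)            # e.g. 2023-07-30_12-30-45
+backup_dir="$src/backup_$timestamp"
+
+# Copy only regular files (not earlier backup folders) into the new backup.
+mkdir -p "$backup_dir"
+find "$src" -maxdepth 1 -type f -exec cp {} "$backup_dir/" \;
+echo "Backup created: $backup_dir"
+
+# Rotation: list backup folders newest-first and delete everything after the 3rd.
+ls -1dt "$src"/backup_* | tail -n +4 | xargs -r rm -rf
+```
+
+For the "Cron" part of the title, you could then schedule the script with a crontab entry such as `0 2 * * * /path/to/backup_with_rotation.sh /home/user/documents` (the time and paths here are placeholders).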
+
+Good luck with both challenges! They will test your ability to interact with user input, manage user accounts, automate scheduled backups, and perform administrative tasks using bash scripting. Happy scripting and managing user accounts!
diff --git a/2025/terraform/README.md b/2025/terraform/README.md
new file mode 100644
index 0000000000..26a696d37c
--- /dev/null
+++ b/2025/terraform/README.md
@@ -0,0 +1,228 @@
+# Week 8: Terraform (Infrastructure as Code) Challenge
+
+This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate complex, real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop), you'll gain practical experience with advanced Terraform topics, including provisioning, state management, variables, modules, workspaces, resource lifecycle management, drift detection, and environment management.
+
+**Important:**
+1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork.
+2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork.
+3. Submit your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository.
+
+---
+
+## Task 1: Install Terraform, Initialize, and Provision a Basic Resource
+
+**Scenario:**
+Begin by installing Terraform, initializing a project, and provisioning a basic resource (e.g., an AWS EC2 instance) to validate your setup.
+
+**Steps:**
+1. **Install Terraform:**
+   - Download and install Terraform on your local machine.
+2. **Initialize a Terraform Project:**
+   - Create a new directory for your Terraform project.
+   - Run `terraform init` to initialize the project.
+3. **Provision a Basic Resource:**
+   - Create a configuration file (e.g., `main.tf`) to provision an AWS EC2 instance (or a similar resource for your cloud provider).
+   - Run `terraform apply` and confirm the changes.
+4. **Document in `solution.md`:**
+   - Include the installation steps, your `main.tf` file, and the output of your `terraform apply` command.
+
+**Interview Questions:**
+- How does Terraform manage resource creation and state?
+- What is the significance of the `terraform init` command in a new project?
+
+---
+
+## Task 2: Manage Terraform State with a Remote Backend
+
+**Scenario:**
+Ensuring state consistency is critical when multiple team members work on infrastructure. Configure a remote backend (e.g., AWS S3 with DynamoDB for locking) to store your Terraform state file.
+
+**Steps:**
+1. **Configure a Remote Backend:**
+   - Add a backend block to your `main.tf` (or a separate backend file) that points Terraform at the remote backend.
+2. **Reinitialize Terraform:**
+   - Run `terraform init` to reinitialize your project with the new backend.
+3. **Document in `solution.md`:**
+   - Include the backend configuration details.
+   - Explain the benefits of using a remote backend and state locking in collaborative environments.
+
+**Interview Questions:**
+- Why is remote state management important in Terraform?
+- How does state locking prevent conflicts during collaborative updates?
+
+---
+
+## Task 3: Use Variables, Outputs, and Workspaces
+
+**Scenario:**
+Improve the flexibility and reusability of your Terraform configuration by using variables, outputs, and workspaces to manage multiple environments.
+
+**Steps:**
+1. 
**Define Variables and Outputs:** + - Create a `variables.tf` file to define configurable parameters (e.g., region, instance type). + - Create an `outputs.tf` file to output key information (e.g., public IP address of the EC2 instance). +2. **Implement Workspaces:** + - Use `terraform workspace new` to create separate workspaces for different environments (e.g., dev, staging, prod). +3. **Document in `solution.md`:** + - Include your `variables.tf`, `outputs.tf`, and a summary of your workspace setup. + - Explain how these features enable dynamic and multi-environment deployments. + +**Interview Questions:** +- How do variables and outputs enhance the reusability of Terraform configurations? +- What is the purpose of workspaces in Terraform, and how would you use them in a production scenario? + +--- + +## Task 4: Create and Use Terraform Modules + +**Scenario:** +Enhance reusability by creating a Terraform module for commonly used resources, and integrate it into your main configuration. + +**Steps:** +1. **Create a Module:** + - In a separate directory (e.g., `modules/ec2_instance`), create a module with `main.tf`, `variables.tf`, and `outputs.tf` for provisioning an EC2 instance. +2. **Reference the Module:** + - Update your main configuration to call the module using a `module` block. +3. **Document in `solution.md`:** + - Provide the module code and the main configuration. + - Explain how modules promote consistency and reduce code duplication. + +**Interview Questions:** +- What are the advantages of using modules in Terraform? +- How would you structure a module for reusable infrastructure components? + +--- + +## Task 5: Resource Dependencies and Lifecycle Management + +**Scenario:** +Ensure correct resource creation order and safe updates by managing dependencies and customizing resource lifecycles. + +**Steps:** +1. **Define Resource Dependencies:** + - Use the `depends_on` meta-argument in your configuration to specify dependencies explicitly. +2. **Configure Resource Lifecycles:** + - Add lifecycle blocks (e.g., `create_before_destroy`) in your resource definitions to manage updates safely. +3. **Document in `solution.md`:** + - Include examples of resource dependencies and lifecycle configurations in your code. + - Explain how these settings prevent downtime during updates. + +**Interview Questions:** +- How does Terraform handle resource dependencies? +- Can you explain the purpose of the `create_before_destroy` lifecycle argument? + +--- + +## Task 6: Infrastructure Drift Detection and Change Management + +**Scenario:** +In production, changes might occur outside of Terraform. Use Terraform commands to detect infrastructure drift and manage changes. + +**Steps:** +1. **Detect Drift:** + - Run `terraform plan` to identify differences between your configuration and the actual infrastructure. +2. **Reconcile Changes:** + - Describe your approach to updating the state or reapplying configurations when drift is detected. +3. **Document in `solution.md`:** + - Include examples of drift detection and your strategy for reconciling differences. + - Reflect on the importance of change management in infrastructure as code. + +**Interview Questions:** +- What is infrastructure drift, and why is it a concern in production environments? +- How would you resolve discrepancies between your Terraform configuration and actual infrastructure? 
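+
+Before moving on, the sketch below pulls Tasks 2, 3, and 5 together in one place. Every concrete value (bucket name, lock table, AMI ID, region, instance type) is a placeholder assumption you would replace with your own, and the backend resources (S3 bucket and DynamoDB table) must already exist.
+
+```bash
+# Sketch for Tasks 2, 3 and 5 -- all names and IDs below are placeholders.
+cat > backend.tf <<'EOF'
+terraform {
+  backend "s3" {
+    bucket         = "my-tf-state-bucket"        # placeholder bucket
+    key            = "online_shop/terraform.tfstate"
+    region         = "us-east-1"
+    dynamodb_table = "tf-state-lock"             # placeholder lock table
+  }
+}
+EOF
+
+cat > variables.tf <<'EOF'
+variable "region" {
+  type    = string
+  default = "us-east-1"
+}
+
+variable "instance_type" {
+  type    = string
+  default = "t2.micro"
+}
+EOF
+
+cat > outputs.tf <<'EOF'
+output "public_ip" {
+  value = aws_instance.web.public_ip
+}
+EOF
+
+cat > main.tf <<'EOF'
+provider "aws" {
+  region = var.region
+}
+
+resource "aws_instance" "web" {
+  ami           = "ami-xxxxxxxxxxxxxxxxx" # placeholder AMI ID
+  instance_type = var.instance_type
+
+  lifecycle {
+    create_before_destroy = true # replacement comes up before the old instance is destroyed
+  }
+}
+EOF
+
+# One workspace (and therefore one state) per environment (Task 3):
+terraform init
+terraform workspace new dev
+terraform workspace new prod
+terraform workspace select dev
+terraform apply
+```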
+
+---
+
+## Task 7: (Optional) Dynamic Pipeline Parameterization for Terraform
+
+**Scenario:**
+Enhance your Terraform configurations by using dynamic input parameters and conditional logic to deploy resources differently based on environment-specific values.
+
+**Steps:**
+1. **Enhance Variables with Conditionals:**
+   - Update your `variables.tf` to include default values and conditional expressions for environment-specific configurations.
+2. **Apply Conditional Logic:**
+   - Use conditional expressions in your resource definitions to adjust attributes based on variable values.
+3. **Document in `solution.md`:**
+   - Explain how dynamic parameterization improves flexibility.
+   - Include sample outputs demonstrating different configurations.
+
+**Interview Questions:**
+- How do conditional expressions in Terraform improve configuration flexibility?
+- Provide an example scenario where dynamic parameters are critical in a deployment pipeline.
+
+---
+
+### **Bonus Task: Multi-Environment Setup with Terraform & Ansible**
+
+**Scenario:**
+Set up **AWS infrastructure** for multiple environments (dev, staging, prod) using **Terraform** for provisioning and **Ansible** for configuration. This includes installing both tools, creating dynamic inventories, and automating Nginx configuration across environments.
+
+1. **Install Tools:**
+   - Install **Terraform** and **Ansible** on your local machine.
+
+2. **Provision AWS Infrastructure with Terraform:**
+   - Create Terraform files to spin up EC2 instances (or similar resources) in dev, staging, and prod.
+   - Apply configurations (e.g., `terraform apply -var-file="dev.tfvars"`) for each environment.
+
+3. **Configure Hosts with Ansible:**
+   - Generate **dynamic inventories** (or separate inventory files) based on Terraform outputs.
+   - Write a playbook to install and configure **Nginx** across all environments.
+   - Run `ansible-playbook -i <your-inventory> nginx_setup.yml` to automate the setup.
+
+4. **Automate & Document:**
+   - Ensure infrastructure changes are version-controlled.
+   - Place all steps, commands, and observations in `solution.md`.
+
+**Interview Questions:**
+- **Terraform & Ansible Integration:** How do you share Terraform outputs (host details) with Ansible inventories?
+- **Multi-Environment Management:** What strategies ensure consistency while keeping dev, staging, and prod isolated?
+- **Nginx Configuration:** How do you handle environment-specific differences for Nginx setups?
+
+---
+
+## How to Submit
+
+1. **Push Your Final Work to GitHub:**
+   - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all Terraform files (configuration files, modules, variable files, `solution.md`, etc.) are committed and pushed to your fork.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your branch (e.g., `terraform-challenge`) to the main repository.
+   - **Title:**
+     ```
+     Week 8 Challenge - Terraform Infrastructure as Code Challenge
+     ```
+   - **PR Description:**
+     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.
+
+3. **Submit Your Documentation:**
+   - **Important:** Place your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository.
+
+4. **Share Your Experience on LinkedIn:**
+   - Write a post summarizing your Terraform challenge experience.
+   - Include key takeaways, challenges faced, and insights (e.g., state management, module usage, drift detection, multi-environment setups).
+   - Use the hashtags: **#90DaysOfDevOps #Terraform #DevOps #InterviewPrep**
+   - Optionally, provide links to your fork or blog posts detailing your journey.
+
+---
+
+## TrainWithShubham Resources for Terraform
+
+- **[Terraform Short Notes](https://www.trainwithshubham.com/products/66d5c45f7345de4e9c1d8b05?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)**
+- **[Terraform One-Shot Video](https://youtu.be/S9mohJI_R34?si=QdRm-JrdKs8ZswXZ)**
+- **[Multi-Environment Setup Blog](https://amitabhdevops.hashnode.dev/devops-project-multi-environment-infrastructure-with-terraform-and-ansible)**
+
+---
+
+## Additional Resources
+
+- **[Terraform Official Documentation](https://www.terraform.io/docs/)**
+- **[Terraform Providers](https://www.terraform.io/docs/providers/index.html)**
+- **[Terraform Modules](https://www.terraform.io/docs/modules/index.html)**
+- **[Terraform State Management](https://www.terraform.io/docs/state/index.html)**
+- **[Terraform Workspaces](https://www.terraform.io/docs/language/state/workspaces.html)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
diff --git a/README.md b/README.md
index c1cc43144e..c075658e1a 100644
--- a/README.md
+++ b/README.md
@@ -1,35 +1,66 @@
 # #90DaysOfDevOps Challenge
 
-## Learn, Upskill, Grow with the Community
-This repository is a Challenge for the DevOps Community to get stronger in DevOps.
-This challenge starts on the 1st January 2023 and in the next 90 Days we promise ourselves to become better at DevOps.
+## Learn, Upskill, Grow with the Community
 
-The reason for making this Public is so that others can learn from the community and help each other grow.
+Join our DevOps community challenge and embark on a 90-day journey to become a better DevOps practitioner. This repository serves as an open invitation to all DevOps enthusiasts who are looking to enhance their skills and knowledge. By participating in this challenge, you will have the opportunity to learn from others in the community, collaborate with like-minded individuals, and ultimately strengthen your DevOps abilities.
+
+Let's come together to grow and achieve new heights in DevOps!
+
+📖 **Discover More in Our Detailed Table of Contents!** Explore the richness of our content and find what you're looking for efficiently. Check out our [TOC here](./TOC.md).
 
 ## Steps:
+
 - Fork[https://github.com/LondheShubham153/90DaysOfDevOps/fork] the Repo.
 - Learn Everyday and add your learnings in the day wise folders.
 - Check out what others are Learning and help/learn from them.
 - Showcase your learnings on LinkedIn
 
+## These are our community Links
+
+- [Telegram Channel](https://t.me/trainwithshubham)
+- [Discord Channel](https://discord.gg/hs3Pmc5F)
+- [WhatsApp Group](https://chat.whatsapp.com/FvRlAAZVxUhCUSZ0Y1s7KY)
+- [YouTube Channel](https://www.youtube.com/@TrainWithShubham)
+- [Website](https://www.trainwithshubham.com/)
+- [LinkedIn](https://www.linkedin.com/in/shubhamlondhe1996/)
-These are our community Links.
-
-- Telegram Channel: https://t.me/trainwithshubham
-- Discord Channel: https://discord.gg/hs3Pmc5F
-- WhatsApp Group: https://chat.whatsapp.com/FvRlAAZVxUhCUSZ0Y1s7KY
-- YouTube Channel: https://www.youtube.com/@TrainWithShubham
-- Website: https://www.trainwithshubham.com/
-- LinkedIn: https://www.linkedin.com/in/shubhamlondhe1996/
 
 ## Events
 
-YouTube Live Announcement:
-https://youtu.be/rO5Rllir-LM
+### YouTube Live Announcement:
+
+https://youtu.be/rO5Rllir-LM
 
-YouTube Playlist for DevOps:
-https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u
+### YouTube Playlist for DevOps:
+
+https://youtube.com/playlist?list=PLlfy9GnSVerRqYJgVYO0UiExj5byjrW8u
 
-DevOps Course:
-https://bit.ly/devops-batch-2
+### DevOps Course:
+
+https://bit.ly/devops-batch-2
+
+## Thanks to all contributors ❤
diff --git a/TOC.md b/TOC.md
new file mode 100644
index 0000000000..604c12e834
--- /dev/null
+++ b/TOC.md
@@ -0,0 +1,136 @@
+## Table of Contents
+
+Below is the index of the incredible DevOps journey that awaits you:
+
+...
+
+### 🌟 [Day 1-7: Introduction to DevOps and Linux Basics](./2023/day01/)
+
+- Description: Kickstart your 90-day journey with the foundational principles of DevOps. Dive deep into the Linux ecosystem, exploring commands, shell scripting, and file permissions.
+- Topics Covered:
+  - [Understanding and defining DevOps](./2023/day01/README.md)
+  - [Getting hands-on with basic to advanced Linux commands](./2023/day02/README.md)
+  - [Grasping the concepts of Linux Shell Scripting](./2023/day04/README.md)
+  - [Exploring advanced shell scripting techniques with practical tasks.](./2023/day05/README.md)
+  - [Deep dive into file permissions and Access Control Lists (ACLs)](./2023/day06/README.md)
+  - [Insights into package managers in Linux and understanding systemctl and systemd](./2023/day07/README.md)
+
+### 🚀 [Day 8-12: Mastering Git & GitHub: From Basics to Advanced Techniques](./2023/day08/)
+
+- Description: Embark on a comprehensive journey through Git and GitHub, from grasping the fundamental concepts to exploring advanced techniques that are essential for DevOps.
+- Topics Covered:
+  - [Introduction and understanding of Git and GitHub.](./2023/day08/README.md)
+  - [Grasping the concept and advantages of Version Control Systems, with a focus on Centralized vs. Distributed.](./2023/day08/README.md)
+  - [Diving deep into the significance, distinctions, and practicalities of Git and GitHub, including setting up repositories and understanding branch differences.](./2023/day09/README.md)
+  - [Exploring advanced Git concepts such as branching, revert, reset, rebase, merge, stash, cherry-pick, and conflict resolution.](./2023/day10/README.md)
+  - [Concluding with celebrations, crafting a Git cheatsheet, and fostering a spirit of continuous learning.](./2023/day12/README.md)
+
+### 💼 [Day 13-15: Delving into Python Essentials for DevOps](./2023/day13/)
+
+- Description: Dive into the world of Python, as this programming language plays a pivotal role in a DevOps engineer's toolkit. Cover the basics, explore diverse data types, understand essential data structures, and leverage Python libraries for DevOps tasks.
+- Topics Covered:
+  - [Introduction to Python: its definition, creator, and the extensive libraries and frameworks it offers.](./2023/day13/README.md)
+  - [Understanding Python's data types and structures.](./2023/day14/README.md)
+  - [Utilizing Python libraries for DevOps tasks while emphasizing hands-on work with data structures and file formats.](./2023/day15/README.md)
+
+### 🐳 [Day 16-21: Deep Dive into Docker for DevOps Engineers](./2023/day16/)
+
+- Description: This module immerses DevOps Engineers in the extensive world of Docker. It equips you with the hands-on skills necessary to build, manage, and optimize Docker containers, create Docker projects, understand related concepts, and share your knowledge with the community.
+- Topics Covered:
+  - [The essence of Docker and its revolutionary packaging into standardized units known as containers.](./2023/day16/README.md)
+  - [A special project day focused on Dockerfiles – understanding their significance and constructing one for a simple web application.](./2023/day17/README.md)
+  - [Expanding knowledge on Docker Compose, its configuration language YAML, and the magic they bring to multi-container applications.](./2023/day18/README.md)
+  - [Docker's storage solutions with Docker Volume, understanding its independence and how it can benefit container data management.](./2023/day19/README.md)
+  - [Important interview questions.](./2023/day21/README.md)
+
+### 🛠️ [Day 22-29: Diving into Jenkins: Basics to Advanced](./2023/day22/)
+
+- Description: Delve into the world of Jenkins, moving from its foundational concepts to the advanced functionality that is integral to DevOps. This will empower you to master CI/CD pipelines, understand the anatomy of Jenkins projects, and optimize Jenkins in the DevOps lifecycle.
+- Topics Covered:
+  - [Introduction to Jenkins and its significance in the DevOps realm.](./2023/day22/README.md)
+  - [Detailed exploration of Jenkins Freestyle Projects.](./2023/day23/README.md)
+  - [Crafting an end-to-end Jenkins CI/CD project for a Node JS application.](./2023/day24/README.md)
+  - [Jenkins Declarative Pipelines, understanding the distinction between declarative and scripted pipelines.](./2023/day26/README.md)
+  - [Leveraging Docker with Jenkins to enhance CI/CD workflows.](./2023/day27/README.md)
+  - [Jenkins Agents and the orchestration between the master and agent for optimized task execution.](./2023/day28/README.md)
+  - [Jenkins important interview questions.](./2023/day29/README.md)
+
+### ☸️ [Day 30-37: Kubernetes Mastery: From Overview to Advanced Implementation](./2023/day30/)
+
+- Description: Dive deep into Kubernetes, the leading container management platform. The journey spans its foundations and architecture all the way to advanced configurations, services, and best practices, equipping you not only with hands-on skills but also with critical insights and understanding.
+- Topics Covered:
+  - [Historical background of Kubernetes, its inspiration from Google's Borg, and its significant role in DevOps.](./2023/day30/README.md)
+  - [Initial setup with launching a Kubernetes Cluster, getting hands-on with minikube, and deploying Nginx.](./2023/day31/README.md)
+  - [Advanced cluster operations, including deployments with features like auto-healing and auto-scaling.](./2023/day32/README.md)
+  - [Working with core Kubernetes concepts like Namespaces, Services, ConfigMaps, Secrets, and Persistent Volumes.](./2023/day33/README.md)
+  - [Mastering ConfigMaps and Secrets in Kubernetes.](./2023/day35/README.md)
+  - [Important interview questions related to Kubernetes.](./2023/day37/README.md)
+
+### ☁️ [Day 38-53: AWS's vast ecosystem and its dominance in the cloud industry](./2023/day38/)
+
+- Description: Dive into Amazon Web Services, starting with the fundamentals and progressing to more complex concepts and tools. Over the course of these days, learn the intricacies of AWS, set up essential services, and work hands-on with CI/CD pipeline concepts.
+- Topics Covered:
+  - [Introduction to AWS and its fundamental components.](./2023/day38/README.md)
+  - [Understanding IAM (Identity and Access Management).](./2023/day39/README.md)
+  - [Hands-on with AWS EC2 (Elastic Compute Cloud), including automation and setting up Application Load Balancers.](./2023/day40/README.md)
+  - [Working with AWS-CLI and S3 programmatic access.](./2023/day42/README.md)
+  - [Grasping the RDS (Relational Database Service) and deploying a WordPress website.](./2023/day44/README.md)
+  - [Monitoring and alerting with AWS CloudWatch and SNS.](./2023/day46/README.md)
+  - [Delving into ECS (Elastic Container Service) and preparing for AWS-based interviews.](./2023/day48/README.md)
+  - [Embarking on a 4-day intensive journey to set up a CI/CD pipeline on AWS, incorporating tools such as CodeCommit, CodeBuild, CodeDeploy, CodePipeline, and S3.](./2023/day50/README.md)
+
+### 🛠️ [Day 54-59: Journey Through Ansible: Configuration Management & Automation](./2023/day54/)
+
+- Description: Venture into the realm of Infrastructure as Code (IaC) and Configuration Management with a detailed focus on Ansible. From basic setups to complex playbooks and hands-on projects, master the nuances of Ansible through step-by-step tasks and comprehensive modules.
+- Topics Covered:
+  - [Introduction to Infrastructure as Code and its significance.](./2023/day54/README.md)
+  - [Diving deep into Configuration Management and the power of Ansible.](./2023/day55/README.md)
+  - [A closer look at Ansible: from installation on AWS EC2 to understanding the hosts file and setting up additional EC2 instances.](./2023/day55/README.md)
+  - [Ad-hoc commands in Ansible: quick commands versus playbooks, their utility, and hands-on tasks involving pinging servers and checking uptime.](./2023/day56/README.md)
+  - [Enhancing understanding through video explanations to make Ansible more engaging and relatable.](./2023/day57/README.md)
+  - [Exploring Ansible Playbooks: their importance, use cases, and deep dives into configurations, deployment, roles, and variables.](./2023/day58/README.md)
+  - [A practical project to solidify understanding: deploying a web app using Ansible, including EC2 setup, Ansible installations, inventory file access, Nginx installations, and deploying a sample webpage.](./2023/day59/README.md)
+
+### ⚙️ [Day 60-71: Dive into Terraform: From Basics to Modules](./2023/day60/)
+
+- Description: Delve deep into Terraform, the renowned infrastructure-as-code tool. Spanning a 12-day learning journey, explore its fundamental concepts, automation potential, advanced configurations, and best practices for AWS deployment.
+- Topics Covered:
+  - [Introduction to Terraform and its pivotal role in automating EC2 instances.](./2023/day60/README.md)
+  - [Becoming familiar with basic and essential Terraform commands.](./2023/day61/README.md)
+  - [The integration between Terraform and Docker, encompassing Blocks, Resources, and Providers.](./2023/day62/README.md)
+  - [Understanding the significance of Terraform variables, and how they interplay in Terraform configurations.](./2023/day63/README.md)
+  - [Deep-dive into the realms of Terraform with AWS, emphasizing resource creation and management.](./2023/day64/README.md)
+  - [Expanding horizons with hands-on Terraform projects, crafting AWS infrastructure using Infrastructure-as-Code techniques.](./2023/day66/README.md)
+  - [AWS S3 Bucket creation, management, and the underlying intricacies.](./2023/day67/README.md)
+  - [Embracing scalability with Terraform - comprehending the art of scaling infrastructure.](./2023/day68/README.md)
+  - [Unraveling the world of Meta-Arguments and their application in Terraform.](./2023/day69/README.md)
+  - [Introduction to the modular world of Terraform - the core, the applications, and the benefits.](./2023/day70/README.md)
+  - [Preparing and acing Terraform interview questions.](./2023/day71/README.md)
+
+### [Day 72-78: 📊 Grafana Mastery: Monitoring, Dashboarding, and Alerting](./2023/day72/)
+
+- Description: Grafana is one of the most versatile open-source platforms for observability. From understanding its essence to setting it up and integrating it with platforms like Docker and cloud services, this comprehensive guide offers a mix of theory and hands-on tasks.
+- Topics Covered:
+  - [Introducing Grafana and exploring its features, benefits, monitoring capabilities, database compatibility, metrics, visualizations, and distinction from Prometheus.](./2023/day72/README.md)
+  - [Setting up Grafana in a local environment on AWS EC2.](./2023/day73/README.md)
+  - [Connecting AWS EC2 instances with Grafana for efficient monitoring.](./2023/day74/README.md)
+  - [Implementing Docker, creating containers, and sharing real-time logs with Grafana.](./2023/day75/README.md)
+  - [Constructing a Grafana dashboard for an organized visualization of metrics.](./2023/day76/README.md)
+  - [Establishing alert systems with Grafana for prompt notifications on system irregularities.](./2023/day77/README.md)
+  - [Exploring Grafana Cloud, setting up alerts for EC2 instances, and managing AWS billing alerts.](./2023/day78/README.md)
+
+### [Day 79+🔥: Comprehensive Dive into DevOps Projects & Prometheus Mastery](./2023/day79/)
+
+- Description: Delve into an extensive journey exploring the vast capabilities of Prometheus, combined with hands-on DevOps projects that span a variety of tools, platforms, and methodologies. Learn how to monitor, automate, deploy, and manage applications effectively using modern DevOps techniques.
+- Topics Covered:
+  - [In-depth understanding of Prometheus: its architecture, features, components, database, and data retention.](./2023/day79/README.md)
+  - Projects to automate and streamline processes:
+    - [Building, testing, and deploying with Jenkins and GitHub.](./2023/day80/README.md)
+    - [Deploying using Jenkins' declarative syntax.](./2023/day81/README.md)
+    - [Hosting static websites on AWS S3.](./2023/day82/README.md)
+    - [Application deployment with Docker Swarm.](./2023/day83/README.md)
+    - [Deploying a Netflix clone using Kubernetes.](./2023/day84/README.md)
+    - [Utilizing AWS ECS Fargate and ECR with a Node JS app.](./2023/day85/README.md)
+    - [Deployment on AWS platforms using GitHub Actions.](./2023/day86/README.md)
+    - [Setting up and deploying a Django Todo app on AWS EC2 with a Kubeadm Kubernetes cluster.](./2023/day88/README.md)
+    - [Mounting an AWS S3 Bucket on Amazon EC2 using S3FS.](./2023/day89/README.md)