diff --git a/2025/ansible/README.md b/2025/ansible/README.md index 8b13789179..5721fffd01 100644 --- a/2025/ansible/README.md +++ b/2025/ansible/README.md @@ -1 +1,193 @@ +# Week 9: Ansible Automation Challenge +This set of tasks is part of the 90DaysOfDevOps challenge and focuses on solving real-world automation problems using Ansible. By completing these tasks on your designated Ansible project repository, you'll work on scenarios that mirror production environments and industry practices. The tasks cover installation, dynamic inventory management, robust playbook development, role organization, secure secret management, and orchestration of multi-tier applications. Your work will help you build practical skills and prepare for technical interviews. + +**Important:** +1. Fork or create your designated Ansible project repository (or use your own) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Install Ansible and Configure a Dynamic Inventory + +**Real-World Scenario:** +In production, inventories change frequently. Set up Ansible with a dynamic inventory (using a script or AWS EC2 plugin) to automatically fetch and update target hosts. + +**Steps:** +1. **Install Ansible:** + - Follow the official installation guide to install Ansible on your local machine. +2. **Configure a Dynamic Inventory:** + - Set up a dynamic inventory using an inventory script or the AWS EC2 dynamic inventory plugin. +3. **Test Connectivity:** + - Run: + ```bash + ansible all -m ping -i dynamic_inventory.py + ``` + to ensure all servers are reachable. +4. **Document in `solution.md`:** + - Include your dynamic inventory configuration and test outputs. + - Explain how dynamic inventories adapt to a production environment. 
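As a concrete starting point, a minimal AWS EC2 dynamic inventory using the `amazon.aws.aws_ec2` plugin might look like the sketch below. The region, tag names, and filename are illustrative assumptions, not requirements of the challenge:

```yaml
# inventory/aws_ec2.yml -- the filename must end in "aws_ec2.yml" for the plugin to load it
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                      # assumed region; change to wherever your instances run
filters:
  tag:Environment: dev             # only match instances tagged Environment=dev (assumption)
keyed_groups:
  - key: tags.Role                 # auto-create groups such as role_web / role_db from the Role tag
    prefix: role
compose:
  ansible_host: public_ip_address  # connect over the public IP
```

You can preview the generated groups with `ansible-inventory -i inventory/aws_ec2.yml --graph`, then run the same ping test against that file instead of a script.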
+ +**Interview Questions:** +- How do dynamic inventories improve the management of production hosts? +- What challenges do dynamic inventory sources present and how can you mitigate them? + +--- + +## Task 2: Develop a Robust Playbook to Install and Configure Nginx + +**Real-World Scenario:** +Web servers like Nginx must be reliably deployed and configured in production. Create a playbook that installs Nginx, configures it using advanced Jinja2 templating (with loops, conditionals, and filters), and verifies that Nginx is running correctly. Incorporate asynchronous task execution with error handling for long-running operations. + +**Steps:** +1. **Create a Comprehensive Playbook:** + - Write a playbook (e.g., `nginx_setup.yml`) that: + - Installs Nginx. + - Deploys a templated Nginx configuration using a Jinja2 template (`nginx.conf.j2`) that includes loops and conditionals. + - Implements asynchronous execution (`async` and `poll`) with error handling. +2. **Test the Playbook:** + - Run the playbook against your dynamic inventory. +3. **Document in `solution.md`:** + - Include your playbook and Jinja2 template. + - Describe your strategies for asynchronous execution and error handling. + +**Interview Questions:** +- How do Jinja2 templates with loops and conditionals improve production configuration management? +- What are the challenges of managing long-running tasks with async in Ansible, and how do you handle errors? + +--- + +## Task 3: Organize Complex Playbooks Using Roles and Advanced Variables + +**Real-World Scenario:** +For large-scale production environments, organizing your playbooks into roles enhances maintainability and collaboration. Refactor your playbooks into roles (e.g., `nginx`, `app`, `db`) and use advanced variable files (with hierarchies and conditionals) to manage different configurations. + +**Steps:** +1. 
**Create Roles:** + - Develop roles for different components (e.g., `nginx`, `app`, `db`) with the standard directory structure (`tasks/`, `handlers/`, `templates/`, `vars/`). +2. **Utilize Advanced Variables:** + - Create hierarchical variable files with default values and override files for various scenarios. +3. **Refactor and Execute:** + - Update your composite playbook to include the roles. +4. **Document in `solution.md`:** + - Provide the role directory structure and sample variable files. + - Explain how this organization improves maintainability and flexibility. + +**Interview Questions:** +- How do roles improve scalability and collaboration in large-scale Ansible projects? +- What strategies do you use for variable precedence and hierarchy in complex environments? + +--- + +## Task 4: Secure Production Data with Advanced Ansible Vault Techniques + +**Real-World Scenario:** +In production, managing secrets securely is critical. Use Ansible Vault to encrypt sensitive data and explore advanced techniques like splitting secrets into multiple files and decrypting them at runtime. + +**Steps:** +1. **Create Encrypted Files:** + - Use `ansible-vault create` to encrypt multiple secret files. +2. **Integrate Vault in Your Playbooks:** + - Modify your playbooks to load encrypted variables from multiple files. +3. **Test Decryption:** + - Run your playbooks with the vault password to ensure proper decryption. +4. **Document in `solution.md`:** + - Outline your vault strategy and best practices (without exposing secrets). + - Explain the importance of secure secret management. + +**Interview Questions:** +- How does Ansible Vault secure sensitive data in production? +- What advanced techniques can you use for managing secrets at scale? + +--- + +## Task 5: Advanced Orchestration for Multi-Tier Deployments + +**Real-World Scenario:** +Deploy a multi-tier application (e.g., frontend, backend, and database) using Ansible roles to manage each tier. 
Use orchestration features (such as `serial`, `order`, and async execution) to ensure a smooth deployment process. + +**Steps:** +1. **Develop a Composite Playbook:** + - Write a playbook that calls multiple roles (e.g., `nginx` for frontend, `app` for backend, `db` for the database). +2. **Manage Execution Order and Async Tasks:** + - Use features like `serial` or `order` and implement asynchronous tasks with error handling where necessary. +3. **Document in `solution.md`:** + - Include your composite playbook and explain your orchestration strategy. + - Describe any asynchronous task handling and error management. + +**Interview Questions:** +- How do you orchestrate multi-tier deployments with Ansible? +- What are the challenges and solutions for asynchronous task execution in a multi-tier environment? + +--- + +## Bonus Task: Multi-Environment Setup with Terraform & Ansible + +**Real-World Scenario:** +Integrate Terraform and Ansible to provision and configure AWS infrastructure across multiple environments (dev, staging, prod). Use Terraform to provision resources using environment-specific variable files and use Ansible to configure them (e.g., install and configure Nginx). + +**Steps:** +1. **Provision with Terraform:** + - Create environment-specific variable files (e.g., `dev.tfvars`, `staging.tfvars`, `prod.tfvars`). + - Apply your Terraform configuration for each environment: + ```bash + terraform apply -var-file="dev.tfvars" + ``` +2. **Configure with Ansible:** + - Create separate inventory files or use a dynamic inventory based on Terraform outputs. + - Write a playbook (e.g., `nginx_setup.yml`) to install and configure Nginx. + - Execute the playbook for each environment. +3. **Document in `solution.md`:** + - Provide your environment-specific variable files, inventory files, and playbook. + - Summarize how Terraform outputs integrate with Ansible to manage multi-environment deployments. 
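One way to make the Terraform-to-Ansible hand-off concrete is sketched below. It assumes your Terraform config defines an output named `web_public_ips` (for example `output "web_public_ips" { value = aws_instance.web[*].public_ip }`) and that `jq` is installed; both names are illustrative assumptions:

```bash
# Provision the dev environment
terraform apply -var-file="dev.tfvars" -auto-approve

# Render a static Ansible inventory from the Terraform output (assumed output name: web_public_ips)
echo "[web]" > inventory_dev.ini
terraform output -json web_public_ips | jq -r '.[]' >> inventory_dev.ini

# Configure the freshly provisioned hosts
ansible-playbook -i inventory_dev.ini nginx_setup.yml
```

Repeating the same three steps with `staging.tfvars` and `prod.tfvars` keeps each environment's inventory isolated while reusing one playbook.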
+ +**Interview Questions:** +- How do you integrate Terraform outputs into Ansible inventories in a production workflow? +- What challenges might you face when managing multi-environment configurations, and how do you overcome them? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Fork or use your designated Ansible project repository and ensure all files (playbooks, roles, inventory files, `solution.md`, etc.) are committed and pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `ansible-challenge`) to the main repository. + - **Title:** + ``` + Week 9 Challenge - Ansible Automation Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Ansible challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., dynamic inventory, multi-tier orchestration, advanced Vault usage, and Terraform-Ansible integration). + - Use the hashtags: **#90DaysOfDevOps #Ansible #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. 
+
+---
+
+## TrainWithShubham Resources for Ansible
+
+- **[Ansible Short Notes](https://www.trainwithshubham.com/products/Ansible-Short-Notes-64ad5f72b308530823e2c036)**
+- **[Ansible One-Shot Video](https://youtu.be/4GwafiGsTUM?si=gqlIsNrfAv495WGj)**
+- **[Multi-env setup blog](https://trainwithshubham.blog/devops-project-multi-environment-infrastructure-with-terraform-and-ansible/)**
+
+---
+
+## Additional Resources
+
+- **[Ansible Official Documentation](https://docs.ansible.com/)**
+- **[Ansible Modules Documentation](https://docs.ansible.com/ansible/latest/modules/modules_by_category.html)**
+- **[Ansible Galaxy](https://galaxy.ansible.com/)**
+- **[Ansible Best Practices](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
diff --git a/2025/cicd/README.md b/2025/cicd/README.md
index 8b13789179..2d68a9b891 100644
--- a/2025/cicd/README.md
+++ b/2025/cicd/README.md
@@ -1 +1,288 @@
+# Week 6: Jenkins (CI/CD) Basics and Advanced Real-World Challenge
+This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks, you'll gain practical experience with advanced Jenkins topics, including pipelines, distributed agents, RBAC, shared libraries, vulnerability scanning, and automated notifications.
+
+Complete each task and document all steps, commands, screenshots, and observations in a file named `solution.md`. This documentation will serve as both your preparation guide and a portfolio piece for interviews.
+
+---
+
+## Task 1: Create a Jenkins Pipeline Job for CI/CD
+
+**Scenario:**
+Create an end-to-end CI/CD pipeline for a sample application.
+
+**Steps:**
+1.
**Set Up a Pipeline Job:** + - Create a new Pipeline job in Jenkins. + - Write a basic Jenkinsfile that automates the build, test, and deployment of a sample application (e.g., a simple web app). + - Suggested stages: **Build**, **Test**, **Deploy**. +2. **Run and Verify the Pipeline:** + - Trigger the pipeline and ensure each stage runs successfully. + - Verify the execution by checking console logs and, if applicable, using `docker ps` to confirm container status. +3. **Document in `solution.md`:** + - Include your Jenkinsfile code and explain the purpose of each stage. + - Note any issues you encountered and how you resolved them. + +**Interview Questions:** +- How do declarative pipelines streamline the CI/CD process compared to scripted pipelines? +- What are the benefits of breaking the pipeline into distinct stages? + +--- + +## Task 2: Build a Multi-Branch Pipeline for a Microservices Application + +**Scenario:** +You have a microservices-based application with multiple components stored in separate Git repositories. Your goal is to create a multi-branch pipeline that builds, tests, and deploys each service concurrently. + +**Steps:** +1. **Set Up a Multi-Branch Pipeline Job:** + - Create a new multi-branch pipeline in Jenkins. + - Configure it to scan your Git repository (or repositories) for branches. +2. **Develop a Jenkinsfile for Each Service:** + - Write a Jenkinsfile that includes stages for **Checkout**, **Build**, **Test**, and **Deploy**. + - Include parallel stages if applicable (e.g., running tests for different services concurrently). +3. **Simulate a Merge Scenario:** + - Create a feature branch and simulate a pull request workflow (using the Jenkins “Pipeline Multibranch” plugin with PR support if available). +4. **Document in `solution.md`:** + - List the Jenkinsfile(s) used, explain your pipeline design, and describe how multi-branch pipelines help manage microservices deployments in production. 
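One possible shape for such a Jenkinsfile is sketched below in declarative syntax; the `make` targets and `deploy.sh` script are placeholders, not part of any challenge repository:

```groovy
// Jenkinsfile (sketch) -- checked into each service repository
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }            // multibranch jobs inject the right branch via "scm"
        }
        stage('Build') {
            steps { sh 'make build' }         // placeholder build command
        }
        stage('Test') {
            parallel {                         // run test suites concurrently
                stage('Unit Tests')        { steps { sh 'make test-unit' } }
                stage('Integration Tests') { steps { sh 'make test-integration' } }
            }
        }
        stage('Deploy') {
            when { branch 'main' }            // feature and PR branches stop after Test
            steps { sh './deploy.sh' }        // placeholder deploy step
        }
    }
}
```

The `when { branch 'main' }` guard is what makes the same Jenkinsfile safe for every branch the multibranch job discovers.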
+ +**Interview Questions:** +- How does a multi-branch pipeline improve continuous integration for microservices? +- What challenges might you face when merging feature branches in a multi-branch pipeline? + +--- + +## Task 3: Configure and Scale Jenkins Agents/Nodes + +**Scenario:** +Your build workload has increased, and you need to configure multiple agents (across different OS types) to distribute the load. + +**Steps:** +1. **Set Up Multiple Agents:** + - Configure at least two agents (e.g., one Linux-based and one Windows-based) in Jenkins. + - Use Docker containers or VMs to simulate different environments. +2. **Label Agents:** + - Assign labels (e.g., `linux`, `windows`) and modify your Jenkinsfile to run appropriate stages on the correct agent. +3. **Run Parallel Jobs:** + - Create jobs that run in parallel across these agents. +4. **Document in `solution.md`:** + - Explain how you configured and verified each agent. + - Describe the benefits of distributed builds in terms of speed and reliability. + +**Interview Questions:** +- What are the benefits and challenges of using distributed agents in Jenkins? +- How can you ensure that jobs are assigned to the correct agent in a multi-platform environment? + +--- + +## Task 4: Implement and Test RBAC in a Multi-Team Environment + +**Scenario:** +In a large organization, different teams (developers, testers, and operations) require different levels of access to Jenkins. You need to configure RBAC to secure your CI/CD pipeline. + +**Steps:** +1. **Configure RBAC:** + - Use Matrix-based security or the Role Strategy Plugin to create roles (e.g., Admin, Developer, Tester). + - Define permissions for each role. +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. 
+ - Explain the importance of access control and provide a potential risk scenario that RBAC helps mitigate. + +**Interview Questions:** +- Why is RBAC essential in a CI/CD environment, and what are the consequences of weak access control? +- Can you describe a scenario where inadequate RBAC could lead to security issues? + +--- + +## Task 5: Develop and Integrate a Jenkins Shared Library + +**Scenario:** +You are working on multiple pipelines that share common tasks (like code quality checks or deployment steps). To avoid duplication and ensure consistency, you need to develop a Shared Library. + +**Steps:** +1. **Create a Shared Library Repository:** + - Set up a separate Git repository that hosts your shared library code. + - Develop reusable functions (e.g., a function for sending notifications or a common test stage). +2. **Integrate the Library:** + - Update your Jenkinsfile(s) from previous tasks to load and use the shared library. + - Use syntax similar to: + ```groovy + @Library('my-shared-library') _ + pipeline { + // pipeline code using shared functions + } + ``` +3. **Document in `solution.md`:** + - Provide code examples from your shared library. + - Explain how this approach improves maintainability and reduces errors. + +**Interview Questions:** +- How do shared libraries contribute to code reuse and maintainability in large organizations? +- Provide an example of a function that would be ideal for a shared library and explain its benefits. + +--- + +## Task 6: Integrate Vulnerability Scanning with Trivy + +**Scenario:** +Security is critical in CI/CD. You must ensure that the Docker images built in your pipeline are free from known vulnerabilities. + +**Steps:** +1. **Add a Vulnerability Scan Stage:** + - Update your Jenkins pipeline to include a stage that runs Trivy on your Docker image: + ```groovy + stage('Vulnerability Scan') { + steps { + sh 'trivy image /sample-app:v1.0' + } + } + ``` +2. 
**Configure Fail Criteria:** + - Optionally, set the stage to fail the build if critical vulnerabilities are detected. +3. **Document in `solution.md`:** + - Summarize the scan output, note the vulnerabilities and severity, and describe any remediation steps. + - Reflect on the importance of automated security scanning in CI/CD pipelines. + +**Interview Questions:** +- Why is integrating vulnerability scanning into a CI/CD pipeline important? +- How does Trivy help improve the security of your Docker images? + +--- + +## Task 7: Dynamic Pipeline Parameterization + +**Scenario:** +In production environments, pipelines need to be flexible and configurable. Implement dynamic parameterization to allow the pipeline to accept runtime parameters (such as target environment, version numbers, or deployment options). + +**Steps:** +1. **Modify Your Jenkinsfile:** + - Update your Jenkinsfile to accept parameters. For example: + ```groovy + pipeline { + agent any + parameters { + string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Deployment target environment') + string(name: 'APP_VERSION', defaultValue: '1.0.0', description: 'Application version to deploy') + } + stages { + stage('Build') { + steps { + echo "Building version ${params.APP_VERSION} for ${params.TARGET_ENV} environment..." + // Build commands here + } + } + // Add other stages as needed + } + } + ``` +2. **Run the Parameterized Pipeline:** + - Trigger the pipeline and provide different parameter values to observe how the pipeline behavior changes. +3. **Document in `solution.md`:** + - Explain how parameterization makes the pipeline dynamic. + - Include sample outputs and discuss how this flexibility is useful in a production CI/CD environment. + +**Interview Questions:** +- How does pipeline parameterization improve the flexibility of CI/CD workflows? +- Provide an example of a scenario where dynamic parameters would be critical in a deployment pipeline. 
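A scenario where dynamic parameters are critical is gating a production deploy behind a choice parameter and a manual approval. A hedged sketch (the `deploy.sh` and `smoke_tests.sh` scripts are placeholders):

```groovy
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET_ENV', choices: ['staging', 'prod'], description: 'Deployment target environment')
        booleanParam(name: 'RUN_SMOKE_TESTS', defaultValue: true, description: 'Run smoke tests after deploying')
    }
    stages {
        stage('Approve Prod Deploy') {
            when { expression { params.TARGET_ENV == 'prod' } }
            steps { input message: 'Really deploy to production?' }   // pauses until a human approves
        }
        stage('Deploy') {
            steps { sh "./deploy.sh ${params.TARGET_ENV}" }           // placeholder deploy script
        }
        stage('Smoke Tests') {
            when { expression { params.RUN_SMOKE_TESTS } }
            steps { sh './smoke_tests.sh' }                           // placeholder test script
        }
    }
}
```

Staging runs go straight through, while `prod` runs stop at the `input` step, so one pipeline serves both environments.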
+ +--- + +## Task 8: Integrate Email Notifications for Build Events + +**Scenario:** +Automated notifications keep teams informed about build statuses. Configure Jenkins to send email alerts upon build completion or failure. + +**Steps:** +1. **Configure SMTP Settings:** + - Set up SMTP details in Jenkins under "Manage Jenkins" → "Configure System". +2. **Update Your Jenkinsfile:** + - Add a stage that uses the `emailext` plugin to send notifications: + ```groovy + stage('Notify') { + steps { + emailext ( + subject: "Build Notification: ${env.JOB_NAME} - Build #${env.BUILD_NUMBER}", + body: "The build has completed successfully. Check details at: ${env.BUILD_URL}", + recipientProviders: [[$class: 'DevelopersRecipientProvider']] + ) + } + } + ``` +3. **Test the Notification:** + - Trigger the pipeline and verify that an email is sent. +4. **Document in `solution.md`:** + - Explain your configuration steps, note any challenges, and describe how you resolved them. + +**Interview Questions:** +- What are the advantages of automating email notifications in CI/CD? +- How would you troubleshoot issues if email notifications fail to send? + +--- + +## Task 9: Troubleshooting, Monitoring & Advanced Debugging + +**Scenario:** +Real-world CI/CD pipelines sometimes fail. Demonstrate how you would troubleshoot and monitor your Jenkins environment. + +**Steps:** +1. **Troubleshooting:** + - Simulate a pipeline failure (e.g., by introducing an error in the Jenkinsfile) and document your troubleshooting process. + - Use commands like `docker logs` and review Jenkins console output. +2. **Monitoring:** + - Describe methods for monitoring Jenkins, such as using system logs or monitoring plugins. +3. **Advanced Debugging:** + - Add debugging statements (e.g., `echo` commands) in your Jenkinsfile to output environment variables or intermediate results. + - Use Jenkins' "Replay" feature to test modifications without committing changes. +4. 
**Document in `solution.md`:** + - Provide a detailed account of your troubleshooting, monitoring, and debugging strategies. + - Reflect on how these practices help maintain a stable CI/CD environment. + +**Interview Questions:** +- How would you approach troubleshooting a failing Jenkins pipeline? +- What are some effective strategies for monitoring Jenkins in a production environment? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Ensure all files (e.g., Jenkinsfile, configuration scripts, `solution.md`, etc.) are committed and pushed to your repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `jenkins-challenge`) to the main repository. + - **Title:** + ``` + Week 6 Challenge - DevOps Batch 9: Jenkins CI/CD Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Jenkins challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., agent configuration, RBAC, shared libraries, vulnerability scanning, and troubleshooting). + - Use the hashtags: **#90DaysOfDevOps #Jenkins #CI/CD #DevOps #InterviewPrep** + - Optionally, provide links to your repository or blog posts detailing your journey. 
+
+---
+
+
+## TrainWithShubham Resources for Jenkins CI/CD
+
+- **[Jenkins Short Notes](https://www.trainwithshubham.com/products/64aac20780964e534608664d?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=p&dgps_uid=66c972da3795a9659545d71a)**
+- **[Jenkins One-Shot Video](https://youtu.be/XaSdKR2fOU4?si=eDmLQMSSh_eMPT_p)**
+- **[TWS blog on Jenkins CI/CD](https://trainwithshubham.blog/automate-cicd-spring-boot-banking-app-jenkins-docker-github/)**
+
+## Additional Resources
+
+- **[Jenkins Official Documentation](https://www.jenkins.io/doc/)**
+- **[Jenkins Pipeline Documentation](https://www.jenkins.io/doc/book/pipeline/)**
+- **[Jenkins Agents and Nodes](https://www.jenkins.io/doc/book/managing/nodes/)**
+- **[Jenkins RBAC & Role Strategy Plugin](https://plugins.jenkins.io/role-strategy/)**
+- **[Jenkins Shared Libraries](https://www.jenkins.io/doc/book/pipeline/shared-libraries/)**
+- **[Trivy Vulnerability Scanner](https://trivy.dev/latest/docs/scanner/vulnerability/)**
+
+---
+
+Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.
\ No newline at end of file
diff --git a/2025/docker/README.md b/2025/docker/README.md
index 8b13789179..194a3ac090 100644
--- a/2025/docker/README.md
+++ b/2025/docker/README.md
@@ -1 +1,235 @@
+# Week 5: Docker Basics & Advanced Challenge
+Welcome to the Week 5 Docker Challenge! In this task, you will work with Docker concepts and tools taught by Shubham Bhaiya. This challenge covers the following topics:
+
+- **Introduction and Purpose:** Understand Docker’s role in modern development.
+- **Virtualization vs. Containerization:** Learn the differences and benefits.
+- **Build Kya Hota Hai (What Is a Build):** Understand the Docker build process.
+- **Docker Terminologies:** Get familiar with key Docker terms.
+- **Docker Components:** Explore Docker Engine, images, containers, and more.
+- **Project Building Using Docker:** Containerize a sample project. +- **Multi-stage Docker Builds / Distroless Images:** Optimize your images. +- **Docker Hub (Push/Tag/Pull):** Manage and distribute your Docker images. +- **Docker Volumes:** Persist data across container runs. +- **Docker Networking:** Connect containers using networks. +- **Docker Compose:** Orchestrate multi-container applications. +- **Docker Scout:** Analyze your images for vulnerabilities and insights. + +Complete all the tasks below and document your steps, commands, and observations in a file named `solution.md`. Finally, share your experience on LinkedIn using the provided guidelines. + +--- + +## Challenge Tasks + +### Task 1: Introduction and Conceptual Understanding +1. **Write an Introduction:** + - In your `solution.md`, provide a brief explanation of Docker’s purpose in modern DevOps. + - Compare **Virtualization vs. Containerization** and explain why containerization is the preferred approach for microservices and CI/CD pipelines. + +--- + +### Task 2: Create a Dockerfile for a Sample Project +1. **Select or Create a Sample Application:** + - Choose a simple application (for example, a basic Node.js, Python, or Java app that prints “Hello, Docker!” or serves a simple web page). + +2. **Write a Dockerfile:** + - Create a `Dockerfile` that defines how to build an image for your application. + - Include comments in your Dockerfile explaining each instruction. + - Build your image using: + ```bash + docker build -t /sample-app:latest . + ``` + +3. **Verify Your Build:** + - Run your container locally to ensure it works as expected: + ```bash + docker run -d -p 8080:80 /sample-app:latest + ``` + - Verify the container is running with: + ```bash + docker ps + ``` + - Check logs using: + ```bash + docker logs + ``` + +--- + +### Task 3: Explore Docker Terminologies and Components +1. 
**Document Key Terminologies:** + - In your `solution.md`, list and briefly describe key Docker terms such as image, container, Dockerfile, volume, and network. + - Explain the main Docker components (Docker Engine, Docker Hub, etc.) and how they interact. + +--- + +### Task 4: Optimize Your Docker Image with Multi-Stage Builds +1. **Implement a Multi-Stage Docker Build:** + - Modify your existing `Dockerfile` to include multi-stage builds. + - Aim to produce a lightweight, **distroless** (or minimal) final image. +2. **Compare Image Sizes:** + - Build your image before and after the multi-stage build modification and compare their sizes using: + ```bash + docker images + ``` +3. **Document the Differences:** + - Explain in `solution.md` the benefits of multi-stage builds and the impact on image size. + +--- + +### Task 5: Manage Your Image with Docker Hub +1. **Tag Your Image:** + - Tag your image appropriately: + ```bash + docker tag /sample-app:latest /sample-app:v1.0 + ``` +2. **Push Your Image to Docker Hub:** + - Log in to Docker Hub if necessary: + ```bash + docker login + ``` + - Push the image: + ```bash + docker push /sample-app:v1.0 + ``` +3. **(Optional) Pull the Image:** + - Verify by pulling your image: + ```bash + docker pull /sample-app:v1.0 + ``` + +--- + +### Task 6: Persist Data with Docker Volumes +1. **Create a Docker Volume:** + - Create a Docker volume: + ```bash + docker volume create my_volume + ``` +2. **Run a Container with the Volume:** + - Run a container using the volume to persist data: + ```bash + docker run -d -v my_volume:/app/data /sample-app:v1.0 + ``` +3. **Document the Process:** + - In `solution.md`, explain how Docker volumes help with data persistence and why they are useful. + +--- + +### Task 7: Configure Docker Networking +1. **Create a Custom Docker Network:** + - Create a custom Docker network: + ```bash + docker network create my_network + ``` +2. 
**Run Containers on the Same Network:** + - Run two containers (e.g., your sample app and a simple database like MySQL) on the same network to demonstrate inter-container communication: + ```bash + docker run -d --name sample-app --network my_network /sample-app:v1.0 + docker run -d --name my-db --network my_network -e MYSQL_ROOT_PASSWORD=root mysql:latest + ``` +3. **Document the Process:** + - In `solution.md`, describe how Docker networking enables container communication and its significance in multi-container applications. + +--- + +### Task 8: Orchestrate with Docker Compose +1. **Create a docker-compose.yml File:** + - Write a `docker-compose.yml` file that defines at least two services (e.g., your sample app and a database). + - Include definitions for services, networks, and volumes. +2. **Deploy Your Application:** + - Bring up your application using: + ```bash + docker-compose up -d + ``` + - Test the setup, then shut it down using: + ```bash + docker-compose down + ``` +3. **Document the Process:** + - Explain each service and configuration in your `solution.md`. + +--- + +### Task 9: Analyze Your Image with Docker Scout +1. **Run Docker Scout Analysis:** + - Execute Docker Scout on your image to generate a detailed report of vulnerabilities and insights: + ```bash + docker scout cves /sample-app:v1.0 + ``` + - Alternatively, if available, run: + ```bash + docker scout quickview /sample-app:v1.0 + ``` + to get a summarized view of the image’s security posture. + - **Optional:** Save the output to a file for further analysis: + ```bash + docker scout cves /sample-app:v1.0 > scout_report.txt + ``` + +2. **Review and Interpret the Report:** + - Carefully review the output and focus on: + - **List of CVEs:** Identify vulnerabilities along with their severity ratings (e.g., Critical, High, Medium, Low). + - **Affected Layers/Dependencies:** Determine which image layers or dependencies are responsible for the vulnerabilities. 
+ - **Suggested Remediations:** Note any recommended fixes or mitigation strategies provided by Docker Scout. + - **Comparison Step:** If possible, compare this report with previous builds to assess improvements or regressions in your image's security posture. + - If Docker Scout is not available in your environment, document that fact and consider using an alternative vulnerability scanner (e.g., Trivy, Clair) for a comparative analysis. + +3. **Document Your Findings:** + - In your `solution.md`, provide a detailed summary of your analysis: + - List the identified vulnerabilities along with their severity levels. + - Specify which layers or dependencies contributed to these vulnerabilities. + - Outline any actionable recommendations or remediation steps. + - Reflect on how these insights might influence your image optimization or overall security strategy. + - **Optional:** Include screenshots or attach the saved report file (`scout_report.txt`) as evidence of your analysis. + +--- + +### Task 10: Documentation and Critical Reflection +1. **Update `solution.md`:** + - List all the commands and steps you executed. + - Provide explanations for each task and detail any improvements made (e.g., image optimization with multi-stage builds). +2. **Reflect on Docker’s Impact:** + - Write a brief reflection on the importance of Docker in modern software development, discussing its benefits and potential challenges. + +--- + +## 📢 How to Submit + +1. **Push Your Final Work:** + - Ensure that your complete project—including your `Dockerfile`, `docker-compose.yml`, `solution.md`, and any additional files (e.g., the Docker Scout report if saved)—is committed and pushed to your repository. + - Verify that all your changes are visible in your repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your working branch (e.g., `docker-challenge`) to the main repository. 
+ - Use a clear and descriptive title, for example: + ``` + Week 5 Challenge - DevOps Batch 9: Docker Basics & Advanced Challenge + ``` + - In the PR description, include the following details: + - A brief summary of your approach and the tasks you completed. + - A list of the key Docker commands used during the challenge. + - Any insights or challenges you encountered (e.g., lessons learned from multi-stage builds or Docker Scout analysis). + +3. **Share Your Experience on LinkedIn:** + - Write a LinkedIn post summarizing your Week 5 Docker challenge experience. + - In your post, include: + - A brief description of the challenge and what you learned. + - Screenshots, logs, or excerpts from your `solution.md` that highlight key steps or interesting findings (e.g., Docker Scout reports). + - The hashtags: **#90DaysOfDevOps #Docker #DevOps** + - Optionally, links to any blog posts or related GitHub repositories that further explain your journey. + +--- + +## Additional Resources + +- **[Docker Documentation](https://docs.docker.com/)** +- **[Docker Hub](https://docs.docker.com/docker-hub/)** +- **[Multi-stage Builds](https://docs.docker.com/develop/develop-images/multistage-build/)** +- **[Docker Compose](https://docs.docker.com/compose/)** +- **[Docker Scan (Vulnerability Scanning)](https://docs.docker.com/engine/scan/)** +- **[Containerization vs. Virtualization](https://www.docker.com/resources/what-container)** + +--- + +Happy coding and best of luck with this Docker challenge! Document your journey thoroughly in `solution.md` and refer to these resources for additional guidance. diff --git a/2025/git/01_Git_and_Github_Basics/README.md b/2025/git/01_Git_and_Github_Basics/README.md new file mode 100644 index 0000000000..589e08c57c --- /dev/null +++ b/2025/git/01_Git_and_Github_Basics/README.md @@ -0,0 +1,212 @@ +# Week 4: Git and GitHub Challenge + +Welcome to the Week 4 Challenge! 
In this task you will practice the essential Git and GitHub commands and concepts taught by Shubham Bhaiya. This includes:
+
+- **Git Basics:** `git init`, `git add`, `git commit`
+- **Repository Management:** `git clone`, forking a repository, and understanding how a GitHub repo is made
+- **Branching:** Creating branches (`git branch`), switching between branches (`git switch` / `git checkout`), and viewing commit history (`git log`)
+- **Authentication:** Pushing and pulling using a Personal Access Token (PAT)
+- **Critical Thinking:** Explaining why branching strategies are important in collaborative development
+
+To make this challenge more difficult, additional steps have been added. You will also be required to explore SSH authentication as a bonus task. Complete all the tasks and document every step in `solution.md`. Finally, share your experience on LinkedIn (details provided at the end).
+
+---
+
+## Challenge Tasks
+
+### Task 1: Fork and Clone the Repository
+1. **Fork the Repository:**
+   - Visit [this repository](https://github.com/LondheShubham153/90DaysOfDevOps) and fork it to your own GitHub account (if you haven't already).
+
+2. **Clone Your Fork Locally:**
+   - Clone the forked repository using HTTPS:
+     ```bash
+     git clone <your-fork-url>
+     ```
+   - Change directory into the cloned repository:
+     ```bash
+     cd 90DaysOfDevOps/2025/git/01_Git_and_Github_Basics
+     ```
+
+---
+
+### Task 2: Initialize a Local Repository and Create a File
+1. **Set Up Your Challenge Directory:**
+   - Inside the cloned repository, create a new directory for this challenge:
+     ```bash
+     mkdir week-4-challenge
+     cd week-4-challenge
+     ```
+
+2. **Initialize a Git Repository:**
+   - Initialize the directory as a new Git repository:
+     ```bash
+     git init
+     ```
+
+3. **Create a File:**
+   - Create a file named `info.txt` and add some initial content (for example, your name and a brief introduction).
+
+4. 
**Stage and Commit Your File:**
+   - Stage the file:
+     ```bash
+     git add info.txt
+     ```
+   - Commit the file with a descriptive message:
+     ```bash
+     git commit -m "Initial commit: Add info.txt with introductory content"
+     ```
+
+---
+
+### Task 3: Configure Remote URL with PAT and Push/Pull
+
+1. **Configure Remote URL with Your PAT:**
+   To avoid entering your Personal Access Token (PAT) every time you push or pull, update your remote URL to include your credentials.
+
+   **⚠️ Note:** Embedding your PAT in the URL is only for this exercise. It is not recommended for production use.
+
+   Replace `<username>` and `<PAT>` with your actual GitHub username and your PAT; the repository name here is `90DaysOfDevOps`:
+
+   ```bash
+   git remote add origin https://<username>:<PAT>@github.com/<username>/90DaysOfDevOps.git
+   ```
+   If a remote named `origin` already exists, update it with:
+   ```bash
+   git remote set-url origin https://<username>:<PAT>@github.com/<username>/90DaysOfDevOps.git
+   ```
+2. **Push Your Commit to Remote:**
+   - Push your current branch (typically `main`) and set the upstream:
+     ```bash
+     git push -u origin main
+     ```
+3. **(Optional) Pull Remote Changes:**
+   - Verify your configuration by pulling changes:
+     ```bash
+     git pull origin main
+     ```
+
+---
+
+### Task 4: Explore Your Commit History
+1. **View the Git Log:**
+   - Check your commit history using:
+     ```bash
+     git log
+     ```
+   - Take note of the commit hash and details as you will reference these in your documentation.
+
+---
+
+### Task 5: Advanced Branching and Switching
+1. **Create a New Branch:**
+   - Create a branch called `feature-update`:
+     ```bash
+     git branch feature-update
+     ```
+
+2. **Switch to the New Branch:**
+   - Switch using `git switch`:
+     ```bash
+     git switch feature-update
+     ```
+   - Alternatively, you can use:
+     ```bash
+     git checkout feature-update
+     ```
+
+3. **Modify the File and Commit Changes:**
+   - Edit `info.txt` (for example, add more details or improvements).
+   - Stage and commit your changes:
+     ```bash
+     git add info.txt
+     git commit -m "Feature update: Enhance info.txt with additional details"
+     git push origin feature-update
+     ```
+   - Merge this branch into `main` via a Pull Request on GitHub.
+
+4. **(Advanced) Optional Extra Challenge:**
+   - If you feel confident, create another branch (e.g., `experimental`) from your main branch, make a conflicting change to `info.txt`, then switch back to `feature-update` and merge `experimental` to simulate a merge conflict. Resolve the conflict manually, then commit the resolution.
+   > *Note: This extra step is optional and intended for those looking for an additional challenge.*
+
+---
+
+### Task 6: Explain Branching Strategies
+1. **Document Your Process:**
+   - Create (or update) a file named `solution.md` in your repository.
+   - List all the Git commands you used in Tasks 1–5.
+   - **Explain:** Write a brief explanation on **why branching strategies are important** in collaborative development. Consider addressing:
+     - Isolating features and bug fixes
+     - Facilitating parallel development
+     - Reducing merge conflicts
+     - Enabling effective code reviews
+
+---
+
+### Bonus Task: Explore SSH Authentication
+1. **Generate an SSH Key (if not already set up):**
+   - Create an SSH key pair:
+     ```bash
+     ssh-keygen -t ed25519
+     ```
+   - Follow the prompts and then locate your public key (typically found at `~/.ssh/id_ed25519.pub`).
+
+2. **Add Your SSH Public Key to GitHub:**
+   - Copy the contents of your public key and add it to your GitHub account under **SSH and GPG keys**.
+     (See [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for help.)
+
+3. **Switch Your Remote URL to SSH:**
+   - Change the remote URL from HTTPS to SSH:
+     ```bash
+     git remote set-url origin git@github.com:<username>/90DaysOfDevOps.git
+     ```
+
+4. 
**Push Your Branch Using SSH:** + - Test the SSH connection by pushing your branch: + ```bash + git push origin feature-update + ``` + +--- + +## 📢 How to Submit + +1. **Push Your Final Work:** + - Ensure your branch (e.g., `feature-update`) with the updated `solution.md` file is pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch to the main repository. + - Use a clear title such as: + ``` + Week 4 Challenge - DevOps Batch 9: Git & GitHub Advanced Challenge + ``` + - In the PR description, summarize your process and list the Git commands you used. + +3. **Share Your Experience on LinkedIn:** + - Write a LinkedIn post summarizing your Week 4 experience. + - Include screenshots or logs of your tasks. + - Use hashtags: **#90DaysOfDevOps #GitGithub #DevOps** + - Optionally, share any blog posts, GitHub repos, or articles you create about this challenge. + +--- + +## Additional Resources + +- **Git Documentation:** + [https://git-scm.com/docs](https://git-scm.com/docs) + +- **Creating a Personal Access Token:** + [GitHub PAT Setup](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) + +- **Forking and Cloning Repositories:** + [Fork a Repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) | [Cloning a Repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) + +- **SSH Authentication with GitHub:** + [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) + +- **Understanding Branching Strategies:** + [Git Branching Strategies](https://www.atlassian.com/git/tutorials/comparing-workflows) + +--- + +Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck. 
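If you want to rehearse Task 5's optional merge-conflict exercise before touching your fork, the sketch below runs entirely in a throwaway repository. It assumes `git` 2.23+ (for `git switch`); the file, branch, and identity names are all illustrative:

```shell
# Rehearse the Task 5 merge-conflict exercise in a scratch repository.
set -e
cd "$(mktemp -d)"
git init -q conflict-demo && cd conflict-demo
git config user.email demo@example.com   # throwaway identity for the demo
git config user.name Demo
base=$(git symbolic-ref --short HEAD)    # default branch: main or master

echo "intro line" > info.txt
git add info.txt && git commit -qm "Initial commit: Add info.txt"

git switch -q -c feature-update          # first branch edits info.txt
echo "feature edit" > info.txt
git commit -qam "Feature update: Enhance info.txt"

git switch -q "$base"
git switch -q -c experimental            # second branch edits the same line
echo "experimental edit" > info.txt
git commit -qam "Experimental: conflicting edit"

git switch -q feature-update
git merge experimental || true           # CONFLICT (content) in info.txt is expected
echo "merged edit" > info.txt            # resolve the conflict by hand
git add info.txt && git commit -qm "Resolve merge conflict"
```

After the final commit, `git log --oneline` shows the merge commit on `feature-update`; deleting the scratch directory discards the whole experiment, so nothing here can damage your fork.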
diff --git a/2025/git/02_Git_and_Github_Advanced/README.md b/2025/git/02_Git_and_Github_Advanced/README.md new file mode 100644 index 0000000000..5b9e775252 --- /dev/null +++ b/2025/git/02_Git_and_Github_Advanced/README.md @@ -0,0 +1,208 @@
+# Week 4: Git & GitHub Advanced Challenge
+
+This challenge covers advanced Git concepts essential for real-world DevOps workflows. By the end of this challenge, you will:
+
+- Understand how to work with Pull Requests effectively.
+- Learn to undo changes using Reset & Revert.
+- Use Stashing to manage uncommitted work.
+- Apply Cherry-picking for selective commits.
+- Keep a clean commit history using Rebasing.
+- Learn industry-standard Branching Strategies.
+
+## **Topics Covered**
+1. Pull Requests – Collaborating in teams.
+2. Reset & Revert – Undo changes safely.
+3. Stashing – Saving work temporarily.
+4. Cherry-picking – Selecting specific commits.
+5. Rebasing – Maintaining a clean history.
+6. Branching Strategies – Industry best practices.
+
+## **Challenge Tasks**
+
+### **Task 1: Working with Pull Requests (PRs)**
+**Scenario:** You are working on a new feature and need to merge your changes into the main branch using a Pull Request.
+
+1. Fork a repository and clone it locally.
+   ```bash
+   git clone <your-fork-url>
+   cd <repository-name>
+   ```
+2. Create a feature branch and make changes.
+   ```bash
+   git checkout -b feature-branch
+   echo "New Feature" >> feature.txt
+   git add .
+   git commit -m "Added a new feature"
+   ```
+3. Push the changes and create a Pull Request.
+   ```bash
+   git push origin feature-branch
+   ```
+4. Open a PR on GitHub, request a review, and merge it once approved.
+
+**Document in `solution.md`**
+- Steps to create a PR.
+- Best practices for writing PR descriptions.
+- Handling review comments.
+
+---
+
+### **Task 2: Undoing Changes – Reset & Revert**
+**Scenario:** You accidentally committed incorrect changes and need to undo them.
+
+1. Create and modify a file.
+   ```bash
+   echo "Wrong code" >> wrong.txt
+   git add .
+   git commit -m "Committed by mistake"
+   ```
+   > *Note: Steps 2–5 are alternative undo methods; run one at a time and re-create the mistaken commit before trying the next.*
+2. Soft Reset (keeps changes staged).
+   ```bash
+   git reset --soft HEAD~1
+   ```
+3. Mixed Reset (unstages changes but keeps files).
+   ```bash
+   git reset --mixed HEAD~1
+   ```
+4. Hard Reset (removes all changes).
+   ```bash
+   git reset --hard HEAD~1
+   ```
+5. Revert a commit safely.
+   ```bash
+   git revert HEAD
+   ```
+
+**Document in `solution.md`**
+- Differences between `reset` and `revert`.
+- When to use each method.
+
+---
+
+### **Task 3: Stashing – Save Work Without Committing**
+**Scenario:** You need to switch branches but don’t want to commit incomplete work.
+
+1. Modify a file without committing.
+   ```bash
+   echo "Temporary Change" >> temp.txt
+   git add temp.txt
+   ```
+2. Stash the changes.
+   ```bash
+   git stash
+   ```
+3. Switch to another branch and apply the stash.
+   ```bash
+   git checkout main
+   git stash pop
+   ```
+
+**Document in `solution.md`**
+- When to use `git stash`.
+- Difference between `git stash pop` and `git stash apply`.
+
+---
+
+### **Task 4: Cherry-Picking – Selectively Apply Commits**
+**Scenario:** A bug fix exists in another branch, and you only want to apply that specific commit.
+
+1. Find the commit to cherry-pick.
+   ```bash
+   git log --oneline
+   ```
+2. Apply a specific commit to the current branch.
+   ```bash
+   git cherry-pick <commit-hash>
+   ```
+3. Resolve conflicts if any.
+   ```bash
+   git cherry-pick --continue
+   ```
+
+**Document in `solution.md`**
+- How cherry-picking is used in bug fixes.
+- Risks of cherry-picking.
+
+---
+
+### **Task 5: Rebasing – Keeping a Clean Commit History**
+**Scenario:** Your branch is behind the main branch and needs to be updated without extra merge commits.
+
+1. Fetch the latest changes.
+   ```bash
+   git fetch origin main
+   ```
+2. Rebase the feature branch onto main.
+   ```bash
+   git rebase origin/main
+   ```
+3. Resolve conflicts and continue.
+   ```bash
+   git rebase --continue
+   ```
+
+**Document in `solution.md`**
+- Difference between `merge` and `rebase`.
+- Best practices for rebasing.
+
+---
+
+### **Task 6: Branching Strategies Used in Companies**
+**Scenario:** Understand real-world branching strategies used in DevOps workflows.
+
+1. Research and explain Git workflows:
+   - Git Flow (Feature, Release, Hotfix branches).
+   - GitHub Flow (Main + Feature branches).
+   - Trunk-Based Development (Continuous Integration).
+
+2. Simulate a Git workflow using branches.
+   ```bash
+   git branch feature-1
+   git branch hotfix-1
+   git checkout feature-1
+   ```
+
+**Document in `solution.md`**
+- Which strategy is best for DevOps and CI/CD.
+- Pros and cons of different workflows.
+
+---
+
+## **How to Submit**
+
+1. **Push your work to GitHub.**
+   ```bash
+   git add .
+   git commit -m "Completed Git & GitHub Advanced Challenge"
+   git push origin main
+   ```
+
+2. **Create a Pull Request.**
+   - Title:
+     ```
+     Git & GitHub Advanced Challenge - Completed
+     ```
+   - PR Description:
+     - Steps followed for each task.
+     - Screenshots or logs (if applicable).
+3. **Share Your Experience on LinkedIn:**
+   - Write a LinkedIn post summarizing your Week 4 Git & GitHub challenge experience.
+   - In your post, include:
+     - A brief description of the challenge and what you learned.
+     - Screenshots or excerpts from your `solution.md` that highlight key steps or interesting findings.
+     - The hashtags: **#90DaysOfDevOps #Git #GitHub #VersionControl #DevOps**
+     - Optionally, links to any blog posts or related GitHub repositories that further explain your journey.
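Task 2's contrast between `reset` and `revert` can be exercised end-to-end in a scratch repository before you document it. This sketch assumes `git` is installed; the repository, file, and identity names are illustrative:

```shell
# Contrast `reset --soft` (rewrites local history) with `revert` (preserves it).
set -e
cd "$(mktemp -d)"
git init -q reset-demo && cd reset-demo
git config user.email demo@example.com   # throwaway identity for the demo
git config user.name Demo

echo "good code" > app.txt
git add app.txt && git commit -qm "Good commit"

echo "wrong code" >> app.txt
git add app.txt && git commit -qm "Committed by mistake"

# Soft reset: HEAD moves back one commit, the bad change stays staged.
git reset --soft HEAD~1
git diff --cached --name-only            # lists app.txt, still staged

# Re-create the mistaken commit, then undo it without rewriting history.
git commit -qm "Committed by mistake (again)"
git revert --no-edit HEAD                # new commit that cancels the bad one
```

After the revert, `app.txt` holds only the good line, while both the mistake and its undo remain visible in `git log`; that preserved history is why `revert` is the safe choice on shared branches, whereas `reset` rewrites history and is best kept to local work.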
+
+---
+
+## **Additional Resources**
+- [Git Official Documentation](https://git-scm.com/doc)
+- [Git Reset & Revert Guide](https://www.atlassian.com/git/tutorials/resetting-checking-out-and-reverting)
+- [Git Stash Explained](https://git-scm.com/book/en/v2/Git-Tools-Stashing-and-Cleaning)
+- [Cherry-Picking Best Practices](https://www.atlassian.com/git/tutorials/cherry-pick)
+- [Branching Strategies for DevOps](https://www.atlassian.com/git/tutorials/comparing-workflows)
+
+---
+
+Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck.
diff --git a/2025/git/README.md b/2025/git/README.md deleted file mode 100644 index 8b13789179..0000000000 --- a/2025/git/README.md +++ /dev/null @@ -1 +0,0 @@
-
diff --git a/2025/kubernetes/README.md b/2025/kubernetes/README.md index 8b13789179..030d3fd81b 100644 --- a/2025/kubernetes/README.md +++ b/2025/kubernetes/README.md @@ -1 +1,299 @@
+# Week 7: Kubernetes Basics & Advanced Challenges
+This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp), you'll gain practical experience with advanced Kubernetes topics, including architecture, core objects, networking, storage management, configuration, autoscaling, security & access control, job scheduling, and bonus topics like Helm, Service Mesh, or AWS EKS.
+
+> [!IMPORTANT]
+>
+> 1. Fork the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp) and implement all tasks on your fork.
+> 2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork.
+> 3. Submit your `solution.md` file in the Week 7 (Kubernetes) task folder of the 90DaysOfDevOps repository.
+ +--- + +## Task 1: Understand Kubernetes Architecture & Deploy a Sample Pod + +**Scenario:** +Familiarize yourself with Kubernetes’ control plane and worker node components, then deploy a simple Pod manually. + +**Steps:** +1. **Study Kubernetes Architecture:** + - Review the roles of control plane components (API Server, Scheduler, Controller Manager, etcd, Cloud Controller) and worker node components (Kubelet, Container Runtime, Kube Proxy). +2. **Deploy a Sample Pod:** + - Create a YAML file (e.g., `pod.yaml`) to deploy a simple Pod (such as an NGINX container). + - Apply the YAML using: + ```bash + kubectl apply -f pod.yaml + ``` +3. **Document in `solution.md`:** + - Describe the Kubernetes architecture components. + - Include your Pod YAML and explain each section. + +> [!NOTE] +> +> **Interview Questions:** +> - Can you explain how the Kubernetes control plane components work together and the role of etcd in this architecture? +> - If a Pod fails to start, what steps would you take to diagnose the issue? + +--- + +## Task 2: Deploy and Manage Core Kubernetes Objects + +**Scenario:** +Deploy core Kubernetes objects for the SpringBoot BankApp application, including Deployments, ReplicaSets, StatefulSets, DaemonSets, and use Namespaces to isolate resources. + +**Steps:** +1. **Create a Namespace:** + - Write a YAML file to create a Namespace for the SpringBoot BankApp application. + - Apply the YAML: + ```bash + kubectl apply -f namespace.yaml + ``` +2. **Deploy a Deployment:** + - Create a YAML file for a Deployment (within your Namespace) that manages a set of Pods running a component of SpringBoot BankApp. + - Verify that a ReplicaSet is created automatically. +3. **Deploy a StatefulSet:** + - Write a YAML file for a StatefulSet (for example, for a database component) and apply it. +4. **Deploy a DaemonSet:** + - Create a YAML file for a DaemonSet to run a Pod on every node. +5. 
**Document in `solution.md`:** + - Include the YAML files for the Namespace, Deployment, StatefulSet, and DaemonSet. + - Explain the differences between these objects and when to use each. + +> [!NOTE] +> +> **Interview Questions:** +> - How does a Deployment ensure that the desired state of Pods is maintained in a cluster? +> - Can you explain the differences between a Deployment, StatefulSet, and DaemonSet, and provide an example scenario for each? + +--- + +## Task 3: Networking & Exposure – Create Services, Ingress, and Network Policies + +**Scenario:** +Expose your SpringBoot BankApp application to internal and external traffic by creating Services and configuring an Ingress, while using Network Policies to secure communication. + +**Steps:** +1. **Create a Service:** + - Write a YAML file for a Service of type ClusterIP. + - Modify the Service type to NodePort or LoadBalancer and apply the YAML. +2. **Configure an Ingress:** + - Create an Ingress resource to route external traffic to your application. +3. **Implement a Network Policy:** + - Write a YAML file for a Network Policy that restricts traffic to your application Pods. +4. **Document in `solution.md`:** + - Include the YAML files for your Service, Ingress, and Network Policy. + - Explain the differences between Service types and the roles of Ingress and Network Policies. + +> [!NOTE] +> +> **Interview Questions:** +> - How do NodePort and LoadBalancer Services differ in terms of exposure and use cases? +> - What is the role of a Network Policy in Kubernetes, and can you describe a scenario where it is essential? + +--- + +## Task 4: Storage Management – Use Persistent Volumes and Claims + +**Scenario:** +Deploy a component of the SpringBoot BankApp application that requires persistent storage by creating Persistent Volumes (PV), Persistent Volume Claims (PVC), and a StorageClass for dynamic provisioning. + +**Steps:** +1. 
**Create a Persistent Volume and Claim:** + - Write YAML files for a static PV and a corresponding PVC. +2. **Deploy an Application Using the PVC:** + - Modify a Pod or Deployment YAML to mount the PVC. +3. **Document in `solution.md`:** + - Include your PV, PVC, and application YAML. + - Explain how StorageClasses facilitate dynamic storage provisioning. + +> [!NOTE] +> +> **Interview Questions:** +> - What are the main differences between a Persistent Volume and a Persistent Volume Claim? +> - How does a StorageClass simplify storage management in Kubernetes? + +--- + +## Task 5: Configuration & Secrets Management with ConfigMaps and Secrets + +**Scenario:** +Deploy a component of the SpringBoot BankApp application that consumes external configuration and sensitive data using ConfigMaps and Secrets. + +**Steps:** +1. **Create a ConfigMap:** + - Write a YAML file for a ConfigMap containing configuration data. +2. **Create a Secret:** + - Write a YAML file for a Secret containing sensitive information. +3. **Deploy an Application:** + - Update your application YAML to mount the ConfigMap and Secret. +4. **Document in `solution.md`:** + - Include the YAML files and explain how the application uses these resources. + +> [!NOTE] +> +> **Interview Questions:** +> - How would you update a running application if a ConfigMap or Secret is modified? +> - What measures do you take to secure Secrets in Kubernetes? + +--- + +## Task 6: Autoscaling & Resource Management + +**Scenario:** +Implement autoscaling for a component of the SpringBoot BankApp application using the Horizontal Pod Autoscaler (HPA). Optionally, explore Vertical Pod Autoscaling (VPA) and ensure the Metrics Server is running. + +**Steps:** +1. **Deploy an Application with Resource Requests:** + - Deploy an application with defined resource requests and limits. +2. **Create an HPA Resource:** + - Write a YAML file for an HPA that scales the number of replicas based on CPU or memory usage. +3. 
**(Optional) Implement VPA & Metrics Server:** + - Optionally, deploy a VPA and verify that the Metrics Server is running. +4. **Document in `solution.md`:** + - Include the YAML files and explain how HPA (and optionally VPA) work. + - Discuss the benefits of autoscaling in production. + +> [!NOTE] +> +> **Interview Questions:** +> - What is the process by which the Horizontal Pod Autoscaler scales an application? +> - In what scenarios would vertical scaling (VPA) be more beneficial than horizontal scaling (HPA)? + +--- + +## Task 7: Security & Access Control + +**Scenario:** +Secure your Kubernetes cluster by implementing Role-Based Access Control (RBAC) and additional security measures. + +### Part A: RBAC Implementation +**Steps:** +1. **Configure RBAC:** + - Create roles and role bindings using YAML files for specific user groups (e.g., Admin, Developer, Tester). +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. **Optional Enhancement:** + - Simulate an unauthorized action (e.g., a Developer attempting to delete a critical resource) and document how RBAC prevents it. + - Analyze RBAC logs (if available) to verify that unauthorized access attempts are recorded. +4. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. + - Describe the roles, permissions, and potential risks mitigated by proper RBAC implementation. + +> [!NOTE] +> +> **Interview Questions:** +> - How do RBAC policies help secure a multi-team Kubernetes environment? +> - Can you provide an example of how improper RBAC could compromise a cluster? + +### Part B: Additional Security Controls +**Steps:** +1. **Set Up Taints & Tolerations:** + - Apply taints to nodes and specify tolerations in your Pod specifications. +2. **Define a Pod Disruption Budget (PDB):** + - Write a YAML file for a PDB to ensure a minimum number of Pods remain available during maintenance. +3. 
**Document in `solution.md`:**
+   - Include the YAML files and explain how taints, tolerations, and PDBs contribute to cluster stability and security.
+
+> [!NOTE]
+>
+> **Interview Questions:**
+> - How do taints and tolerations ensure that critical workloads are isolated from interference?
+> - Why are Pod Disruption Budgets important for maintaining application availability?
+
+---
+
+## Task 8: Job Scheduling & Custom Resources
+
+**Scenario:**
+Manage scheduled tasks and extend Kubernetes functionality by creating Jobs, CronJobs, and a Custom Resource Definition (CRD).
+
+**Steps:**
+1. **Create a Job and CronJob:**
+   - Write YAML files for a Job (a one-time task) and a CronJob (a scheduled task).
+2. **Create a Custom Resource Definition (CRD):**
+   - Write a YAML file for a CRD and use `kubectl` to create a custom resource.
+3. **Document in `solution.md`:**
+   - Include the YAML files and explain the use cases for Jobs, CronJobs, and CRDs.
+   - Reflect on how CRDs extend Kubernetes capabilities.
+
+> [!NOTE]
+>
+> **Interview Questions:**
+> - What factors would influence your decision to use a CronJob versus a Job?
+> - How do CRDs enable custom extensions in Kubernetes?
+
+---
+
+## Task 9 (Bonus): Advanced Deployment with Helm, Service Mesh, or EKS
+
+**Scenario:**
+For an added challenge, deploy a component of the SpringBoot BankApp application using Helm, implement a basic Service Mesh (e.g., Istio), or deploy your cluster on AWS EKS.
+
+**Steps:**
+1. **Helm Deployment:**
+   - Create a Helm chart for your application.
+   - Deploy the application using Helm and perform an update.
+   - *OR*
+2. **Service Mesh Implementation:**
+   - Deploy a basic Service Mesh (using Istio, Linkerd, or Consul) and demonstrate traffic management between services.
+   - *OR*
+3. **Deploy on AWS EKS:**
+   - Set up an EKS cluster and deploy your application there.
+4. 
**Document in `solution.md`:**
+   - Include your Helm chart files, Service Mesh configuration, or EKS deployment details.
+   - Explain the advantages of using Helm, a Service Mesh, or EKS in a production environment.
+
+> [!NOTE]
+>
+> **Interview Questions:**
+> - How does Helm simplify application deployments in Kubernetes?
+> - What are the benefits of using a Service Mesh in a microservices architecture?
+> - How does deploying on AWS EKS compare with managing your own Kubernetes cluster?
+
+---
+
+## How to Submit
+
+1. **Push Your Final Work to GitHub:**
+   - Ensure all files (manifest files, scripts, `solution.md`, etc.) are committed and pushed to your 90DaysOfDevOps repository.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your branch (e.g., `kubernetes-challenge`) to the main repository.
+   - **Title:**
+     ```
+     Week 7 Challenge - DevOps Batch 9: Kubernetes Basics & Advanced Challenge
+     ```
+   - **PR Description:**
+     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.
+
+3. **Share Your Experience on LinkedIn:**
+   - Write a post summarizing your Kubernetes challenge experience.
+   - Include key takeaways, challenges faced, and insights (e.g., architecture, autoscaling, security, job scheduling, and advanced deployments).
+   - Use the hashtags: **#90DaysOfDevOps #Kubernetes #DevOps #InterviewPrep**
+   - Optionally, provide links to your fork or blog posts detailing your journey.
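For orientation on Task 1, a sample Pod could be sketched with a manifest along these lines; the names, the image tag, and the resource figures are illustrative choices, not required values:

```yaml
# pod.yaml: minimal sample Pod for Task 1 (all names illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: sample-nginx
  labels:
    app: sample-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25          # pin a tag instead of relying on :latest
      ports:
        - containerPort: 80
      resources:                 # requests/limits also feed the HPA work in Task 6
        requests:
          cpu: 100m
          memory: 64Mi
        limits:
          cpu: 250m
          memory: 128Mi
```

Apply it with `kubectl apply -f pod.yaml` and check it with `kubectl get pod sample-nginx`; if the Pod sits in `Pending` or `CrashLoopBackOff`, `kubectl describe pod` and `kubectl logs` are the first diagnostic stops, which also speaks to Task 1's second interview question.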
+ +--- + +## TrainWithShubham Resources for Kubernetes + +- **[Kubernetes Short Notes](https://www.trainwithshubham.com/products/6515573bf42fc83942cd112e?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)** +- **[Kubernetes One-Shot Video](https://youtu.be/W04brGNgxN4?si=oPscVYz0VFzZig8Q)** +- **[TWS blog on Kubernetes](https://trainwithshubham.blog/)** + +--- + +## Additional Resources + +- **[Kubernetes Official Documentation](https://kubernetes.io/docs/)** +- **[Kubernetes Concepts](https://kubernetes.io/docs/concepts/)** +- **[Helm Documentation](https://helm.sh/docs/)** +- **[Istio Documentation](https://istio.io/latest/docs/)** +- **[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)** +- **[Kubernetes Networking](https://kubernetes.io/docs/concepts/services-networking/)** +- **[Kubernetes Storage](https://kubernetes.io/docs/concepts/storage/)** +- **[Kubernetes Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)** +- **[Kubernetes Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. diff --git a/2025/observability/README.md b/2025/observability/README.md new file mode 100644 index 0000000000..3363f243d3 --- /dev/null +++ b/2025/observability/README.md @@ -0,0 +1,185 @@ +# Week 10: Observability Challenge with Prometheus and Grafana on KIND/EKS + +This challenge is part of the 90DaysOfDevOps program and focuses on solving advanced, production-grade observability scenarios using Prometheus and Grafana. You will deploy, configure, and fine-tune monitoring and alerting systems on a KIND cluster, and as a bonus, monitor and log an AWS EKS cluster. 
This exercise is designed to push your skills with advanced configurations, custom queries, dynamic dashboards, and robust alerting mechanisms, while preparing you for technical interviews. + +**Important:** +1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Setup a KIND Cluster for Observability + +**Real-World Scenario:** +Simulate a production-like Kubernetes environment locally by creating a KIND cluster to serve as the foundation for your monitoring setup. + +**Steps:** +1. **Install KIND:** + - Follow the official KIND installation guide. +2. **Create a KIND Cluster:** + - Run: + ```bash + kind create cluster --name observability-cluster + ``` +3. **Verify the Cluster:** + - Run `kubectl get nodes` and capture the output. +4. **Document in `solution.md`:** + - Include installation steps, the commands used, and output from `kubectl get nodes`. + +**Interview Questions:** +- What are the benefits and limitations of using KIND for production-like testing? +- How can you simulate production scenarios using a local KIND cluster? + +--- + +## Task 2: Deploy Prometheus on KIND with Advanced Configurations + +**Real-World Scenario:** +Deploy Prometheus on your KIND cluster with a custom configuration that includes advanced scrape settings and relabeling rules to ensure high-quality metric collection. + +**Steps:** +1. **Create a Custom Prometheus Configuration:** + - Write a `prometheus.yml` with custom scrape configurations targeting cluster components (e.g., kube-state-metrics, Node Exporter) and advanced relabeling rules to clean up metric labels. +2. **Deploy Prometheus:** + - Deploy Prometheus using a Kubernetes Deployment or via a Helm chart. +3. 
**Verify and Tune:** + - Access the Prometheus UI to verify that metrics are being scraped as expected. + - Adjust relabeling rules and scrape intervals to optimize performance. +4. **Document in `solution.md`:** + - Include your `prometheus.yml` and screenshots of the Prometheus UI showing active targets and effective relabeling. + +**Interview Questions:** +- How do advanced relabeling rules refine metric collection in Prometheus? +- What performance issues might you encounter when scraping targets on a KIND cluster, and how would you address them? + +--- + +## Task 3: Deploy Grafana and Build Production-Grade Dashboards + +**Real-World Scenario:** +Deploy Grafana on your KIND cluster and configure it to use Prometheus as a data source. Then, create dashboards that reflect real production metrics, including custom queries and complex visualizations. + +**Steps:** +1. **Deploy Grafana:** + - Create a Kubernetes Deployment and Service for Grafana. +2. **Configure the Data Source:** + - In the Grafana UI, add Prometheus as a data source. +3. **Design Production Dashboards:** + - Create dashboards with panels that display key metrics (e.g., CPU, memory, disk I/O, network latency) using advanced PromQL queries. + - Customize panel visualizations (e.g., graphs, tables, heatmaps) to present data effectively. +4. **Document in `solution.md`:** + - Include configuration details, screenshots of dashboards, and an explanation of the queries and visualization choices. + +**Interview Questions:** +- What factors are critical when designing dashboards for production monitoring? +- How do you optimize PromQL queries for performance and clarity in Grafana? + +--- + +## Task 4: Configure Alerting and Notification Rules + +**Real-World Scenario:** +Establish robust alerting to detect critical issues (e.g., resource exhaustion, node failures) and notify the operations team immediately. + +**Steps:** +1. 
**Define Alerting Rules:**
+   - Define alerting rules in a rules file referenced from `prometheus.yml` (via `rule_files`), and configure Alertmanager to route the resulting alerts.
+2. **Configure Notification Channels:**
+   - Set up Grafana (or Alertmanager) to send notifications via email, Slack, or another channel.
+3. **Test Alerts:**
+   - Simulate alert conditions (e.g., by temporarily reducing resources) to verify that notifications are sent.
+4. **Document in `solution.md`:**
+   - Include your alerting configuration, screenshots of triggered alerts, and a brief rationale for chosen thresholds.
+
+**Interview Questions:**
+- How do you design effective alerting rules to minimize false positives in production?
+- What challenges do you face in configuring notifications for a dynamic environment?
+
+---
+
+## Task 5: Deploy Node Exporter for Enhanced System Metrics
+
+**Real-World Scenario:**
+Enhance system monitoring by deploying Node Exporter on your KIND cluster to collect detailed metrics such as CPU, memory, disk, and network usage, which are critical for troubleshooting production issues.
+
+**Steps:**
+1. **Deploy Node Exporter:**
+   - Create a DaemonSet to run Node Exporter on every node in your KIND cluster (a DaemonSet, unlike a Deployment, guarantees one Pod per node).
+2. **Verify Metrics Collection:**
+   - Ensure Node Exporter endpoints are correctly scraped by Prometheus.
+3. **Document in `solution.md`:**
+   - Include your Node Exporter YAML configuration and screenshots showing metrics collected in Prometheus.
+   - Explain the importance of system-level metrics in production monitoring.
+
+**Interview Questions:**
+- What additional system metrics does Node Exporter provide that are crucial for production?
+- How would you integrate Node Exporter metrics into your existing Prometheus setup?
+
+---
+
+## Bonus Task: Monitor and Log an AWS EKS Cluster
+
+**Real-World Scenario:**
+For an added challenge, provision or use an existing AWS EKS cluster and set up Prometheus and Grafana to monitor and log its performance. 
This task simulates the observability of a production cloud environment. + +**Steps:** +1. **Provision an EKS Cluster:** + - Use Terraform to deploy an EKS cluster (or leverage an existing one) and document key configuration settings. +2. **Deploy Prometheus and Grafana on EKS:** + - Configure Prometheus with appropriate scrape targets for the EKS cluster. + - Deploy Grafana and integrate it with Prometheus. +3. **Integrate Logging (Optional):** + - Optionally, configure a logging solution (e.g., Fluentd or CloudWatch) to capture EKS logs. +4. **Document in `solution.md`:** + - Summarize your EKS provisioning steps, Prometheus and Grafana configurations, and any logging integration. + - Explain how monitoring and logging improve observability in a cloud environment. + +**Interview Questions:** +- What are the key challenges of monitoring an EKS cluster versus a local KIND cluster? +- How would you integrate logging with monitoring tools to ensure comprehensive observability? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all files (Prometheus and Grafana configurations, Node Exporter YAML, Terraform files for the bonus task, `solution.md`, etc.) are committed and pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `observability-challenge`) to the main repository. + - **Title:** + ``` + Week 10 Challenge - Observability Challenge (Prometheus & Grafana on KIND/EKS) + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Observability challenge experience. 
+ - Include key takeaways, challenges faced, and insights (e.g., KIND/EKS setup, advanced configurations, dashboard creation, alerting strategies, and Node Exporter integration). + - Use the hashtags: **#90DaysOfDevOps #Prometheus #Grafana #KIND #EKS #Observability #DevOps #InterviewPrep** + - Optionally, provide links to your repository or blog posts detailing your journey. + +--- + +## TrainWithShubham Resources for Observability + +- **[Prometheus & Grafana One-Shot Video](https://youtu.be/DXZUunEeHqM?si=go1m-THyng7Ipyu6)** + +--- + +## Additional Resources + +- **[Prometheus Official Documentation](https://prometheus.io/docs/)** +- **[Grafana Official Documentation](https://grafana.com/docs/)** +- **[Alertmanager Documentation](https://prometheus.io/docs/alerting/latest/alertmanager/)** +- **[Kubernetes Monitoring with Prometheus](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)** +- **[Grafana Dashboards](https://grafana.com/grafana/dashboards/)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. diff --git a/2025/terraform/README.md b/2025/terraform/README.md index 8b13789179..26a696d37c 100644 --- a/2025/terraform/README.md +++ b/2025/terraform/README.md @@ -1 +1,228 @@ +# Week 8: Terraform (Infrastructure as Code) Challenge +This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate complex, real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop), you'll gain practical experience with advanced Terraform topics, including provisioning, state management, variables, modules, workspaces, resource lifecycle management, drift detection, and environment management. + +**Important:** +1. 
Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Install Terraform, Initialize, and Provision a Basic Resource + +**Scenario:** +Begin by installing Terraform, initializing a project, and provisioning a basic resource (e.g., an AWS EC2 instance) to validate your setup. + +**Steps:** +1. **Install Terraform:** + - Download and install Terraform on your local machine. +2. **Initialize a Terraform Project:** + - Create a new directory for your Terraform project. + - Run `terraform init` to initialize the project. +3. **Provision a Basic Resource:** + - Create a configuration file (e.g., `main.tf`) to provision an AWS EC2 instance (or a similar resource for your cloud provider). + - Run `terraform apply`, review the planned changes, and confirm. +4. **Document in `solution.md`:** + - Include the installation steps, your `main.tf` file, and the output of your `terraform apply` command. + +**Interview Questions:** +- How does Terraform manage resource creation and state? +- What is the significance of the `terraform init` command in a new project? + +--- + +## Task 2: Manage Terraform State with a Remote Backend + +**Scenario:** +Ensuring state consistency is critical when multiple team members work on infrastructure. Configure a remote backend (e.g., AWS S3 with DynamoDB for locking) to store your Terraform state file. + +**Steps:** +1. **Configure a Remote Backend:** + - Add a `backend` block (in `main.tf` or a separate `backend.tf`) pointing at an S3 bucket, with a DynamoDB table for state locking. +2. **Reinitialize Terraform:** + - Run `terraform init` (with `-migrate-state` if you are moving existing local state) to reinitialize your project with the new backend. +3. **Document in `solution.md`:** + - Include the backend configuration details. 
+ - Explain the benefits of using a remote backend and state locking in collaborative environments. + +**Interview Questions:** +- Why is remote state management important in Terraform? +- How does state locking prevent conflicts during collaborative updates? + +--- + +## Task 3: Use Variables, Outputs, and Workspaces + +**Scenario:** +Improve the flexibility and reusability of your Terraform configuration by using variables, outputs, and workspaces to manage multiple environments. + +**Steps:** +1. **Define Variables and Outputs:** + - Create a `variables.tf` file to define configurable parameters (e.g., region, instance type). + - Create an `outputs.tf` file to output key information (e.g., public IP address of the EC2 instance). +2. **Implement Workspaces:** + - Use `terraform workspace new` to create separate workspaces for different environments (e.g., dev, staging, prod). +3. **Document in `solution.md`:** + - Include your `variables.tf`, `outputs.tf`, and a summary of your workspace setup. + - Explain how these features enable dynamic and multi-environment deployments. + +**Interview Questions:** +- How do variables and outputs enhance the reusability of Terraform configurations? +- What is the purpose of workspaces in Terraform, and how would you use them in a production scenario? + +--- + +## Task 4: Create and Use Terraform Modules + +**Scenario:** +Enhance reusability by creating a Terraform module for commonly used resources, and integrate it into your main configuration. + +**Steps:** +1. **Create a Module:** + - In a separate directory (e.g., `modules/ec2_instance`), create a module with `main.tf`, `variables.tf`, and `outputs.tf` for provisioning an EC2 instance. +2. **Reference the Module:** + - Update your main configuration to call the module using a `module` block. +3. **Document in `solution.md`:** + - Provide the module code and the main configuration. + - Explain how modules promote consistency and reduce code duplication. 
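The module layout described in Task 4 can be sketched as follows. This is a minimal illustration, not the challenge's reference solution: the AMI ID, variable names, and `module "web_server"` label are placeholders you would replace with your own values.

```hcl
# modules/ec2_instance/variables.tf
variable "ami_id" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

# modules/ec2_instance/main.tf
resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

# modules/ec2_instance/outputs.tf
output "public_ip" {
  value = aws_instance.this.public_ip
}

# Root main.tf: consume the module via a module block
module "web_server" {
  source        = "./modules/ec2_instance"
  ami_id        = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
}
```

After adding a `module` block, run `terraform init` again so Terraform can install the module before planning.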
+ +**Interview Questions:** +- What are the advantages of using modules in Terraform? +- How would you structure a module for reusable infrastructure components? + +--- + +## Task 5: Resource Dependencies and Lifecycle Management + +**Scenario:** +Ensure correct resource creation order and safe updates by managing dependencies and customizing resource lifecycles. + +**Steps:** +1. **Define Resource Dependencies:** + - Use the `depends_on` meta-argument in your configuration to specify dependencies explicitly. +2. **Configure Resource Lifecycles:** + - Add lifecycle blocks (e.g., `create_before_destroy`) in your resource definitions to manage updates safely. +3. **Document in `solution.md`:** + - Include examples of resource dependencies and lifecycle configurations in your code. + - Explain how these settings prevent downtime during updates. + +**Interview Questions:** +- How does Terraform handle resource dependencies? +- Can you explain the purpose of the `create_before_destroy` lifecycle argument? + +--- + +## Task 6: Infrastructure Drift Detection and Change Management + +**Scenario:** +In production, changes might occur outside of Terraform. Use Terraform commands to detect infrastructure drift and manage changes. + +**Steps:** +1. **Detect Drift:** + - Run `terraform plan` to identify differences between your configuration and the actual infrastructure. +2. **Reconcile Changes:** + - Describe your approach to updating the state or reapplying configurations when drift is detected. +3. **Document in `solution.md`:** + - Include examples of drift detection and your strategy for reconciling differences. + - Reflect on the importance of change management in infrastructure as code. + +**Interview Questions:** +- What is infrastructure drift, and why is it a concern in production environments? +- How would you resolve discrepancies between your Terraform configuration and actual infrastructure? 
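The drift-detection workflow from Task 6 can be sketched as a short command sequence (assuming Terraform 1.x, where refresh-only mode is available):

```bash
# Report drift without proposing configuration changes: Terraform
# refreshes state in memory and shows differences between recorded
# state and the real infrastructure.
terraform plan -refresh-only

# Option 1: the manual changes are intentional -- record them in state.
terraform apply -refresh-only

# Option 2: the configuration is the source of truth -- converge
# the infrastructure back to it.
terraform plan -out=tfplan
terraform apply tfplan
```

Note that refresh-only mode only reconciles state with reality; reverting unwanted out-of-band changes still requires a normal `terraform apply`.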
+ +--- + +## Task 7: (Optional) Dynamic Pipeline Parameterization for Terraform + +**Scenario:** +Enhance your Terraform configurations by using dynamic input parameters and conditional logic to deploy resources differently based on environment-specific values. + +**Steps:** +1. **Enhance Variables with Conditionals:** + - Update your `variables.tf` to include default values and conditional expressions for environment-specific configurations. +2. **Apply Conditional Logic:** + - Use conditional expressions in your resource definitions to adjust attributes based on variable values. +3. **Document in `solution.md`:** + - Explain how dynamic parameterization improves flexibility. + - Include sample outputs demonstrating different configurations. + +**Interview Questions:** +- How do conditional expressions in Terraform improve configuration flexibility? +- Provide an example scenario where dynamic parameters are critical in a deployment pipeline. + +--- + +### **Bonus Task: Multi-Environment Setup with Terraform & Ansible** + +**Scenario:** +Set up **AWS infrastructure** for multiple environments (dev, staging, prod) using **Terraform** for provisioning and **Ansible** for configuration. This includes installing both tools, creating dynamic inventories, and automating Nginx configuration across environments. + +**Steps:** +1. **Install Tools:** + - Install **Terraform** and **Ansible** on your local machine. + +2. **Provision AWS Infrastructure with Terraform:** + - Create Terraform files to spin up EC2 instances (or similar resources) in dev, staging, and prod. + - Apply configurations (e.g., `terraform apply -var-file="dev.tfvars"`) for each environment. + +3. **Configure Hosts with Ansible:** + - Generate **dynamic inventories** (or separate inventory files) based on Terraform outputs. + - Write a playbook to install and configure **Nginx** across all environments. + - Run `ansible-playbook -i inventory nginx_setup.yml` (pointing `-i` at the inventory built from Terraform outputs) to automate the setup. +4. 
**Automate & Document:** + - Ensure infrastructure changes are version-controlled. + - Place all steps, commands, and observations in `solution.md`. + +**Interview Questions:** +- **Terraform & Ansible Integration:** How do you share Terraform outputs (host details) with Ansible inventories? +- **Multi-Environment Management:** What strategies ensure consistency while keeping dev, staging, and prod isolated? +- **Nginx Configuration:** How do you handle environment-specific differences for Nginx setups? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all Terraform files (configuration files, modules, variable files, `solution.md`, etc.) are committed and pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `terraform-challenge`) to the main repository. + - **Title:** + ``` + Week 8 Challenge - Terraform Infrastructure as Code Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Terraform challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., state management, module usage, drift detection, multi-environment setups). + - Use the hashtags: **#90DaysOfDevOps #Terraform #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. 
+ +--- + +## TrainWithShubham Resources for Terraform + +- **[Terraform Short Notes](https://www.trainwithshubham.com/products/66d5c45f7345de4e9c1d8b05?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)** +- **[Terraform One-Shot Video](https://youtu.be/S9mohJI_R34?si=QdRm-JrdKs8ZswXZ)** +- **[Multi-Environment Setup Blog](https://amitabhdevops.hashnode.dev/devops-project-multi-environment-infrastructure-with-terraform-and-ansible)** + +--- + +## Additional Resources + +- **[Terraform Official Documentation](https://www.terraform.io/docs/)** +- **[Terraform Providers](https://www.terraform.io/docs/providers/index.html)** +- **[Terraform Modules](https://www.terraform.io/docs/modules/index.html)** +- **[Terraform State Management](https://www.terraform.io/docs/state/index.html)** +- **[Terraform Workspaces](https://www.terraform.io/docs/language/state/workspaces.html)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews.