
Web-App-DevOps-Project

Welcome to the Web App DevOps Project repo! This application allows you to efficiently manage and track orders for a potential business. It provides an intuitive user interface for viewing existing orders and adding new ones.


Features

  • Order List: View a comprehensive list of orders including details like order UUID, user ID, card number, store code, product code, product quantity, order date, and shipping date.

Screenshot 2023-08-31 at 15 48 48

  • Pagination: Easily navigate through multiple pages of orders using the built-in pagination feature.

Screenshot 2023-08-31 at 15 49 08

  • Add New Order: Fill out a user-friendly form to add new orders to the system with necessary information.

Screenshot 2023-08-31 at 15 49 26

  • Data Validation: Ensure data accuracy and completeness with required fields, date restrictions, and card number validation.

  • Additional feature (delivery date): it may be possible to add an additional column for the delivery date.

Getting Started

Prerequisites

For the application to run successfully, you need to install the following packages:

  • flask (version 2.2.2)
  • pyodbc (version 4.0.39)
  • SQLAlchemy (version 2.0.21)
  • werkzeug (version 2.2.3)
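These pins can be captured in a requirements.txt file (a sketch; the repository may organise its dependencies differently):

```
flask==2.2.2
pyodbc==4.0.39
SQLAlchemy==2.0.21
werkzeug==2.2.3
```

With that file in place, running pip install -r requirements.txt installs everything in one step.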

Usage

To run the application, you simply need to run the app.py script in this repository. Once the application starts, you should be able to access it locally at http://127.0.0.1:5000. Here you will be met with the following two pages:

  1. Order List Page: Navigate to the "Order List" page to view all existing orders. Use the pagination controls to navigate between pages.

  2. Add New Order Page: Click on the "Add New Order" tab to access the order form. Complete all required fields and ensure that your entries meet the specified criteria.

Technology Stack

  • Backend: Flask is used to build the backend of the application, handling routing, data processing, and interactions with the database.

  • Frontend: The user interface is designed using HTML, CSS, and JavaScript to ensure a smooth and intuitive user experience.

  • Database: The application employs an Azure SQL Database as its database system to store order-related data.

Contributors

  • Maya Iuga
  • Samia Mohamed

License

This project is licensed under the MIT License. For more details, refer to the LICENSE file.

WebApp DevOps Project Documentation

I have been entrusted with the critical task of building a comprehensive end-to-end DevOps pipeline to support my organization's internal web application, designed to manage and monitor deliveries across the company.

image

Overall, the end-to-end DevOps pipeline included the following steps:

  1. Implementing version control by integrating new features into the web application.
  2. Containerization using Docker Images.
  3. Infrastructure as Code (IaC) to define and manage resources within Azure using Terraform.
  4. Orchestrating the deployment of the containerized application using Kubernetes.
  5. Employing CI/CD (Continuous Integration/Continuous Deployment) using Azure DevOps.

Addition of delivery date column

An issue was raised regarding the addition of a delivery date column. This was tackled locally by creating a branch called feature/add-delivery-date, where the code changes were made. The changes were committed and pushed to GitHub, and a pull request was submitted to merge the feature branch into the main branch, which was then reviewed and approved. It was later determined that the delivery_date column was no longer necessary in the backend database, so the changes had to be reverted. This was done by creating a new branch based on the main branch called revert-delivery-date. The commit hash of the specific change to revert was found using git log (or GitHub's commit history), the git revert COMMIT_HASH command was run to revert the changes, and the result was pushed to GitHub. Finally, a pull request was created, approved, and merged into the main branch. This process ensured version control and shows that new features can be added or removed without affecting the codebase.
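The revert workflow above can be reproduced end-to-end in a throwaway local repository (a minimal sketch, assuming git is installed; the file name and commit messages are illustrative, and branch names mirror the walkthrough, while the real change was reverted via its hash from git log and merged through a pull request):

```shell
set -e
repo=$(mktemp -d)                       # throwaway repo for the demo
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "order_date" > columns.txt
git add . && git commit -qm "initial schema"

echo "delivery_date" >> columns.txt     # the feature commit
git add . && git commit -qm "feature: add delivery_date column"
feature_hash=$(git rev-parse HEAD)      # what git log would show

git checkout -qb revert-delivery-date   # new branch based on main
git revert --no-edit "$feature_hash"    # a new commit that undoes the feature
```

After the revert, columns.txt no longer contains delivery_date, yet all three commits remain in the history; nothing is rewritten.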

Containerization Process

1. Dockerfile was created in the same working directory as the project

image

This Dockerfile included installing the ODBC driver and the necessary dependencies that allow the web application to connect to the Azure SQL Database in the backend, and thus retrieve the orders data.
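A sketch of what such a Dockerfile might look like (illustrative only; the base image and the package names for the Microsoft ODBC driver are the commonly documented ones and may differ from the repo's actual file):

```dockerfile
FROM python:3.8-slim

WORKDIR /app
COPY . /app

# Install the Microsoft ODBC Driver 17 for SQL Server plus build deps
RUN apt-get update && apt-get install -y curl gnupg unixodbc-dev && \
    curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    curl https://packages.microsoft.com/config/debian/10/prod.list \
      > /etc/apt/sources.list.d/mssql-release.list && \
    apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql17

RUN pip install -r requirements.txt

EXPOSE 5000
CMD ["python", "app.py"]
```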

2. Docker image is built

As I was using a Mac with an M2 chip, to build the image I ran docker build -t image_name --no-cache --platform linux/amd64 . If you are on Linux or Windows, you can simply run docker build -t image_name .

3. Run the docker container locally

Again, as I was using a Mac with an M2 chip, I ran docker run -p 5000:5000 --platform linux/amd64 image_name. If you are on Linux or Windows, you can simply run docker run -p 5000:5000 image_name.

Note: Issues I faced

  • I ran into an error when I first built the image and ran the container: docker: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:5000 -> 0.0.0.0:0: listen tcp 0.0.0.0:5000: bind: address already in use.
  • To fix this issue, I built another image after changing app.run(host='0.0.0.0', port=5000, debug=True) to app.run(host='0.0.0.0', port=5001, debug=True) in app.py, and updating the Dockerfile to EXPOSE 5001 instead of 5000.
  • I then ran docker run -p 5001:5001 --platform linux/amd64 image_name and this ran successfully.
  • To check that it worked, I opened a web browser and went to http://127.0.0.1:5001 to see the application running.
image

4. Tag Docker image and push to Docker hub

  • The image created was tagged using docker tag image-name docker-hub-username/image-name:version-tag
  • Logged in using docker login
  • The image was then pushed to Docker Hub using docker push docker-hub-username/image-name:version-tag
  • The Docker image should now be visible on your Docker Hub account

Image information

Name: webapp-devops

Tags: 1.0

Instructions for use:

  • You can pull this image using docker pull samiaaax/webapp-devops:1.0
  • You can push using docker push samiaaax/webapp-devops:tagname
image

Defining Networking Services and AKS Cluster with IaC

I have been tasked with deploying the containerized application onto a Kubernetes cluster to ensure application scalability. To do this, I have chosen to implement infrastructure as code using Terraform.

Creating a Terraform Project and Module

I created a Terraform project that will serve as the foundation for provisioning an Azure Kubernetes Service (AKS) cluster using infrastructure as code (IaC). The project was organized into two Terraform modules: one for provisioning the necessary Azure Networking Services (networking-module) for an AKS cluster and one for provisioning the Kubernetes cluster itself (aks-cluster-module).

Defining the Networking Module Input Variables

Inside the networking module directory, the following input variables were defined in a variables.tf file.

  • A resource_group_name variable that will represent the name of the Azure Resource Group where the networking resources will be deployed.
  • A location variable that specifies the Azure region where the networking resources will be deployed.
  • A vnet_address_space variable that specifies the address space for the Virtual Network (VNet).
image
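Based on the descriptions above, the networking module's variables.tf might look like the following (a sketch; the defaults are illustrative, not necessarily those used in the project):

```hcl
variable "resource_group_name" {
  description = "Name of the Azure Resource Group for the networking resources"
  type        = string
  default     = "networking-resource-group"
}

variable "location" {
  description = "Azure region where the networking resources are deployed"
  type        = string
  default     = "UK South"
}

variable "vnet_address_space" {
  description = "Address space for the Virtual Network (VNet)"
  type        = list(string)
  default     = ["10.0.0.0/16"]
}
```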

Defining Networking Resources and NSG rules

Within the networking module's main.tf configuration file, the following networking resources for an AKS cluster are defined:

  • Azure Resource Group: named by referencing the resource_group_name variable defined in the variables.tf file
  • Virtual Network (VNet): aks-vnet
  • Control Plane Subnet: control-plane-subnet
  • Worker Node Subnet: worker-node-subnet
image
  • Network Security Group (NSG): aks-nsg

Within the NSG, two inbound rules were defined: one to allow traffic to the kube-apiserver (named kube-apiserver-rule) and one to allow inbound SSH traffic (named ssh-rule). image
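One of the two rules might be defined along these lines (a sketch; the priority and the referenced resource names are assumptions, and in practice the source address should be restricted to your own IP rather than "*"):

```hcl
resource "azurerm_network_security_rule" "kube_apiserver_rule" {
  name                        = "kube-apiserver-rule"
  priority                    = 1001            # illustrative
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  destination_port_range      = "443"           # kube-apiserver
  source_port_range           = "*"
  source_address_prefix       = "*"             # restrict to your IP in practice
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.networking.name
  network_security_group_name = azurerm_network_security_group.aks_nsg.name
}

# ssh-rule is analogous, with destination_port_range = "22" and its own priority
```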

Defining the Networking Module Output Variables

Within the networking module, an outputs.tf file was created to define the following output variables:

  • A vnet_id variable that will store the ID of the previously created VNet.
  • A control_plane_subnet_id variable that will hold the ID of the control plane subnet within the VNet.
  • A worker_node_subnet_id variable that will store the ID of the worker node subnet within the VNet.
  • A networking_resource_group_name variable that will provide the name of the Azure Resource Group where the networking resources were provisioned.
  • An aks_nsg_id variable that will store the ID of the Network Security Group (NSG).
image

This networking module was then initialised by running the terraform init command.

Defining the AKS Cluster Module Input Variables

Within the aks-cluster module, the following input variables are defined:

  • An aks_cluster_name variable that represents the name of the AKS cluster
  • A cluster_location variable that specifies the Azure region where the AKS cluster will be deployed
  • A dns_prefix variable that defines the DNS prefix of the cluster
  • A kubernetes_version variable that specifies which Kubernetes version the cluster will use
  • A service_principal_client_id variable that provides the Client ID for the service principal associated with the cluster
  • A service_principal_secret variable that supplies the Client Secret for the service principal
image

Defining AKS Cluster Resources

In the main.tf file within the aks-cluster module, resources were defined which included creating the AKS cluster, specifying the node pool and the service principal. This was done by using the input variables as arguments. image
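The cluster resource described above might be sketched as follows (an assumption based on the inputs listed earlier; the node pool size, VM size, and the resource_group_name input are illustrative additions, not confirmed by the source):

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  name                = var.aks_cluster_name
  location            = var.cluster_location
  resource_group_name = var.resource_group_name   # hypothetical extra input
  dns_prefix          = var.dns_prefix
  kubernetes_version  = var.kubernetes_version

  default_node_pool {
    name       = "default"
    node_count = 1                  # illustrative
    vm_size    = "Standard_DS2_v2"  # illustrative
  }

  service_principal {
    client_id     = var.service_principal_client_id
    client_secret = var.service_principal_secret
  }
}
```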

Defining the AKS Cluster Module Output Variables

Within the outputs.tf file in the aks-cluster module, the following output variables are defined:

  • An aks_cluster_name variable that will store the name of the provisioned cluster
  • An aks_cluster_id variable that will store the ID of the cluster
  • An aks_kubeconfig variable that will capture the Kubernetes configuration file of the cluster.
image

This aks-cluster module was then initialised by running the terraform init command.

Creating an AKS Cluster with IaC

In the project's main directory, called aks-terraform, a main.tf configuration file was created. Within this file, the Azure provider block is defined to enable authentication to Azure using service principal credentials. The input variables for the client_id and client_secret arguments are defined in a variables.tf file.

image

image

To use these variables securely, the .zshrc file was opened in a text editor from the command line and the following lines were added: image
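The exact lines are shown in the screenshot; one common pattern (an assumption, not necessarily what the screenshot contains) is to export the credentials as TF_VAR_ environment variables, which Terraform reads into the matching input variables automatically:

```shell
# Hypothetical ~/.zshrc additions: Terraform maps TF_VAR_<name>
# onto the input variable called <name>.
export TF_VAR_client_id="<service-principal-app-id>"
export TF_VAR_client_secret="<service-principal-password>"
```

This keeps the secrets out of the .tf files and out of version control.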

After defining the provider block, the networking and aks-cluster modules are integrated in the project's main configuration file:

image
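The module integration might resemble the following (a sketch; the argument values are illustrative, and the cluster name matches the terraform-aks-cluster context used later in this walkthrough):

```hcl
module "networking" {
  source              = "./networking-module"
  resource_group_name = "networking-resource-group"  # illustrative
  location            = "UK South"                   # illustrative
  vnet_address_space  = ["10.0.0.0/16"]
}

module "aks_cluster" {
  source                      = "./aks-cluster-module"
  aks_cluster_name            = "terraform-aks-cluster"
  cluster_location            = "UK South"           # illustrative
  dns_prefix                  = "devops-webapp"      # illustrative
  kubernetes_version          = "1.26.6"             # illustrative
  service_principal_client_id = var.client_id
  service_principal_secret    = var.client_secret
}
```

In a setup like this, the networking module's outputs (VNet and subnet IDs, resource group name) would also be wired into the cluster module as inputs.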

Within the main project directory, the Terraform project is initialised by running the terraform init command. Afterwards, terraform apply is run to initiate the creation of the defined infrastructure, including the networking resources and the AKS cluster. The resulting state file is added to .gitignore to avoid exposing any secrets.

image image image

Kubernetes Deployment to AKS

With the necessary infrastructure in place, the next step is to proceed with the deployment of the containerised application to the AKS cluster.

Deployment and Service Manifests

Firstly, a Kubernetes manifest file named application-manifest.yaml is created. Inside this file, the necessary Deployment resource was defined, which deploys the containerized web application onto the Terraform-provisioned AKS cluster and contains the following information:

  • Deployment named flask-app-deployment that acts as a central reference for managing the containerized application
  • replicas set to maintain two identical replicas of this application
  • selector section helps the Deployment identify and manage Pods with the label app:flask-app
  • template section describes how the Pods should look. The label app:flask-app is assigned to Pods created by this template. Inside the template, there's a container definition named flask-app-container. It uses the Docker image version 1.0 and exposes port 5001 for the application.
  • strategy the deployment strategy is set to RollingUpdate. This strategy ensures a smooth transition during updates. Specifically, it allows at most one old Pod to be unavailable at a time (maxUnavailable: 1) and limits the number of new Pods added simultaneously to one (maxSurge: 1). image
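Putting those pieces together, the Deployment portion of application-manifest.yaml might look like this (a sketch; the image name follows the Docker Hub details given earlier in this document):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app-container
          image: samiaaax/webapp-devops:1.0
          ports:
            - containerPort: 5001
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
```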

After this, the necessary Service resource was defined within the same manifest file to facilitate internal communication within the AKS cluster. This included the following information:

  • service named flask-app-service: this service is configured to direct traffic to Pods labeled with app: flask-app.
  • ports: when services within the cluster want to communicate with this application, they can use the service's cluster IP address and port 5001, as defined in targetPort.
  • service type: set to ClusterIP, designating it as an internal service within the AKS cluster. image
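The Service portion of the manifest might be sketched as follows (illustrative; per the description above, both the service port and the targetPort are 5001):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  type: ClusterIP        # internal to the cluster
  selector:
    app: flask-app       # routes traffic to the Deployment's Pods
  ports:
    - protocol: TCP
      port: 5001         # port other services in the cluster use
      targetPort: 5001   # container port the app listens on
```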

Deploying Kubernetes manifest to AKS

Before deploying the Kubernetes manifests to AKS cluster, the current Kubernetes context is set to the correct cluster (Terraform-provisioned AKS cluster). The context specifies the cluster and user details for the AKS cluster, ensuring the deployment occurs in the intended environment. To do this, the following commands were run: kubectl config get-contexts to check the available Kubernetes contexts and kubectl config use-context terraform-aks-cluster to switch to the correct Terraform context. image

Now, operating in the correct AKS context, deployment can proceed. This was done by applying the manifest file using kubectl apply -f application-manifest.yaml, which showed the deployment and service resources running.

To test and validate the application after deployment, various kubectl commands were run. kubectl describe deployment shows that the correct number of pods was running (two identical pods at a time). image

image

The kubectl describe services command shows that the ClusterIP service is deployed to the AKS cluster with its specified ports. image

Once the health of the pods and services was inspected, port forwarding was initiated to access the deployment locally by running the kubectl port-forward deployment/flask-app-deployment 5001:5001 command.

image

With port forwarding in place, the web application hosted within the AKS cluster could be accessed locally at http://127.0.0.1:5001, and the Add New Order functionality was accessible. image

Additionally, to confirm the deployment has been successful on the Azure Portal, navigating to the Workloads section will show that the application was deployed with the specified number of pods running. image

Creating a CI/CD pipeline on Azure DevOps

Now that the deployment of the application to Kubernetes was successful, the next step is to create a CI/CD pipeline on Azure DevOps that will automate the containerisation and deployment process.

The Build Pipeline Setup

First, an Azure DevOps project was created and configured using the source repository on GitHub, selecting the project that hosts the application code. As an initial configuration, the starter template was selected. A connection between Azure DevOps and the Docker Hub account where the application image is stored is needed to facilitate the seamless integration of the CI/CD pipeline with the Docker Hub container registry. To do this, a personal access token was created on Docker Hub, and an Azure DevOps service connection was created to use this token. The starter pipeline was then modified to build and push a Docker image to Docker Hub, and it was set to run automatically each time there is a push to the main branch of the application repository. image
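The build-and-push step of such a pipeline typically uses the Docker@2 task; a sketch (the service connection name and tag are placeholders matching the setup described here):

```yaml
trigger:
  - main                # run on every push to main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    inputs:
      containerRegistry: 'Docker Hub'   # the service connection created above
      repository: 'samiaaax/webapp-devops'
      command: 'buildAndPush'
      Dockerfile: '**/Dockerfile'
      tags: 'latest'
```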

The pipeline was saved and run: image

The latest image shown on Docker Hub validates the CI/CD pipeline: it created the Docker image of the latest version: image

The Release Pipeline Setup

Now that the build pipeline was set up for the containerisation process, the release pipeline needed to be set up to automate the deployment process. To do this, an AKS service connection within Azure DevOps was configured using Azure Resource Manager. image

Afterwards, this service connection and the application manifest were used to facilitate the automatic deployment of the application to the AKS cluster. image

The pipeline was saved and run: image

image

The deployment of the application was validated and the application's behaviour was monitored using kubectl commands: image

image

Port forwarding was initiated using a kubectl command to access the application locally and validate the functionality of the deployment: image

image

AKS Cluster Monitoring

Monitoring and alerting of the AKS cluster is needed to address any potential issues during and after the deployment of the application.

Setting up Container Insights

Container Insights was enabled to efficiently monitor application performance and the health of the AKS cluster. This was done by running the following command in the CLI: image

The --enable-managed-identity flag allows the AKS cluster to use a Microsoft Entra ID managed identity, instead of the Service Principal, to authenticate with Azure services. To collect advanced data and metrics from the cluster using Container Insights, the cluster needs to have managed identity enabled.
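The commands behind the screenshot are generally of this shape (a sketch; the resource group and cluster names are taken from this walkthrough and may differ in your environment):

```shell
# Enable managed identity on the cluster, then turn on the
# Container Insights monitoring add-on.
az aks update --resource-group networking-resource-group \
  --name terraform-aks-cluster --enable-managed-identity
az aks enable-addons --resource-group networking-resource-group \
  --name terraform-aks-cluster --addons monitoring
```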

Secondly, permissions were granted to the Service Principal on the Azure portal. The following three roles were assigned one by one:

  • Monitoring Metrics Publisher
  • Monitoring Contributor
  • Log Analytics Contributor

image

Once configured and enabled, Container Insights begins collecting data from the AKS cluster, but it does not show historical data. For it to start collecting data, the cluster was restarted after logging in from the CLI:

image image

Log Analytics Configuration

Azure Log Analytics is a service that monitors cloud and on-premises resources and applications, allowing you to collect and analyse the data they generate. Log Analytics was configured on the terraform-aks-cluster to run the following log queries:

  1. Average Node CPU Usage Percentage per Minute

image

  2. Average Node Memory Usage Percentage per Minute

  3. Pod Counts with Phase

  4. Warning Value in Container Logs

  5. Kubernetes Events
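The first of these queries can be sketched in KQL (an assumption based on the standard Container Insights Perf table, not necessarily the exact query used; note this averages raw nanocore values, and converting to a percentage requires the node's CPU capacity):

```kusto
// Average node CPU usage per minute
Perf
| where ObjectName == "K8SNode" and CounterName == "cpuUsageNanoCores"
| summarize AvgCpuUsageNanoCores = avg(CounterValue) by bin(TimeGenerated, 1m)
```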

After these were set up, they were saved and pinned to the dashboard for further referencing and to be able to run them again when required.

image

Configuring Alarms and Alert Rules

An alert rule was set up to trigger an alarm when the used disk percentage in the AKS cluster exceeds 90%. The alert checks every 5 minutes with a look-back period of 15 minutes, and it was configured to send notifications to my email address.

image

By default, alert rules for CPU usage and memory working set percentage were already in place for the resource group.

image

These alert rules were adjusted to trigger when usage exceeds 80%. CPU and memory are critical resources in the AKS cluster; when they are heavily utilised, application performance can degrade. Setting the alert rules to trigger at 80% ensures that notifications are sent when these resources approach critical levels.

image

AKS Integration with Azure Key Vault for Secrets Management

Currently, the credentials for establishing a connection with the Azure SQL Database are hardcoded within the application code. Implementing a solution to securely store and retrieve the database credentials is therefore essential for handling sensitive information.

Creating a Key Vault and Secrets

A Key Vault was created to securely store the application's sensitive information. The Key Vault Administrator role was assigned to the Microsoft Entra ID user to grant the necessary permissions for managing secrets within the Key Vault.

image

Within this Key Vault, the following secrets were configured to connect to the backend database:

  1. Server Name
  2. Server Username
  3. Server Password
  4. Database Name

image

Enabling Managed Identity for AKS and Assigning Permissions

Managed identity for the AKS cluster was enabled to allow it to authenticate and interact securely with the Key Vault without exposing credentials. To do this, the following command was run on the Azure CLI after logging in: az aks update --resource-group (resource-group) --name (aks-cluster-name) --enable-managed-identity

image

The following command was executed to get information about the managed identity created for the AKS cluster: az aks show --resource-group (resource-group) --name (aks-cluster-name) --query identityProfile

image

The clientId from this output was then used in the command that assigns permissions to the managed identity:

image

image

Assigning the Key Vault Secrets Officer role to the managed identity associated with AKS allows it to retrieve and manage secrets.
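The permission-assignment command shown in the screenshots is generally of this shape (a sketch; the angle-bracketed values are placeholders, and the scope is the Key Vault's full resource ID):

```shell
az role assignment create \
  --assignee <managed-identity-client-id> \
  --role "Key Vault Secrets Officer" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<key-vault-name>
```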

Updating the application code

Integrating the Azure Identity and Azure Key Vault libraries into the Python application code facilitates communication with Azure Key Vault.
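A minimal sketch of that integration (assuming the azure-identity and azure-keyvault-secrets packages; the vault URL and secret names are placeholders matching the secrets created earlier and should be adjusted to your vault):

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Authenticate with the AKS cluster's managed identity; when running
# locally, DefaultAzureCredential can be used instead.
key_vault_url = "https://<key-vault-name>.vault.azure.net/"
credential = ManagedIdentityCredential()
client = SecretClient(vault_url=key_vault_url, credential=credential)

# Retrieve the database connection details instead of hardcoding them.
server = client.get_secret("server-name").value
username = client.get_secret("server-username").value
password = client.get_secret("server-password").value
database = client.get_secret("database-name").value
```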

image image

The requirements file for the application's Docker image was updated to include the newly required libraries.

image

End-to-End Testing and Validation

Firstly, the modified application was tested locally to ensure seamless integration with Azure Key Vault. This was done using kubectl commands:

image

image

Deployment of the modified application to the AKS cluster using the pre-established Azure DevOps CI/CD pipeline included the following steps:

  1. Modifying the azure-pipelines.yaml file to build the new image

image

  2. Running the established pipeline on Azure DevOps

image

image
  3. Testing the functionality of the application using port forwarding

image

image

image

image

About

This repo contains the code for a Python Flask Web App that can be used to build a DevOps pipeline where you containerise, deploy and manage a web app.
