This project contains an Internal Developer Platform (IDP) reference implementation for AWS. It can bring up an IDP on EKS with all the tools configured and ready for production use. It installs addons on an EKS cluster as Argo CD apps using the GitOps Bridge App of ApplicationSets pattern. Check out the Getting Started guide for installing this solution on an EKS cluster.
Note
Applications deployed in this repository are a starting point for getting an environment into production.
All the addons are Helm charts, with static values configured in packages/<addon-name>/values.yaml and dynamic values, derived from the Argo CD cluster secret's labels and annotations, in packages/addons/values.yaml.
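To make the dynamic values concrete, here is a hypothetical Argo CD cluster secret in the style used by the GitOps Bridge pattern; labels and annotations like these are what packages/addons/values.yaml can template against. The specific label and annotation names below are illustrative, not the exact set this repository defines.

```yaml
# Illustrative only: the actual labels/annotations are set by this
# repository's install scripts, not by this snippet.
apiVersion: v1
kind: Secret
metadata:
  name: in-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    enable_backstage: "true"          # hypothetical addon toggle
  annotations:
    addons_repo_url: https://github.com/example-org/reference-implementation-aws
type: Opaque
```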
Name | Namespace | Purpose | Chart Version | Chart |
---|---|---|---|---|
Argo CD | argocd | Installation and management of addons as Argo CD applications | 8.0.14 | Link |
Argo Workflows | argo | Workflow tool for continuous integration tasks | 0.45.18 | Link |
Backstage | backstage | Self-Service Web UI (Developer Portal) for developers | 2.6.0 | Link |
Cert Manager | cert-manager | Certificate manager for addons and developer applications using Let's Encrypt | 1.17.2 | Link |
Crossplane | crossplane-system | IaC controller for provisioning infrastructure | 1.20.0 | Link |
ACK | ack-system | IaC controller for provisioning infrastructure | TBD | Coming soon check #54 |
External DNS | external-dns | Domain management using Route 53 | 1.16.1 | Link |
External Secrets | external-secrets | Secret Management using AWS Secret Manager and AWS Systems Manager Parameter Store | Version | Link |
Ingress NGINX | ingress-nginx | Ingress controller for L7 network traffic routing | 4.7.0 | Link |
Keycloak | keycloak | Identity provider for User Authentication | 24.7.3 | Link |
Check out more details about the installation flow.
This diagram illustrates the high-level installation flow for the CNOE AWS Reference Implementation. It shows how the local environment interacts with AWS resources to deploy and configure the platform on an EKS cluster.
```mermaid
flowchart TD
    subgraph "Local Environment"
        config["config.yaml"]
        secrets["GitHub App Credentials
        (private/*.yaml)"]
        create_secrets["create-config-secrets.sh"]
        install["install.sh"]
        helm["helm"]
    end
    subgraph "AWS"
        aws_secrets["AWS Secrets Manager
        - cnoe-ref-impl/config
        - cnoe-ref-impl/github-app"]
        subgraph "EKS Cluster"
            eks_argocd["Argo CD"]
            eso["External Secret Operator"]
            appset["addons-appset
            (ApplicationSet)"]
            subgraph "Addons"
                backstage["Backstage"]
                keycloak["Keycloak"]
                crossplane["Crossplane"]
                cert_manager["Cert Manager"]
                external_dns["External DNS"]
                ingress["Ingress NGINX"]
                argo_workflows["Argo Workflows"]
            end
        end
    end
    config --> create_secrets
    secrets --> create_secrets
    create_secrets --> aws_secrets
    config --> install
    install --> helm
    helm -- "Installs" --> eks_argocd
    helm -- "Installs" --> eso
    helm -- "Creates" --> appset
    aws_secrets -- "Provides configuration" --> eso
    appset -- "Creates Argo CD Addon ApplicationSets" --> Addons
    eks_argocd -- "Manages" --> Addons
    eso -- "Provides secrets to" --> Addons
    classDef aws fill:#FF9900,stroke:#232F3E,color:white;
    classDef k8s fill:#326CE5,stroke:#254AA5,color:white;
    classDef tools fill:#4CAF50,stroke:#388E3C,color:white;
    classDef config fill:#9C27B0,stroke:#7B1FA2,color:white;
    class aws_secrets,EKS aws;
    class eks_argocd,eso,appset,backstage,keycloak,crossplane,cert_manager,external_dns,ingress,argo_workflows k8s;
    class helm,install,create_secrets tools;
    class config,secrets config;
```
The installation requires the following binaries in the local environment:
Configure the AWS CLI with credentials for an IAM role that has access to the EKS cluster. Follow the instructions in the AWS documentation to configure the AWS CLI.
If the installation steps are being executed on an EC2 instance, ensure that the instance's IAM role has permission to access the EKS cluster, or configure the AWS CLI as described above.
Backstage and Argo CD in this reference implementation are integrated with GitHub; both use GitHub Apps for authenticating with GitHub.
Therefore, a GitHub organization should be created in order to create GitHub Apps for these integrations. Follow the instructions in the GitHub documentation to create a new organization, or visit here.
Note
It is recommended to use a GitHub organization instead of a personal GitHub account, as Backstage has certain limitations when using personal-account GitHub Apps to authenticate with GitHub. Creating a GitHub organization is also free.
Once the organization is created, fork this repository to the new GitHub Organization by following instructions in GitHub documentation.
There are two ways to create a GitHub App: use the Backstage CLI (npx @backstage/cli create-github-app <github-org>) as per the instructions in the Backstage documentation, or create it manually per these instructions in the GitHub documentation.
Create the following apps and store them in the corresponding file paths.
The template files for both of these GitHub Apps are available in the private directory. Copy them to the file paths mentioned above by running the following commands:
```shell
cp private/argocd-github.yaml.template private/argocd-github.yaml
cp private/backstage-github.yaml.template private/backstage-github.yaml
```
After this, update the values in these files, taking them from the files created by the Backstage CLI (if used) or from the app's overview page on GitHub.
Argo CD requires the url and installationId of the GitHub App. The url is the GitHub URL of the organization. The installationId can be captured by navigating to the app installation page at https://github.com/organizations/<Organization-name>/settings/installations/<ID>. You can find more information on this page.
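If you want to script this step, the numeric ID is simply the last path segment of the installation URL. A minimal sketch (the URL and ID below are made-up examples):

```shell
# Extract the trailing installation ID from a GitHub App installation URL.
# The URL and ID here are illustrative placeholders.
url="https://github.com/organizations/example-org/settings/installations/12345678"
installation_id="${url##*/}"   # strip everything up to and including the last '/'
echo "$installation_id"        # → 12345678
```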
Warning
If the app is created using the Backstage CLI, it creates files in the current working directory. These files contain credentials; handle them with care. It is recommended to remove these files after copying their content into the files in the private directory.
Note
The rest of the installation process assumes the GitHub App credentials are available in private/backstage-github.yaml and private/argocd-github.yaml.
The reference implementation uses the config.yaml file in the repository root directory to configure the installation values. Update config.yaml with appropriate values before proceeding, referring to the following table. All the values are required.
Parameter | Description | Type |
---|---|---|
cluster_name | Name of the EKS cluster for the reference implementation (the name must be a valid Kubernetes resource name) | string |
auto_mode | Set to "true" if the EKS cluster uses Auto Mode, otherwise "false" | string |
repo.url | GitHub URL of the fork in the GitHub organization | string |
repo.revision | Branch or tag which should be used for Argo CD apps | string |
repo.basepath | Directory in which the configuration of addons is stored | string |
region | AWS region of the EKS cluster and config secret | string |
domain | Base domain name for exposing services (this should be the base domain or a subdomain of the Route 53 hosted zone) | string |
route53_hosted_zone_id | Route 53 hosted zone ID for configuring external-dns | string |
path_routing | Enable path routing ("true") vs domain-based routing ("false") | string |
tags | Arbitrary key-value pairs for AWS resource tagging | object |
Tip
If these values are updated after installation, be sure to run the command in the next step to update the values in AWS Secrets Manager. Otherwise, the updated values will not be reflected in the live installation.
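As an illustration, a filled-in configuration might look like the following. Every value here is a placeholder (the sketch writes to config.sample.yaml so it cannot clobber a real config.yaml); substitute your own values:

```shell
# Write an illustrative sample config; all values are placeholders.
cat > config.sample.yaml <<'EOF'
cluster_name: cnoe-ref-impl
auto_mode: "true"
repo:
  url: https://github.com/example-org/reference-implementation-aws
  revision: main
  basepath: packages
region: us-west-2
domain: idp.example.com
route53_hosted_zone_id: Z0123456789ABCDEFGHIJ
path_routing: "true"
tags:
  env: dev
  project: cnoe
EOF
```

Compare this against your own config.yaml; the key names match the table above, while the values shown are examples only.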
The values required for the installation are stored in AWS Secrets Manager in two secrets:
- cnoe-ref-impl/config: stores the values from config.yaml as JSON
- cnoe-ref-impl/github-app: stores the GitHub App credentials from the private directory, with each file name as a key and the file's content as its value
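The shape of the github-app secret (file names as keys, file contents as values) can be sketched locally with jq. This is only an illustration of the payload structure; the actual create-config-secrets.sh implementation may differ, and the file contents below are dummies.

```shell
# Build a JSON payload keyed by file name, valued by file content.
# Placeholder files stand in for real GitHub App credential files.
mkdir -p private
printf 'appId: 111\n' > private/argocd-github.yaml      # dummy content
printf 'appId: 222\n' > private/backstage-github.yaml   # dummy content
payload=$(jq -n \
  --rawfile a private/argocd-github.yaml \
  --rawfile b private/backstage-github.yaml \
  '{"argocd-github.yaml": $a, "backstage-github.yaml": $b}')
echo "$payload"
```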
Run the command below to create the secrets, or update them if they already exist.

```shell
./scripts/create-config-secrets.sh
```
Warning
DO NOT move on to the next steps without completing all the instructions in this step.
The reference implementation can be installed on a new EKS cluster which can be created like this:
```shell
export REPO_ROOT=$(git rev-parse --show-toplevel)
$REPO_ROOT/scripts/create-cluster.sh
```
You will be prompted to select eksctl or terraform. For more details on each tool, check the corresponding guides:
- eksctl: Follow the instructions
- terraform: Follow the instructions
This will create all the prerequisite AWS Resources required for the reference implementation, which includes:
- EKS cluster, with Auto Mode or without it (a managed node group with 4 nodes)
- Pod Identity associations for the following addons:
Name | Namespace | Service Account Name | Permissions |
---|---|---|---|
Crossplane | crossplane-system | provider-aws | Admin permissions, constrained by a permissions boundary |
External Secrets | external-secrets | external-secrets | Permissions |
External DNS | external-dns | external-dns | Permissions |
AWS Load Balancer Controller (When not using Auto Mode) | kube-system | aws-load-balancer-controller | Permissions |
AWS EBS CSI Controller (When not using Auto Mode) | kube-system | ebs-csi-controller-sa | Permissions |
Note
Using Existing EKS Cluster
The reference implementation can be installed on an existing EKS Cluster only if the above prerequisites are completed.
Note
Before moving forward, ensure that the kubectl context is set to the EKS cluster and the configured AWS IAM role has access to the cluster.
All the addons are installed as Argo CD apps. At the start of the installation, Argo CD and the External Secrets Operator are installed on the EKS cluster as Helm charts. Once Argo CD on EKS is up, the other addons are installed through it, and eventually the Argo CD instance on EKS manages itself and the External Secrets Operator as well. Check out more details about the installation flow. Run the following command to start the installation.
```shell
scripts/install.sh
```
The addons with a web UI are exposed using the base domain configured in Step 5. The URLs can be retrieved by running the following command:

```shell
scripts/get-urls.sh
```
The URL depends on the setting for path_routing. Refer to the following table for URLs:
App Name | URL (w/ Path Routing) | URL (w/o Path Routing) |
---|---|---|
Backstage | https://[domain] | https://backstage.[domain] |
Argo CD | https://[domain]/argocd | https://argocd.[domain] |
Argo Workflows | https://[domain]/argo-workflows | https://argo-workflows.[domain] |
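The routing scheme above can be expressed as a small helper function, which may be handy when scripting against either routing mode. This is a sketch of the URL mapping only; scripts/get-urls.sh remains the authoritative source:

```shell
# Compute an addon's URL for a given routing mode.
# With path routing, Backstage sits at the domain root and other apps
# get a path suffix; without it, each app gets its own subdomain.
app_url() {
  local app=$1 domain=$2 path_routing=$3
  if [ "$path_routing" = "true" ]; then
    if [ "$app" = "backstage" ]; then
      echo "https://${domain}"
    else
      echo "https://${domain}/${app}"
    fi
  else
    echo "https://${app}.${domain}"
  fi
}

app_url argocd example.com true    # → https://example.com/argocd
app_url backstage example.com false
```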
The installation script will continue to run until all the Argo CD apps for addons are healthy. To monitor the process, use the instructions below to access the instance of Argo CD running on EKS.
Verify that the kubectl context is set to the EKS cluster and that it can access the cluster.
You can use kubectl to check the status of the Argo CD applications:

```shell
kubectl get applications -n argocd --watch
```
Get the credentials for Argo CD and start a port-forward with these commands:

```shell
kubectl get secrets -n argocd argocd-initial-admin-secret -oyaml | yq '.data.password' | base64 -d && echo
kubectl port-forward -n argocd svc/argocd-server 8080:80
```
Depending upon the configuration, Argo CD will be accessible at http://localhost:8080 or http://localhost:8080/argocd.
All the addons are configured with the Keycloak SSO user user1, whose password can be retrieved using the following command:

```shell
kubectl get secret -n keycloak keycloak-config -o jsonpath='{.data.USER1_PASSWORD}' | base64 -d && echo
```
Once all the Argo CD apps on the EKS cluster are reporting healthy status, try out the examples to create a new application through Backstage. For troubleshooting, refer to the troubleshooting guide.
Warning
Before proceeding with the cleanup, ensure that any Kubernetes resources created outside of the installation process, such as Argo CD apps, deployments, and volumes, are deleted.
Run the following command to remove all the addons created by this installation:
```shell
scripts/uninstall.sh
```
This script removes all resources except CRDs from the EKS cluster, so the same cluster can be reused for re-installation, which is useful during development. To remove CRDs as well, use the following command:

```shell
scripts/cleanup-crds.sh
```