4 changes: 4 additions & 0 deletions .gitmodules
@@ -2,3 +2,7 @@
path = common-dev-assets
url = https://github.com/terraform-ibm-modules/common-dev-assets
branch = main
[submodule "scripts/common-bash-library"]
path = scripts/common-bash-library
url = https://github.com/terraform-ibm-modules/common-bash-library
branch = test
7 changes: 4 additions & 3 deletions README.md
@@ -15,12 +15,11 @@ Optionally, the module supports advanced security group management for the worke

### Before you begin

- Ensure that you have an up-to-date version of the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started).
- Ensure that you have an up-to-date version of the [IBM Cloud Kubernetes Service CLI](https://cloud.ibm.com/docs/containers?topic=containers-kubernetes-service-cli).
- Ensure that you have an up-to-date version of the [IBM Cloud VPC Infrastructure Service CLI](https://cloud.ibm.com/docs/vpc?topic=vpc-vpc-reference). Required only if you provide additional security groups with `var.additional_lb_security_group_ids`.
- Ensure that you have an up-to-date version of [jq](https://jqlang.github.io/jq).
- Ensure that you have an up-to-date version of [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).

By default, the module automatically downloads the required dependencies if they are not already installed. You can disable this behavior by setting `install_required_binaries` to `false`. When enabled, the module fetches the official binaries from the public internet. If you prefer to download from your own or third-party repositories instead, specify their URLs with the `KUBECTL_DOWNLOAD_URL` and `JQ_DOWNLOAD_URL` environment variables.
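For a runtime without public internet access, the override might look like the following. The mirror URLs and version numbers here are placeholders; substitute the paths of your own artifact repository:

```shell
# Placeholder mirror URLs -- substitute your own artifact repository.
export KUBECTL_DOWNLOAD_URL="https://mirror.example.com/kubectl/v1.29.0/bin/linux/amd64/kubectl"
export JQ_DOWNLOAD_URL="https://mirror.example.com/jq/jq-1.7.1/jq-linux-amd64"

# A subsequent terraform plan/apply picks the overrides up from the environment.
```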

<!-- Below content is automatically populated via pre-commit hook -->
<!-- BEGIN OVERVIEW HOOK -->
## Overview
@@ -323,6 +322,7 @@ Optionally, you need the following permissions to attach Access Management tags
| [kubernetes_config_map_v1_data.set_autoscaling](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map_v1_data) | resource |
| [null_resource.config_map_status](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.confirm_network_healthy](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.install_required_binaries](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.ocp_console_management](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [time_sleep.wait_for_auth_policy](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
| [ibm_container_addons.existing_addons](https://registry.terraform.io/providers/ibm-cloud/ibm/latest/docs/data-sources/container_addons) | data source |
@@ -359,6 +359,7 @@ Optionally, you need the following permissions to attach Access Management tags
| <a name="input_existing_secrets_manager_instance_crn"></a> [existing\_secrets\_manager\_instance\_crn](#input\_existing\_secrets\_manager\_instance\_crn) | CRN of the Secrets Manager instance where Ingress certificate secrets are stored. If 'enable\_secrets\_manager\_integration' is set to true then this value is required. | `string` | `null` | no |
| <a name="input_force_delete_storage"></a> [force\_delete\_storage](#input\_force\_delete\_storage) | Flag indicating whether or not to delete attached storage when destroying the cluster - Default: false | `bool` | `false` | no |
| <a name="input_ignore_worker_pool_size_changes"></a> [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform managing worker count | `bool` | `false` | no |
| <a name="input_install_required_binaries"></a> [install\_required\_binaries](#input\_install\_required\_binaries) | When set to true, a script runs to check whether `kubectl` and `jq` exist on the runtime and, if they do not, attempts to download them from the public internet and install them to /tmp. If the runtime does not have access to the public internet, you can override the download URLs using the environment variables `KUBECTL_DOWNLOAD_URL` and `JQ_DOWNLOAD_URL`. Set to false to skip running this script. | `bool` | `true` | no |
| <a name="input_kms_config"></a> [kms\_config](#input\_kms\_config) | Use to attach a KMS instance to the cluster. If account\_id is not provided, defaults to the account in use. | <pre>object({<br/> crk_id = string<br/> instance_id = string<br/> private_endpoint = optional(bool, true) # defaults to true<br/> account_id = optional(string) # To attach KMS instance from another account<br/> wait_for_apply = optional(bool, true) # defaults to true so terraform will wait until the KMS is applied to the master, ready and deployed<br/> })</pre> | `null` | no |
| <a name="input_manage_all_addons"></a> [manage\_all\_addons](#input\_manage\_all\_addons) | Instructs Terraform to manage all cluster addons, even if addons were installed outside of the module. If set to 'true' this module destroys any addons that were installed by other sources. | `bool` | `false` | no |
| <a name="input_number_of_lbs"></a> [number\_of\_lbs](#input\_number\_of\_lbs) | The number of LBs to associate the `additional_lb_security_group_names` security group with. | `number` | `1` | no |
3 changes: 2 additions & 1 deletion examples/basic/main.tf
@@ -69,7 +69,8 @@ locals {
}

module "ocp_base" {
source = "../.."
source = "terraform-ibm-modules/base-ocp-vpc/ibm"
version = "v3.73.3"
resource_group_id = module.resource_group.resource_group_id
region = var.region
tags = var.resource_tags
18 changes: 14 additions & 4 deletions main.tf
@@ -101,6 +101,17 @@ locals {
default_wp_validation = local.rhcos_check ? true : tobool("If RHCOS is used with this cluster, the default worker pool should be created with RHCOS.")
}

resource "null_resource" "install_required_binaries" {
count = var.install_required_binaries && (var.verify_worker_network_readiness || var.enable_ocp_console != null || lookup(var.addons, "cluster-autoscaler", null) != null) ? 1 : 0
triggers = {
build_number = timestamp()
}
provisioner "local-exec" {
command = "${path.module}/scripts/install-binaries.sh"
interpreter = ["/bin/bash", "-c"]
}
}
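The referenced `scripts/install-binaries.sh` is not included in this diff. A minimal sketch of a script of this shape, assuming the download-URL overrides described in the README (the function name and exact URLs below are illustrative, not the script's actual contents):

```shell
#!/bin/bash
# Hedged sketch -- the real scripts/install-binaries.sh is not shown in this diff.
set -euo pipefail

INSTALL_DIR="/tmp"

# Download a binary to /tmp unless it is already on the PATH.
install_binary() {
  local name="$1" url="$2"
  if command -v "$name" &>/dev/null; then
    echo "$name already installed, skipping"
    return 0
  fi
  curl -fsSL "$url" -o "${INSTALL_DIR}/${name}"
  chmod +x "${INSTALL_DIR}/${name}"
  echo "$name installed to ${INSTALL_DIR}"
}

# The README's environment variables would act as optional mirrors, e.g.:
# install_binary kubectl "${KUBECTL_DOWNLOAD_URL:-https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl}"
# install_binary jq "${JQ_DOWNLOAD_URL:-https://github.com/jqlang/jq/releases/download/jq-1.7.1/jq-linux-amd64}"

# Demonstration with a binary that is certainly present, so no download occurs:
install_binary bash "https://example.com/unused"
```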

# Lookup the current default kube version
data "ibm_container_cluster_versions" "cluster_versions" {
resource_group_id = var.resource_group_id
@@ -478,7 +489,7 @@ resource "null_resource" "confirm_network_healthy" {
# Worker pool creation can start before the 'ibm_container_vpc_cluster' completes since there is no explicit
# depends_on in 'ibm_container_vpc_worker_pool', just an implicit depends_on on the cluster ID. Cluster ID can exist before
# 'ibm_container_vpc_cluster' completes, so hence need to add explicit depends on against 'ibm_container_vpc_cluster' here.
depends_on = [ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]
depends_on = [null_resource.install_required_binaries, ibm_container_vpc_cluster.cluster, ibm_container_vpc_cluster.cluster_with_upgrade, ibm_container_vpc_cluster.autoscaling_cluster, ibm_container_vpc_cluster.autoscaling_cluster_with_upgrade, module.worker_pools]

provisioner "local-exec" {
command = "${path.module}/scripts/confirm_network_healthy.sh"
@@ -494,7 +505,7 @@ resource "null_resource" "confirm_network_healthy" {
##############################################################################
resource "null_resource" "ocp_console_management" {
count = var.enable_ocp_console != null ? 1 : 0
depends_on = [null_resource.confirm_network_healthy]
depends_on = [null_resource.install_required_binaries, null_resource.confirm_network_healthy]
provisioner "local-exec" {
command = "${path.module}/scripts/enable_disable_ocp_console.sh"
interpreter = ["/bin/bash", "-c"]
@@ -568,7 +579,7 @@ locals {

resource "null_resource" "config_map_status" {
count = lookup(var.addons, "cluster-autoscaler", null) != null ? 1 : 0
depends_on = [ibm_container_addons.addons]
depends_on = [null_resource.install_required_binaries, ibm_container_addons.addons]

provisioner "local-exec" {
command = "${path.module}/scripts/get_config_map_status.sh"
@@ -759,7 +770,6 @@ resource "time_sleep" "wait_for_auth_policy" {
create_duration = "30s"
}


resource "ibm_container_ingress_instance" "instance" {
count = var.enable_secrets_manager_integration ? 1 : 0
depends_on = [time_sleep.wait_for_auth_policy]
3 changes: 3 additions & 0 deletions modules/kube-audit/README.md
@@ -70,6 +70,7 @@ No modules.
| Name | Type |
|------|------|
| [helm_release.kube_audit](https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release) | resource |
| [null_resource.install_required_binaries](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.set_audit_log_policy](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [null_resource.set_audit_webhook](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) | resource |
| [time_sleep.wait_for_kube_audit](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep) | resource |
@@ -88,7 +89,9 @@ No modules.
| <a name="input_cluster_config_endpoint_type"></a> [cluster\_config\_endpoint\_type](#input\_cluster\_config\_endpoint\_type) | Specify which type of endpoint to use for cluster config access: 'default', 'private', 'vpe', 'link'. The 'default' value uses the default endpoint of the cluster. | `string` | `"default"` | no |
| <a name="input_cluster_id"></a> [cluster\_id](#input\_cluster\_id) | The ID of the cluster to deploy the log collection service in. | `string` | n/a | yes |
| <a name="input_cluster_resource_group_id"></a> [cluster\_resource\_group\_id](#input\_cluster\_resource\_group\_id) | The resource group ID of the cluster. | `string` | n/a | yes |
| <a name="input_disable_external_binary_download"></a> [disable\_external\_binary\_download](#input\_disable\_external\_binary\_download) | Set this variable to true to prevent the script from downloading binaries from the internet. | `bool` | `false` | no |
| <a name="input_ibmcloud_api_key"></a> [ibmcloud\_api\_key](#input\_ibmcloud\_api\_key) | The IBM Cloud api key to generate an IAM token. | `string` | n/a | yes |
| <a name="input_install_required_binaries"></a> [install\_required\_binaries](#input\_install\_required\_binaries) | This module includes scripts to support cluster provisioning. Set this variable to true to install all required runtime dependencies. | `bool` | `true` | no |
| <a name="input_region"></a> [region](#input\_region) | The IBM Cloud region where the cluster is provisioned. | `string` | n/a | yes |
| <a name="input_use_private_endpoint"></a> [use\_private\_endpoint](#input\_use\_private\_endpoint) | Set this to true to force all api calls to use the IBM Cloud private endpoints. | `bool` | `false` | no |
| <a name="input_wait_till"></a> [wait\_till](#input\_wait\_till) | To avoid long wait times when you run your Terraform code, you can specify the stage when you want Terraform to mark the cluster resource creation as completed. Depending on what stage you choose, the cluster creation might not be fully completed and continues to run in the background. However, your Terraform code can continue to run without waiting for the cluster to be fully created. Supported args are `MasterNodeReady`, `OneWorkerNodeReady`, `IngressReady` and `Normal` | `string` | `"IngressReady"` | no |
20 changes: 18 additions & 2 deletions modules/kube-audit/main.tf
@@ -1,3 +1,18 @@
resource "null_resource" "install_required_binaries" {
count = var.install_required_binaries ? 1 : 0

triggers = {
build_number = timestamp()
> **Contributor:** I don't think we need this to trigger every time. It only needs to trigger if the null resource has to run again.
>
> **Member (author):** I guess that's not possible: if we set triggers to other null_resource blocks, the install script will run after there is a change in the other null_resource block, not before.
}
provisioner "local-exec" {
command = "${path.root}/scripts/install-binaries.sh"
interpreter = ["/bin/bash", "-c"]
environment = {
DISABLE_EXTERNAL_DOWNLOADS = var.disable_external_binary_download
}
}
}

data "ibm_container_cluster_config" "cluster_config" {
cluster_name_id = var.cluster_id
config_dir = "${path.module}/kubeconfig"
@@ -19,6 +34,7 @@ locals {
}

resource "null_resource" "set_audit_log_policy" {
depends_on = [null_resource.install_required_binaries]
triggers = {
audit_log_policy = var.audit_log_policy
}
@@ -40,7 +56,7 @@ locals {
}

resource "helm_release" "kube_audit" {
depends_on = [null_resource.set_audit_log_policy, data.ibm_container_vpc_cluster.cluster]
depends_on = [null_resource.install_required_binaries, null_resource.set_audit_log_policy, data.ibm_container_vpc_cluster.cluster]
name = var.audit_deployment_name
chart = local.kube_audit_chart_location
timeout = 1200
@@ -96,7 +112,7 @@ locals {
# }

resource "null_resource" "set_audit_webhook" {
depends_on = [time_sleep.wait_for_kube_audit]
depends_on = [time_sleep.wait_for_kube_audit, null_resource.install_required_binaries]
triggers = {
audit_log_policy = var.audit_log_policy
}
3 changes: 3 additions & 0 deletions modules/kube-audit/scripts/confirm-rollout-status.sh
@@ -2,6 +2,9 @@

set -e

# The binaries downloaded by the install-binaries script are located in the /tmp directory.
export PATH=$PATH:"/tmp"

deployment=$1
namespace=$2

17 changes: 10 additions & 7 deletions modules/kube-audit/scripts/set_audit_log_policy.sh
@@ -4,18 +4,21 @@ set -euo pipefail

AUDIT_POLICY="$1"

STORAGE_PROFILE="oc patch apiserver cluster --type='merge' -p '{\"spec\":{\"audit\":{\"profile\":\"$AUDIT_POLICY\"}}}'"
# The binaries downloaded by the install-binaries script are located in the /tmp directory.
export PATH=$PATH:"/tmp"

STORAGE_PROFILE="kubectl patch apiserver cluster --type='merge' -p '{\"spec\":{\"audit\":{\"profile\":\"$AUDIT_POLICY\"}}}'"
MAX_ATTEMPTS=10
RETRY_WAIT=5

function check_oc_cli() {
if ! command -v oc &>/dev/null; then
echo "Error: OpenShift CLI (oc) is not installed. Exiting."
function check_kubectl_cli() {
if ! command -v kubectl &>/dev/null; then
echo "Error: kubectl is not installed. Exiting."
exit 1
fi
}

function apply_oc_patch() {
function apply_kubectl_patch() {

local attempt=0
while [ $attempt -lt $MAX_ATTEMPTS ]; do
@@ -38,7 +41,7 @@ function apply_oc_patch()

echo "========================================="

check_oc_cli
apply_oc_patch
check_kubectl_cli
apply_kubectl_patch
sleep 30
echo "========================================="
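The body of `apply_kubectl_patch` is collapsed in the diff view. Judging from the `MAX_ATTEMPTS` and `RETRY_WAIT` variables the script defines, the retry logic plausibly follows this shape (a sketch under that assumption, not the script's actual contents):

```shell
#!/bin/bash
set -euo pipefail

MAX_ATTEMPTS=10
RETRY_WAIT=5

# Retry a command up to MAX_ATTEMPTS times, sleeping RETRY_WAIT seconds between tries.
retry() {
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$MAX_ATTEMPTS" ]; then
      echo "Command failed after ${MAX_ATTEMPTS} attempts: $*" >&2
      return 1
    fi
    echo "Attempt ${attempt} failed, retrying in ${RETRY_WAIT}s..."
    sleep "$RETRY_WAIT"
  done
}

# Usage with the patch command built above (commented out; requires a cluster):
# retry bash -c "$STORAGE_PROFILE"
retry true  # trivially succeeds on the first attempt
```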