Sensitive properties of nested blocks are shown during a plan in v12.6 #201

@katbyte

Description

This is an extension of:

Terraform Version

terraform: 0.12.6
azurerm: 1.32.1

Terraform Configuration Files

    ssh_key {
      key_data = var.aks_public_key_data
    }
  }

  network_profile {
    network_plugin     = var.aks_network_plugin
    network_policy     = var.aks_network_policy
    dns_service_ip     = var.aks_dnsServiceIP
    docker_bridge_cidr = var.aks_dockerBridgeCidr
    service_cidr       = var.aks_serviceCidr
  }

  service_principal {
    client_id     = data.azurerm_key_vault_secret.cluster_sp_id.value
    client_secret = data.azurerm_key_vault_secret.cluster_sp_secret.value
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.aks_aad_clientapp_id
      server_app_id     = var.aks_aad_serverapp_id
      server_app_secret = var.aks_aad_serverapp_secret
      tenant_id         = data.azurerm_client_config.current.tenant_id
    }
  }
}

Debug Output

Crash Output

Expected Behavior

Sensitive values should be masked.

Actual Behavior

When a plan includes the deletion of a resource that has nested sub-objects in which only some properties are marked as sensitive, some of those values are shown in the plan output.

The client_key and password fields in the kube_admin_config block are shown as literal strings instead of being masked. We're running Terraform 0.12.6; the provider versions are visible in the plan output below.
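The shape of the problem can be illustrated outside Terraform with a small self-contained sketch. The names and data structures below are hypothetical, not Terraform's actual renderer: a renderer that only consults top-level sensitivity flags masks whole attributes like kube_admin_config_raw correctly, but leaks individually sensitive fields inside a list of nested objects unless it also consults per-field flags.

```python
# Illustrative only: sensitivity flags tracked solely at the top level of an
# attribute cannot mask individual fields of nested objects.
SENSITIVE_TOP_LEVEL = {"kube_admin_config_raw"}               # masked correctly
SENSITIVE_NESTED = {"kube_admin_config": {"client_key", "password"}}

def render_attr(name, value):
    """Render one attribute the way a top-level-only renderer would."""
    if name in SENSITIVE_TOP_LEVEL:
        return f"{name} = (sensitive value)"
    # The provider knows about per-field sensitivity, but if the renderer
    # never consults SENSITIVE_NESTED, the raw values appear verbatim:
    return f"{name} = {value!r}"

def render_attr_fixed(name, value):
    """Same renderer, but also masking flagged fields inside list blocks."""
    if name in SENSITIVE_TOP_LEVEL:
        return f"{name} = (sensitive value)"
    if name in SENSITIVE_NESTED and isinstance(value, list):
        masked = [
            {k: "(sensitive value)" if k in SENSITIVE_NESTED[name] else v
             for k, v in item.items()}
            for item in value
        ]
        return f"{name} = {masked!r}"
    return f"{name} = {value!r}"

config = [{"client_key": "PEM...", "password": "hunter2", "host": "h"}]
print(render_attr("kube_admin_config", config))        # leaks the secrets
print(render_attr_fixed("kube_admin_config", config))  # masks them
```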

Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "external" (terraform-providers/external) 1.2.0...
- Downloading plugin for provider "template" (terraform-providers/template) 2.1.2...
- Downloading plugin for provider "azurerm" (terraform-providers/azurerm) 1.32.1...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.azurerm: version = "~> 1.32"
* provider.external: version = "~> 1.2"
* provider.template: version = "~> 2.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Success! The configuration is valid.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

azurerm_resource_group.sandboxuks1: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>].
module.aks-cluster.data.azurerm_client_config.current: Refreshing state...
module.aks-cluster.data.azurerm_subscription.current: Refreshing state...
module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters[0]: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>]
module.aks-cluster.azurerm_public_ip.aks_pip: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>_PIP/providers/Microsoft.Network/publicIPAddresses/<RESOURCEGROUP_NAME>_PIP]
module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_cluster_diagnostics[0]: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>|aks-cluster-to-eventhub]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
<= read (data resources)

Terraform will perform the following actions:

# module.aks-cluster.data.external.aks_agentpool_name will be read during apply
# (config refers to values not yet known)
<= data "external" "aks_agentpool_name"  {
    + id      = (known after apply)
    + program = [
        + "bash",
        + ".terraform/modules/aks-cluster/scripts/agentpool.sh",
        ]
    + query   = {
        + "rg_name" = "MC_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>_uksouth"
        }
    + result  = (known after apply)
    }

# module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters[0] must be replaced
-/+ resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
    - api_server_authorized_ip_ranges = [] -> null
        dns_prefix                      = "<DNS_PREFIX>"
    ~ fqdn                            = "<CLUSTER_FQDN>" -> (known after apply)
    ~ id                              = "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>" -> (known after apply)
    ~ kube_admin_config               = [
        - {
            - client_certificate     = "<CLIENT_CERT>"
            - client_key             = "LITERAL_CLIENT_KEY_STRING" <-- this is not obscured in any way; the client key is shown as a literal string
            - cluster_ca_certificate = "<CLUSTER_CA_CERT>"
            - host                   = "<HOSTNAME>"
            - password               = "LITERAL_PASSWORD_STRING" <-- this is not obscured in any way; this is the current password for the kube admin config
            - username               = "clusterAdmin_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>"
            },
        ] -> (known after apply)
    ~ kube_admin_config_raw           = (sensitive value)
    ~ kube_config                     = [
        - {
            - client_certificate     = ""
            - client_key             = ""
            - cluster_ca_certificate = "<CLUSTER_CA_CERT>"
            - host                   = "<HOSTNAME>"
            - password               = ""
            - username               = "clusterUser_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>"
            },
        ] -> (known after apply)
    ~ kube_config_raw                 = (sensitive value)
    ~ kubernetes_version              = "1.13.9" -> "1.13.7"
        location                        = "uksouth"
        name                            = "<RESOURCEGROUP_NAME>"
    ~ node_resource_group             = "MC_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>_uksouth" -> (known after apply)
        resource_group_name             = "<RESOURCEGROUP_NAME>"
    ~ tags                            = {} -> (known after apply)

    ~ addon_profile {
        + aci_connector_linux {
            + enabled     = (known after apply)
            + subnet_name = (known after apply)
            }

        + http_application_routing {
            + enabled                            = (known after apply)
            + http_application_routing_zone_name = (known after apply)
            }

        + oms_agent {
            + enabled                    = (known after apply)
            + log_analytics_workspace_id = (known after apply)
            }
        }

    ~ agent_pool_profile {
        - availability_zones  = [] -> null
            count               = 2
        + dns_prefix          = (known after apply)
        - enable_auto_scaling = false -> null
        ~ fqdn                = "<AGENTPOOL_PROFILE_FQDN>" -> (known after apply)
        - max_count           = 0 -> null
            max_pods            = 80
        - min_count           = 0 -> null
            name                = "agentpool"
        - node_taints         = [] -> null
            os_disk_size_gb     = 64
            os_type             = "Linux"
            type                = "AvailabilitySet"
            vm_size             = "Standard_DS4_v2"
            vnet_subnet_id      = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>/providers/Microsoft.Network/virtualNetworks/<VNET_NAME>/subnets/<SUBNET_NAME>"
        }

        linux_profile {
            admin_username = "<ADMIN_USERNAME>"

            ssh_key {
                key_data = "<SSH_KEY>"
            }
        }

    ~ network_profile {
            dns_service_ip     = "10.254.0.10"
            docker_bridge_cidr = "172.17.0.1/16"
            load_balancer_sku  = "basic"
            network_plugin     = "azure"
        ~ network_policy     = "calico" -> "azure" # forces replacement
        + pod_cidr           = (known after apply)
            service_cidr       = "10.254.0.0/20"
        }

        role_based_access_control {
            enabled = true

            azure_active_directory {
                client_app_id     = "<AAD_CLIENT_ID>"
                server_app_id     = "<AAD_SERVER_ID>"
                server_app_secret = (sensitive value)
                tenant_id         = "*******"
            }
        }

    - service_principal {
        - client_id = "<SP_CLIENT_ID>" -> null
        }
    + service_principal {
        + client_id     = "<SP_CLIENT_ID>"
        + client_secret = (sensitive value)
        }
    }

# module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_nsg_diagnostics[0] must be replaced
-/+ resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
        eventhub_authorization_rule_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.EventHub/namespaces/<NS_NAME>/AuthorizationRules/RootManageSharedAccessKey"
        eventhub_name                  = "aks-nsg-diagnostics"
    ~ id                             = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.Network/networkSecurityGroups/<AGENTPOOL_NAME>" -> (known after apply)
        name                           = "aks-nsg-to-eventhub"
    ~ target_resource_id             = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.Network/networkSecurityGroups/<AGENTPOOL_NAME>" -> (known after apply) # forces replacement

    - log {
        - category = "NetworkSecurityGroupEvent" -> null
        - enabled  = true -> null

        - retention_policy {
            - days    = 0 -> null
            - enabled = true -> null
            }
        }
    + log {
        + category = "NetworkSecurityGroupEvent"
        + enabled  = true

        + retention_policy {
            + enabled = true
            }
        }
    - log {
        - category = "NetworkSecurityGroupRuleCounter" -> null
        - enabled  = true -> null

        - retention_policy {
            - days    = 0 -> null
            - enabled = true -> null
            }
        }
    + log {
        + category = "NetworkSecurityGroupRuleCounter"
        + enabled  = true

        + retention_policy {
            + enabled = true
            }
        }
    }

Plan: 2 to add, 0 to change, 2 to destroy.

------------------------------------------------------------------------

This plan was saved to: terraform.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "terraform.tfplan"
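Until this is addressed, one stopgap when sharing plan output publicly is to scrub the known-secret fields from the plan text before posting. A rough sketch, assuming the field names seen in the plan above; the function and regex are illustrative, not part of any Terraform tooling, and this is a text-level redaction for sharing logs, not a fix for the underlying issue:

```python
import re

# Fields of kube_admin_config / kube_config that hold secrets in the plan
# text above; extend as needed for other resources.
SECRET_FIELDS = ("client_key", "password", "client_certificate",
                 "cluster_ca_certificate")

def scrub_plan_text(text):
    """Replace the value of each secret field with "(redacted)", keeping
    the diff prefix (-, +, ~), field name, and alignment intact."""
    pattern = re.compile(
        r'^(\s*[-+~]?\s*(?:%s)\s*=\s*).*$' % "|".join(SECRET_FIELDS),
        re.MULTILINE)
    return pattern.sub(r'\1"(redacted)"', text)

sample = '            - password               = "LITERAL_PASSWORD_STRING"'
print(scrub_plan_text(sample))
```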

Steps to Reproduce

Additional Context

References

Metadata

Labels

bug (Something isn't working)
subsystem/types (Issues and feature requests related to the type system of Terraform and our shims around it)
upstream-protocol (Requires change of protocol specification, i.e. can't be done under the current protocol)
upstream-terraform
