diff --git a/README.md b/README.md
index 41ace7c..c1406b9 100644
--- a/README.md
+++ b/README.md
@@ -1,83 +1,148 @@
-# Overview
+# terraformscaffold-azure
-See the [tfscaffold readme](https://github.com/tfutils/tfscaffold) for information on tfscaffold this is specifically for the Azure version and the various changes required to make it work there. Additionally this contains elements for the various example components.
+A framework for controlling multi-environment multi-component terraform-managed Azure infrastructure
-## Required mounts
+Absolutely stolen from [tfscaffold](https://github.com/tfutils/tfscaffold)
-There are a number of required mounts otherwise tfscaffold wont actually know what to do. Note that tfscaffold is /tfscaffold in the container.
+## Overview
-- components (your terraform)
-- modules (any terraform modules)
-- etc (terraform variables)
+Terraform scaffold consists of a terraform wrapper bash script, a bootstrap script and a set of directories providing the locations to store terraform code and variables.
-## Optional mounts
+| Thing | Things about the Thing |
+|-------|------------------------|
+| bin/terraform.sh | The terraformscaffold script |
+| bootstrap/ | The bootstrap terraform code used for creating the terraformscaffold state Storage Account |
+| components/ | The location for terraform "components". Terraform code intended to be run directly as a root module. |
+| etc/ | The location for environment-specific terraform variables files: `env_{region}_{environment}.tfvars` and `versions_{region}_{environment}.tfvars` |
+| lib/ | Optional useful libraries, such as Jenkins pipeline groovy script |
+| modules/ | The optional location for terraform modules called by components |
+| plugin-cache/ | The default directory used for caching plugin downloads |
+| src/ | The optional location for source files, e.g. function source code zipped up into artefacts inside components |
-- plugin-cache (terraform plugin-cache)
+## Concepts & Assumptions
-Plugin-cache isnt required but it will download it every single time if you dont mount this folder.
+### Multi-Component Environment Concept
-## Important changes from tfscaffold
+The Scaffold is built around the concept that a logical "environment" may consist of any number of independent components across independent Azure subscriptions. What provides consistency across these components, and therefore defines the environment, are the variables shared between the components. For example, the CIDR block defining a primary virtual network for a production environment is needed by the component that creates the VNet, but it may also be needed by components in other VNets or subscriptions. All components in a production environment are likely to share the "environment" variable value "production".
-app-id, password and tenant are the important changes that have been added over the standard tfscaffold, simply because azure works differently. These are now required when calling tfscaffold.
+Scaffold achieves this by maintaining variables specific to an environment in the file _etc/env_{region}_{environment}.tfvars_, and then providing those variables as inputs to all components run for that environment. Any variables not required by a component are safely ignored, but all components have visibility of all variables for an environment.
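+
+As an illustration, a hypothetical `etc/env_uksouth_production.tfvars` might hold the values shared by every component in that environment. The variable names and values below are examples only, not part of the scaffold:
+
+```hcl
+# etc/env_uksouth_production.tfvars -- illustrative values only
+environment_owner = "platform-team"
+
+# Shared network definition, consumed by the component that creates the
+# virtual network and by any component that needs the same address space
+vnet_cidr = "10.20.0.0/16"
+```
+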
-## Examples
+Important Note: Variables in the Env and Versions variables files are not merged. You cannot, for example, define the same map variable in both files and have the keys in each definition merged into a resulting set. When a variable is defined more than once, the last one terraform evaluates takes precedence and overrides any previously encountered definition. To prevent unexpected consequences, never define variable values in more than one place.
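+
+A sketch of the pitfall, using a hypothetical map variable defined in both files; terraform will not merge the two maps, the definition evaluated last simply replaces the other:
+
+```hcl
+# etc/env_uksouth_production.tfvars
+common_tags = {
+  owner = "platform-team"
+}
+
+# etc/versions_uksouth_production.tfvars
+# NOT merged with the definition above: whichever file terraform evaluates
+# last wins wholesale, so the "owner" key would be silently lost here.
+common_tags = {
+  release = "1.4.2"
+}
+```
+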
-### Bootstrap
+### State File Location Consistency and Referencing
-Windows
+Scaffold uses an Azure Storage Account for storage of tfstate files. The account and container are deliberately named so that their locations are predictable, which keeps state storage organised and permits the use of terraform_remote_state data sources. Any scaffold component can reliably refer to the location of the state file for another component and depend upon the outputs provided. The naming convention is: `https://${project}tfstate${region}.blob.core.windows.net/${project}-${region}-tfstate/${project}-${region}-${component}-${component}.tfstate`.
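+
+For example, a component could consume another component's outputs through a `terraform_remote_state` data source pointed at that predictable location. This is a minimal sketch; the project, region, environment and component names are placeholders, not values mandated by the scaffold:
+
+```hcl
+# Read the outputs of a hypothetical "network" component in the same
+# project, region and environment, using the scaffold naming convention.
+# Authentication comes from the same ARM_* environment variables or
+# az login session used by the wrapper script.
+data "terraform_remote_state" "network" {
+  backend = "azurerm"
+
+  config = {
+    resource_group_name  = "demo-uksouth-tfstate"
+    storage_account_name = "demotfstateuksouth"
+    container_name       = "demo-uksouth-tfstate"
+    key                  = "demo-uksouth-production-network.tfstate"
+  }
+}
+
+# Outputs are then available as, for example:
+# data.terraform_remote_state.network.outputs.vnet_id
+```
+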
-``` powershell
-docker run -v C:\git\my_project\tfscaffold\components\:/tfscaffold/components `
- -v C:\git\my_project\tfscaffold\etc\:/tfscaffold/etc `
- -v C:\git\my_project\tfscaffold\modules\:/tfscaffold/modules `
- -v C:\git\my_project\tfscaffold\plugin-cache\:/tfscaffold/plugin-cache `
-tfscaffold -a apply -r uksouth -p demo --bootstrap `
---app-id 'some-app-id' `
---password 'some-password' `
---tenant 'some-tenant'
+### Variables Files: Environment & Versions
+
+Scaffold provides a logical separation of several types of environment variable:
+ * Global variables
+ * Region-scoped global variables
+ * Group variables
+ * Static environment variables
+ * Frequently-changing versions variables
+
+This separation is purely logical, not functional. It makes no functional difference in which file a variable lives, or even whether a versions variables file exists; but it provides the capacity to separate out the mostly static variables that define the construction of the environment from the variables that could change on each apply.
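+
+A sketch of how that split might look in practice, with purely illustrative variable names: the env file carries the mostly static definition of the environment, while the versions file carries the values expected to change on most applies.
+
+```hcl
+# etc/env_uksouth_production.tfvars -- rarely changes
+app_service_sku = "P1v3"
+
+# etc/versions_uksouth_production.tfvars -- changes on most deployments
+api_image_tag      = "2024.03.1"
+frontend_image_tag = "2024.03.4"
+```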
+
+### Azure Credentials
+
+Terraform Scaffold does not provide any mechanism for running terraform across multiple Azure subscriptions simultaneously, storing state files in a different subscription to the subscription being modified, or any other functionality that would require Scaffold to intelligently manage Azure credentials. After extensive research and development it has become apparent that, despite some features available in terraform for handling more than one Azure subscription, those features are not sufficiently mature or flexible to allow their application in a generic form.
+
+Therefore, to ensure widest possible reach and capability of Scaffold, it requires that a specific set of Azure credentials be provided to it at invocation. These credentials must have the necessary access to read and write to the bootstrapped Storage Account state file, and to create/modify/destroy the Azure resources being controlled via terraform.
+
+There are two main ways to provide credentials to Scaffold for Azure.
+
+#### Using az login
+
+If you want to run Terraform as your user, the simplest way to provide credentials is to use `az login`. You will also need to set the subscription if your user has access to more than one and/or you want to act on a subscription that isn't the default:
+
+```bash
+az login
+az account set --subscription="00000000-0000-0000-0000-000000000000"
```
-Linux
-
-``` bash
-docker run -v ~/git/jumpbox/tfscaffold/components/:/tfscaffold/components \
- -v ~/git/jumpbox/tfscaffold/etc/:/tfscaffold/etc \
- -v ~/git/jumpbox/tfscaffold/modules/:/tfscaffold/modules \
- -v ~/git/jumpbox/tfscaffold/plugin-cache/:/tfscaffold/plugin-cache \
-mikewinterbjss/tfscaffold -a apply -r uksouth -p changeme --bootstrap \
---app-id 'some-app-id' \
---password 'some-password' \
---tenant 'some-tenant'
+#### Using Service Principal
+
+If you are using Scaffold for Azure in your pipeline, or you need to connect as a service principal instead of as your user, you can set the following environment variables and Scaffold for Azure will pick them up.
+
+```bash
+export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
+export ARM_CLIENT_SECRET="00000000-0000-0000-0000-000000000000"
+export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
```
-### Keyvault (plan/apply etc)
+### pre_apply.sh & post_apply.sh
+
+Although as yet somewhat unrefined, Scaffold provides the capacity to incorporate additional scripted actions to take prior to and after running terraform on a given component. If there is a file called "pre_apply.sh" present in the top level of the component you are working with, then it will be executed as a bash script prior to any terraform action. If a file called post_apply.sh is present it will be executed immediately following any terraform action. This capability clearly could do with some improvement to support complex deployments with script dependencies, but as yet I have none to play with.
+
+### Provider Plugins
-Again this is an example but its the core password management function of this piece of work. If you think there is a better way to manage the secrets / passwords etc... feel free to create an example component.
+Since 0.10 terraform has split its providers out into plugins which are downloaded separately. This has caused some issues in automation where you can be downloading the same provider endlessly. A long term solution to this has not yet been decided upon as there are many ways to implement a solution, but none really suit all likely scenarios. Ideally I would like to see management of this handled by kamatama41/tfenv. For the moment, terraformscaffold will instruct terraform to cache plugins in the plugin-cache/ top level directory by default. This can be overridden by exporting the TF_PLUGIN_CACHE_DIR variable with an appropriate value. This at least means that within one code checkout, regardless of swapping around components to plan or apply, you will only need to download providers once. If you have a local artifact repository or some other preference for keeping copies of providers locally you can use it with this variable.
-In essence the keyvault is created and a random string generator creates a number of secrets, these are then output into the remote state. The remote state can then be used elsewhere and that way none of the passwords are added to a tf file.
+## Usage
+### Bootstrapping
+Before using Scaffold, a bootstrapping stage is required. Scaffold is responsible for creating and maintaining the Storage Account it uses to store component state files, and it even keeps the state file that defines that Storage Account in the same account. This is done with a special bootstrap mode within the script, invoked with the '--bootstrap' parameter. When used with the "apply" action, this will cause the script to create the bootstrap Storage Account and container, configure them as the remote state location for itself, and upload the tfstate used for managing the Storage Account to that container. Once created, the Storage Account can then be used for any terraform apply for the specific combination of project, region and Azure subscription.
-Windoze
+It is not recommended to modify the bootstrap code after creation as it risks the integrity of the state files stored in the Storage Account that manage other deployments; however this can be mitigated by configuring synchronisation with a backup Storage Account external to Scaffold management.
-``` powershell
-docker run -v C:\git\jumpbox\tfscaffold\components\:/tfscaffold/components `
- -v C:\git\jumpbox\tfscaffold\etc\:/tfscaffold/etc `
- -v C:\git\jumpbox\tfscaffold\modules\:/tfscaffold/modules `
- -v C:\git\jumpbox\tfscaffold\plugin-cache\:/tfscaffold/plugin-cache `
-mikewinterbjss/tfscaffold -a plan -r uksouth -p changeme -e demo -c keyvault `
---app-id 'some-app-id' `
---password 'some-password' `
---tenant 'some-tenant'
+Bootstrapping usage:
+
+```bash
+bin/terraform.sh \
+ -p/--project `project` \
+ -r/--region `region` \
+ --bootstrap \
+ -a/--action plan
+```
+
+```bash
+bin/terraform.sh \
+ -p/--project `project` \
+ -r/--region `region` \
+ --bootstrap \
+ -a/--action apply
```
-Linux
-
-``` powershell
-docker run -v ~/git/jumpbox/tfscaffold/components/:/tfscaffold/components \
- -v ~/git/jumpbox/tfscaffold/etc/:/tfscaffold/etc \
- -v ~/git/jumpbox/tfscaffold/modules/:/tfscaffold/modules \
- -v ~/git/jumpbox/tfscaffold/plugin-cache/:/tfscaffold/plugin-cache \
-mikewinterbjss/tfscaffold -a plan -r uksouth -p changeme -e demo -c keyvault \
---app-id 'some-app-id' \
---password 'some-password' \
---tenant 'some-tenant'
+Where:
+* `project`: the name of the project to have a terraform bootstrap applied
+* `region` (optional): Defaults to the value of the AZ_DEFAULT_REGION environment variable
+
+### Running
+
+The terraformscaffold script is invoked as bin/terraform.sh. Once a state Storage Account has been bootstrapped, bin/terraform.sh can be run to apply terraform code. Its usage as of 25/01/2017 is:
+
+```bash
+bin/terraform.sh \
+ -a/--action `action` \
+ -c/--component `component_name` \
+ -e/--environment `environment` \
+ -g/--group `group` (optional) \
+ -i/--build-id `build_id` (optional) \
+ -p/--project `project` \
+ -r/--region `region` \
+ -d/--detailed-exitcode (optional) \
+ -n/--no-color (optional) \
+ -w/--compact-warnings (optional) \
+ -- \
+  <additional arguments> (optional)
```
+
+Where:
+* `action`: Terraform action (or pseudo-action) to take, e.g. plan, apply, plan-destroy (runs plan with the -destroy flag), destroy, show
+* `build_id` (optional): Used in conjunction with the plan and apply actions, `build_id` causes the creation and consumption of terraform plan files (.tfplan)
+ * When `build_id` is omitted:
+ * "Plan" provides normal plan output without generating a plan file
+ * "Apply" directly applies the component based on the code and state it is given.
+ * When `build_id` is provided:
+ * "Plan" creates a plan file with `build_id` as part of the file name, and uploads the plan to the S3 state bucket under a key called "plans/" alongside the corresponding state file
+ * "Apply" looks for and downloads the corresponding plan file generated by a plan job, and applies the changes in the plan file
+ * It is usual to provide, for example, the Jenkins _$BUILD_ID_ parameter to Plan jobs, and then manually reference that particular Job ID when running a corresponding apply job.
+* `component_name`: The name of the terraform component in the components directory to run the `action` against.
+* `environment`: The name of the environment the component is to be actioned against, therefore implying the variables file(s) to be included
+* `group` (optional): The name of the group to which the environment belongs, permitting the use of a group tfvars file as a "meta-environment" shared by more than one environment
+* `project`: The name of the project being deployed, as per the default Storage Account prefix and state file naming
+* `region` (optional): The Azure region name unique to all components and terraform processes. Defaults to the value of the _AZ_DEFAULT_REGION_ environment variable.
+* `detailed-exitcode` (optional): Passes detailed exit code flag to terraform.
+* `no-color` (optional): Passes no-color flag to terraform.
+* `compact-warnings` (optional): Passes compact-warnings flag to terraform.
+* `additional arguments`: Any arguments provided after "--" will be passed directly to terraform as its own arguments, e.g. allowing the provision of a '-target=value' parameter.
diff --git a/bin/terraform-az.sh b/bin/terraform.sh
similarity index 91%
rename from bin/terraform-az.sh
rename to bin/terraform.sh
index 66ed9e5..804fca7 100644
--- a/bin/terraform-az.sh
+++ b/bin/terraform.sh
@@ -43,9 +43,6 @@ Usage: ${0} \\
-i/--build-id [build_id] (optional) \\
-p/--project [project] \\
-r/--region [region] \\
- --app-id [app_id] \\
- --password [password] \\
- --tenant [tenant] \\
-- \\
@@ -70,7 +67,7 @@ build_id (optional):
component_name:
- the name of the terraform component module in the components directory
-
+
environment:
- dev
- test
@@ -200,31 +197,10 @@ while true; do
shift;
fi;
;;
- --app-id)
- shift;
- if [ -n "${1}" ]; then
- app_id="${1}";
- shift;
- fi;
- ;;
- --password)
- shift;
- if [ -n "${1}" ]; then
- password="${1}";
- shift;
- fi;
- ;;
- --tenant)
- shift;
- if [ -n "${1}" ]; then
- tenant="${1}";
- shift;
- fi;
- ;;
--bootstrap)
shift;
bootstrap="true"
- ;;
+ ;;
--)
shift;
break;
@@ -281,30 +257,32 @@ else
# Validate environment to work with
[ -n "${environment_arg}" ] \
|| error_and_die "Required argument missing: -e/--environment";
- readonly environment="${environment_arg}";
+ readonly environment="${environment_arg}";
fi
[ -n "${action}" ] \
|| error_and_die "Required argument missing: -a/--action";
-[ -n "${app_id}" ] \
- || error_and_die "Required argument missing: --app-id";
+# The upstream tfscaffold-azure uses command line args to pass in these values and then creates the required
+# environment variables. This means we end up with passwords on the command line in CI tools, when we could
+# just set the env variables and pull the required info from there, which is far closer to how the original
+# tfscaffold works for AWS
+app_id=$ARM_CLIENT_ID
+password=$ARM_CLIENT_SECRET
+tenant=$ARM_TENANT_ID
-[ -n "${password}" ] \
- || error_and_die "Required argument missing: --password";
+if [ ! -z "${app_id}" ]; then
+ echo -e "Found ARM_CLIENT_ID set, logging in as provided service principal..."
-[ -n "${tenant}" ] \
- || error_and_die "Required argument missing: --tenant";
+ [ -n "${password}" ] || error_and_die "You must set the ARM_CLIENT_SECRET environment variable"
+ [ -n "${tenant}" ] || error_and_die "You must set the ARM_TENANT_ID environment variable"
-# Run an AZ Login
-# This is required to get / validate some of the required elements needed for terraform without additionally
-# having to specify these elements.
-echo -n "AZ Login..."
-az login --service-principal --username ${app_id} --password ${password} --tenant ${tenant} > /dev/null 2>&1;
-echo "complete"
+ az login --service-principal --username ${app_id} --password ${password} --tenant ${tenant} > /dev/null 2>&1;
+else
+ echo -e "No ARM_CLIENT_ID set, assuming you have logged in to Azure with az login and set the correct subscription..."
+fi;
# Query AZ Subscription ID this should only return the relevant subscription id assigned to the service principal.
-# If you find that your subscription id is not as expected then validate the service principal.
echo -n "AZ Account show --query 'id'..."
subscription_id=$(az account show --query 'id' | sed 's/"//g' );
echo "complete"
@@ -312,26 +290,19 @@ if [ "${subscription_id}" == "Please run 'az login' to setup account." ]; then
error_and_die "Couldn't determine AZ Subscription ID. \"az account show --query 'id'\" responded with '${subscription_id}'";
else
echo -e "AZ Subscription ID: ${subscription_id}";
+ tenant=$(az account show --query 'tenantId' | sed 's/"//g');
fi;
-# Query AZ login details as we need the object id for certain things.
-echo -n "AZ ad sp show..."
-az_service_principal_object_id=$(az ad sp show --id ${app_id} --query 'objectId' | sed 's/"//g' );
-echo "complete"
-
# Set the environmental variables so that we can initialise the backend without hardcoding values.
# This is done in bootstrap/az_provider.tf
export ARM_SUBSCRIPTION_ID=${subscription_id}
-export ARM_TENANT_ID=${tenant}
-export ARM_CLIENT_ID=${app_id}
-export ARM_CLIENT_SECRET=${password}
# Validate Storage Account. Set default if undefined
if [ -n "${storage_account_prefix}" ]; then
readonly storage_account="${storage_account_prefix}${region}"
echo -e "AZ Storage account ${storage_account}";
else
- readonly storage_account="${project}tfs${region}";
+ readonly storage_account="${project}tfstate${region}";
echo -e "AZ Storage account ${storage_account}, no storage account prefix specified.";
fi;
@@ -364,7 +335,6 @@ case "${action}" in
;;
destroy)
destroy='-destroy';
- force='-force';
refresh="-refresh=true";
;;
plan)
@@ -422,7 +392,7 @@ if [ "${bootstrap}" == "true" ]; then
if [ "${action}" == "destroy" ]; then
error_and_die "You cannot destroy a bootstrap storage account using terraformscaffold, it's just too dangerous. If you're absolutely certain that you want to delete the storage account and all contents, including any possible state files environments and components within this project, then you will need to do it from the AZ Console or Cli";
fi;
-
+
# Bootstrap requires explicitly and only these parameters
# If they are empty dont pass them through otherwise we will always need variables in the terraform and whilst
# this might be ok for AWS azurerm is not so uniform.
@@ -432,13 +402,10 @@ if [ "${bootstrap}" == "true" ]; then
if [ ! -z "${storage_account}" ]; then tf_var_params+=" -var storage_account_name=${storage_account}"; fi;
if [ ! -z "${subscription_id}" ]; then tf_var_params+=" -var subscription_id=${subscription_id}"; fi;
if [ ! -z "${tenant}" ]; then tf_var_params+=" -var tenant=${tenant}"; fi;
- if [ ! -z "${app_id}" ]; then tf_var_params+=" -var app_id=${app_id}"; fi;
else
# Set the relevant params for az cli
if [ ! -z "${tenant}" ]; then tf_var_params+=" -var tenant=${tenant}"; fi;
- if [ ! -z "${app_id}" ]; then tf_var_params+=" -var app_id=${app_id}"; fi;
- if [ ! -z "${password}" ]; then tf_var_params+=" -var password=${password}"; fi;
- if [ ! -z "${az_service_principal_object_id}" ]; then tf_var_params+=" -var service_principal_object_id=${az_service_principal_object_id}"; fi;
+ if [ ! -z "${storage_account}" ]; then tf_var_params+=" -var storage_account_name=${storage_account}"; fi;
if [ ! -z "${region}" ]; then tf_var_params+=" -var region=${region}"; fi;
if [ ! -z "${project}" ]; then tf_var_params+=" -var project=${project}"; fi;
if [ ! -z "${environment}" ]; then tf_var_params+=" -var environment=${environment}"; fi;
@@ -458,14 +425,14 @@ else
declare -a secrets=();
readonly secrets_file_name="secret.tfvars";
readonly secrets_file_path="build/${secrets_file_name}";
-
+
# First validate the keyvault existence - cant grab secrets without them existing.
az keyvault secret list --vault-name ${project}-${region}-${environment} --query '[].id' >/dev/null 2>&1;
if [ $? -eq 0 ]; then
# Keyvault exists so find secrets file in the storage account
echo -n "AZ Keyvault found extracting data..."
$secrets_names=$(az keyvault secret list --vault-name ${project}-${region}-${environment} --query '[].id' | grep https | sed 's/"//g' | sed 's/,//g')
-
+
if [ -n "${secrets_names[0]}" ]; then
secret_count=1;
for secret_line in "${secrets_names[@]}"; do
@@ -480,18 +447,18 @@ else
# Use versions TFVAR files if exists
readonly versions_file_name="versions_${region}_${environment}.tfvars";
readonly versions_file_path="${base_path}/etc/${versions_file_name}";
-
+
# Check environment name is a known environment
# Could potentially support non-existent tfvars, but choosing not to.
readonly env_file_path="${base_path}/etc/env_${region}_${environment}.tfvars";
if [ ! -f "${env_file_path}" ]; then
error_and_die "Unknown environment. ${env_file_path} does not exist.";
fi;
-
+
# Check for presence of a global variables file, and use it if readable
readonly global_vars_file_name="global.tfvars";
readonly global_vars_file_path="${base_path}/etc/${global_vars_file_name}";
-
+
# Check for presence of a region variables file, and use it if readable
readonly region_vars_file_name="${region}.tfvars";
readonly region_vars_file_path="${base_path}/etc/${region_vars_file_name}";
@@ -501,10 +468,10 @@ else
readonly group_vars_file_name="group_${group}.tfvars";
readonly group_vars_file_path="${base_path}/etc/${group_vars_file_name}";
fi;
-
+
# Collect the paths of the variables files to use
declare -a tf_var_file_paths;
-
+
# Use Global and Region first, to allow potential for terraform to do the
# honourable thing and override global and region settings with environment
# specific ones; however we do not officially support the same variable
@@ -524,14 +491,14 @@ else
echo -e "[WARNING] Group \"${group}\" has been specified, but no group variables file is available at ${group_vars_file_path}";
fi;
fi;
-
+
# We've already checked this is readable and its presence is mandatory
tf_var_file_paths+=("${env_file_path}");
-
+
# If present and readable, use versions and dynamic variables too
[ -f "${versions_file_path}" ] && tf_var_file_paths+=("${versions_file_path}");
[ -f "${dynamic_file_path}" ] && tf_var_file_paths+=("${dynamic_file_path}");
-
+
# Warn on duplication
duplicate_variables="$(cat "${tf_var_file_paths[@]}" | sed -n -e 's/\(^[a-zA-Z0-9_\-]\+\)\s*=.*$/\1/p' | sort | uniq -d)";
[ -n "${duplicate_variables}" ] \
@@ -546,12 +513,12 @@ ${duplicate_variables}
This could lead to unexpected behaviour. Overriding of variables
has previously been unpredictable and is not currently supported,
but it may work.
-
+
Recent changes to terraform might give you useful overriding and
map-merging functionality, please use with caution and report back
on your successes & failures.
###################################################################";
-
+
# Build up the tfvars arguments for terraform command line
for file_path in "${tf_var_file_paths[@]}"; do
tf_var_params+=" -var-file=${file_path}";
@@ -590,13 +557,13 @@ fi;
readonly backend_config="terraform {
backend \"azurerm\" {
storage_account_name = \"${storage_account}\"
- container_name = \"tfstate\"
+ container_name = \"${project}-${region}-tfstate\"
key = \"${backend_prefix}-${backend_filename}\"
subscription_id = \"${subscription_id}\"
client_id = \"${app_id}\"
client_secret = \"${password}\"
tenant_id = \"${tenant}\"
- resource_group_name = \"tfstate\"
+ resource_group_name = \"${project}-${region}-tfstate\"
}
}";
@@ -620,7 +587,7 @@ declare bootstrapped="true";
# This will attempted to get an access key if we are bootstrapping this will fail - hence the /dev/null
# if we are not bootstrapping this will grab the access key for use later.
echo -n "AZ Storage account keys list..."
-readonly access_key=$(az storage account keys list --resource-group tfstate --account-name ${storage_account} --query '[0].value') >/dev/null 2>&1;
+readonly access_key=$(az storage account keys list --resource-group ${project}-${region}-tfstate --account-name ${storage_account} --query '[0].value') >/dev/null 2>&1;
echo "complete"
# If we are in bootstrap mode, we need to know if we have already bootstrapped
@@ -633,7 +600,7 @@ if [ "${bootstrap}" == "true" ]; then
# meaning we havent bootstrapped. If however we have attempted to bootstrap and the storage account exists and
# we have an access key this will check the boostraon.tfstate exists, this is an unlikely scenario.
echo -n "AZ Storage blob show..."
- az storage blob show --container-name tfstate --name ${backend_prefix}-bootstrap.tfstate --account-name ${storage_account} --account-key ${access_key} >/dev/null 2>&1;
+ az storage blob show --container-name ${project}-${region}-tfstate --name ${backend_prefix}-bootstrap.tfstate --account-name ${storage_account} --account-key ${access_key} >/dev/null 2>&1;
[ $? -eq 0 ] || bootstrapped="false";
echo "complete"
fi;
@@ -644,15 +611,16 @@ if [ "${bootstrapped}" == "true" ]; then
# Nix the horrible hack on exit
trap "rm -f $(pwd)/backend_terraformscaffold.tf" EXIT;
-
+
# Configure remote state storage
echo "AZ remote state from ${storage_account}";
# TODO: Add -upgrade to init when we drop support for <0.10
- terraform init \
+ terraform init -upgrade \
|| error_and_die "Terraform init failed";
else
# We are bootstrapping. Download the providers, skip the backend config.
terraform init \
+ -upgrade \
-backend=false \
|| error_and_die "Terraform init failed";
fi;
@@ -698,9 +666,18 @@ case "${action}" in
# This is pretty nasty, but then so is Hashicorp's approach to backwards compatibility
# at some point in the future we can deprecate support for <0.10 and remove this in favour
# of always having auto-approve set to true
- if [ "${action}" == "apply" -a $(terraform version | head -n1 | cut -d" " -f2 | cut -d"." -f2) -gt 9 ]; then
+ if [ "${action}" == "apply" ]; then
echo "Compatibility: Adding to terraform arguments: -auto-approve=true";
extra_args+=" -auto-approve=true";
+ else # action is `destroy`
+ # Check terraform version - if pre-0.15, need to add `-force`; 0.15 and above instead use `-auto-approve`
+ if [ $(terraform version | head -n1 | cut -d" " -f2 | cut -d"." -f1) == "v0" ] && [ $(terraform version | head -n1 | cut -d" " -f2 | cut -d"." -f2) -lt 15 ]; then
+ echo "== v0 && -lt 15";
+ echo "Compatibility: Adding to terraform arguments: -force";
+ force='-force';
+ else
+ extra_args+=" -auto-approve";
+ fi;
fi;
if [ -n "${build_id}" ]; then
diff --git a/bootstrap/.terraform-version b/bootstrap/.terraform-version
index c112f0e..9075be4 100644
--- a/bootstrap/.terraform-version
+++ b/bootstrap/.terraform-version
@@ -1 +1 @@
-latest:^0.11
+1.5.5
diff --git a/bootstrap/az_provider.tf b/bootstrap/az_provider.tf
deleted file mode 100644
index a0c677f..0000000
--- a/bootstrap/az_provider.tf
+++ /dev/null
@@ -1,12 +0,0 @@
-# The default AWS provider in the default region
-provider "azurerm" {
- version = "=1.29.0"
-}
-
-provider "local" {
- version = "=1.2.2"
-}
-
-provider "template" {
- version = "=2.1.2"
-}
\ No newline at end of file
diff --git a/bootstrap/az_resource_groups.tf b/bootstrap/az_resource_groups.tf
deleted file mode 100644
index 66b3206..0000000
--- a/bootstrap/az_resource_groups.tf
+++ /dev/null
@@ -1,8 +0,0 @@
-resource "azurerm_resource_group" "tfstate" {
- name = "tfstate"
- location = "${var.region}"
-
- tags = {
- environment = "${var.environment}"
- }
-}
\ No newline at end of file
diff --git a/bootstrap/az_storage_account.tf b/bootstrap/az_storage_account.tf
deleted file mode 100644
index a6b2566..0000000
--- a/bootstrap/az_storage_account.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-resource "azurerm_storage_account" "tfstate" {
- depends_on = ["azurerm_resource_group.tfstate"]
-
- name = "${var.storage_account_name}"
- resource_group_name = "${azurerm_resource_group.tfstate.name}"
- location = "${azurerm_resource_group.tfstate.location}"
- account_tier = "Standard"
- account_replication_type = "LRS"
-}
\ No newline at end of file
diff --git a/bootstrap/az_storage_containers.tf b/bootstrap/az_storage_containers.tf
deleted file mode 100644
index 3218b9d..0000000
--- a/bootstrap/az_storage_containers.tf
+++ /dev/null
@@ -1,19 +0,0 @@
-# tfstate
-resource "azurerm_storage_container" "tfstate" {
- depends_on = ["azurerm_storage_account.tfstate"]
-
- name = "tfstate"
- resource_group_name = "${azurerm_resource_group.tfstate.name}"
- storage_account_name = "${azurerm_storage_account.tfstate.name}"
- container_access_type = "private"
-}
-
-# builds
-resource "azurerm_storage_container" "builds" {
- depends_on = ["azurerm_storage_account.tfstate"]
-
- name = "builds"
- resource_group_name = "${azurerm_resource_group.tfstate.name}"
- storage_account_name = "${azurerm_storage_account.tfstate.name}"
- container_access_type = "private"
-}
\ No newline at end of file
diff --git a/bootstrap/data_azure.tf b/bootstrap/data_azure.tf
new file mode 100644
index 0000000..cee07df
--- /dev/null
+++ b/bootstrap/data_azure.tf
@@ -0,0 +1 @@
+data "azurerm_client_config" "current" {}
diff --git a/bootstrap/key_vault_access_policy_storage.tf b/bootstrap/key_vault_access_policy_storage.tf
new file mode 100644
index 0000000..5b4ffa1
--- /dev/null
+++ b/bootstrap/key_vault_access_policy_storage.tf
@@ -0,0 +1,22 @@
+resource "azurerm_key_vault_access_policy" "storage" {
+ key_vault_id = azurerm_key_vault.tfstate.id
+ tenant_id = var.tenant
+ object_id = azurerm_storage_account.tfstate.identity[0].principal_id
+
+ key_permissions = [
+ "Get",
+ "Create",
+ "List",
+ "Restore",
+ "Recover",
+ "UnwrapKey",
+ "WrapKey",
+ "Purge",
+ "Encrypt",
+ "Decrypt",
+ "Sign",
+ "Verify"
+ ]
+
+ secret_permissions = [ "Get" ]
+}
diff --git a/bootstrap/key_vault_key_tfstate.tf b/bootstrap/key_vault_key_tfstate.tf
new file mode 100644
index 0000000..2660bd1
--- /dev/null
+++ b/bootstrap/key_vault_key_tfstate.tf
@@ -0,0 +1,19 @@
+resource "azurerm_key_vault_key" "tfstate" {
+ name = "${var.project}-${var.region}-tfstate-key"
+ key_vault_id = azurerm_key_vault.tfstate.id
+ key_type = "RSA"
+ key_size = 2048
+ key_opts = [
+ "decrypt",
+ "encrypt",
+ "sign",
+ "unwrapKey",
+ "verify",
+ "wrapKey",
+ ]
+
+ depends_on = [
+ azurerm_key_vault_access_policy.client,
+ azurerm_key_vault_access_policy.storage,
+ ]
+}
diff --git a/bootstrap/key_vault_tfstate.tf b/bootstrap/key_vault_tfstate.tf
new file mode 100644
index 0000000..4604381
--- /dev/null
+++ b/bootstrap/key_vault_tfstate.tf
@@ -0,0 +1,9 @@
+resource "azurerm_key_vault" "tfstate" {
+ name = "${var.project}-${var.region}-tfstate"
+ location = azurerm_resource_group.tfstate.location
+ resource_group_name = azurerm_resource_group.tfstate.name
+ tenant_id = var.tenant
+ sku_name = "standard"
+
+ purge_protection_enabled = true
+}
diff --git a/bootstrap/provider.tf b/bootstrap/provider.tf
new file mode 100644
index 0000000..18128ed
--- /dev/null
+++ b/bootstrap/provider.tf
@@ -0,0 +1,10 @@
+provider "azurerm" {
+ features {
+ resource_group {
+ # Setting this to true as that will become the default in v3, so if we set it
+ # now, it should make upgrading easier down the line
+ # https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs#prevent_deletion_if_contains_resources
+ prevent_deletion_if_contains_resources = true
+ }
+ }
+}
diff --git a/bootstrap/resource_group_tfstate.tf b/bootstrap/resource_group_tfstate.tf
new file mode 100644
index 0000000..abb6381
--- /dev/null
+++ b/bootstrap/resource_group_tfstate.tf
@@ -0,0 +1,8 @@
+resource "azurerm_resource_group" "tfstate" {
+ name = "${var.project}-${var.region}-tfstate"
+ location = var.region
+
+ tags = {
+ environment = var.environment
+ }
+}
diff --git a/bootstrap/storage_account_customer_managed_key_tfstate.tf b/bootstrap/storage_account_customer_managed_key_tfstate.tf
new file mode 100644
index 0000000..9647d34
--- /dev/null
+++ b/bootstrap/storage_account_customer_managed_key_tfstate.tf
@@ -0,0 +1,5 @@
+resource "azurerm_storage_account_customer_managed_key" "tfstate" {
+ storage_account_id = azurerm_storage_account.tfstate.id
+ key_vault_id = azurerm_key_vault.tfstate.id
+ key_name = azurerm_key_vault_key.tfstate.name
+}
diff --git a/bootstrap/storage_account_tfstate.tf b/bootstrap/storage_account_tfstate.tf
new file mode 100644
index 0000000..fd1824d
--- /dev/null
+++ b/bootstrap/storage_account_tfstate.tf
@@ -0,0 +1,13 @@
+resource "azurerm_storage_account" "tfstate" {
+ depends_on = [ azurerm_resource_group.tfstate ]
+
+ name = var.storage_account_name
+ resource_group_name = azurerm_resource_group.tfstate.name
+ location = azurerm_resource_group.tfstate.location
+ account_tier = "Standard"
+ account_replication_type = "LRS"
+
+ identity {
+ type = "SystemAssigned"
+ }
+}
diff --git a/bootstrap/storage_container_tfstate.tf b/bootstrap/storage_container_tfstate.tf
new file mode 100644
index 0000000..b9a93b6
--- /dev/null
+++ b/bootstrap/storage_container_tfstate.tf
@@ -0,0 +1,8 @@
+# tfstate
+resource "azurerm_storage_container" "tfstate" {
+ depends_on = [ azurerm_storage_account.tfstate ]
+
+ name = "${var.project}-${var.region}-tfstate"
+ storage_account_name = azurerm_storage_account.tfstate.name
+ container_access_type = "private"
+}
diff --git a/bootstrap/az_variables.tf b/bootstrap/variables.tf
similarity index 74%
rename from bootstrap/az_variables.tf
rename to bootstrap/variables.tf
index 7d7eac8..e64f38c 100644
--- a/bootstrap/az_variables.tf
+++ b/bootstrap/variables.tf
@@ -1,40 +1,35 @@
variable "project" {
- type = "string"
+ type = string
description = "The name of the Project we are bootstrapping terraformscaffold for"
}
variable "region" {
- type = "string"
+ type = string
description = "The AWS Region into which we are bootstrapping terraformscaffold"
}
variable "environment" {
- type = "string"
+ type = string
description = "The name of the environment for the bootstrapping process; which is always bootstrap"
}
variable "component" {
- type = "string"
+ type = string
description = "The name of the component for the bootstrapping process; which is always bootstrap"
default = "bootstrap"
}
variable "storage_account_name" {
- type = "string"
+ type = string
description = "The name to use for the terraformscaffold storage account"
}
variable "tenant" {
- type = "string"
+ type = string
description = "The tenant id"
}
-variable "app_id" {
- type = "string"
- description = "The service principal id"
-}
-
variable "subscription_id" {
- type = "string"
+ type = string
description = "The AZ Subscription ID into which we are bootstrapping terraformscaffold"
-}
\ No newline at end of file
+}
diff --git a/bootstrap/versions.tf b/bootstrap/versions.tf
new file mode 100644
index 0000000..7cef412
--- /dev/null
+++ b/bootstrap/versions.tf
@@ -0,0 +1,15 @@
+terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "=3.70.0"
+ }
+
+ local = {
+ source = "hashicorp/local"
+ version = "2.4.0"
+ }
+ }
+
+ required_version = ">= 1.0.10"
+}
diff --git a/components/keyvault/variables.tf b/components/keyvault/variables.tf
index c556eb5..f094a33 100644
--- a/components/keyvault/variables.tf
+++ b/components/keyvault/variables.tf
@@ -1,32 +1,44 @@
-# Define the variables that will be initialised in etc/{env,versions}__.tfvars...
-variable "environment" {
- type = "string"
- description = "The name of the environment"
-}
-
+##
+# Basic Required Variables for tfscaffold Components
+##
variable "project" {
- type = "string"
- description = "The name of the project"
+ type = string
+  description = "The name of the project"
}
variable "region" {
- type = "string"
- description = "The region the deployment should happen in"
+ type = string
+  description = "The Azure region the deployment should happen in"
}
-variable "tenant" {
- type = "string"
- description = "The tenant id"
+variable "environment" {
+ type = string
+  description = "The name of the environment"
}
-variable "app_id" {
- type = "string"
- description = "The service app id; this is its name"
+##
+# tfscaffold variables specific to this component
+##
+
+# This is the only primary variable to have its value defined as
+# a default within its declaration in this file, because the variable's
+# purpose is as an identifier unique to this component, rather
+# than to the environment from where all other variables come.
+
+variable "component" {
+  type        = string
+  description = "The variable encapsulating the name of this component"
+  default     = "keyvault"
+}
-variable "service_principal_object_id" {
- type = "string"
- description = "The service principal object id"
+
+##
+# Variables specific to the "keyvault" Component
+##
+variable "component" {
+ type = string
+ description = "The name of the component for the bootstrapping process; which is always bootstrap"
+ default = "bootstrap"
}
variable "password" {
@@ -37,4 +49,4 @@ variable "password" {
variable "passwordy_mcssl_passwordface" {
type = "string"
description = "The auto prompt for ssl certificate password"
-}
\ No newline at end of file
+}