In the configureconsul.sh file, replace the IP addresses listed with the hosts on which you're actually running Consul in server mode. Also replace the value of the "datacenter" key with the name of your data center, e.g. eastus.
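For reference, the relevant fragment of a Consul server configuration might look like the sketch below. The addresses and datacenter name are placeholders, and the key holding the server addresses (retry_join here) may be named differently in your generated file:

```json
{
  "server": true,
  "datacenter": "eastus",
  "retry_join": ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
}
```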
To install ONLY Consul on a server: don't run the scripts with "vault" in the name; just omit them.
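As a dry-run sketch, this is the filtering rule: skip anything with "vault" in the filename. The script names below are examples, not the exact list from the Vagrantfile:

```shell
# Hypothetical dry run: print which scripts a Consul-only host would execute.
plan=""
for script in account.sh prereqs.sh configureconsul.sh installvault.sh unsealvault.sh; do
  case "$script" in
    *vault*) plan="$plan skip:$script" ;;   # Vault-related: omitted on a Consul-only host
    *)       plan="$plan run:$script" ;;    # everything else runs as usual
  esac
done
echo "$plan"
```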
To install ONLY Vault on a server: there's no need to omit any scripts, but change the Consul configuration, replacing "server": true, with "server": false,.
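One way to make that edit is a one-line sed. The real config path depends on where configureconsul.sh writes it, so the sketch below works on a scratch copy:

```shell
# Sketch: flip "server": true to false so Consul runs as a client agent.
cfg=$(mktemp)    # stand-in for the real Consul config file path
printf '{\n  "server": true,\n  "datacenter": "eastus"\n}\n' > "$cfg"
sed -i 's/"server": true,/"server": false,/' "$cfg"
grep '"server"' "$cfg"
```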
If you want to install this in a production cluster, run the *.sh scripts named inside the Vagrantfile on each host, in the order they're listed, passing the listed args as bash arguments. For example, for server.vm.provision "shell", path: "account.sh", args: "consul-replicate", you would run # account.sh consul-replicate as root.
After running # account.sh consul-replicate, you would run the next listed .sh file, prereqs.sh, and so on.
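Each provision line maps directly to a command. A sketch of that mapping, using the example line from above:

```shell
# Extract path: and args: from one Vagrantfile provisioner line to get
# the equivalent command to run by hand as root.
line='server.vm.provision "shell", path: "account.sh", args: "consul-replicate"'
path=$(echo "$line" | sed -n 's/.*path: "\([^"]*\)".*/\1/p')
args=$(echo "$line" | sed -n 's/.*args: "\([^"]*\)".*/\1/p')
echo "./$path $args"   # i.e. run: ./account.sh consul-replicate
```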
Once the hosts are provisioned using the above instructions, send their host names and IP addresses to whoever will own and manage Vault from there. They'll then use the Vault API, via init.sh, to initialize the Vault and set up a cluster with it.
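Under the hood, initialization is a single call to Vault's /v1/sys/init endpoint, which init.sh presumably wraps. The address and the share/threshold counts below are assumptions; the sketch prints the call rather than executing it, since it needs a reachable, uninitialized Vault:

```shell
# Hypothetical: the HTTP call that initializes Vault.
addr="http://127.0.0.1:8200"                           # assumption: local Vault, no TLS
init_url="$addr/v1/sys/init"
payload='{"secret_shares": 5, "secret_threshold": 3}'  # assumption: 5 key shares, 3 to unseal
echo "curl -s -X POST -d '$payload' $init_url"         # printed, not executed, in this sketch
```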
Here's the content of the /etc/profile.d/vault.sh that I have on my latest Vagrant setup:
export VAULT_ADDR=http://127.0.0.1:8200 ## Add local Vault address to startup script
Yours could be this:
export VAULT_ADDR=http://vaultlb.mycorp.com ## Add Vault Load Balancer address to startup script
or
export VAULT_ADDR=https://hostfqdn:8200 ## Add Vault FQDN address to startup script
Without the environment variable, the Vault CLI defaults to https://127.0.0.1:8200.
Note: The above affects all users of the Linux system. If it's just one user, you can update ~/.bash_profile to the same effect.
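The per-user variant, sketched against a scratch file so nothing real is modified:

```shell
# Append the same export to a stand-in for ~/.bash_profile.
profile=$(mktemp)    # substitute "$HOME/.bash_profile" for real use
echo 'export VAULT_ADDR=https://hostfqdn:8200' >> "$profile"
grep VAULT_ADDR "$profile"
```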