RFE: Ability to restart vault on configmap change #748
Comments
NOTE: this may also be an issue with the statefulset (or my minikube).
Hi @TJM, indeed the Vault server is deployed as a statefulset in this chart, but with a default updateStrategy of OnDelete. This is intentional, as detailed in the Upgrading Vault on Kubernetes section of the docs. You're free to change that by overriding the update strategy.
I think this is still a nice thing to have while leaving the default updateStrategy as OnDelete. If the hash of the configmap changes, then the pod-template label will change too. When this happens, status.updateRevision will no longer match status.currentRevision in the server statefulset. In server-statefulset.yaml, we could simply change the pod-template labels to include that hash. This is generic enough to let operators take control of their label changes and take the necessary actions afterwards.
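To see that drift in practice, you can compare the StatefulSet's revision fields directly; a minimal sketch with `kubectl`, assuming the StatefulSet is named `vault`:

```shell
# Compare the pod-template revision currently running with the latest one.
# The StatefulSet name "vault" is an assumption; adjust for your release.
kubectl get statefulset vault \
  -o jsonpath='current={.status.currentRevision} updated={.status.updateRevision}{"\n"}'

# With updateStrategy: OnDelete, a mismatch means the pod template (labels,
# annotations, etc.) has changed but the pods have not been recreated yet.
```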
Good point, @agill17, re-opening this.
When updating the Vault config (and the corresponding configmap), we now generate a checksum of the config and set it as an annotation on both the configmap and the Vault StatefulSet pod template. This allows the deployer to know which pods need to be restarted to pick up a changed config.

We still recommend the standard upgrade [method for Vault on Kubernetes](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-raft-deployment-guide#upgrading-vault-on-kubernetes), i.e., using the `OnDelete` strategy for the Vault StatefulSet, so updating the config and running `helm upgrade` will not trigger the pods to restart; the pods are then deleted one at a time, starting with the standby pods.

With `kubectl` and `jq`, you can check which pods still need to be updated by comparing each pod's annotation against the current configmap checksum:

```shell
kubectl get pods -o json | jq -r ".items[] | select(.metadata.annotations.\"config/checksum\" != $(kubectl get configmap vault-config -o json | jq '.metadata.annotations."config/checksum"') ) | .metadata.name"
```

Fixes #748.

Co-authored-by: Tom Proctor <[email protected]>
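Following the recommended `OnDelete` flow above, pods that still carry the old checksum can then be recreated one at a time, standbys first; a rough sketch, assuming a release named `vault` with three replicas, `vault-0` as the active node, and auto-unseal in place:

```shell
# Delete standby pods one at a time, waiting for each to come back Ready
# before moving on; the assumed active pod (vault-0) is deleted last.
# Without auto-unseal, unseal each new pod before it will report Ready.
kubectl delete pod vault-2
kubectl wait --for=condition=Ready pod/vault-2 --timeout=5m
kubectl delete pod vault-1
kubectl wait --for=condition=Ready pod/vault-1 --timeout=5m
kubectl delete pod vault-0
```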
Is your feature request related to a problem? Please describe.
When I change the configuration, for example to add "plugin_directory" to the config, nothing "notifies" the pods that they need to restart.
Describe the solution you'd like
A common way to handle that is to use a sha256sum of the contents of the configmap as an annotation on the pod. This allows for a simple "notification" that the pods need to restart.
Describe alternatives you've considered
Some sort of post-upgrade hook?
Additional context
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
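The linked tip keys a pod-template annotation to a hash of the rendered configmap, so any config change also changes the pod spec; a minimal sketch of that pattern (the template path follows the Helm docs example, not necessarily this chart's layout):

```yaml
# Sketch of the Helm docs pattern: hash the rendered configmap template and
# stamp it onto the pod template so a config change also changes the pod spec.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```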