
RFE: Ability to restart vault on configmap change #748

Closed

TJM opened this issue Jun 16, 2022 · 4 comments · Fixed by #1001
Labels
enhancement (New feature or request), vault-server (Area: operation and usage of vault server in k8s)

Comments

@TJM

TJM commented Jun 16, 2022

Is your feature request related to a problem? Please describe.

When I change the configuration, such as adding "plugin_directory" to the config, the pods are not "notified" to restart.

Describe the solution you'd like

A common way to handle that is to use a sha256sum of the contents of the configmap as an annotation on the pod template. This provides a simple "notification" that the pods need to restart.

Describe alternatives you've considered

Some sort of post-upgrade hook?

Additional context

https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
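For reference, the annotation pattern from that Helm tip looks roughly like the following in a pod template (a sketch; the `/configmap.yaml` path and `checksum/config` key are the doc's generic example, not this chart's actual file names):

```yaml
# Generic Helm pattern: re-render the configmap template and hash it, so a config
# change yields a different pod-template annotation and rolls the workload.
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```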

@TJM TJM added the enhancement New feature or request label Jun 16, 2022
@TJM
Author

TJM commented Jun 16, 2022

NOTE: this may also be an issue with the statefulset (or my minikube), because kubectl rollout restart sts/vault is also not working :(

@tvoran tvoran added the vault-server Area: operation and usage of vault server in k8s label Jun 16, 2022
@tvoran
Member

tvoran commented Jun 16, 2022

Hi @TJM, indeed Vault server is deployed as a statefulset in this chart, but with a default updateStrategy of OnDelete. This is intentional as detailed in the Upgrading Vault on Kubernetes section of the docs.

You're free to change that by setting server.strategyType in the chart values.
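For example, a values override along these lines (a sketch using the value name mentioned above; verify the exact key against the chart's values.yaml):

```yaml
# Hypothetical values.yaml override: switch the server StatefulSet away from
# the default OnDelete update strategy so config changes roll pods automatically.
server:
  strategyType: RollingUpdate
```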

@tvoran tvoran closed this as completed Jun 16, 2022
@agill17

agill17 commented Jan 2, 2024

I think this is still a nice thing to have, while leaving the default updateStrategy as "OnDelete".

If the hash of the configmap changes, then the pod-template label will change too. When this happens, status.updateRevision will no longer match status.currentRevision in the server statefulset.
One could then use that status change to decide whether or not to roll the vault pods in a safe manner.

In server-statefulset.yaml, we could simply change this

 {{- toYaml .Values.server.extraLabels | nindent 8 -}}

To

{{- tpl (toYaml .Values.server.extraLabels) .  | nindent 8 -}}

This is generic enough to allow operators to take control of their label changes and take the necessary actions afterwards.
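With that change in place, an operator could, for example, template a config checksum into a pod label from their values file (a sketch; the template path and label key are illustrative, and the hash is truncated because label values are limited to 63 characters):

```yaml
# Hypothetical values.yaml: render the config checksum into a pod-template label so
# status.updateRevision diverges from status.currentRevision when the config changes.
server:
  extraLabels:
    config-checksum: '{{ include (print $.Template.BasePath "/server-config-configmap.yaml") . | sha256sum | trunc 63 }}'
```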

@tvoran
Member

tvoran commented Jan 4, 2024

Good point, @agill17, re-opening this.

@tvoran tvoran reopened this Jan 4, 2024
swenson pushed a commit that referenced this issue Mar 7, 2024
When updating the Vault config (and the
corresponding configmap), we now generate a
checksum of the config and set it as an annotation
on both the configmap and the Vault StatefulSet
pod template.

This allows the deployer to know which pods need to
be restarted to pick up a changed config.

We still recommend the standard upgrade
[method for Vault on Kubernetes](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-raft-deployment-guide#upgrading-vault-on-kubernetes),
i.e., using the `OnDelete` strategy
for the Vault StatefulSet: updating the config
and running a `helm upgrade` will not trigger the
pods to restart, and you then delete the pods one
at a time, starting with the standby pods.

With `kubectl` and `jq`, you can check which
pods need to be updated by comparing each pod's
annotation against the current configmap checksum:

```shell
kubectl get pods -o json | jq -r ".items[] | select(.metadata.annotations.\"config/checksum\" != $(kubectl get configmap vault-config -o json | jq '.metadata.annotations."config/checksum"') ) | .metadata.name"
```
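Any pods that command returns can then be deleted one at a time so the StatefulSet controller recreates them from the updated template (an illustrative sequence; pod names will vary):

```shell
# Delete a standby pod first; under OnDelete the controller only recreates pods
# that have been deleted, so this picks up the new config for that pod.
kubectl delete pod vault-1

# Wait for the recreated pod to rejoin and become Ready before deleting the next,
# leaving the active (leader) pod for last.
kubectl get pods -w
```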

Fixes #748.
swenson added a commit that referenced this issue Mar 18, 2024

(Same commit message as the Mar 7 commit above.)

Co-authored-by: Tom Proctor <[email protected]>