
Better disk limit strategy #320

Open
rgaudin opened this issue Dec 5, 2024 · 4 comments
Labels
enhancement (New feature or request), question (Further information is requested)

Comments

rgaudin commented Dec 5, 2024

We currently rely solely on local hostPath volumes to store application data. App data therefore lives on the k8s nodes, and each application is tied to a specific node.

Because k8s is not meant to be used this way (it contradicts the core k8s principle of moving pods across nodes), k8s is very aggressive when disk pressure is detected (at around 90% disk usage, I think) and evicts the running pods.
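For context, the kubelet's default hard eviction threshold is `nodefs.available < 10%`, which matches the ~90% figure above. A sketch of a KubeletConfiguration that tunes those thresholds (the values here are illustrative, not our current settings):

```yaml
# Illustrative kubelet eviction settings, not our production config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  nodefs.available: "5%"      # hard-evict only when the node fs is nearly full
  imagefs.available: "10%"
evictionSoft:
  nodefs.available: "10%"     # soft signal fires earlier...
evictionSoftGracePeriod:
  nodefs.available: "2m"      # ...but pods get a grace period before eviction
```

Raising these would only buy headroom, not solve the underlying single-node coupling.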

Our strategy so far is:

  • have plenty of extra disk space on each node (based on expected/guessed data usage)
  • have an image prune policy to purge OCI images stuff when disk usage reaches a threshold
  • manually check disk space every week as part of the routine.
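The image prune policy in the list above can also be expressed through the kubelet's built-in image garbage collection; a sketch (the percentages are illustrative):

```yaml
# Illustrative kubelet image GC thresholds, not our current values.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 80  # start pruning unused images above 80% disk usage
imageGCLowThresholdPercent: 70   # keep pruning until usage drops below 70%
```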

Given that we accidentally triggered DiskPressure while restoring a backup the other day (k8s killed all pods, and the ingress did not restart for an unrelated reason), we should start discussing better strategies.

@rgaudin added the enhancement and question labels Dec 5, 2024
@rgaudin changed the title from Add disk limits to Better disk limit strategy Dec 5, 2024
benoit74 commented Dec 5, 2024

Quite important indeed, probably even urgent.

@siddheshwar-9897

We can replace the current hostPath-based storage with a distributed object storage solution like MinIO, which is S3-compatible and deployable within our Kubernetes cluster.
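For illustration, a minimal single-node MinIO deployment could look roughly like this (the image tag, credentials handling, and sizing are placeholders; note that applications would need to speak the S3 API rather than mount a filesystem):

```yaml
# Minimal single-node MinIO sketch, for illustration only.
# Root credentials are omitted here; a real deployment would set
# MINIO_ROOT_USER / MINIO_ROOT_PASSWORD from a Secret.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args: ["server", "/data", "--console-address", ":9001"]
          ports:
            - containerPort: 9000   # S3 API
            - containerPort: 9001   # web console
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

A single-replica MinIO still needs a PersistentVolume underneath it, so this moves rather than removes the storage question.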

kelson42 commented Apr 6, 2025

I see the following alternatives:

  • Object storage, via s3fs
  • Block storage, via a SAN
  • Network file system, via NFS or CIFS

Do we have possible solutions for all these alternatives?
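Of these, NFS maps most directly onto a built-in volume type; a minimal sketch of an NFS-backed PersistentVolume and claim (the server address and export path are placeholders):

```yaml
# Hypothetical NFS-backed PV/PVC pair; server and path are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany        # NFS allows mounting from multiple nodes
  nfs:
    server: nfs.example.internal
    path: /exports/app-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind statically to the PV above
  volumeName: app-data-nfs
  resources:
    requests:
      storage: 100Gi
```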

benoit74 commented Apr 7, 2025

None of these strategies is "k8s native"; I would prefer to look at something more common in the k8s ecosystem, unless we have strong arguments to believe these are better.
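For reference, the more k8s-native pattern would be a CSI-provisioned PersistentVolumeClaim behind a StorageClass, so pods stop referencing node-local paths entirely; a sketch (the `longhorn` class name is just one commonly cited in-cluster option, not a recommendation):

```yaml
# Sketch: dynamically provisioned app storage instead of hostPath.
# "longhorn" is a placeholder for whatever CSI StorageClass we would pick.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi      # an explicit per-app quota, unlike hostPath
```

A side benefit for this issue: dynamically provisioned PVCs carry explicit size requests, which gives us per-application disk limits rather than one shared node disk.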
