implement distributed resizing #195
base: master
Conversation
Signed-off-by: Travis Glenn Hansen <[email protected]>
Welcome @travisghansen!
Hi @travisghansen. Thanks for your PR. I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Signed-off-by: Travis Glenn Hansen <[email protected]>
/ok-to-test
```diff
 		informerFactory,
-		driverName)
+		driverName,
+		*enableNodeDeployment)
```
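For context, here is a minimal sketch of how such a per-node mode is typically wired up. The flag name is modeled on external-provisioner's --node-deployment, and the NODE_NAME downward-API convention is an assumption; the PR's actual flag and constructor signature may differ.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// Hypothetical flag modeled on external-provisioner's --node-deployment;
// not necessarily the flag this PR defines.
var enableNodeDeployment = flag.Bool("node-deployment", false,
	"Run one sidecar per node and restrict it to volumes scheduled to that node.")

func main() {
	flag.Parse()
	if *enableNodeDeployment {
		// In per-node mode the sidecar must know which node it runs on,
		// typically injected via the downward API (assumption).
		nodeName := os.Getenv("NODE_NAME")
		if nodeName == "" {
			fmt.Fprintln(os.Stderr, "NODE_NAME must be set when -node-deployment is enabled")
			os.Exit(1)
		}
		fmt.Println("running in node-deployment mode on node", nodeName)
	}
	// ... build informers and the resize controller, passing
	// *enableNodeDeployment as in the diff above ...
}
```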
There is one potential scalability problem here: if there are now as many resizers running in the cluster as there are nodes, then the apiserver has to keep all of them informed about PV and PVC updates.
external-provisioner has the same problem, but it cannot really be avoided there because support for immediate provisioning depends on all provisioners seeing a new PVC.
I'm just wondering whether something better can be done for the other sidecars. The code below already checks for the util.VolumeSelectedNodeKey label. It would be possible to set up the PVC informer so that it does server-side filtering. It's doable, I just don't have a code example at hand.
That leaves the PV, though. external-provisioner would have to be modified to set a similar (the same?) label there. Such a label may even be useful for the provisioner. This code here does client-side filtering of PVs:
https://github.com/kubernetes-csi/external-provisioner/blob/3b752c36ca71fbf5d7310bb6a568b93844062189/pkg/controller/controller.go#L1132-L1145
That could be replaced with server-side filtering.
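To make the contrast concrete, here is a minimal sketch of the client-side pattern being described: every instance receives and caches every PV, and only then discards the foreign ones. The label key and lister wiring are illustrative assumptions; the label stands in for util.VolumeSelectedNodeKey.

```go
package filtering

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	listersv1 "k8s.io/client-go/listers/core/v1"
)

// relevantPVs illustrates client-side filtering: the informer cache holds
// every PV in the cluster, and each per-node instance throws away the ones
// belonging to other nodes. The apiserver has already paid the cost of
// sending them all. The label key is a hypothetical stand-in for
// util.VolumeSelectedNodeKey.
func relevantPVs(pvLister listersv1.PersistentVolumeLister, nodeName string) ([]*v1.PersistentVolume, error) {
	all, err := pvLister.List(labels.Everything())
	if err != nil {
		return nil, err
	}
	var mine []*v1.PersistentVolume
	for _, pv := range all {
		if pv.Labels["volume.kubernetes.io/selected-node"] == nodeName {
			mine = append(mine, pv)
		}
	}
	return mine, nil
}
```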
Here is code which sets up an informer with server-side filtering:
https://github.com/kubernetes-csi/external-provisioner/blob/3b752c36ca71fbf5d7310bb6a568b93844062189/cmd/csi-provisioner/csi-provisioner.go#L452-L464
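Adapted to this sidecar, the pattern from that link looks roughly like the sketch below: the label selector is attached to the informer's list/watch requests, so the apiserver never sends objects for other nodes. The label key is again a hypothetical stand-in for util.VolumeSelectedNodeKey.

```go
package filtering

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newNodeScopedFactory returns an informer factory whose list/watch requests
// carry a label selector, turning the filtering from client-side into
// server-side.
func newNodeScopedFactory(client kubernetes.Interface, nodeName string, resync time.Duration) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactoryWithOptions(
		client,
		resync,
		informers.WithTweakListOptions(func(options *metav1.ListOptions) {
			// Hypothetical label key; stands in for util.VolumeSelectedNodeKey.
			options.LabelSelector = "volume.kubernetes.io/selected-node=" + nodeName
		}),
	)
}
```

Informers created from such a factory, e.g. factory.Core().V1().PersistentVolumeClaims(), would then only ever see the node's own objects.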
I thought about that as well, but didn't want to 'kick against the pricks', as they say, given how the others were implemented. I also noted that we are not currently doing server-side filtering on the driver either (which I found odd).
Given my limited golang skill set I would definitely need hand-holding on the proper code to manage the watches/filters should we go that route.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
Any update about this PR? I patched it and it seems the …
I haven't had a chance to address the scalability concerns yet, unfortunately. It should work fine for small clusters, for sure.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
Waiting to use this feature.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Why couldn't we accomplish a similar thing using …
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR in response to the /close above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen
@travisghansen: Reopened this PR in response to the /reopen above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: travisghansen
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR in response to the /close above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen
@travisghansen: Reopened this PR in response to the /reopen above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the rules listed above.

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closed this PR in response to the /close above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen
@travisghansen: Reopened this PR in response to the /reopen above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@gnufied can we just get this over the hill? I understand there are scaling concerns, but not everyone is running clusters with 10k nodes. Let's just get it merged, and if/when scaling really becomes a problem we can sort out server-side filters that match the topology of the node. I have thought more about simply using the controller expand operation and decided it's not enough: controller expand doesn't include the same parameters (namely where the volume is mounted, etc.), so it's not really feasible to do proper expansion logic.
I think node expand is already invoked and works, no? I suppose you're suggesting that node expand invoke the controller bits as a sub-operation?
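For reference, the gap between the two CSI RPCs is visible in their request messages: per the CSI spec, NodeExpandVolumeRequest carries the mount path on the node, while ControllerExpandVolumeRequest does not. A minimal sketch (helper names invented for illustration):

```go
package expand

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// controllerExpand: the controller-side RPC has no notion of where (or
// whether) the volume is mounted on a node.
func controllerExpand(ctx context.Context, cc csi.ControllerClient, volumeID string, requiredBytes int64) error {
	_, err := cc.ControllerExpandVolume(ctx, &csi.ControllerExpandVolumeRequest{
		VolumeId:      volumeID,
		CapacityRange: &csi.CapacityRange{RequiredBytes: requiredBytes},
	})
	return err
}

// nodeExpand: the node-side RPC includes VolumePath (and optionally
// StagingTargetPath), i.e. exactly the parameters the comment above notes
// are missing from the controller call.
func nodeExpand(ctx context.Context, nc csi.NodeClient, volumeID, volumePath string, requiredBytes int64) error {
	_, err := nc.NodeExpandVolume(ctx, &csi.NodeExpandVolumeRequest{
		VolumeId:      volumeID,
		VolumePath:    volumePath,
		CapacityRange: &csi.CapacityRange{RequiredBytes: requiredBytes},
	})
	return err
}
```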
I've done some research and one problem is that the controller requires …
Signed-off-by: Travis Glenn Hansen <[email protected]>
What type of PR is this?
/kind feature
What this PR does / why we need it:
Add resizing support to distributed setups to complement the provisioner and snapshotter features of the same nature.
Which issue(s) this PR fixes:
Fixes #142
Special notes for your reviewer:
Does this PR introduce a user-facing change?: