Support for pod, CR resource type for benchmark Listing #3432
base: master
Conversation
Welcome @GunaKKIBM!
Hi @GunaKKIBM. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: GunaKKIBM
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Respective PR in test-infra: kubernetes/test-infra#35086
@mborsz @serathius @wojtek-t Please review this PR. If the changes look good for the Pod resource type, I can do the same for ClusterRole as well. Thanks!
Co-authored-by: Kishen Viswanathan <[email protected]>
This looks good. One thing to address: we would like to have control over object size, so we can test cases where pod objects are larger than a simple nginx pod. We could add it in another iteration.
  name: {{.Name}}
  labels:
    app: {{$group}}
spec:
The goal is to measure the cost of listing objects from the apiserver. Pods are different from configmaps in that they cannot be passively created without side effects: immediately after pod creation, K8s controllers will start processing them to get a container running, the scheduler will assign a node, the Kubelet will pull the image and start updating status, etc. We would like to avoid that.
Could you propose some way to avoid those pods being processed? For example, adding a node selector that prevents scheduling, or some other way to disable the scheduler?
After checking how and where perfdash is deployed, I ended up at this link: https://github.com/kubernetes/k8s.io/blob/main/running-in-community-clusters.md
Adding node affinity/anti-affinity didn't seem like the right option, so I chose to set a schedulerName instead, which is simpler.
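For illustration, a minimal sketch of the idea (not the exact template from this PR): pointing `spec.schedulerName` at a scheduler that does not exist means no scheduler ever binds the pod, so it stays Pending without kubelet or controller activity. The scheduler name and image below are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: {{.Name}}
  labels:
    app: {{$group}}
spec:
  # No scheduler with this name runs in the cluster, so the pod is never
  # bound to a node and remains Pending without side effects.
  schedulerName: no-op-scheduler
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
```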
Sounds good
Just wanted to get some clarification here: object size maps to the CPU and memory requests for the pod, right? If so, I can add arguments in the job to make it configurable.
No, the size of the manifest in bytes. See the configmap case; we just load random bytes.
The simplest way I could think of to increase the pod manifest size was to set an env variable whose value is random data of configurable size. I have updated the PR accordingly, please take a look.
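A minimal sketch of that approach, assuming a template parameter (here called `PayloadSize`) controls how many random bytes are generated; the `RandData` helper name is hypothetical, not necessarily the function used in this PR:

```yaml
spec:
  schedulerName: no-op-scheduler
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    env:
    # PAYLOAD carries random data of configurable size, inflating the
    # stored manifest to the desired size in bytes.
    - name: PAYLOAD
      value: "{{RandData .PayloadSize}}"
```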
Will take a look tomorrow.
Running the test locally fails for me with:
Please follow https://github.com/kubernetes/perf-tests/tree/master/clusterloader2/testing/list to test it locally yourself.
It was an issue with the brackets where podsNumber is defined; fixed now. Thank you!
Great find! I totally missed that.
/ok-to-test
namePrefix: "list-configmaps-"
replicas: 0

- name: Create pods
I think you should delete the configmaps and wait a couple of minutes, so that they are not counted in the memory usage when collecting metrics for pods.
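A minimal sketch of what that could look like, using the generic clusterloader2 phase/measurement syntax rather than this test's exact module parameters; re-declaring the configmap bundle with `replicasPerNamespace: 0` scales it to zero (deleting the objects), and the built-in `Sleep` measurement provides the wait. The names and paths below are assumptions:

```yaml
steps:
- name: Delete configmaps
  phases:
  - namespaceRange:
      min: 1
      max: 1
    replicasPerNamespace: 0   # scaling the bundle to zero deletes the objects
    tuningSet: Sequence
    objectBundle:
    - basename: list-configmaps
      objectTemplatePath: configmap.yaml
- name: Wait for memory usage to settle
  measurements:
  - Identifier: sleep
    Method: Sleep
    Params:
      duration: 5m
```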
I might need a little help with some reference or example within this repo. Does the action "delete" help? I tried looking into existing examples, but I'm not quite sure.
@mborsz, could you please help me here?
- module: *startMeasurements
- name: Wait 5 minutes
  measurements: *waitMeasurements
- module: *gatherMeasurements
Doing measurements this way will overwrite the file with the configmap measurements. We need to put them in separate files.
cc @mborsz for ideas.
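One possible direction, a sketch only: if the result file name is derived from the measurement `Identifier`, giving the pod run its own identifiers should keep its output separate from the configmap files. The identifier below is made up; `APIResponsivenessPrometheus` is an existing clusterloader2 measurement method.

```yaml
- name: Start pod list measurements
  measurements:
  # A distinct Identifier makes the results land in a separate file
  # instead of overwriting the configmap measurements.
  - Identifier: APIResponsivenessPods
    Method: APIResponsivenessPrometheus
    Params:
      action: start
```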
What type of PR is this?
/kind feature
What this PR does / why we need it:
Adds support for benchmarking list requests for pods and CRs.
Which issue(s) this PR fixes:
kubernetes/kubernetes#130169
Special notes for your reviewer:
A couple of things that aren't covered in this PR: