What happened?
I am currently simulating a large-scale cluster with Kwok to test Horizontal Pod Autoscaling (HPA). My setup simulates a large number of Pods whose resource usage fluctuates randomly within a defined range.
While configuring the ClusterResourceUsage expression, I noticed an issue where performing calculations on Quantity leads to incorrect results.
I have submitted a simple PR to address this issue.
What did you expect to happen?
The result of Quantity in the CEL expression should be correct.
How can we reproduce it (as minimally and precisely as possible)?
Normal Case
Directly specifying Quantity values in the ClusterResourceUsage CRD for easy reproduction:
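A minimal sketch of what such a configuration can look like (the metadata name, the quantity literals, and the availability of a `Quantity()` helper in the CEL environment are my assumptions, chosen to match the output below; the field layout follows the ClusterResourceUsage API):

```yaml
# Sketch only: the name, the literals, and the Quantity() helper
# are assumptions, not the exact config from my cluster.
apiVersion: kwok.x-k8s.io/v1alpha1
kind: ClusterResourceUsage
metadata:
  name: fake-pod-usage
spec:
  usages:
  - usage:
      cpu:
        expression: 'Quantity("1000m")'   # plain Quantity, no arithmetic
      memory:
        expression: 'Quantity("2048Mi")'
```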
Printing the resource usage of the Mock Pod:

```console
$ kubectl top pod
NAME                        CPU(cores)   MEMORY(bytes)
fake-pod-5c449fc8f7-2rt7z   1000m        2048Mi
```
When the expression in ClusterResourceUsage does not involve any operations on Quantity, the results are as expected.
Abnormal Case
Edit ClusterResourceUsage to use multiplication in the expressions, multiplying the Quantity by 1. The result should remain consistent with the normal case.
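A sketch of the modified expressions (as before, the name, the literals, and the `Quantity()` helper are my assumptions; the field layout follows the ClusterResourceUsage API):

```yaml
# Sketch only: multiplying by 1 should be a no-op, which makes the
# incorrect scaling easy to spot in `kubectl top`.
apiVersion: kwok.x-k8s.io/v1alpha1
kind: ClusterResourceUsage
metadata:
  name: fake-pod-usage
spec:
  usages:
  - usage:
      cpu:
        expression: 'Quantity("1000m") * 1'
      memory:
        expression: 'Quantity("2048Mi") * 1'
```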
Printing the resource usage of the Mock Pod:

```console
$ kubectl top pod
NAME                        CPU(cores)   MEMORY(bytes)
fake-pod-5c449fc8f7-2rt7z   10001m       -8796Mi
```
Observe that the CPU usage is ten times too large, and the memory value appears to have overflowed.
Anything else we need to know?
After consistently reproducing the problem, I read the Kwok source code and identified the issue in the newQuantityFromFloat64 function of quantity.go: the input float64 value v is multiplied by $10 \times 10^{9}$ rather than $10^{9}$. Since resource.Nano represents a scale of $10^{-9}$, the function returns a value 10 times greater than expected.
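The effect is easy to demonstrate in isolation. Here is a minimal sketch that reimplements the nano-scale conversion without k8s.io/apimachinery (the helper names buggyNano, fixedNano, and cores are mine, not kwok's):

```go
package main

import "fmt"

// A nano-scaled quantity stores value * 1e9 as an integer, mirroring
// resource.NewScaledQuantity(n, resource.Nano) where Nano = 10^-9.

// buggyNano mirrors the bug: an extra factor of 10 in the conversion.
func buggyNano(v float64) int64 { return int64(v * 10 * 1e9) }

// fixedNano scales by exactly 1e9, matching the Nano scale.
func fixedNano(v float64) int64 { return int64(v * 1e9) }

// cores converts a nano-scaled integer back to cores.
func cores(n int64) float64 { return float64(n) / 1e9 }

func main() {
	v := 1.0 // intended usage: 1 CPU core (1000m)
	fmt.Println(cores(buggyNano(v))) // 10 -- ten times too large
	fmt.Println(cores(fixedNano(v))) // 1
	// For large byte counts, the extra factor of 10 can also push the
	// nano-scaled integer past the int64 maximum, which would explain
	// the negative memory reading in the reproduction.
}
```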
Unfortunately, the unit tests did not catch this problem, as they performed the same incorrect calculation: in the TestResourceEvaluation function of evaluator_test.go, the CEL expression's result was compared to 18, a value that already contains the erroneous factor of 10.
I have submitted a simple PR to address this issue.
Kwok version
OS version