Fix scaleQuantityProportionallyCPU function #8946
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.
Hi @iamzili. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: iamzili

The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
/ok-to-test
What type of PR is this?
/kind bug
/area vertical-pod-autoscaler
What this PR does / why we need it:
The current implementation of `scaleQuantityProportionallyCPU` is broken (possibly `scaleQuantityProportionallyMem` is broken as well, but I want to fix it in another PR).

When a `LimitRange` object with type `Pod` exists in the namespace, and the algorithm proportionally increases or decreases recommendations, the division silently loses precision: when remainders occur, the sum of the new recommendations does not satisfy the constraints defined in the `LimitRange`. For example (only the cpu is controlled):

- A `LimitRange` of type `Pod` defines `min.cpu` as 150m.
- The pod has two containers with CPU recommendations `c1`: 15m and `c2`: 100m (115m in total).
- `scaleQuantityProportionallyCPU(15m, 115m, 150m, noRounding)` returns 19m.
- `scaleQuantityProportionallyCPU(100m, 115m, 150m, noRounding)` returns 130m.

The sum of the new recommendations, 19m + 130m = 149m, is below the 150m minimum, which means that the pod is not going to be scheduled. The issue with `scaleQuantityProportionallyCPU` is that when the division executes, the implementation ignores any remainder.
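To make the truncation concrete, here is a minimal, self-contained sketch, assuming plain `int64` milli-values and a hypothetical helper name `scaleMilliTruncating` (the real function operates on `resource.Quantity` values; this is an illustration, not the upstream code):

```go
package main

import (
	"fmt"
	"math/big"
)

// scaleMilliTruncating mirrors the proportional-scaling formula
// recommendation * limitRangeTarget / recommendationTotal on milli-values.
// For the positive quantities involved here, big.Int.Div discards the
// remainder, which is exactly the bug described above.
func scaleMilliTruncating(recommendationMilli, totalMilli, targetMilli int64) int64 {
	scaled := new(big.Int).Mul(big.NewInt(recommendationMilli), big.NewInt(targetMilli))
	return scaled.Div(scaled, big.NewInt(totalMilli)).Int64()
}

func main() {
	c1 := scaleMilliTruncating(15, 115, 150)  // 2250 / 115 = 19.56... -> 19
	c2 := scaleMilliTruncating(100, 115, 150) // 15000 / 115 = 130.43... -> 130
	fmt.Println(c1, c2, c1+c2)                // 19 130 149: 1m short of the 150m minimum
}
```

Running this prints `19 130 149`, reproducing the example above: each container's share is truncated, and the truncated parts add up to a sum below the `LimitRange` minimum.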
Special notes for your reviewer:
The change removes the rounding parameter from the function signature because we only use the default behavior, `noRounding`. Previously, the function behaved as follows (a sketch of these modes is shown after the list):

- `noRounding`: if the formula produces `1234567890m`, the function returns `1234567890m`.
- `roundUpToFullUnit`: if the formula produces `1234567890m`, the function returns `1234568`.
- `roundDownToFullUnit`: if the formula produces `1234567890m`, the function returns `1234567`.
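For reference, a hypothetical sketch of those three modes applied to a result held in milli-units (the mode names come from this PR; the helper itself is illustrative, not the upstream implementation):

```go
package main

import "fmt"

type roundingMode int

const (
	noRounding roundingMode = iota
	roundUpToFullUnit
	roundDownToFullUnit
)

const milliPerUnit = 1000

// applyRounding illustrates the three behaviors listed above.
func applyRounding(milli int64, mode roundingMode) int64 {
	switch mode {
	case roundUpToFullUnit:
		return (milli + milliPerUnit - 1) / milliPerUnit // whole units, rounded up
	case roundDownToFullUnit:
		return milli / milliPerUnit // whole units, rounded down
	default: // noRounding keeps the milli-value as-is
		return milli
	}
}

func main() {
	fmt.Println(applyRounding(1234567890, noRounding))          // 1234567890 (milli)
	fmt.Println(applyRounding(1234567890, roundUpToFullUnit))   // 1234568 (full units)
	fmt.Println(applyRounding(1234567890, roundDownToFullUnit)) // 1234567 (full units)
}
```

Since only `noRounding` was ever used by callers, dropping the parameter simplifies the signature without changing behavior.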
Also, the diffs in `vertical-pod-autoscaler/pkg/utils/vpa/capping_test.go` are somewhat confusing: I removed one duplicated unit test from the file and added a completely new test.

Does this PR introduce a user-facing change?
Yes, but the change is subtle and may not impact users at all. With the new implementation of `scaleQuantityProportionallyCPU`, if the division produces a remainder, the function adds 1m to the quotient. Using the example specified earlier, the sum of the new recommendations ends up 1m higher (i.e. 151m) than the `min.cpu` defined in the `LimitRange`: from 19m we get 20m, and from 130m we get 131m, which results in 151m. So technically we waste 1m, but that shouldn't be a problem; even the `resource.Quantity` documentation states that 0.1m rounds up to 1m.
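A minimal sketch of the new behavior, again with a hypothetical `int64` helper rather than the exact upstream code: compute both quotient and remainder, and bump the quotient by 1m whenever a remainder exists, so the scaled sum can only meet or slightly exceed the target:

```go
package main

import (
	"fmt"
	"math/big"
)

// scaleMilliRoundingUp illustrates the fix: when the division leaves a
// remainder, add one milli-unit to the quotient so the result never
// undershoots the proportional target.
func scaleMilliRoundingUp(recommendationMilli, totalMilli, targetMilli int64) int64 {
	quo, rem := new(big.Int), new(big.Int)
	quo.DivMod(
		new(big.Int).Mul(big.NewInt(recommendationMilli), big.NewInt(targetMilli)),
		big.NewInt(totalMilli),
		rem,
	)
	if rem.Sign() != 0 {
		quo.Add(quo, big.NewInt(1)) // round up by 1m instead of dropping the remainder
	}
	return quo.Int64()
}

func main() {
	c1 := scaleMilliRoundingUp(15, 115, 150)  // 19 + 1m -> 20
	c2 := scaleMilliRoundingUp(100, 115, 150) // 130 + 1m -> 131
	fmt.Println(c1, c2, c1+c2)                // 20 131 151: now >= the 150m minimum
}
```

For what it's worth, rounding up is also the direction `resource.Quantity` itself leans: its `MilliValue()` accessor is documented to return ceil(value × 1000), consistent with sub-milli amounts rounding up to 1m.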