Capacity quota integration #8949
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: norbertcyran

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Hi @norbertcyran. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 5fc9ea2 to 92eece6.
Start refactoring Cluster Autoscaler's main to use Manager from controller-runtime. For now, that's needed to use the Client consumed by the CapacityQuota provider. In the future, main should be refactored so that the manager handles leader election, the metrics, healthcheck and pprof servers, and graceful shutdown. Additionally, this change allows us to write controller-runtime-style controllers.
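For illustration, here is a minimal sketch of this pattern using generic controller-runtime APIs; this is not code from the PR, just the usual shape of wrapping an entrypoint in a Manager so other components can share its client:

```go
// Minimal sketch (not this PR's code) of wrapping an entrypoint in a
// controller-runtime Manager so other components can share its client.
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New())

	// Build a Manager from the ambient REST config (kubeconfig or in-cluster).
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}

	// The Manager's cached client can be handed to components such as a
	// quota provider.
	_ = mgr.GetClient()

	// Start blocks, running the cache and any registered runnables, and
	// shuts down gracefully on SIGINT/SIGTERM.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```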
Force-pushed from 92eece6 to 2370dbb.
capacitybufferPodInjectionEnabled = flag.Bool("capacity-buffer-pod-injection-enabled", false, "Whether to enable pod list processor that processes ready capacity buffers and injects fake pods accordingly")
nodeRemovalLatencyTrackingEnabled = flag.Bool("node-removal-latency-tracking-enabled", false, "Whether to track latency from when an unneeded node is eligible for scale down until it is removed or needed again.")
maxNodeSkipEvalTimeTrackerEnabled = flag.Bool("max-node-skip-eval-time-tracker-enabled", false, "Whether to enable the tracking of the maximum time of node being skipped during ScaleDown")
capacityQuotasEnabled = flag.Bool("capacity-quotas-enabled", false, "Whether to enable CapacityQuota CRD support.")
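As a side note, these are standard Go flag declarations; a minimal sketch of how such a boolean gate is typically consumed follows. The wiring below is hypothetical, not this PR's code:

```go
// Hypothetical wiring, for illustration only: gate an optional provider
// behind a boolean flag declared with the standard flag package.
package main

import "flag"

var capacityQuotasEnabled = flag.Bool("capacity-quotas-enabled", false,
	"Whether to enable CapacityQuota CRD support.")

func main() {
	flag.Parse()

	if *capacityQuotasEnabled {
		// Construct and register the CapacityQuota provider here.
	}
}
```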
Seems like that's our common pattern for gating new features. Wouldn't it be better to introduce a k8s-like feature-gates flag? I think feature gates would be easier to clean up. Also, it would limit the number of flags we're introducing into CAS, which might be beneficial considering how many are already there.
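For reference, the suggestion could look roughly like this using the k8s.io/component-base featuregate package; the CapacityQuotas gate name here is illustrative, not something defined by this PR:

```go
// Sketch of a k8s-style feature gate, assuming the CapacityQuotas gate
// name purely for illustration.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const CapacityQuotas featuregate.Feature = "CapacityQuotas"

func main() {
	gate := featuregate.NewFeatureGate()
	if err := gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		CapacityQuotas: {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}

	// Equivalent to passing --feature-gates=CapacityQuotas=true on the CLI.
	if err := gate.Set("CapacityQuotas=true"); err != nil {
		panic(err)
	}

	fmt.Println(gate.Enabled(CapacityQuotas)) // true
}
```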
/ok-to-test

/assign BigDarkClown
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Integrated CapacityQuota CRD with the new resource quotas system. With the --capacity-quotas-enabled flag, CAS will respect quotas defined in CapacityQuota resources during scale-ups (see the illustrative sketch below). Additionally, wrapped the main entrypoint in a Manager from the controller-runtime library, so we can use that library's client and start an incremental refactor towards a more controller-based architecture.

Which issue(s) this PR fixes:
Part of #8703
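As a purely illustrative aside on the quota behavior described above: the QuotaProvider interface and cappedScaleUp helper below are hypothetical, not types from this PR; they only sketch what "respecting quotas during scale-ups" can mean.

```go
// Hypothetical illustration only: trim a desired node-count increase so it
// never exceeds what a quota still permits.
package quota

// QuotaProvider reports how much capacity a quota still allows
// (hypothetical interface, not from this PR).
type QuotaProvider interface {
	// RemainingNodes returns how many more nodes the quota permits
	// for the given node group.
	RemainingNodes(nodeGroup string) (int, error)
}

// cappedScaleUp caps the requested increase at the remaining quota.
func cappedScaleUp(p QuotaProvider, nodeGroup string, want int) (int, error) {
	left, err := p.RemainingNodes(nodeGroup)
	if err != nil {
		return 0, err
	}
	if want > left {
		return left, nil
	}
	return want, nil
}
```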
Special notes for your reviewer:
I've split the PR into 3 separate commits; I suggest reviewing them one by one.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: