
bug: node group min size reached (current: 2, min: 3) #7545

Open
meysam81 opened this issue Nov 30, 2024 · 3 comments
Labels
area/cluster-autoscaler · lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.)

Comments

@meysam81

Hey guys, I noticed that this if conditional is not correct; I have placed the log message in the title of this issue.

The gist of it is that the conditional should be reversed, meaning the size must be greater than or equal to minSize:

size >= minSize
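
To make it concrete, this is roughly the guard I had expected (a minimal sketch with illustrative names, not the actual cluster-autoscaler source):

```go
// Sketch of the check I expected (illustrative names, not the upstream code):
// a node group needs no corrective action as long as its current size is at
// or above the configured minimum.
func groupAtOrAboveMin(size, minSize int) bool {
	return size >= minSize
}
```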

@adrianmoisey
Member

/area cluster-autoscaler

@meysam81
Author

meysam81 commented Dec 1, 2024

This was most likely a misunderstanding on my part.

I see now that the function is named GetScaleDownCandidates, as in, the nodes that are candidates to be removed.

https://github.com/kubernetes/autoscaler/blob/61eae9e5022f2f53f1c80a7006ec875bae3f7581/cluster-autoscaler/processors/nodes/pre_filtering_processor.go#L44C46-L44C68
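
In other words, the check makes sense for scale-down: a node whose group is already at or below its minimum is simply skipped as a removal candidate, and that is exactly when the "node group min size reached" message from the title is logged. A simplified sketch of that pre-filtering idea (illustrative types and names, not the exact upstream source):

```go
package main

import "fmt"

// Illustrative types, not the cluster-autoscaler's real ones.
type NodeGroup struct {
	Name        string
	CurrentSize int
	MinSize     int
}

type Node struct {
	Name  string
	Group *NodeGroup
}

// getScaleDownCandidates sketches the pre-filtering step: nodes whose group is
// already at or below its minimum size are skipped as removal candidates.
func getScaleDownCandidates(nodes []*Node) []*Node {
	candidates := make([]*Node, 0, len(nodes))
	for _, node := range nodes {
		g := node.Group
		if g.CurrentSize <= g.MinSize {
			// This is where the message from the issue title is emitted.
			fmt.Printf("Skipping %s - node group min size reached (current: %d, min: %d)\n",
				node.Name, g.CurrentSize, g.MinSize)
			continue
		}
		candidates = append(candidates, node)
	}
	return candidates
}

func main() {
	group := &NodeGroup{Name: "pool-1", CurrentSize: 2, MinSize: 3}
	nodes := []*Node{{Name: "node-a", Group: group}, {Name: "node-b", Group: group}}
	fmt.Println("scale-down candidates:", len(getScaleDownCandidates(nodes)))
}
```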

My problem remains, though: the min=3 is never met.
I have a feeling this has to do with the Hetzner resource creation limit, though that can be ruled out, since a request to increase the limit had been submitted before this and was approved.

It remains a mystery to me for now!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 1, 2025