Support new topologySpread scheduling constraints #430
Comments
I'll work on this issue.
/assign
/remove-help
What is the current behaviour of Karpenter for these fields?
Currently Karpenter doesn't recognize either of them.
Hey everyone! Thanks for putting in the effort to support these new constraints! We just hit this exact issue in our clusters, where we are using one of these new fields. I was wondering if you currently have a timeline for getting this PR merged? It seems the code has been ready for some time and is just waiting for a rebase/performance test?
This has been backlogged for me for a little while, but I should have some bandwidth to get this wrapped up within the next week. Like you said, it really should only be performance testing and a rebase that's left at this point.
Amazing, thanks for the fast response. Really appreciate the work you are doing with Karpenter!
We are also really waiting for this feature.
Hello everyone! Could you please share the ETA for this feature?
Hi, the lack of this feature affects us as well. Is there anything we can do to help, e.g. test a custom build of Karpenter that supports this feature?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hi, any update on when this feature may be available?
I've observed different behavior, similar to what was described in this closed issue. In my case, I have a zonal topologySpreadConstraint. There are nodes for all three zones (A, B, C) in the cluster, but only nodes in zones A and B (not C) match a label that is also set as a nodeSelector on a particular Deployment. This label is set by Karpenter because it is included in the NodePools, and those NodePools are associated exclusively with an EC2NodeClass that can only provision nodes in zones A and B. The zonal topologySpreadConstraint has maxSkew=1 and whenUnsatisfiable=DoNotSchedule.

Suppose I scale this Deployment from 2 to 3 replicas, and assume replica 1 is in zone A and replica 2 is in zone B. The new replica should qualify for either zone A or zone B, and zone C should be disregarded because no existing or provisionable instance in zone C will match the required nodeSelector label. It appears that kube-scheduler understands this, but Karpenter does not. If there is available capacity in A or B, kube-scheduler will schedule the pod properly. If there is no available capacity in A or B, Karpenter should provision a new instance, but instead it emits an error event.

What's most confusing here is the difference in how kube-scheduler and Karpenter apply the topologySpreadConstraint. aws/karpenter-provider-aws#3397 proposed adding an explicit zonal affinity rule as a workaround, but this should not be required, since kube-scheduler does not require it. Many users will not realize that the additional affinity rule is required (only for Karpenter), leading to surprise operational issues.
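A minimal sketch of the kind of spec described above; the label key, label value, and image below are hypothetical placeholders, not values taken from this thread:

```yaml
# Hypothetical reproduction sketch: the nodeSelector label is assumed to be set
# by Karpenter only on NodePools that provision in zones A and B, while the
# cluster also contains zone C nodes without that label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      nodeSelector:
        example.com/workload-pool: zonal-ab   # hypothetical label applied via the NodePools
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: example-app
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9   # placeholder image
```

If I understand the scheduler behavior correctly, kube-scheduler excludes nodes that don't match the pod's nodeSelector/nodeAffinity when computing the spread skew (the default nodeAffinityPolicy is Honor), which is why zone C is ignored in its placement decision; the report above is that Karpenter's provisioning simulation does not apply the same filtering.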
Description
Observed Behavior:
Karpenter doesn't support the new topologySpreadConstraints fields.
These fields were introduced into beta in 1.27.
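As an illustration only (the issue body as captured here does not enumerate the fields, so treat this list as an assumption), the newer optional fields on topologySpreadConstraints look like this:

```yaml
# Illustrative sketch of newer topologySpreadConstraints fields; the exact set
# this issue refers to is not shown above, and availability depends on the
# Kubernetes version and feature gates in use.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: example-app
    minDomains: 3               # minimum number of eligible topology domains
    nodeAffinityPolicy: Honor   # whether nodeSelector/nodeAffinity filters nodes when computing skew
    nodeTaintsPolicy: Honor     # whether node taints filter nodes when computing skew
    matchLabelKeys:             # pod label keys merged into the effective labelSelector
      - pod-template-hash
```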
Expected Behavior:
Reproduction Steps (Please include YAML):
Versions:
kubectl version
):The text was updated successfully, but these errors were encountered: