Conversation

Contributor
@44past4 44past4 commented Dec 11, 2025

/sig scheduling

@k8s-ci-robot added the sig/scheduling label Dec 11, 2025
@k8s-ci-robot added the cncf-cla: yes label Dec 11, 2025
@k8s-ci-robot added the kind/kep and size/XXL labels Dec 11, 2025
Contributor Author
44past4 commented Dec 11, 2025

/cc @wojtek-t @erictune @johnbelamaric

type DRAConstraint struct {
// ResourceClaimName specifies the name of a specific ResourceClaim
// within the PodGroup's pods that this constraint applies to.
ResourceClaimName *string
Member

What does ResourceClaimName mean if a given PodGroup is replicated (there are multiple podgroup instances/replicas)?

This would effectively mean sharing the same RC across multiple instances, which in many cases would be highly misleading.
However, there arguably can be use cases for it too, but then the algorithm should effectively consider all podgroup instances in a single round, and for that we don't even know how many groups we have.
@macsko - FYI (as this is slightly colliding with the kep-4671 update)

So thinking about that more, I'm wondering if we can introduce that without further enhancing the API now (i.e. adding the replicas field to PodGroup).

Another alternative would be to very explicitly split the pod-group-replica constraints from the constraints across all pod-group-replicas and (at least for Alpha) focus only on the former.
So something more like (exact names and structures to be refined):

type PodGroupAllReplicasSchedulingConstraints struct {
  ResourceClaimName *string // This one is supported only if Replicas=1.
}

type PodGroupReplicaSchedulingConstraints struct {
  ResourceClaimTemplateName *string // A separate RC is created from this template for every replica.
}

Contributor Author

If the PodGroup is replicated, the meaning of ResourceClaimName will depend on whether we schedule those replicas together or not. If they are scheduled separately, scheduling the first replica will lock the referenced ResourceClaim, and the subsequent replicas will have no freedom in its allocation - there will be only one possible placement for them. When scheduling multiple replicas at once, we can try to choose a DRA allocation which allows us to schedule the highest number of replicas (assuming that we do not provide all-or-nothing semantics across multiple replicas).

Member

I wasn't asking about the implementation aspect.
I wanted to take a step back and understand what the actual use case is that we're trying to address, and figure out if/how we should represent it to make it intuitive to users when they have a replicated PodGroup. I feel that the API as currently described can be pretty confusing in this case.

Contributor Author

I agree that the proposed API might have been confusing. Because of this, as suggested in some other comments, I have decided to move the DRA-aware scheduling implementation to the beta of this feature and wait for KEP-5729 (DRA: ResourceClaim Support for Workloads) to define the required API for PodGroup-level ResourceClaims. I hope that this will make the alpha scope of this feature clearer.


// ResourceClaimTemplateName specifies the name of a ResourceClaimTemplate.
// This applies to all ResourceClaim instances generated from this template.
ResourceClaimTemplateName *string
Member

Who creates and manages the lifecycle of the RC created from that template?

Contributor Author

Here we are assuming that the lifecycle of the RC is managed outside of kube-scheduler. One option is to have it managed by the specific workload controller, for instance LeaderWorkerSet, which could create an RC when creating a new replica. This would be very inconvenient, so we should probably have a single controller which could do this just by watching Workload objects. We had a discussion with @johnbelamaric about this. Either way, this should be outside of the scope of this KEP.

Copy link

The lifecycle is what I plan to address in #5729.

Member

OK - so this matches my thinking.

But the primary question now is - why do we need it then?
If we have some external entity (whether it's a dedicated controller or e.g. the LWS controller) that will create the RC whenever it is needed (it should create it before we actually do the scheduling), then what the scheduler really needs to be aware of, and what is an input for it, is that RC (for which it will be finding the best allocation), not the template itself. It doesn't care about the template.

So I think we're aligned on the intention, but I don't really understand how that will be used.

Contributor Author

I have updated the KEP and removed the support for DRA-based constraints from the alpha version; for beta I have proposed to wait for KEP-5729 (DRA: ResourceClaim Support for Workloads) to define the required API and lifecycle for PodGroup-level ResourceClaims.

// PodGroupInfo holds information about a specific PodGroup within a Workload,
// including a reference to the Workload, the PodGroup's name, and its replica index.
// This struct is designed to be extensible with more fields in the future.
type PodGroupInfo struct {
Member

PodGroupInfo was already introduced in the scheduler as part of the initial gang-scheduling implementation. However, that one is focused on the pods and their state:
https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/backend/workloadmanager/podgroupinfo.go#L52

Do you suggest reusing that structure or creating a second one?
@macsko

Contributor Author

The existing PodGroupInfo is much closer to the PodSetInfo proposed below, so if we consider using it we should probably rename the proposed PodGroupInfo to something like PodGroupMetadata or WorkloadPodGroupReference.

Member

It's closer regarding what kind of information it keeps, but the granularity is different (it may contain pods of different signatures).

So my point is - we need to align that. Having two different things with the same name will be pretty misleading.

// PodSetAssignment represents the assignment of pods to nodes within a PodSet for a specific Placement.
type PodSetAssignment struct {
// PodToNodeMap maps a Pod name (string) to a Node name (string).
PodToNodeMap map[string]string
Member

Do we need DRA assignments too?

Contributor Author

This is a good question. We might need them for the PodGroup pods binding phase, which comes after the selection of the placement for a PodGroup has finished. So, provided that we can capture those when we are checking the placement feasibility, then yes, we should have DRA assignments here as well.
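
To make this concrete, here is a minimal sketch of what such an extended assignment could look like. The DRAAllocations field, its keying by ResourceClaim name, and the DRA API version are illustrative assumptions, not part of the proposal:

import (
  resourceapi "k8s.io/api/resource/v1beta1" // or whichever DRA API version is in use
)

// PodSetAssignment represents the assignment of pods to nodes within a PodSet for a specific Placement.
type PodSetAssignment struct {
  // PodToNodeMap maps a Pod name (string) to a Node name (string).
  PodToNodeMap map[string]string

  // DRAAllocations (hypothetical) maps a ResourceClaim name to the AllocationResult
  // computed while checking the placement's feasibility, so the binding phase can
  // reuse it instead of recomputing the allocation.
  DRAAllocations map[string]*resourceapi.AllocationResult
}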

// DRA's AllocationResult from DRAAllocations.
// All pods within the PodSet, when being evaluated against this Placement,
// are restricted to the nodes matching this NodeAffinity.
NodeAffinity *corev1.NodeAffinity
Member

Implementation detail - given a NodeAffinity, finding the nodes that match it is an O(N) operation, with N being the number of all nodes in the cluster. So together with the NodeAffinity here, we should probably also store the exact list of nodes to avoid recomputing it over and over again.
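
A minimal sketch of that caching idea, assuming the Placement struct can carry one extra field (NodeNames is an illustrative name, not part of the proposed API):

import corev1 "k8s.io/api/core/v1"

type Placement struct {
  // All pods within the PodSet, when being evaluated against this Placement,
  // are restricted to the nodes matching this NodeAffinity.
  NodeAffinity *corev1.NodeAffinity

  // NodeNames (hypothetical) caches the names of the nodes that matched NodeAffinity
  // when the Placement was generated, so feasibility checks and scoring do not have
  // to re-evaluate the affinity against every node in the cluster.
  NodeNames []string
}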

Block -> Rack). This would involve iterative placement generation and a
Parent field in the Placement struct.

4. **Pod Group Replicas Support:** Optimizing scheduling for identical
Member

The way it's described, it's not really pod group replicas support - we need to support pod group replicas from the very beginning.
What you're saying here is that we can optimize the scheduling latency for them, right?

Contributor Author

Yes, this is correct. Replicas will be supported, but they will be scheduled one by one. With some changes to the algorithm we can try to optimize this process, but this is out of scope for this KEP. I will rephrase it to make it clear.

Comment on lines 265 to 271
// ResourceClaimName specifies the name of a specific ResourceClaim
// within the PodGroup's pods that this constraint applies to.
ResourceClaimName *string

// ResourceClaimTemplateName specifies the name of a ResourceClaimTemplate.
// This applies to all ResourceClaim instances generated from this template.
ResourceClaimTemplateName *string
Copy link

How do these fields relate to the ResourceClaim references that Pods already have? What happens if the sets of claims referenced by a Workload and its Pods are different?

Member

+1 to this question, it needs to be answered here

Contributor Author

I have updated the KEP and removed the support for DRA-based constraints from the alpha version; for beta I have proposed to wait for KEP-5729 (DRA: ResourceClaim Support for Workloads) to define the required API and lifecycle for PodGroup-level ResourceClaims.

type PodGroupSchedulingConstraints struct {
// TopologyConstraints specifies desired topological placements for all pods
// within this PodGroup.
TopologyConstraints []TopologyConstraint
Member

Do multiple topology constraints actually make sense here? What would be the use case for them?

Contributor Author

There are 2 main use cases for defining multiple topology constraints which I can see right now (see the sketch after this list):

  • when node label values are not unique among all nodes - for instance, racks have indexes which are unique only within a given block - in this case we would like to be able to provide both of those labels as required constraints

  • when some constraints are optional / best-effort - this would require introducing another field to TopologyConstraint which would allow marking a given constraint as optional / best-effort.
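
For illustration, a hedged sketch of the first use case, assuming TopologyConstraint exposes the node label key through a field such as TopologyKey (both the field name and the label keys below are assumptions, not the KEP's final API):

constraints := PodGroupSchedulingConstraints{
  TopologyConstraints: []TopologyConstraint{
    {TopologyKey: "example.com/block"}, // block IDs are unique across the cluster
    {TopologyKey: "example.com/rack"},  // rack IDs are unique only within a block
  },
}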

Member

For the latter I would actually expect a "TopologyPreferences" field (or something like that), so that one doesn't convince me.

But the first use case is interesting - I would actually mention it in the KEP explicitly.

Maybe we should also explicitly mention in the API comment that in the huge majority of cases we expect exactly 1 item in this list, and mention this example as a potential exception.


// DRAConstraints specifies constraints on how Dynamic Resources are allocated
// across the PodGroup.
DRAConstraints []DRAConstraint
Member

Continuing my thoughts from other comments here.

The primary goals that we wanted to ensure with this KEP are:

  • building the foundations for TAS and having the first version of the algorithm
  • proving that the algorithm is compatible with both DRA and topology-based requirements

I think this KEP is achieving that.

However, the more I think about it, the more concerns I have about this kind of API. Up until now I thought that we could actually decouple and postpone the discussion of the lifecycle of pod-group-owned (or workload-owned) RCs, but some of my comments below already suggest it's not that clear and may influence the API.

So I actually started wondering whether (for the sake of faster and incremental progress) we shouldn't slightly revise the scope and goals of this KEP, in particular:

  • remove the "DRAConstraints" from the scope (and couple it with the lifecycle of the PodGroup/RC discussion we'll have in DRA: ResourceClaim Support for Workloads #5729 - @nojnhuh)
  • ensure that the proposal is compatible with DRA-based constraints at a lower level;
    namely, the scheduler should not really manage the lifecycle of RCs, and those RCs should just be an input to the scheduler (whether on the PodGroup level, the Workload level or some to-be-introduced level).
    So what if instead we prove that it works by simply:
  1. ensuring that some internal interface in the scheduler (or maybe a scheduler-framework level one?) can actually accept RCs as an additional constraint to the WorkloadCycle
  2. adding a test at that level that scheduling works if we pass topology constraints as RCs

That would allow us to decouple the core of the changes in this KEP from all the discussions about how to represent it in the API, how it is coupled with lifecycle, etc., and hopefully unblock this KEP much faster while still proving the core of what we need.

@johnbelamaric @erictune @44past4 @dom4ha @sanposhiho @macsko - for your thoughts too

Member
@johnbelamaric johnbelamaric Dec 15, 2025

I think that makes sense. Decoupling can help execution. We would treat the lifecycle and allocation of RCs in #5729. Allocation implies the constraint. #5194 should also merge with #5729, I think. It was conceived prior to the existence of the Workload API and I think #5729 encompasses a more holistic set of functionality.

Contributor

Agree with decoupling.

It is possible to implement #5729 without #5732.
Even if we only implement one of the two for 1.36, we still learn something.

Contributor Author

I have updated the KEP and removed the support for DRA-based constraints from the alpha version; for beta I have proposed to wait for KEP-5729 (DRA: ResourceClaim Support for Workloads) to define the required API and lifecycle for PodGroup-level ResourceClaims.

@sanposhiho
Member

/assign

I have limited bandwidth these days, but I will take a look at this one for sure.

Contributor
@erictune erictune left a comment

Looks great overall!

- **State:** Temporarily assigns AllocationResults to ResourceClaims during
the Assume phase.

**PlacementBinPackingPlugin (New)** Implements `PlacementScorer`. Scores
Contributor

I think this plugin can prevent the current PodGroup from fragmenting larger Levels, but it cannot prevent the current PodGroup from fragmenting smaller levels. If the current podgroup uses fewer than all the nodes in this Placement, then there could be multiple podsAssignment options, and different options may have different fragmentation effects. Since pod-at-a-time scheduling within the Placement is greedy, we won't consider multiple podsAssignment options.

It's not clear to me that you can influence this enough using the per-pod Score plugins.

Contributor Author

This is an important problem, but in order to define what fragmentation of the lower/smaller levels even means, we need at least two topology levels defined for a PodGroup - a lower/smaller one, which is a preferred/best-effort placement for the PodGroup, and a higher/larger one, which is a required placement for the PodGroup.

This is not in the scope of this KEP.

That being said, when we add support for multiple levels to address the problem of lower/smaller level fragmentation, we will need to solve two subproblems - scheduling pods within a higher/larger placement, and scoring higher/larger placements.

When it comes to scheduling, while generating potential placements we can keep track of placements and their sub-placements. When we go through all lower/smaller placements, we can record the number of pods which we were able to schedule in each of them. We can also extend the scoring function so that it works with partial placements (placements which do not contain all pods within a PodGroup), so for each lower/smaller placement we also get its score together with the number of pods which we were able to fit into it. While checking the higher/larger placements, instead of simply going pod by pod and checking all nodes within the placement, for PodGroups which have only one PodSet we can:

  • Check if there is any lower/smaller sub-placement which can fit the remaining pods from the PodGroup.
  • If there are such lower/smaller sub-placements, select the one which can fit the fewest pods and has the highest score, and try to schedule there as many of the remaining pods as possible.
  • If none of the lower/smaller sub-placements can fit all remaining pods, choose the lower/smaller sub-placement which can fit the highest number of pods and has the highest score, try to schedule there as many of the remaining pods as possible, and repeat this process.

This should lead to a pod assignment which uses as few sub-placements as possible.

When it comes to scoring those higher/larger placements, we can extend the PodGroupAssignment struct to contain information about the number of sub-placements used by a given placement and their scores. This information could be used instead of the normal bin-packing logic to score such placements.

Apart from the API to define the multi-level placements, all proposed interfaces should be able to support this logic, but their implementation may need to change. All of this should be considered in the future KEP for multi-level scheduling support.
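
As a rough illustration of the greedy sub-placement selection sketched above, under the assumption of hypothetical SubPlacement, Capacity, and Score names (none of these are proposed API), the idea could look roughly like this:

// SubPlacement is a hypothetical summary of a lower-level placement: how many of
// the remaining pods it can still fit and the score of that (partial) placement.
type SubPlacement struct {
  Name     string
  Capacity int
  Score    float64
}

// pickSubPlacements greedily chooses sub-placements for the remaining pods of a
// PodGroup, trying to touch as few sub-placements as possible: if any candidate
// fits everything that is left, take the smallest such one (highest score breaks
// ties); otherwise take the largest one and repeat with what remains.
func pickSubPlacements(candidates []SubPlacement, remaining int) []SubPlacement {
  var chosen []SubPlacement
  for remaining > 0 && len(candidates) > 0 {
    best := 0
    for idx := 1; idx < len(candidates); idx++ {
      c, b := candidates[idx], candidates[best]
      switch {
      case c.Capacity >= remaining && b.Capacity < remaining:
        best = idx // a candidate that fits all remaining pods beats one that does not
      case c.Capacity >= remaining:
        if c.Capacity < b.Capacity || (c.Capacity == b.Capacity && c.Score > b.Score) {
          best = idx // among full fits, prefer the smallest, then the highest score
        }
      case b.Capacity < remaining:
        if c.Capacity > b.Capacity || (c.Capacity == b.Capacity && c.Score > b.Score) {
          best = idx // no full fit yet: prefer the largest, then the highest score
        }
      }
    }
    chosen = append(chosen, candidates[best])
    remaining -= candidates[best].Capacity
    candidates = append(candidates[:best], candidates[best+1:]...)
  }
  return chosen
}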

Contributor

It is not necessary to have two levels on one PodGroup for this to be a problem. It is only necessary that unrelated PodGroups can have different levels. It is also sufficient to have a PodGroup asking for 1 level, and some plain pods (e.g. from a Deployment).

We will need some place to score a placement's effect on each Level that it touches.

Member

+1 to Eric
I think two-level scheduling is a separate problem; we don't need it at all to talk about fragmentation.

The simplest example is the following:

topology:
  superblock
    block: node1, node2
    block: node3, node4

  1. workload 1 that requests superblock and has 2 pods
  2. workload 2 that requests block and has 2 pods

If we give workload1 node1 and node3 (which is a valid placement), then we no longer have a full block for workload2.

That being said, I would like to decouple two things:

  • whether the proposed framework plugins enable addressing the problem
  • having appropriate plugins with logic that address that

I want this KEP to be focused on the first one and keep the second for a follow-up.

I think that, given that ScorePlacement includes PodPlacement, it is in a position to assess the fragmentation resulting from that placement. So the missing bit is how to generate the best placements, and it sounds to me that we can do that in a follow-up.

Name() string

// GeneratePlacements generates a list of potential Placements for the given PodGroup and PodSet.
// Each Placement represents a candidate set of resources (e.g., nodes matching a selector)
Contributor

Consider saying that the GeneratePlacements interface does not have any compatibility guarantees across versions. If/when we later add Prioritized Placement Scheduling, or Multi-level Scheduling Constraints, we will want to change GeneratePlacements.


5. **Explicit Topology Definition:** Using a Custom Resource (NodeTopology) to
define and alias topology levels, removing the need for users to know exact
node label keys.
Contributor
@erictune erictune Dec 15, 2025

Explicit Topology Information also provided these things:

  • An explicit total order on levels within one Topology object (needed for Multi-level and Prioritized Placement Scheduling)
  • An implicit label hierarchy requirement
    • A level n label's nodes must be a subset of only one level n+1 label.
    • Useful for Multi-level placement, and for the hierarchical aggregated capacity optimization.
  • A way to limit the number of levels
    • Limit by validating the list length in a Topology object.
    • Limiting levels limits one term of algorithm complexity.
  • A way to discourage creation of too many Topology objects
    • Only admins or cloud providers should create these usually.

Taken together, these properties make it easier to avoid the case where there are many more TAS-relevant labels (key/value pairs) than there are nodes.

Also, while the initial algorithm is going to be greedy, in the sense that it examines one workload at a time, future algorithms may want to examine multiple workloads at once to find jointly optimal placements. By allowing excess complexity in the structure of topology labels at the outset, we will limit our ability to do future global optimizations.

I think it is fine to leave Explicit Topology Definition out of Alpha. However, before GA, we should either have a beta Explicit Topology Definition, or have documented requirements for (1) the maximum number of label keys used for TAS, (2) a partial order requirement over all TAS keys, and (3) a nesting requirement for TAS labels.

Otherwise, it will be hard to enforce those later.

Contributor

One more thing about explicit topology levels:

  • By defining levels, it is implied that we may wish to start a workload of a size which uses all nodes of a given level member (nodes with label level:value). I would say it is a statement that PodGroups with sizes equal to the size of a level member are going to be statistically more likely than other sizes. And it is therefore an implicit request to avoid fragmenting (partially allocating) all level members of any level.

Member

Playing a bit of devil's advocate - I'm not sure that all these arguments are convincing enough to me. In particular:

  1. we want to support DRA-based constraints eventually too, and these implicitly also imply certain topologies. They will not be defined by Topology definition anyway, so it will by definition cover only a subset of potential constraints. @johnbelamaric - for your thoughts too

  2. Nodes are not objects that arbitrary users can access (and thus add arbitrary labels to them). So we're effectively limited to labels that only cluster administrators can set anyway.

So despite the fact that I see potential benefits from having an explicit Topology definition, especially point (1) above makes me doubt that we will be able to fully utilize its consequences.

But assuming for now we will create the Topology object, doesn't that mean that the API for TopologyConstraint should actually be different and explicitly reference the Topology object?

Member

we want to support DRA-based constraints eventually too, and these implicitly also imply certain topologies. They will not be defined by Topology definition anyway, so it will by definition cover only a subset of potential constraints. @johnbelamaric - for your thoughts too

If I understand what you mean, I would argue that DRA devices can actually serve as the way to define these explicit topologies.


@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: 44past4
Once this PR has been reviewed and has the lgtm label, please ask for approval from sanposhiho. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Member
@wojtek-t wojtek-t left a comment

This is great now - I'm pretty much aligned with the proposal.

@wojtek-t wojtek-t self-assigned this Dec 18, 2025

- **Input:** PodGroupInfo.

- **Action:** Iterate over distinct values of the topology label (TAS) or
Member

So, this is where we would look for example at aggregate resource availability in each topology domain or DRA device to prune out ones that clearly have insufficient resources?
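
For example, a minimal sketch of that kind of pruning for CPU, assuming a hypothetical helper that receives the nodes of a candidate topology domain and the group's aggregate request (all names here are illustrative only):

import (
  corev1 "k8s.io/api/core/v1"
  "k8s.io/apimachinery/pkg/api/resource"
)

// domainCanFitCPU returns false when a topology domain's aggregated allocatable CPU
// is smaller than the PodGroup's total CPU request; such domains can be pruned before
// any per-pod feasibility checks. This is only an upper bound - per-node fit, other
// resources, and DRA devices would still be checked later.
func domainCanFitCPU(nodes []*corev1.Node, groupCPURequest resource.Quantity) bool {
  total := resource.NewQuantity(0, resource.DecimalSI)
  for _, node := range nodes {
    if cpu, ok := node.Status.Allocatable[corev1.ResourceCPU]; ok {
      total.Add(cpu)
    }
  }
  return total.Cmp(groupCPURequest) >= 0
}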


#### Phase 3: Placement Scoring and Selection

- **Action:** Call `ScorePlacement` for all feasible placements.
Member

What new information do we have here, such that we need to iterate over all of them again? Wouldn't we be able to publish a score at step 6 in phase 2?


- **Binding:** Proceed to bind pods to the assigned nodes and resources using
pod-by-pod scheduling logic with each pod prebound to the selected node
by seting `nominatedNodeName` value.
Member

s/seting/setting/


Comment on lines +446 to +448
- **Heterogeneous PodGroup Handling**: Sequential processing will be used
initially. Pods are processed sequentially; if any fail, the placement is
rejected.
Member

What if we have more pods than the gang's minCount? Shouldn't this phase just try to schedule as many pods from a pod group as possible (while fulfilling the minCount constraint), even if they are not consecutive?


- Feature implemented behind a feature flag.
- PodGroupSchedulingConstraints API defined.
- Basic topology (Node Label) and DRA constraints working.
Member

Move DRA constraints to beta?
