
Conversation


@ehearne-redhat ehearne-redhat commented Nov 28, 2025

What

  • Add dedicated service account to CVO and version pods in openshift-cluster-version namespace.
  • Add cluster role binding and attach dedicated service account to it.
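A minimal sketch of the shape these manifests take (illustrative only: the binding name here is made up; the cvo-dedicated-sa account name, the namespace, and the cluster-admin roleRef come from elsewhere in this PR's discussion and diff):

```yaml
# Sketch only -- not the PR's actual diff. The binding name is a
# hypothetical placeholder.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cvo-dedicated-sa
  namespace: openshift-cluster-version
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cvo-dedicated-sa-binding   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cvo-dedicated-sa
  namespace: openshift-cluster-version
```

The CVO deployment and version pod specs would then set serviceAccountName to the dedicated account instead of implicitly inheriting default.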

Why

  • Default service account should not be used on OpenShift components.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 28, 2025

coderabbitai bot commented Nov 28, 2025

Important

Review skipped

Auto reviews are limited based on label configuration.

🚫 Excluded labels (none allowed) (1)
  • do-not-merge/work-in-progress

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.


openshift-ci bot commented Nov 28, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ehearne-redhat
Once this PR has been reviewed and has the lgtm label, please assign wking for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ehearne-redhat
Author

/retest

@ehearne-redhat
Author

/test e2e-aws-ovn-upgrade


openshift-ci bot commented Nov 28, 2025

@ehearne-redhat: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

/test e2e-agnostic-operator
/test e2e-agnostic-operator-techpreview
/test e2e-agnostic-ovn
/test e2e-agnostic-ovn-upgrade-into-change
/test e2e-agnostic-ovn-upgrade-into-change-techpreview
/test e2e-agnostic-ovn-upgrade-out-of-change
/test e2e-agnostic-ovn-upgrade-out-of-change-techpreview
/test e2e-aws-ovn-techpreview
/test e2e-hypershift
/test e2e-hypershift-conformance
/test gofmt
/test images
/test lint
/test okd-scos-images
/test unit
/test verify-deps
/test verify-update
/test verify-yaml

The following commands are available to trigger optional jobs:

/test e2e-agnostic-operator-devpreview
/test e2e-extended-tests
/test okd-scos-e2e-aws-ovn

Use /test all to run the following jobs that were automatically triggered:

pull-ci-openshift-cluster-version-operator-main-e2e-agnostic-operator
pull-ci-openshift-cluster-version-operator-main-e2e-agnostic-ovn
pull-ci-openshift-cluster-version-operator-main-e2e-agnostic-ovn-upgrade-into-change
pull-ci-openshift-cluster-version-operator-main-e2e-agnostic-ovn-upgrade-out-of-change
pull-ci-openshift-cluster-version-operator-main-e2e-aws-ovn-techpreview
pull-ci-openshift-cluster-version-operator-main-e2e-hypershift
pull-ci-openshift-cluster-version-operator-main-e2e-hypershift-conformance
pull-ci-openshift-cluster-version-operator-main-gofmt
pull-ci-openshift-cluster-version-operator-main-images
pull-ci-openshift-cluster-version-operator-main-lint
pull-ci-openshift-cluster-version-operator-main-okd-scos-images
pull-ci-openshift-cluster-version-operator-main-unit
pull-ci-openshift-cluster-version-operator-main-verify-deps
pull-ci-openshift-cluster-version-operator-main-verify-update
pull-ci-openshift-cluster-version-operator-main-verify-yaml

In response to this:

/test e2e-aws-ovn-upgrade

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ehearne-redhat
Author

/test e2e-aws-ovn-techpreview

@ehearne-redhat
Author

/retest

Comment on lines 24 to 27
roleRef:
kind: ClusterRole
name: system:openshift:scc:privileged
apiGroup: rbac.authorization.k8s.io

@ehearne-redhat ehearne-redhat Nov 29, 2025


I'm unsure whether this is needed. I initially added it because I was looking at the wrong test (e2e-upgrade-into-change) and did not see the update-payload-dedicated-sa service account on the version- pod in the must-gather files. However, I did see it in e2e-upgrade-out-of-change.

I'm thinking it might be necessary, as the updatepayload pod carries the openshift.io/required-scc: privileged annotation. So I'll leave it for the reviewer to decide; I can also test without it if that suits.

@ehearne-redhat
Author

/retest

@ehearne-redhat ehearne-redhat changed the title WIP: add dedicated service account to crb, cvo and version pod [WIP] OCPBUGS-65621: add dedicated service account to crb, cvo and version pod Dec 1, 2025
@openshift-ci-robot openshift-ci-robot added jira/severity-critical Referenced Jira bug's severity is critical for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Dec 1, 2025
@openshift-ci-robot
Contributor

@ehearne-redhat: This pull request references Jira Issue OCPBUGS-65621, which is invalid:

  • expected the bug to target the "4.21.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

WIP - do not merge.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@ehearne-redhat
Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Dec 1, 2025
@openshift-ci-robot
Contributor

@ehearne-redhat: This pull request references Jira Issue OCPBUGS-65621, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.21.0) matches configured target version for branch (4.21.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @dis016

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested a review from dis016 December 1, 2025 09:20
@openshift-ci-robot
Contributor

@ehearne-redhat: This pull request references Jira Issue OCPBUGS-65621, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.21.0) matches configured target version for branch (4.21.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @dis016

In response to this:

What

  • Add dedicated service account to CVO and version pods in openshift-cluster-version namespace.
  • Add cluster role binding and attach dedicated service account to it.

Why

  • Default service account should not be used on OpenShift components.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

},
},
Spec: corev1.PodSpec{
ServiceAccountName: "update-payload-dedicated-sa",
Member


Default service account should not be used on OpenShift components

I think "Don't attach privileges to the default service account in OpenShift namespaces" makes a lot of sense, but I'm less clear on the downsides of Pods using the default service account if that account comes with no privileges. This version Pod does not need any Kube API privileges. In fact, it doesn't need any network communication at all, it's just shoveling bits around between the local container filesystem and a volume-mounted host directory. Can we leave it running in the default service account, or is there a way to request no-service-account-at-all? Maybe that's effectively what automountServiceAccountToken: false does?
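For reference, the opt-out being wondered about looks like this on a Pod spec (a sketch with placeholder names; whether it amounts to "no service account at all" is exactly the open question here):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: version-example              # hypothetical name
  namespace: openshift-cluster-version
spec:
  # The pod is still associated with a service account (the default one
  # if none is named), but no API token is mounted into its containers,
  # so the pod holds no usable Kube API credentials.
  automountServiceAccountToken: false
  containers:
  - name: payload
    image: example.invalid/payload   # placeholder image
    command: ["/bin/sh", "-c", "cp -r /manifests /host-dir"]
```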

Member


Or maybe this is what you're talking about in this thread, and some layer is using the system:openshift:scc:privileged ClusterRole to decide if a Pod is allowed to have the privileged Security Context Constraint? Not clear to me how that would be enforced though. Are there docs on the system:openshift:scc:* ClusterRoles walking through this?

Member


Poking around in docs turned up these sections dancing in this space, but I'm still not clear if system:openshift:scc:* ClusterRole access is checked on something in the Pod-creation path, or checked against the ServiceAccount that is about to be bound to the Pod being created. I also turned up a directory with release-image manifests for many of these ClusterRoles, but no README.md or other docs there explaining who is expected to use them how, or exactly how the guard that enforces them works.

Author


@wking sorry for the late response - a service account is always attached to a pod, whether we specify one or not. If we specify no service account, the default one is used in its place.

Thanks for the pointer on minimum permissions. We can create a service account with no permissions attached: that way we don't elevate privileges, but we also avoid using the default service account. I can check whether that is all this deployment requires.

Thanks for looking into further docs on this topic. We can test without it to see whether the pod genuinely requires elevated permissions.

@@ -8,7 +8,21 @@ metadata:
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
Member


huh, did we not need this before for some reason? Maybe we'd been relying on there only being one group that contained a ClusterRole kind? 😅 If you have any context on how this worked without this line, having that context in a commit message for future devs would be nice. Seems like there should be something in-cluster alerting on (Cluster)RoleBindings that leave off such an important property.

Author


I don't have much context on how it worked without this line. I added it to see whether it would resolve test errors I was getting from the cluster-version-operator pods; it seemed like an important property, so I added it back. I can also test without this line.

Author


namespace/openshift-cluster-version node/ci-op-yy1vhb3d-0bd36-c4ngb-master-2 pod/cluster-version-operator-5b954c7498-xprpt uid/51a23f9d-39b8-4a29-bebd-30fb7de19b57 container/cluster-version-operator restarted 30 times at:
non-zero exit at 2025-12-01 15:56:52.358476306 +0000 UTC m=+612.085122721: cause/Error code/255 reason/ContainerExit ernalversions/factory.go:125" type="*v1.FeatureGate"
E1201 15:56:22.959953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.FeatureGate: featuregates.config.openshift.io is forbidden: User \"system:serviceaccount:openshift-cluster-version:cvo-dedicated-sa\" cannot list resource \"featuregates\" in API group \"config.openshift.io\" at the cluster scope" logger="UnhandledError" reflector="github.com/openshift/client-go/config/informers/externalversions/factory.go:125" type="*v1.FeatureGate"

I keep getting this error in, for example, e2e-agnostic-ovn-upgrade-into-change and e2e-agnostic-ovn-upgrade-out-of-change.
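If the cluster-admin binding is not taking effect (for instance because of the missing roleRef apiGroup discussed above), a narrow grant covering just this failure might look like the following sketch. The role and binding names are hypothetical, and the CVO needs far broader read access than this single rule:

```yaml
# Sketch only: hypothetical role/binding names; grants just the
# FeatureGate read access named in the error above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cvo-featuregate-reader       # hypothetical name
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["featuregates"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cvo-featuregate-reader       # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cvo-featuregate-reader
subjects:
- kind: ServiceAccount
  name: cvo-dedicated-sa
  namespace: openshift-cluster-version
```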

namespace: openshift-cluster-version
annotations:
kubernetes.io/description: Dedicated Service Account for the Cluster Version Operator.
include.release.openshift.io/self-managed-high-availability: "true"
Member


I haven't looked into HyperShift, but I expect we'll need a ServiceAccount in the hosted-Kube-API there too?

Author


I can look further into this and get back to you. :)


Author


If I can't get my changes to work, it might not be a bad idea to replicate their setup. Thanks for pointing me in the right direction! :)

@ehearne-redhat
Author

/test e2e-hypershift

@ehearne-redhat
Author

Will address failing tests tomorrow.

@openshift-ci
Contributor

openshift-ci bot commented Dec 5, 2025

@ehearne-redhat: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-agnostic-ovn-upgrade-out-of-change | 0c875cc | link | true | /test e2e-agnostic-ovn-upgrade-out-of-change
ci/prow/e2e-agnostic-ovn-upgrade-into-change | 0c875cc | link | true | /test e2e-agnostic-ovn-upgrade-into-change

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
