14 changes: 0 additions & 14 deletions install/0000_00_cluster-version-operator_02_roles.yaml

This file was deleted.

17 changes: 17 additions & 0 deletions install/0000_00_cluster-version-operator_02_service_account.yaml
@@ -0,0 +1,17 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-version-operator
  namespace: openshift-cluster-version
  annotations:
    kubernetes.io/description: Dedicated Service Account for the Cluster Version Operator.
    include.release.openshift.io/self-managed-high-availability: "true"
Member

I haven't looked into HyperShift, but I expect we'll need a ServiceAccount in the hosted-Kube-API there too?

Author

I can look further into this and get back to you. :)

Author

If I can't get my changes to work, it might not be a bad idea to replicate their setup. Thanks for pointing me in the right direction! :)

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: update-payload-dedicated-sa
  namespace: openshift-cluster-version
  annotations:
    kubernetes.io/description: Dedicated Service Account for the Update Payload.
    include.release.openshift.io/self-managed-high-availability: "true"
113 changes: 113 additions & 0 deletions install/0000_00_cluster-version-operator_03_1_roles.yaml
@@ -0,0 +1,113 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cvo-leader-election
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cvo-leader-election-binding
  namespace: openshift-cluster-version
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cvo-leader-election
subjects:
- kind: ServiceAccount
  name: cluster-version-operator
  namespace: openshift-cluster-version
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cvo-required-config-reader
rules:
- apiGroups: ["config.openshift.io"]
  resources: ["featuregates", "clusteroperators", "clusterversions", "proxies", "infrastructures"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cvo-configmap-reader
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cvo-config-configmap-binding
  namespace: openshift-config
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cvo-configmap-reader
subjects:
- kind: ServiceAccount
  name: cluster-version-operator
  namespace: openshift-cluster-version
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cvo-managed-configmap-binding
  namespace: openshift-config-managed
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cvo-configmap-reader
subjects:
- kind: ServiceAccount
  name: cluster-version-operator
  namespace: openshift-cluster-version
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cvo-required-config-binding
  annotations:
    kubernetes.io/description: Grant the cluster-version operator read access to the config.openshift.io resources it requires.
    include.release.openshift.io/self-managed-high-availability: "true"
roleRef:
  kind: ClusterRole
  name: cvo-required-config-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  namespace: openshift-cluster-version
  name: cluster-version-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-version-operator
  annotations:
    kubernetes.io/description: Grant the cluster-version operator permission to perform cluster-admin actions while managing the OpenShift core.
    include.release.openshift.io/self-managed-high-availability: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  namespace: openshift-cluster-version
  name: cluster-version-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-version-operator-scc-privileged-access
subjects:
- kind: ServiceAccount
  name: update-payload-dedicated-sa
  namespace: openshift-cluster-version
roleRef:
  kind: ClusterRole
  name: system:openshift:scc:privileged
  apiGroup: rbac.authorization.k8s.io
@@ -23,6 +23,7 @@ spec:
        k8s-app: cluster-version-operator
    spec:
      automountServiceAccountToken: false
      serviceAccountName: cluster-version-operator
      containers:
      - name: cluster-version-operator
        image: '{{.ReleaseImage}}'
1 change: 1 addition & 0 deletions pkg/cvo/updatepayload.go
@@ -232,6 +232,7 @@ func (r *payloadRetriever) fetchUpdatePayloadToDir(ctx context.Context, dir string
			},
		},
		Spec: corev1.PodSpec{
			ServiceAccountName: "update-payload-dedicated-sa",
Member

Default service account should not be used on OpenShift components

I think "Don't attach privileges to the default service account in OpenShift namespaces" makes a lot of sense, but I'm less clear on the downsides of Pods using the default service account if that account comes with no privileges. This version Pod does not need any Kube API privileges. In fact, it doesn't need any network communication at all, it's just shoveling bits around between the local container filesystem and a volume-mounted host directory. Can we leave it running in the default service account, or is there a way to request no-service-account-at-all? Maybe that's effectively what automountServiceAccountToken: false does?

Member

Or maybe this is what you're talking about in this thread, and some layer is using the system:openshift:scc:privileged ClusterRole to decide if a Pod is allowed to have the privileged Security Context Constraint? Not clear to me how that would be enforced though. Are there docs on the system:openshift:scc:* ClusterRoles walking through this?
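If the enforcement works the way I suspect, the pre-existing system:openshift:scc:privileged ClusterRole is simply an RBAC grant of the "use" verb on the named SecurityContextConstraint, which SCC admission checks at pod creation. A sketch of the presumed shape of that ClusterRole (an assumption about its contents, not a copy of the shipped manifest):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:privileged
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["privileged"]
  verbs: ["use"]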

Member

Poking around in docs turned up these sections dancing in this space, but I'm still not clear if system:openshift:scc:* ClusterRole access is checked on something in the Pod-creation path, or checked against the ServiceAccount that is about to be bound to the Pod being created. I also turned up a directory with release-image manifests for many of these ClusterRoles, but no README.md or other docs there explaining who is expected to use them how, or exactly how the guard that enforces them works.
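(My current understanding, though I have not found it spelled out in those docs: SCC admission considers the "use" access of both the requesting user and the pod's service account when selecting an SCC, so binding the role to the dedicated service account above should be what lets the version pod request privileged.)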

Author

@wking, sorry for the late response. Service accounts are always attached to a pod whether we specify one or not; if we specify none, the default account is used in its place.

Thanks for the pointer on minimum permissions. We can create a service account without granting it any permissions; that way we don't elevate privileges, but we also avoid relying on the default service account. I can check whether that is all this deployment requires.

Thanks for looking into further docs on this topic. We can test without elevated permissions to see whether the pod genuinely requires them.
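(One quick way to check that the account carries no extra privileges: oc auth can-i --list --as=system:serviceaccount:openshift-cluster-version:update-payload-dedicated-sa should list little beyond the defaults granted to all authenticated subjects, plus the SCC "use" binding added in this PR.)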

			ActiveDeadlineSeconds: deadline,
			InitContainers: []corev1.Container{
				setContainerDefaults(corev1.Container{
@@ -23,6 +23,7 @@ spec:
        k8s-app: cluster-version-operator
    spec:
      automountServiceAccountToken: false
      serviceAccountName: cluster-version-operator
      containers:
      - name: cluster-version-operator
        image: 'quay.io/cvo/release:latest'