Conversation

@josephtate

Extends mount_trusted_ca to support mounting CA certificates from ConfigMaps on vanilla Kubernetes while maintaining OpenShift CNO compatibility.

Added mount_trusted_ca_configmap_key field (format: "configmap-name:key") to specify the ConfigMap and key containing CA bundles. The ConfigMap can be managed manually or kept up to date using cert-manager's trust-manager. Required on vanilla K8s when mount_trusted_ca is enabled; optional on OpenShift, which uses CNO injection.

Enhanced MountCASpec (exported from mountCASpec) to handle both modes:

  • ConfigMap mode: parses "name:key" format, maps key to tls-ca-bundle.pem
  • OpenShift mode: uses operator-created ConfigMap with CNO injection

Both mount to /etc/pki/ca-trust/extracted/pem.
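The "name:key" parsing described above can be sketched roughly as follows. Note that `parseConfigMapKey` is a hypothetical helper written for illustration, not the operator's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseConfigMapKey splits a "configmap-name:key" value into its two parts.
// This is an illustrative stand-in for whatever validation MountCASpec does.
func parseConfigMapKey(v string) (name, key string, err error) {
	parts := strings.SplitN(v, ":", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("expected %q in configmap-name:key format", v)
	}
	return parts[0], parts[1], nil
}

func main() {
	name, key, err := parseConfigMapKey("my-ca-bundle:ca.crt")
	if err != nil {
		panic(err)
	}
	// Per the description above, the key is then projected into the pod as
	// tls-ca-bundle.pem under /etc/pki/ca-trust/extracted/pem.
	fmt.Println(name, key) // prints "my-ca-bundle ca.crt"
}
```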

Added defaultsForVanillaDeployment to configure CA mounting for vanilla K8s deployments (API, Content, Worker). Validates configuration and mounts only when both mount_trusted_ca and mount_trusted_ca_configmap_key are set.

Refactored deployment hash calculation by introducing CalculateDeploymentHash to strip DeprecatedServiceAccount field before hashing, preventing reconciliation loops. Updated CheckDeploymentSpec, AddHashLabel, and HashFromMutated to use new function. Removed redundant AddHashLabel call in updateObject.
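The idea behind stripping a field before hashing can be sketched as below. The struct and helper are illustrative stand-ins; the PR's CalculateDeploymentHash operates on the full appsv1.DeploymentSpec:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// spec stands in for appsv1.DeploymentSpec; only fields relevant to the
// sketch are modeled. DeprecatedServiceAccount mirrors ServiceAccountName
// and is defaulted by the API server.
type spec struct {
	ServiceAccountName       string `json:"serviceAccountName,omitempty"`
	DeprecatedServiceAccount string `json:"serviceAccount,omitempty"`
	Replicas                 int32  `json:"replicas"`
}

// calculateHash clears the deprecated mirror field before hashing, so a
// server-side default for it no longer changes the hash and retriggers
// reconciliation.
func calculateHash(s spec) string {
	s.DeprecatedServiceAccount = ""
	b, _ := json.Marshal(s)
	return fmt.Sprintf("%x", sha256.Sum256(b))
}

func main() {
	desired := spec{ServiceAccountName: "pulp", Replicas: 1}
	live := desired
	live.DeprecatedServiceAccount = "pulp" // defaulted by the API server
	fmt.Println(calculateHash(desired) == calculateHash(live)) // prints "true"
}
```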

Fixed mount_trusted_ca CSV description by consolidating multi-line comment to single-line format that operator-sdk recognizes.

Includes unit tests, documentation, and example configuration showing trust-manager integration.

Fixes #1469

@openshift-ci openshift-ci bot requested review from dkliban and ipanova December 1, 2025 22:48
@openshift-ci

openshift-ci bot commented Dec 1, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: josephtate
Once this PR has been reviewed and has the lgtm label, please assign ipanova for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci

openshift-ci bot commented Dec 1, 2025

Hi @josephtate. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@git-hyagi
Collaborator

/ok-to-test

@openshift-merge-robot
Collaborator

PR needs rebase.


@git-hyagi
Collaborator

Hi @josephtate

Thank you for your contribution!!
I ran a test with the following Pulp CR

spec:
  api:
    replicas: 1
    pulp_labels:
      test: "it-works"
    hpa:
      enabled: true
      min_replicas: 1
      max_replicas: 2
      target_cpu_utilization_percentage: 70
...
  mount_trusted_ca: true
  mount_trusted_ca_configmap_key: "my-ca-bundle:ca.crt"
...

and the controller got stuck in an infinite loop:
controllers/utils.go:741 The Spec from Deployment pulp-api has been modified! Reconciling ....

I believe this is happening because in https://github.com/pulp/pulp-operator/pull/1554/files#diff-4772d4d22e96ad7ded951cfaf8aeec535468a2fa2499fb8775568c891479afffR108 we modify the Deployment without re-calculating the hash, which is done in:

func (d CommonDeployment) Deploy(resources any, pulpcoreType settings.PulpcoreType) client.Object {
	pulp := resources.(FunctionResources).Pulp
	d.build(resources, pulpcoreType)
	// deployment definition
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:        pulpcoreType.DeploymentName(pulp.Name),
			Namespace:   pulp.Namespace,
			Annotations: d.deploymentAnnotations,
			Labels:      d.deploymentLabels,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: d.replicas,
			Strategy: d.strategy,
			Selector: &metav1.LabelSelector{
				MatchLabels: d.podSelectorLabels,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels:      d.podLabels,
					Annotations: d.podAnnotations,
				},
				Spec: corev1.PodSpec{
					Affinity:                      d.affinity,
					SecurityContext:               d.podSecurityContext,
					NodeSelector:                  d.nodeSelector,
					Tolerations:                   d.toleration,
					Volumes:                       d.volumes,
					ServiceAccountName:            settings.PulpServiceAccount(pulp.Name),
					TopologySpreadConstraints:     d.topologySpreadConstraint,
					InitContainers:                d.initContainers,
					Containers:                    d.containers,
					RestartPolicy:                 d.restartPolicy,
					TerminationGracePeriodSeconds: d.terminationPeriod,
					DNSPolicy:                     d.dnsPolicy,
					SchedulerName:                 d.schedulerName,
				},
			},
		},
	}

	AddHashLabel(resources.(FunctionResources), dep)

One idea that I had was to move the instructions from:
https://github.com/pulp/pulp-operator/pull/1554/files#diff-4772d4d22e96ad7ded951cfaf8aeec535468a2fa2499fb8775568c891479afffR91
https://github.com/pulp/pulp-operator/pull/1554/files#diff-4772d4d22e96ad7ded951cfaf8aeec535468a2fa2499fb8775568c891479afffR95-R96
to:

func (d *CommonDeployment) setVolumes(resources any, pulpcoreType settings.PulpcoreType) {

and from:
https://github.com/pulp/pulp-operator/pull/1554/files#diff-4772d4d22e96ad7ded951cfaf8aeec535468a2fa2499fb8775568c891479afffR92
https://github.com/pulp/pulp-operator/pull/1554/files#diff-4772d4d22e96ad7ded951cfaf8aeec535468a2fa2499fb8775568c891479afffR97
to:

func (d *CommonDeployment) setVolumeMounts(pulp pulpv1.Pulp, pulpcoreType settings.PulpcoreType) {

What do you think about this?
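The loop described above is a matter of ordering: AddHashLabel computes the hash before the CA volume is appended, so the stored hash never matches the live spec. A toy sketch of the failure mode, where dep, addHashLabel, and upToDate are illustrative stand-ins rather than the operator's real types:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// dep is a stand-in for a Deployment; labels carry the spec hash.
type dep struct {
	spec   string
	labels map[string]string
}

// addHashLabel stores a short hash of the current spec on the object.
func addHashLabel(d *dep) {
	d.labels["hash"] = fmt.Sprintf("%x", sha256.Sum256([]byte(d.spec)))[:8]
}

// upToDate reports whether the stored hash still matches the spec.
func upToDate(d dep) bool {
	return d.labels["hash"] == fmt.Sprintf("%x", sha256.Sum256([]byte(d.spec)))[:8]
}

func main() {
	d := dep{spec: "base", labels: map[string]string{}}
	addHashLabel(&d)
	d.spec += "+ca-volume" // mutation after hashing, as in the report above
	fmt.Println(upToDate(d)) // prints "false": stale hash, endless reconciles
	addHashLabel(&d)         // re-hash after all mutations are applied
	fmt.Println(upToDate(d)) // prints "true"
}
```

Moving the volume/volumeMount setup into setVolumes and setVolumeMounts, as suggested, achieves the same thing: all mutations happen before the single hash computation.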

@josephtate
Author

> I ran a test with the following Pulp CR [...] and the controller got stuck in an infinite loop controllers/utils.go:741 The Spec from Deployment pulp-api has been modified! Reconciling ....

So this infinite loop is endemic to the original code. Once I switched to mount_trusted_ca, even with no code changes, this infinite reconcile loop started happening. I worked really hard to try to resolve it.

I'd like to note that the fix does not converge in a single pass: it takes 2-3 reconcile steps to reach steady state. My testing still shows this. Does yours eventually settle down?

Let me take another look over the next couple of days.


Successfully merging this pull request may close these issues.

There is no way to pass CA when S3 https enables