fix: correct ServiceMonitor selectors after OLM to Helm migration #10345
Conversation
🤖 Hi @oswcab, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.
Force-pushed 303e63f to 3f354df
hugares left a comment:
/lgtm
/hold
Force-pushed 3f354df to d3c98e3
hugares left a comment:
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hugares, oswcab

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
🤖 Pipeline Failure Analysis

Category: Infrastructure

The pipeline failed due to a DNS resolution error preventing the gather-audit-logs step from reaching the Kubernetes API server.

📋 Technical Details

Immediate Cause: The …

Contributing Factors: Several "gather" steps, including …

Impact: The inability to connect to the Kubernetes API server prevented the necessary diagnostic data from being collected by the gather-audit-logs step.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs
Category: …
Logs: …
/unhold
/retest
🤖 Pipeline Failure Analysis

Category: Timeout

The …

📋 Technical Details

Immediate Cause: The …

Contributing Factors: Analysis of the provided context reveals several potential contributing factors to the test execution failure: …

Impact: The timeout of the …

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e
Category: …
Logs: …
/test appstudio-e2e-tests
🤖 Pipeline Failure Analysis

Category: Timeout

The …

📋 Technical Details

Immediate Cause: The …

Contributing Factors: The …

Impact: The timeout failure prevented the completion of the end-to-end tests for the AppStudio infrastructure deployments. This means that the pipeline could not validate the functionality and stability of the deployed components, potentially delaying the integration of changes from PR #10345.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e
Category: …
Logs: …
@oswcab: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

…

Full PR test history. Your PR dashboard.

Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
After migrating from OLM to Helm deployment, custom-resources.yaml was retained but never updated with the correct selectors. This caused the Service and ServiceMonitor to select no pods because of a selector mismatch.
The root cause was that the OLM deployment used the instance label 'app.kubernetes.io/instance: cluster', while Helm uses 'app.kubernetes.io/instance: external-secrets-operator'.
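To make the mismatch concrete, here is a minimal sketch; the Service name and metrics port are illustrative assumptions, not taken from the repo:

```yaml
# Stale Service retained from the OLM era (sketch; name and port assumed):
apiVersion: v1
kind: Service
metadata:
  name: external-secrets-operator-metrics
spec:
  selector:
    app.kubernetes.io/instance: cluster   # OLM-era label; matches no Helm-managed pod
  ports:
    - name: metrics
      port: 8080
---
# Pods created by the Helm release carry this label instead (sketch):
# app.kubernetes.io/instance: external-secrets-operator
```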
To maximize maintainability, this commit takes a hybrid approach: the metrics services are enabled in values.yaml so that Helm creates them with the correct selectors automatically, while the ServiceMonitors are created via a patch, because 'kustomize --enable-helm' doesn't support the Capabilities.APIVersions check that the Helm chart uses as its condition for enabling them. A sketch of this wiring follows.
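A rough sketch of that hybrid wiring, assuming the upstream external-secrets chart rendered through kustomize's helmCharts support; the chart repo URL, values keys, and file names here are illustrative assumptions and may differ from the actual repo:

```yaml
# kustomization.yaml (sketch): render the chart with the metrics Service
# enabled, and add the ServiceMonitor as a plain resource, since
# `kustomize build --enable-helm` renders with an empty
# Capabilities.APIVersions, so the chart's own ServiceMonitor gate never fires.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: external-secrets
    repo: https://charts.external-secrets.io
    releaseName: external-secrets-operator
    valuesInline:
      metrics:
        service:
          enabled: true     # Helm stamps the Service with its own selectors
resources:
  - servicemonitor.yaml     # maintained by hand; see below
```

The hand-maintained ServiceMonitor then only has to point at the Helm-created Service, for example (again a sketch, with an assumed resource name):

```yaml
# servicemonitor.yaml (sketch): selects the Helm-generated metrics Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: external-secrets-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: external-secrets-operator
  endpoints:
    - port: metrics
```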
Contributes to: KFLUXINFRA-2513