feat(chart): add podLabels for extra labels on pod templates (#119)
gkarach-conifers wants to merge 3 commits into
Adds a top-level `podLabels` value that is merged into the pod
template metadata labels of the main, worker, and webhook-processor
Deployments, in addition to the chart's own selector labels.
Mirrors the existing `podAnnotations` pattern. Default `{}` preserves
existing behavior — empty map renders no extra labels via `{{- with }}`.
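As a sketch of the pattern (the surrounding template context and the helper name `n8n.selectorLabels` are assumptions based on common chart conventions, not the chart's exact source), each Deployment's pod template metadata gains a block like:

```yaml
template:
  metadata:
    labels:
      {{- include "n8n.selectorLabels" . | nindent 8 }}
      {{- with .Values.podLabels }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
```

With the default `podLabels: {}`, the `with` block is skipped entirely and the rendered output is unchanged.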
## Motivation
Several Kubernetes admission webhooks key off pod-level labels rather
than annotations or service-account references, and have no other
opt-in path:
- Azure AD Workload Identity injects federated-token env vars only
when a pod carries `azure.workload.identity/use=true`.
- Istio, Linkerd, and similar service-mesh injectors gate on
`sidecar.istio.io/inject=true`, `linkerd.io/inject=enabled`.
- External Prometheus scrape selectors and policy controllers often
match on custom labels.
Today these all require post-Helm `kubectl patch` workarounds because
the pod templates only emit the chart's built-in selector labels.
## Changes
- `charts/n8n/templates/deployment-main.yaml` — `{{- with .Values.podLabels }}` block under pod template metadata.labels
- `charts/n8n/templates/deployment-worker.yaml` — same
- `charts/n8n/templates/deployment-webhook-processor.yaml` — same
- `charts/n8n/values.yaml` — `podLabels: {}` declaration with comment + commented examples
- `charts/n8n/README.md` — new row in Key Configuration table
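For illustration, the `values.yaml` entry might look like the following (the exact comment wording in the chart may differ):

```yaml
# Extra labels added to the pod template metadata.labels of the main,
# worker, and webhook-processor Deployments. Values must be strings.
podLabels: {}
# podLabels:
#   azure.workload.identity/use: "true"
#   sidecar.istio.io/inject: "true"
```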
## Test plan
- `helm lint charts/n8n` — passes
- `helm template foo charts/n8n --set podLabels.foo=bar --set podLabels.example=test ...` — confirms `foo: bar` and `example: test` land under pod template `metadata.labels` on all three Deployments (main, worker, webhook-processor) without disturbing selector labels
- Default render (`podLabels: {}`) — output is byte-identical to the chart's current rendering (verified via diff)
Signed-off-by: Gilad Karach <gkarach@Gilads-MacBook-Pro-2.local>
3 issues found across 5 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="charts/n8n/templates/deployment-webhook-processor.yaml">
<violation number="1" location="charts/n8n/templates/deployment-webhook-processor.yaml:30">
P2: User-provided pod labels can overwrite selector labels, causing Deployment validation failures (`selector` vs template labels mismatch).</violation>
</file>
<file name="charts/n8n/templates/deployment-worker.yaml">
<violation number="1" location="charts/n8n/templates/deployment-worker.yaml:30">
P1: Unfiltered `podLabels` can override selector-bound labels, causing Deployment selector/template label mismatch.</violation>
</file>
<file name="charts/n8n/templates/deployment-main.yaml">
<violation number="1" location="charts/n8n/templates/deployment-main.yaml:34">
P2: User-supplied `podLabels` can override selector labels, causing Deployment selector/template label mismatch.</violation>
</file>
## Architecture diagram

```mermaid
sequenceDiagram
    participant User as DevOps Engineer
    participant Helm as Helm Engine
    participant K8s as Kubernetes API
    participant Webhook as Admission Webhook (Istio/Azure/Linkerd)
    participant Pod as n8n Runtime Pods
    Note over User,Pod: Deployment Update Flow
    User->>Helm: helm upgrade (with podLabels in values.yaml)
    rect rgb(240, 240, 240)
        Note right of Helm: Template Rendering
        Helm->>Helm: Render n8n Labels (Standard)
        alt NEW: podLabels is not empty
            Helm->>Helm: NEW: Inject custom labels into pod.metadata.labels
        else podLabels is empty (default)
            Helm->>Helm: Render nothing (Existing behavior)
        end
    end
    Helm->>K8s: Apply Deployment (Main/Worker/Webhook-Processor)
    K8s->>K8s: Generate Pod from Template
    Note over K8s,Webhook: NEW: External Integration Trigger
    K8s->>Webhook: MutatingAdmissionWebhook Request (Pod Metadata)
    alt NEW: Metadata.labels matches webhook criteria
        Webhook-->>K8s: Patch Pod (e.g., Inject Sidecars or Env Vars)
    else No label match
        Webhook-->>K8s: Allow without modification
    end
    K8s->>Pod: Create Pod Instance
    Note over Pod: Result: Pod runs with both chart labels<br/>AND external identity/mesh capabilities
```
Cubic AI flagged that user-supplied podLabels could override chart
selector / identity labels, breaking the Deployment selector match
or producing duplicate YAML keys. Valid concern.
Add a check to the existing n8n.validate helper (called once from
deployment-main.yaml on every install) that fails fast with a clear
error message if any podLabels key collides with the chart-managed
set:
- app.kubernetes.io/{name,instance,component,version,managed-by}
- helm.sh/chart
The first three are part of the Deployment's selector.matchLabels;
overriding them would render an invalid Deployment. The other three
are recommended-set labels; rendering both chart and user values
would produce duplicate YAML keys and fail at parse time anyway.
Failing loudly with a useful message is friendlier than a silent
rendering quirk. values.yaml comment updated to document the
reserved set.
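A reserved-key guard along these lines would do it — a minimal sketch, not the chart's actual helper; the define name and message wording are assumptions (`has`, `fail`, and `printf` are standard Helm/Sprig functions):

```yaml
{{/*
Sketch: fail template rendering if any podLabels key collides with a
chart-managed label. Hypothetical helper name; the real n8n.validate
helper may structure this differently.
*/}}
{{- define "n8n.validatePodLabels" -}}
{{- $reserved := list
      "app.kubernetes.io/name"
      "app.kubernetes.io/instance"
      "app.kubernetes.io/component"
      "app.kubernetes.io/version"
      "app.kubernetes.io/managed-by"
      "helm.sh/chart" -}}
{{- range $key, $_ := .Values.podLabels }}
{{- if has $key $reserved }}
{{- fail (printf "podLabels key %q collides with a chart-managed label; remove it from podLabels" $key) }}
{{- end }}
{{- end }}
{{- end -}}
```

Because `fail` aborts rendering, the error surfaces on `helm template`/`helm install` rather than at apply time.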
Signed-off-by: Gilad Karach <gkarach@Gilads-MacBook-Pro-2.local>
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="charts/n8n/templates/_helpers.tpl">
<violation number="1" location="charts/n8n/templates/_helpers.tpl:146">
P2: Pod labels validation only guards reserved keys, not value types; non-string label values can pass Helm validation and fail against Kubernetes `map[string]string` labels.</violation>
</file>
Cubic AI flagged that the previous validation only guarded reserved keys, not value types. Kubernetes labels are `map[string]string`, so non-string `podLabels` values (ints, bools) would pass our check, render as native YAML scalars, and fail API validation only at deploy time with a much less actionable message.

Add a `kindIs "string"` check alongside the reserved-keys guard. Failing inputs now error at template-render time:

`Error: ... podLabels."priority" must be a string (got int64). Kubernetes labels are map[string]string; quote numeric or boolean values, e.g. "priority": "true".`

Pointing users at the fix (quote the value) inside the error message itself avoids the common confusion where YAML's `key: true` parses as a bool and silently fails K8s validation.

Signed-off-by: Gilad Karach <gkarach@Gilads-MacBook-Pro-2.local>
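The type guard could look like the following sketch, placed alongside the reserved-keys loop (`kindIs` and `kindOf` are standard Sprig functions; the exact message wording in the chart may differ):

```yaml
{{- range $key, $val := .Values.podLabels }}
{{- if not (kindIs "string" $val) }}
{{- fail (printf "podLabels.%q must be a string (got %s). Kubernetes labels are map[string]string; quote numeric or boolean values, e.g. %q: \"true\"" $key (kindOf $val) $key) }}
{{- end }}
{{- end }}
```

Since both guards iterate the same map, they can share a single `range` in practice.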
Selector labels are unchanged: the `{{- with }}` block only adds to `template.metadata.labels`, not `selector.matchLabels` or `metadata.labels` on the Deployment itself, so existing rolled-out workloads pick up the new labels on their next pod-template change without a selector mismatch.

## Related

Same chart-affordance pattern as #111 (which added `strategy:` for the main Deployment in 1.4.0).

## Summary by cubic
Add `podLabels` to merge extra labels into pod template metadata for the main, worker, and webhook-processor Deployments. Validates reserved keys and enforces string values; default `{}` keeps behavior unchanged.

New Features:
- `podLabels` merged into pod template `metadata.labels` for main, worker, and webhook-processor.
- Mirrors `podAnnotations`; empty `{}` renders nothing.
- `selector.matchLabels` unchanged for safe rolling upgrades.

Bug Fixes:
- Fails fast when a `podLabels` key overlaps chart-managed labels (`app.kubernetes.io/*`, `helm.sh/chart`).
- Enforces that `podLabels` values are strings to match the Kubernetes `map[string]string` label requirement.

Written for commit a484043. Summary will update on new commits.