feat(chart): add podLabels for extra labels on pod templates #119

Open

gkarach-conifers wants to merge 3 commits into n8n-io:main from gkarach-conifers:feat/chart-add-podLabels-support

Conversation

gkarach-conifers commented Apr 26, 2026

Summary

Adds a top-level podLabels value that is merged into the pod template metadata.labels of the main, worker, and webhook-processor Deployments, in addition to the chart's own selector labels.

Mirrors the existing podAnnotations pattern. Default {} preserves existing behavior — the {{- with }} block renders nothing when the value is empty.

Motivation

Several Kubernetes admission webhooks key off pod-level labels rather than annotations or service-account references, and have no other opt-in path:

  • Azure AD Workload Identity injects federated-token env vars only when a pod carries azure.workload.identity/use=true.
  • Istio, Linkerd, and similar service-mesh injectors gate on sidecar.istio.io/inject=true or linkerd.io/inject=enabled.
  • External Prometheus scrape selectors and policy-as-code controllers often match on custom labels.

Today, deployments using this chart in any of those environments need a post-Helm kubectl patch workaround on each helm upgrade, because the pod templates only emit the chart's built-in selector labels and there's no way to add more from values. This PR closes that gap.
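
For illustration, a hypothetical values excerpt like the one below is all a chart consumer would set once this lands; the label keys are taken from the webhook examples above, and the right values depend on the target cluster:

```yaml
# Hypothetical values.yaml excerpt; keys come from the webhook examples
# above, and the values depend on the target cluster.
podLabels:
  azure.workload.identity/use: "true"
  sidecar.istio.io/inject: "true"
```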

Changes

  • charts/n8n/templates/deployment-main.yaml — adds {{- with .Values.podLabels }} ... {{- end }} under pod template metadata.labels
  • charts/n8n/templates/deployment-worker.yaml — same
  • charts/n8n/templates/deployment-webhook-processor.yaml — same
  • charts/n8n/values.yaml — podLabels: {} declaration with explanatory comment and commented usage examples
  • charts/n8n/README.md — new row in Key Configuration table

Selector labels are unchanged: the {{- with }} block only adds to template.metadata.labels, not selector.matchLabels or metadata.labels on the Deployment itself, so existing rolled-out workloads pick up the new labels on their next pod-template change without selector mismatch.
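
For reference, the wiring in all three templates follows the same shape as podAnnotations; a rough sketch is below (the selector-labels include name is an assumption, the chart's real helper may be named differently):

```yaml
# Sketch of the pod template metadata in each Deployment;
# "n8n.selectorLabels" is an assumed helper name.
template:
  metadata:
    labels:
      {{- include "n8n.selectorLabels" . | nindent 8 }}
      {{- with .Values.podLabels }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
```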

Test plan

  • helm lint charts/n8n — passes
  • helm template foo charts/n8n --set podLabels.foo=bar --set 'podLabels.example=test' ... confirms foo: bar and example: test land under pod template metadata.labels on all three Deployments (main, worker, webhook-processor) without disturbing selector labels
  • Default render (podLabels: {}) — output is identical to the current chart's output; only the existing base labels are emitted (verified via diff)
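
With the --set flags above, the rendered pod template metadata should come out roughly like this on each of the three Deployments (the chart's own base labels are elided here):

```yaml
# Approximate helm template output; the chart's base labels are elided.
template:
  metadata:
    labels:
      # ...chart selector/base labels...
      foo: bar
      example: test
```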

Related

Same chart-affordance pattern as #111 (which added strategy: for the main Deployment in 1.4.0).


Summary by cubic

Add podLabels to merge extra labels into pod template metadata for the main, worker, and webhook-processor Deployments. Validates reserved keys and enforces string values; default {} keeps behavior unchanged.

  • New Features

    • podLabels merged into pod template metadata.labels for main, worker, and webhook-processor.
    • Mirrors podAnnotations; empty {} renders nothing.
    • Leaves selector.matchLabels unchanged for safe rolling upgrades.
  • Bug Fixes

    • Fail fast if any podLabels key overlaps chart-managed labels (app.kubernetes.io/*, helm.sh/chart).
    • Validate all podLabels values are strings to match Kubernetes map[string]string label requirements.

Written for commit a484043. Summary will update on new commits.


cubic-dev-ai (bot) left a comment


3 issues found across 5 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="charts/n8n/templates/deployment-webhook-processor.yaml">

<violation number="1" location="charts/n8n/templates/deployment-webhook-processor.yaml:30">
P2: User-provided pod labels can overwrite selector labels, causing Deployment validation failures (`selector` vs template labels mismatch).</violation>
</file>

<file name="charts/n8n/templates/deployment-worker.yaml">

<violation number="1" location="charts/n8n/templates/deployment-worker.yaml:30">
P1: Unfiltered `podLabels` can override selector-bound labels, causing Deployment selector/template label mismatch.</violation>
</file>

<file name="charts/n8n/templates/deployment-main.yaml">

<violation number="1" location="charts/n8n/templates/deployment-main.yaml:34">
P2: User-supplied `podLabels` can override selector labels, causing Deployment selector/template label mismatch.</violation>
</file>
Architecture diagram
sequenceDiagram
    participant User as DevOps Engineer
    participant Helm as Helm Engine
    participant K8s as Kubernetes API
    participant Webhook as Admission Webhook (Istio/Azure/Linkerd)
    participant Pod as n8n Runtime Pods

    Note over User,Pod: Deployment Update Flow

    User->>Helm: helm upgrade (with podLabels in values.yaml)
    
    rect rgb(240, 240, 240)
    Note right of Helm: Template Rendering
    Helm->>Helm: Render n8n Labels (Standard)
    alt NEW: podLabels is not empty
        Helm->>Helm: NEW: Inject custom labels into pod.metadata.labels
    else podLabels is empty (default)
        Helm->>Helm: Render nothing (Existing behavior)
    end
    end

    Helm->>K8s: Apply Deployment (Main/Worker/Webhook-Processor)
    K8s->>K8s: Generate Pod from Template

    Note over K8s,Webhook: NEW: External Integration Trigger
    K8s->>Webhook: MutatingAdmissionWebhook Request (Pod Metadata)
    
    alt NEW: Metadata.labels matches webhook criteria
        Webhook-->>K8s: Patch Pod (e.g., Inject Sidecars or Env Vars)
    else No label match
        Webhook-->>K8s: Allow without modification
    end

    K8s->>Pod: Create Pod Instance
    
    Note over Pod: Result: Pod runs with both chart labels<br/>AND external identity/mesh capabilities


Cubic AI flagged that user-supplied podLabels could override chart
selector / identity labels, breaking the Deployment selector match
or producing duplicate YAML keys. Valid concern.

Add a check to the existing n8n.validate helper (called once from
deployment-main.yaml on every install) that fails fast with a clear
error message if any podLabels key collides with the chart-managed
set:

- app.kubernetes.io/{name,instance,component,version,managed-by}
- helm.sh/chart

The first three are part of the Deployment's selector.matchLabels;
overriding them would render an invalid Deployment. The other three
are recommended-set labels; rendering both chart and user values
would produce duplicate YAML keys and fail at parse time anyway.

Failing loudly with a useful message is friendlier than a silent
rendering quirk. values.yaml comment updated to document the
reserved set.
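
A minimal sketch of what that guard could look like inside the helper is below; the exact structure and message wording in the chart may differ:

```yaml
{{/* Sketch of the reserved-key guard; assumed to live inside the existing
     n8n.validate helper, and the real implementation may differ. */}}
{{- $reserved := list "app.kubernetes.io/name" "app.kubernetes.io/instance" "app.kubernetes.io/component" "app.kubernetes.io/version" "app.kubernetes.io/managed-by" "helm.sh/chart" -}}
{{- range $key, $_ := .Values.podLabels }}
{{- if has $key $reserved }}
{{- fail (printf "podLabels.%q collides with the chart-managed label set; remove it from podLabels" $key) }}
{{- end }}
{{- end }}
```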

Signed-off-by: Gilad Karach <gkarach@Gilads-MacBook-Pro-2.local>

cubic-dev-ai (bot) left a comment


1 issue found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="charts/n8n/templates/_helpers.tpl">

<violation number="1" location="charts/n8n/templates/_helpers.tpl:146">
P2: Pod labels validation only guards reserved keys, not value types; non-string label values can pass Helm validation and fail against Kubernetes `map[string]string` labels.</violation>
</file>


Cubic AI flagged that the previous validation only guarded reserved
keys, not value types. Kubernetes labels are map[string]string, so
non-string podLabels values (ints, bools) would pass our check, render
as native YAML scalars, and fail the API validation only at deploy
time with a much less actionable message.

Add a kindIs "string" check alongside the reserved-keys guard. Failing
errors now surface at template-render time:

  Error: ... podLabels."priority" must be a string (got int64).
  Kubernetes labels are map[string]string; quote numeric or boolean
  values, e.g. "priority": "true".

Pointing users at the fix (quote the value) inside the error message
itself avoids the common confusion where YAML's `key: true` parses
as a bool and silently fails K8s validation.
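
A sketch of the type check, placed alongside the reserved-key guard from the previous commit (again, the exact message text in the chart may differ):

```yaml
{{/* Sketch of the value-type check; assumed to sit beside the reserved-key
     guard in n8n.validate, and the real wording may differ. */}}
{{- range $key, $value := .Values.podLabels }}
{{- if not (kindIs "string" $value) }}
{{- fail (printf "podLabels.%q must be a string (got %s); Kubernetes labels are map[string]string, so quote numeric or boolean values" $key (kindOf $value)) }}
{{- end }}
{{- end }}
```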

Signed-off-by: Gilad Karach <gkarach@Gilads-MacBook-Pro-2.local>