
Conversation

@akshat5302
Member

@akshat5302 akshat5302 commented Jul 21, 2025

Description

  • Configured HPA with a default CPU utilization threshold and min/max replica limits for the plane-enterprise Helm chart

Summary by CodeRabbit

  • New Features
    • Added autoscaling support for all major services, enabling automatic scaling of workloads based on CPU and memory usage.
    • Introduced configuration options for autoscaling parameters, including minimum and maximum replicas and utilization thresholds, for each service.
  • Chores
    • Updated chart version to 1.3.0.
    • Increased default CPU and memory resource requests and limits across multiple services for improved performance.
    • Removed resource constraints from the monitor service to streamline its configuration.

…scaler (HPA) configurations for multiple services in values.yaml and deployment templates. The HPA settings include minimum and maximum replicas along with CPU and memory utilization targets.
@coderabbitai
Contributor

coderabbitai bot commented Jul 21, 2025

"""

Walkthrough

This change introduces conditional HorizontalPodAutoscaler (HPA) resource definitions for all major workloads in the plane-enterprise Helm chart. It adds a helper to detect metrics-server presence, updates default values to include autoscaling parameters for each service, and bumps the chart version to 1.3.0.

Changes

File(s) | Change Summary
charts/plane-enterprise/Chart.yaml | Updated chart version from 1.2.7 to 1.3.0
charts/plane-enterprise/templates/_helpers.tpl | Added enable.hpa Helm template helper to detect metrics-server via ClusterRole
charts/plane-enterprise/templates/workloads/*.deployment.yaml | Increased resource requests and limits; added conditional HPA manifests for the admin, api, beat-worker, live, silo, space, web, worker, and email workloads
charts/plane-enterprise/templates/workloads/monitor.stateful.yaml | Removed resource requests and limits from the monitor StatefulSet container
charts/plane-enterprise/values.yaml | Added or updated resource requests and limits; added autoscaling configuration blocks for the main services (web, space, admin, live, silo, api, worker, beatworker, email_service)
charts/plane-enterprise/questions.yml | Reordered and updated resource request/limit variables; added autoscaling config variables for the main services; removed resource questions for the monitor service

Sequence Diagram(s)

sequenceDiagram
    participant Helm as Helm Chart
    participant K8s as Kubernetes API
    participant Cluster as Cluster (with/without metrics-server)
    participant Workload as Workload Deployment

    Helm->>K8s: Render templates
    Helm->>Cluster: Check for ClusterRole "system:metrics-server"
    alt ClusterRole exists
        Helm->>K8s: Deploy HPA for each workload with autoscaling config
        K8s->>Workload: Monitor resource usage and scale as needed
    else ClusterRole missing
        Helm->>K8s: Skip HPA resource creation
    end

Estimated code review effort

3 (30–60 minutes)

Possibly related PRs

Suggested reviewers

  • mguptahub

Poem

🐇
In the meadow of code where workloads grow,
Autoscalers hop in, ready to show.
With helpers to check if metrics abound,
Each service now scales when traffic is found.
Chart version hops up—what a delight!
The cluster feels lighter, workloads just right.
—A happy rabbit, scaling in flight!

"""



@akshat5302 akshat5302 changed the title Plane-EE: Add Horizontal Pod Autoscaler (HPA) to Plane Enterprise Deployments [INFRA-215] Plane-EE: Add Horizontal Pod Autoscaler (HPA) to Plane Enterprise Deployments Jul 21, 2025
@makeplane

makeplane bot commented Jul 21, 2025

Pull Request Linked with Plane Work Items

Comment Automatically Generated by Plane

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
charts/plane-enterprise/templates/workloads/silo.deployment.yaml (1)

92-128: Same DRY concern as beat-worker

The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/web.deployment.yaml (1)

63-99: Same DRY concern as beat-worker

The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/live.deployment.yaml (1)

70-105: Same DRY concern as beat-worker

The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/api.deployment.yaml (1)

85-121: Same DRY concern as beat-worker

The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

🧹 Nitpick comments (5)
charts/plane-enterprise/values.yaml (2)

80-84: Consider workload-specific autoscaling parameters.

The autoscaling configuration uses consistent defaults across all services, which is a good starting point. However, consider whether different workloads might benefit from service-specific tuning of these parameters based on their resource usage patterns and scaling characteristics.

Also applies to: 105-109, 120-124, 135-139, 150-154, 162-166, 173-177, 188-192
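To make the idea concrete, a per-service override in values.yaml might look like the sketch below. The key names mirror the ones the new HPA templates read (minReplicas, maxReplicas, targetCPUUtilizationPercentage, targetMemoryUtilizationPercentage); the numbers themselves are purely illustrative, not recommendations:

services:
  api:
    autoscaling:
      minReplicas: 2                        # request-serving tier: keep a warm spare
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70    # scale out earlier than the chart-wide 90% default
  beatworker:
    autoscaling:
      minReplicas: 1                        # scheduler-style workload gains little from scaling
      maxReplicas: 2
      targetCPUUtilizationPercentage: 90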


219-219: Fix trailing whitespace.

Remove the trailing spaces to resolve the YAMLlint warning.

-    
+
charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (1)

53-89: Consolidate the HPA block into a reusable partial to eliminate duplication

This entire HPA fragment is copy-pasted (with only the service name differing) across every workload template. Copy-paste will quickly go out of sync the next time you tweak fields such as behavior, labels, or extra metrics.

Helm already gives you tpl/include—move the generic YAML into, say, _hpa.tpl and call it with a dict containing workload, valuesPath, etc.:

+{{- include "plane-enterprise.renderHPA" (dict
+     "ctx"   .
+     "svc"   "beatworker") }}

That removes ~150 duplicated lines and keeps future changes in one place.
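A minimal sketch of such a partial, assuming the dict keys from the snippet above ("ctx" and "svc") and assuming every workload's Deployment is named <release>-<svc>-wl with its autoscaling block under .Values.services.<svc> (a naming convention that may not hold for every service, e.g. beat-worker vs. beatworker):

{{- /* _hpa.tpl: render one HPA for the workload named .svc, using the caller's root context .ctx */ -}}
{{- define "plane-enterprise.renderHPA" -}}
{{- $svc := .svc -}}
{{- $cfg := (index .ctx.Values.services $svc).autoscaling | default dict -}}
{{- if eq (include "enable.hpa" .ctx | trim) "true" }}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .ctx.Release.Name }}-{{ $svc }}-hpa
  namespace: {{ .ctx.Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .ctx.Release.Name }}-{{ $svc }}-wl
  minReplicas: {{ $cfg.minReplicas | default 1 | int }}
  maxReplicas: {{ $cfg.maxReplicas | default 5 | int }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ $cfg.targetCPUUtilizationPercentage | default 90 | int }}
{{- end }}
{{- end -}}

Each deployment template would then replace its copied HPA stanza with the single include call shown earlier.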

charts/plane-enterprise/templates/workloads/space.deployment.yaml (2)

78-80: Add safe defaults for replica bounds

If the chart consumer omits services.space.autoscaling.{minReplicas,maxReplicas}, Helm will render <no value> and fail the install/upgrade.
Wrap the look-ups with default to keep the template resilient:

-  minReplicas: {{ .Values.services.space.autoscaling.minReplicas }}
-  maxReplicas: {{ .Values.services.space.autoscaling.maxReplicas }}
+  minReplicas: {{ .Values.services.space.autoscaling.minReplicas | default 1 }}
+  maxReplicas: {{ .Values.services.space.autoscaling.maxReplicas | default 5 }}

(Adjust the fallback numbers to match project conventions.)


71-73: Align labels with Kubernetes conventions

app.name is custom and duplicates information available in the standard
app.kubernetes.io/name, app.kubernetes.io/instance, etc. Using the
well-known labels improves compatibility with tooling and selectors:

labels:
  app.kubernetes.io/name: space
  app.kubernetes.io/instance: {{ .Release.Name }}
  app.kubernetes.io/component: hpa
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3701a11 and 0ed0315.

📒 Files selected for processing (11)
  • charts/plane-enterprise/Chart.yaml (1 hunks)
  • charts/plane-enterprise/templates/_helpers.tpl (1 hunks)
  • charts/plane-enterprise/templates/workloads/admin.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/api.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/live.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/silo.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/space.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/web.deployment.yaml (1 hunks)
  • charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1 hunks)
  • charts/plane-enterprise/values.yaml (7 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/plane-enterprise/values.yaml

[error] 219-219: trailing spaces

(trailing-spaces)

🔇 Additional comments (4)
charts/plane-enterprise/templates/_helpers.tpl (1)

9-16: LGTM! Well-implemented metrics-server detection.

The enable.hpa helper function correctly uses the Kubernetes lookup API to detect the presence of the metrics-server ClusterRole, which is essential for HPA functionality. This approach ensures HPAs are only created when the required infrastructure is available.
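For readers unfamiliar with the pattern, a lookup-based helper of this kind is usually only a few lines, along these lines (a sketch only; the actual definition in _helpers.tpl may differ, and the ClusterRole name follows the system:metrics-server check shown in the walkthrough's sequence diagram):

{{- define "enable.hpa" -}}
{{- /* ClusterRole is cluster-scoped, so the namespace argument to lookup is "" */ -}}
{{- if lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "system:metrics-server" -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}

One caveat worth keeping in mind: lookup returns an empty result during a plain helm template or client-side dry-run render, so the HPAs only materialise when the chart is installed or upgraded against a live cluster.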

charts/plane-enterprise/Chart.yaml (1)

8-8: Appropriate version bump for new functionality.

The minor version increment from 1.2.7 to 1.3.0 correctly reflects the addition of HPA functionality as a new, backwards-compatible feature.

charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)

54-89: Well-implemented HPA resource.

The HPA configuration is correctly structured with:

  • Proper conditional rendering based on metrics-server availability
  • Current autoscaling/v2 API version usage
  • Correct target reference to the worker deployment
  • Flexible metrics configuration supporting both CPU and memory utilization
  • Values properly sourced from Helm configuration
charts/plane-enterprise/templates/workloads/admin.deployment.yaml (1)

64-99: Consistent and correct HPA implementation.

The admin HPA resource follows the same well-structured pattern as other workloads, with proper conditional rendering, correct target references, and flexible metrics configuration. The implementation maintains consistency across all service HPAs.

Comment on lines +63 to +65
---
{{- if eq (include "enable.hpa" . | trim) "true" }}


🛠️ Refactor suggestion

Avoid emitting an empty YAML document when HPA is disabled

The --- delimiter sits outside the {{- if ... }} guard.
When enable.hpa resolves to false, Helm will still render a standalone ---, producing an empty document that some linters (kube-lint, kube-score, etc.) flag as a warning and that can break kubectl apply -k pipelines expecting non-empty manifests.

----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/space.deployment.yaml around
lines 63 to 65, the YAML document start delimiter `---` is outside the
conditional block that checks if HPA is enabled, causing an empty document to be
emitted when HPA is disabled. Move the `---` delimiter inside the `{{- if eq
(include "enable.hpa" . | trim) "true" }}` block to ensure it only appears when
HPA is enabled, preventing empty YAML documents and avoiding linter warnings or
pipeline errors.

…, maxReplicas, and averageUtilization for autoscaling metrics across multiple services.
…ues.yaml for multiple services. Update CPU and memory requests, limits, and autoscaling configurations to improve performance and resource management.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🔭 Outside diff range comments (1)
charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)

21-23: Wrong values key referenced for worker image & pullPolicy

The worker container pulls its image and pullPolicy from .Values.services.api.*, not .Values.services.worker.*. This looks like a copy-paste slip and prevents per-worker overrides.

-        imagePullPolicy: {{ .Values.services.api.pullPolicy | default "Always" }}
-        image: {{ .Values.services.api.image | default "artifacts.plane.so/makeplane/backend-commercial" }}:{{ .Values.planeVersion }}
+        imagePullPolicy: {{ .Values.services.worker.pullPolicy | default "Always" }}
+        image: {{ .Values.services.worker.image | default "artifacts.plane.so/makeplane/backend-commercial" }}:{{ .Values.planeVersion }}
🧹 Nitpick comments (1)
charts/plane-enterprise/questions.yml (1)

97-128: Default requests/limits & autoscaling targets are duplicated across every service

Every stanza repeats the same six keys (cpuRequest, memoryRequest, cpuLimit, memoryLimit, autoscaling.*).
This is hard to maintain and makes accidental drift inevitable.

Consider:

  1. Defining a YAML anchor / alias for the common block (sketched after this comment), or
  2. Moving the defaults to values.yaml and referencing them here only when you truly need to expose the knob in Rancher UI.

This reduces ~300 duplicated lines and keeps all services consistent in the future.
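
As a sketch of option 1, shared question attributes could be anchored once and merged into each entry. Whether Rancher's questions.yml parser honours YAML anchors and merge keys (and tolerates the extra top-level key that holds the anchor) would need to be verified, and the variable paths shown are only examples:

.autoscaling_question_defaults: &autoscaling_question_defaults
  type: int
  required: false
  group: "Autoscaling"

questions:
  - <<: *autoscaling_question_defaults
    variable: services.web.autoscaling.minReplicas
    default: 1
    label: "Web: minimum replicas"
  - <<: *autoscaling_question_defaults
    variable: services.web.autoscaling.maxReplicas
    default: 5
    label: "Web: maximum replicas"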

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b0d8c75 and 8b0e97e.

📒 Files selected for processing (12)
  • charts/plane-enterprise/questions.yml (8 hunks)
  • charts/plane-enterprise/templates/workloads/admin.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/api.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/email.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/live.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/monitor.stateful.yaml (0 hunks)
  • charts/plane-enterprise/templates/workloads/silo.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/space.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/web.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/worker.deployment.yaml (2 hunks)
  • charts/plane-enterprise/values.yaml (2 hunks)
💤 Files with no reviewable changes (1)
  • charts/plane-enterprise/templates/workloads/monitor.stateful.yaml
🚧 Files skipped from review as they are similar to previous changes (4)
  • charts/plane-enterprise/templates/workloads/live.deployment.yaml
  • charts/plane-enterprise/values.yaml
  • charts/plane-enterprise/templates/workloads/silo.deployment.yaml
  • charts/plane-enterprise/templates/workloads/space.deployment.yaml

Comment on lines +50 to 55
  cpu: {{ .Values.services.api.cpuRequest | default "1000m" | quote }}
  memory: {{ .Values.services.api.memoryRequest | default "1000Mi" | quote }}
limits:
-  memory: {{ .Values.services.api.memoryLimit | default "1000Mi" | quote }}
-  cpu: {{ .Values.services.api.cpuLimit | default "500m" | quote}}
+  cpu: {{ .Values.services.api.cpuLimit | default "2000m" | quote}}
+  memory: {{ .Values.services.api.memoryLimit | default "2000Mi" | quote }}
command:

🛠️ Refactor suggestion

Default resource bump is very aggressive

Jumping from 50m / 50Mi to 1000m / 1000Mi will make the pods unschedulable on many development clusters (e.g. k3d, kind, 2-core nodes), and the install will silently fail before the HPA even has a chance to act. Consider retaining the previous low defaults and letting users opt in to larger footprints via values.yaml.

🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/api.deployment.yaml between lines
50 and 55, the default CPU and memory requests and limits are set too high
(1000m CPU and 1000Mi memory), which can cause scheduling failures on small
development clusters. Adjust the default values to the previous lower settings
(e.g., 50m CPU and 50Mi memory) to ensure compatibility with lightweight
environments, and allow users to increase these values explicitly via
values.yaml if needed.
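
If the higher defaults are kept, operators on small clusters can still dial them back through values.yaml. The keys below are the ones this template reads (cpuRequest, memoryRequest, cpuLimit, memoryLimit); the figures are just an example of a kind/k3d-friendly footprint, not a recommendation:

services:
  api:
    cpuRequest: 100m        # keeps the api pod schedulable on a 2-core dev node
    memoryRequest: 256Mi
    cpuLimit: 500m
    memoryLimit: 512Mi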

Comment on lines 85 to 121
---
{{- if eq (include "enable.hpa" . | trim) "true" }}

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-api-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-api-wl
  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | quote }}
  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | quote }}
  {{- if or .Values.services.api.autoscaling.targetCPUUtilizationPercentage .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
  {{- if .Values.services.api.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | quote }}
  {{- end }}
  {{- if .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetMemoryUtilizationPercentage | default 90 | quote }}
  {{- end }}
  {{- end }}
{{- end }}

⚠️ Potential issue

minReplicas, maxReplicas, and averageUtilization rendered as strings will break kubectl apply

The | quote filter forces the integers to YAML strings:

minReplicas: "1"

Kubernetes rejects this with
cannot unmarshal string into Go value of type int32.

Same for averageUtilization.

-  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | quote }}
-  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | quote }}
+  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | int }}
+  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | int }}

-          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | quote }}
+          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}

Repeat the | int change for the memory metric block as well.

🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/api.deployment.yaml between lines
85 and 121, the minReplicas, maxReplicas, and averageUtilization fields are
incorrectly rendered as strings due to the use of the | quote filter, which
causes kubectl apply to fail. Remove the | quote filter from these fields and
instead use the | int filter to ensure they are rendered as integers. Apply this
fix consistently to both the CPU and memory averageUtilization values.

…uests, limits, and autoscaling configurations for various services. Update default values for autoscaling parameters to improve deployment efficiency.
…ault values for minReplicas, maxReplicas, and averageUtilization settings across multiple services, enhancing clarity and consistency in autoscaling configurations.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
charts/plane-enterprise/templates/workloads/api.deployment.yaml (2)

50-55: Aggressive resource bump previously flagged, still present
Same concern as earlier review: defaults of 1000 m / 1000 Mi requests will break light clusters; please reconsider.


100-121: Ensure HPA numeric fields are rendered as integers

minReplicas, maxReplicas, and both averageUtilization fields lack an explicit | int cast; any string values in values.yaml will surface as invalid YAML for the autoscaling API.

-  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 }}
-  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 }}
+  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | int }}
+  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | int }}
...
-          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}

Apply the same to the memory metric section.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0356f6c and 333c897.

📒 Files selected for processing (9)
  • charts/plane-enterprise/templates/workloads/admin.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/api.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/email.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/live.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/silo.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/space.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/web.deployment.yaml (2 hunks)
  • charts/plane-enterprise/templates/workloads/worker.deployment.yaml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • charts/plane-enterprise/templates/workloads/email.deployment.yaml
  • charts/plane-enterprise/templates/workloads/web.deployment.yaml
  • charts/plane-enterprise/templates/workloads/worker.deployment.yaml
  • charts/plane-enterprise/templates/workloads/silo.deployment.yaml
  • charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml
  • charts/plane-enterprise/templates/workloads/space.deployment.yaml
  • charts/plane-enterprise/templates/workloads/admin.deployment.yaml

Comment on lines +50 to 55
  cpu: {{ .Values.services.live.cpuRequest | default "500m" | quote }}
  memory: {{ .Values.services.live.memoryRequest | default "1000Mi" | quote }}
limits:
-  memory: {{ .Values.services.live.memoryLimit | default "1000Mi" | quote }}
-  cpu: {{ .Values.services.live.cpuLimit | default "500m" | quote}}
+  cpu: {{ .Values.services.live.cpuLimit | default "1000m" | quote}}
+  memory: {{ .Values.services.live.memoryLimit | default "2000Mi" | quote }}
command:

🛠️ Refactor suggestion

Default requests/limits multiplied by 10×–20× risk unschedulable pods on dev clusters

Jumping from 50 m / 50 Mi to 500 m / 1000 Mi requests (and 1 CPU / 2 Gi limits) makes the live pod impossible to schedule on single-vCPU KIND/k3d nodes and many default CI runners. Consider keeping the previous conservative defaults and letting operators override via values.yaml, or at least document the implication very clearly.

🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/live.deployment.yaml around lines
50 to 55, the default CPU and memory requests and limits are set too high,
causing pods to be unschedulable on small dev clusters. Reduce the default
values for cpuRequest and memoryRequest to more conservative levels similar to
the previous defaults (e.g., 50m CPU and 50Mi memory) and update cpuLimit and
memoryLimit accordingly. Also, ensure these defaults can be overridden via
values.yaml and add clear documentation about the resource implications for dev
environments.

Comment on lines 69 to +105
---
{{- if eq (include "enable.hpa" . | trim) "true" }}

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-live-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-live-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-live-wl
  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
  {{- if or .Values.services.live.autoscaling.targetCPUUtilizationPercentage .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
  {{- if .Values.services.live.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
  {{- end }}
  {{- if .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetMemoryUtilizationPercentage | default 90 }}
  {{- end }}
  {{- end }}
{{- end }}

⚠️ Potential issue

Cast numeric knobs to int to avoid “cannot unmarshal string into int32”

If a user sets these values as strings in values.yaml (common Helm pattern) the rendered YAML will emit quoted scalars, and kubectl apply will fail. Pipe through | int to guarantee correct type:

-  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
-  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
+  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 | int }}
+  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 | int }}

-          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}

Repeat the | int addition for the memory metric block as well.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-live-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-live-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-live-wl
-  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
-  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
+  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 | int }}
+  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 | int }}
  {{- if or .Values.services.live.autoscaling.targetCPUUtilizationPercentage .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
  {{- if .Values.services.live.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
-          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
  {{- end }}
  {{- if .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetMemoryUtilizationPercentage | default 90 }}
  {{- end }}
  {{- end }}
{{- end }}
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/live.deployment.yaml between
lines 69 and 105, the minReplicas, maxReplicas, and averageUtilization values
for CPU and memory metrics may be rendered as strings if set as strings in
values.yaml, causing kubectl apply to fail. Fix this by piping these values
through the int function (| int) to ensure they are rendered as integers. Apply
this change to minReplicas, maxReplicas, targetCPUUtilizationPercentage, and
targetMemoryUtilizationPercentage fields.
