[INFRA-215] Plane-EE: Add Horizontal Pod Autoscaler (HPA) to Plane Enterprise Deployments #133
base: develop
Conversation
…scaler (HPA) configurations for multiple services in values.yaml and deployment templates. The HPA settings include minimum and maximum replicas along with CPU and memory utilization targets.
""" WalkthroughThis change introduces conditional HorizontalPodAutoscaler (HPA) resource definitions for all major workloads in the plane-enterprise Helm chart. It adds a helper to detect metrics-server presence, updates default values to include autoscaling parameters for each service, and bumps the chart version to 1.3.0. Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Helm as Helm Chart
    participant K8s as Kubernetes API
    participant Cluster as Cluster (with/without metrics-server)
    participant Workload as Workload Deployment
    Helm->>K8s: Render templates
    Helm->>Cluster: Check for ClusterRole "system:metrics-server"
    alt ClusterRole exists
        Helm->>K8s: Deploy HPA for each workload with autoscaling config
        K8s->>Workload: Monitor resource usage and scale as needed
    else ClusterRole missing
        Helm->>K8s: Skip HPA resource creation
    end
```
Estimated code review effort: 3 (30–60 minutes)
Pull Request Linked with Plane Work Items (comment automatically generated by Plane)
Actionable comments posted: 1
♻️ Duplicate comments (4)
charts/plane-enterprise/templates/workloads/silo.deployment.yaml (1)
92-128: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/web.deployment.yaml (1)

63-99: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/live.deployment.yaml (1)

70-105: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.

charts/plane-enterprise/templates/workloads/api.deployment.yaml (1)

85-121: Same DRY concern as beat-worker. The HPA stanza here is identical to the one reviewed in beat-worker.deployment.yaml; please see that comment for the suggested refactor.
🧹 Nitpick comments (5)
charts/plane-enterprise/values.yaml (2)
80-84: Consider workload-specific autoscaling parameters. The autoscaling configuration uses consistent defaults across all services, which is a good starting point. However, consider whether different workloads might benefit from service-specific tuning of these parameters based on their resource usage patterns and scaling characteristics.
Also applies to: 105-109, 120-124, 135-139, 150-154, 162-166, 173-177, 188-192
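For instance, a values.yaml override along these lines would tune the API and worker services independently; the parameter names come from the templates reviewed here, while the numbers are purely illustrative assumptions, not recommendations:

```yaml
services:
  api:                      # user-facing: scale out earlier on CPU
    autoscaling:
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70
  worker:                   # background jobs: tolerate higher memory pressure
    autoscaling:
      minReplicas: 1
      maxReplicas: 5
      targetMemoryUtilizationPercentage: 80
```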
219-219: Fix trailing whitespace. Remove the trailing spaces to resolve the YAMLlint warning.

charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (1)
53-89: Consolidate the HPA block into a reusable partial to eliminate duplication. This entire HPA fragment is copy-pasted (with only the service name differing) across every workload template. Copy-paste will quickly go out of sync the next time you tweak fields such as behavior, labels, or extra metrics. Helm already gives you tpl/include: move the generic YAML into, say, _hpa.tpl and call it with a dict containing workload, valuesPath, etc.:

```diff
+{{- include "plane-enterprise.renderHPA" (dict
+      "ctx" .
+      "svc" "beatworker") }}
```

That removes ~150 duplicated lines and keeps future changes in one place.
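One possible shape for that partial, shown only as a sketch: the template name and the ctx/svc dict keys are taken from the suggestion above, while everything else (values layout, the -wl target naming, the | int casts) is an assumption that would need to be aligned with the chart's actual conventions.

```yaml
{{- /* _hpa.tpl (hypothetical sketch, not the chart's actual code) */ -}}
{{- define "plane-enterprise.renderHPA" -}}
{{- $ctx := .ctx -}}
{{- $svc := .svc -}}
{{- $cfg := (index $ctx.Values.services $svc).autoscaling | default dict -}}
{{- if eq (include "enable.hpa" $ctx | trim) "true" }}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ $ctx.Release.Name }}-{{ $svc }}-hpa
  namespace: {{ $ctx.Release.Namespace }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ $ctx.Release.Name }}-{{ $svc }}-wl   # assumes the -wl naming used by the workloads
  minReplicas: {{ $cfg.minReplicas | default 1 | int }}
  maxReplicas: {{ $cfg.maxReplicas | default 5 | int }}
  {{- if or $cfg.targetCPUUtilizationPercentage $cfg.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if $cfg.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ $cfg.targetCPUUtilizationPercentage | int }}
    {{- end }}
    {{- if $cfg.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ $cfg.targetMemoryUtilizationPercentage | int }}
    {{- end }}
  {{- end }}
{{- end }}
{{- end -}}
```

Each workload template would then shrink to the single include call shown in the diff above.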
charts/plane-enterprise/templates/workloads/space.deployment.yaml (2)
78-80: Add safe defaults for replica bounds. If the chart consumer omits services.space.autoscaling.{minReplicas,maxReplicas}, Helm will render <no value> and fail the install/upgrade. Wrap the look-ups with default to keep the template resilient:

```diff
- minReplicas: {{ .Values.services.space.autoscaling.minReplicas }}
- maxReplicas: {{ .Values.services.space.autoscaling.maxReplicas }}
+ minReplicas: {{ .Values.services.space.autoscaling.minReplicas | default 1 }}
+ maxReplicas: {{ .Values.services.space.autoscaling.maxReplicas | default 5 }}
```

(Adjust the fallback numbers to match project conventions.)
71-73: Align labels with Kubernetes conventions. app.name is custom and duplicates information available in the standard app.kubernetes.io/name, app.kubernetes.io/instance, etc. Using the well-known labels improves compatibility with tooling and selectors:

```yaml
labels:
  app.kubernetes.io/name: space
  app.kubernetes.io/instance: {{ .Release.Name }}
  app.kubernetes.io/component: hpa
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- charts/plane-enterprise/Chart.yaml (1 hunks)
- charts/plane-enterprise/templates/_helpers.tpl (1 hunks)
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/api.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/space.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/web.deployment.yaml (1 hunks)
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1 hunks)
- charts/plane-enterprise/values.yaml (7 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
charts/plane-enterprise/values.yaml
[error] 219-219: trailing spaces
(trailing-spaces)
🔇 Additional comments (4)
charts/plane-enterprise/templates/_helpers.tpl (1)
9-16: LGTM! Well-implemented metrics-server detection. The enable.hpa helper function correctly uses the Kubernetes lookup API to detect the presence of the metrics-server ClusterRole, which is essential for HPA functionality. This approach ensures HPAs are only created when the required infrastructure is available.
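For readers unfamiliar with the pattern, a minimal sketch of how such a lookup-based helper can be written; this is not the chart's actual code, only an illustration that assumes the ClusterRole name mentioned in the walkthrough and the standard behavior of lookup returning an empty result during client-side helm template runs.

```yaml
{{- /* Hypothetical _helpers.tpl fragment, for illustration only */ -}}
{{- define "enable.hpa" -}}
{{- /* lookup returns an empty object when the ClusterRole is absent or when rendering offline */ -}}
{{- if lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "system:metrics-server" -}}
true
{{- else -}}
false
{{- end -}}
{{- end -}}
```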
charts/plane-enterprise/Chart.yaml (1)

8-8: Appropriate version bump for new functionality. The minor version increment from 1.2.7 to 1.3.0 correctly reflects the addition of HPA functionality as a new, backwards-compatible feature.
charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)
54-89: Well-implemented HPA resource. The HPA configuration is correctly structured with:
- Proper conditional rendering based on metrics-server availability
- Current autoscaling/v2 API version usage
- Correct target reference to the worker deployment
- Flexible metrics configuration supporting both CPU and memory utilization
- Values properly sourced from Helm configuration
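As a point of reference, the rendered worker HPA might look roughly like this with the template's fallback values; the release/namespace names and the -worker-wl target are assumptions inferred from the api and live templates shown elsewhere in this review, not taken from the worker file itself.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-plane-worker-hpa          # assumed <release>-<service>-hpa pattern
  namespace: plane-ns                # assumed namespace
  labels:
    app.name: plane-ns-my-plane-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-plane-worker-wl         # assumed target name
  minReplicas: 1                     # template fallback
  maxReplicas: 5                     # template fallback
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90     # rendered only if set in values.yaml; 90 is the template fallback
```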
charts/plane-enterprise/templates/workloads/admin.deployment.yaml (1)
64-99: Consistent and correct HPA implementation. The admin HPA resource follows the same well-structured pattern as other workloads, with proper conditional rendering, correct target references, and flexible metrics configuration. The implementation maintains consistency across all service HPAs.
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
```
🛠️ Refactor suggestion
Avoid emitting an empty YAML document when HPA is disabled
The --- delimiter sits outside the {{- if ... }} guard.
When enable.hpa resolves to false, Helm will still render a standalone ---, producing an empty document that some linters (kube-lint, kube-score, etc.) flag as a warning and that can break kubectl apply -k pipelines expecting non-empty manifests.
```diff
----
-{{- if eq (include "enable.hpa" . | trim) "true" }}
+{{- if eq (include "enable.hpa" . | trim) "true" }}
+---
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/space.deployment.yaml around
lines 63 to 65, the YAML document start delimiter `---` is outside the
conditional block that checks if HPA is enabled, causing an empty document to be
emitted when HPA is disabled. Move the `---` delimiter inside the `{{- if eq
(include "enable.hpa" . | trim) "true" }}` block to ensure it only appears when
HPA is enabled, preventing empty YAML documents and avoiding linter warnings or
pipeline errors.
…, maxReplicas, and averageUtilization for autoscaling metrics across multiple services.
…ues.yaml for multiple services. Update CPU and memory requests, limits, and autoscaling configurations to improve performance and resource management.
Actionable comments posted: 7
🔭 Outside diff range comments (1)
charts/plane-enterprise/templates/workloads/worker.deployment.yaml (1)
21-23: Wrong values key referenced for worker image & pullPolicy. The worker container pulls its image and pullPolicy from .Values.services.api.*, not .Values.services.worker.*. This looks like a copy-paste slip and prevents per-worker overrides.

```diff
- imagePullPolicy: {{ .Values.services.api.pullPolicy | default "Always" }}
- image: {{ .Values.services.api.image | default "artifacts.plane.so/makeplane/backend-commercial" }}:{{ .Values.planeVersion }}
+ imagePullPolicy: {{ .Values.services.worker.pullPolicy | default "Always" }}
+ image: {{ .Values.services.worker.image | default "artifacts.plane.so/makeplane/backend-commercial" }}:{{ .Values.planeVersion }}
```
🧹 Nitpick comments (1)
charts/plane-enterprise/questions.yml (1)
97-128: Default requests/limits & autoscaling targets are duplicated across every service. Every stanza repeats the same six keys (cpuRequest, memoryRequest, cpuLimit, memoryLimit, autoscaling.*). This is hard to maintain and makes accidental drift inevitable. Consider:
- Defining a YAML anchor / alias for the common block, or
- Moving the defaults to values.yaml and referencing them here only when you truly need to expose the knob in Rancher UI.

This reduces ~300 duplicated lines and keeps all services consistent in the future.
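A minimal sketch of the shared-defaults idea, assuming plain YAML anchors/merge keys are acceptable to the tooling that consumes these files (worth verifying for both the Rancher questions parser and Helm); all key names and numbers below are illustrative, not the chart's actual values.

```yaml
_resourceDefaults: &resourceDefaults    # anchor; the underscore key is illustrative
  cpuRequest: "50m"
  memoryRequest: "50Mi"
  cpuLimit: "500m"
  memoryLimit: "512Mi"

services:
  api:
    <<: *resourceDefaults               # merge the shared block
    autoscaling:
      minReplicas: 1
      maxReplicas: 5
  worker:
    <<: *resourceDefaults
```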
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- charts/plane-enterprise/questions.yml (8 hunks)
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/api.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/email.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/monitor.stateful.yaml (0 hunks)
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/space.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/web.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml (2 hunks)
- charts/plane-enterprise/values.yaml (2 hunks)
💤 Files with no reviewable changes (1)
- charts/plane-enterprise/templates/workloads/monitor.stateful.yaml
🚧 Files skipped from review as they are similar to previous changes (4)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml
- charts/plane-enterprise/values.yaml
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml
- charts/plane-enterprise/templates/workloads/space.deployment.yaml
```diff
       cpu: {{ .Values.services.api.cpuRequest | default "1000m" | quote }}
       memory: {{ .Values.services.api.memoryRequest | default "1000Mi" | quote }}
     limits:
-      memory: {{ .Values.services.api.memoryLimit | default "1000Mi" | quote }}
-      cpu: {{ .Values.services.api.cpuLimit | default "500m" | quote}}
+      cpu: {{ .Values.services.api.cpuLimit | default "2000m" | quote}}
+      memory: {{ .Values.services.api.memoryLimit | default "2000Mi" | quote }}
     command:
```
🛠️ Refactor suggestion
Default resource bump is very aggressive
Jumping from 50 m / 50 Mi to 1000 m / 1000 Mi will make the chart unschedulable on many development clusters (e.g. k3d, kind, 2-core nodes) and will silently fail before the HPA even has a chance to scale down. Consider retaining the previous low defaults and letting users opt-in to larger footprints via values.yaml.
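If the low defaults are restored, operators who do want the larger footprint can opt in with an override file along these lines; the keys match the template above, while the file name and numbers are illustrative:

```yaml
# values-production.yaml (hypothetical override file)
services:
  api:
    cpuRequest: "1000m"
    memoryRequest: "1000Mi"
    cpuLimit: "2000m"
    memoryLimit: "2000Mi"
```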
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/api.deployment.yaml between lines
50 and 55, the default CPU and memory requests and limits are set too high
(1000m CPU and 1000Mi memory), which can cause scheduling failures on small
development clusters. Adjust the default values to the previous lower settings
(e.g., 50m CPU and 50Mi memory) to ensure compatibility with lightweight
environments, and allow users to increase these values explicitly via
values.yaml if needed.
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-api-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-api-wl
  minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | quote }}
  maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | quote }}
  {{- if or .Values.services.api.autoscaling.targetCPUUtilizationPercentage .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if .Values.services.api.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | quote }}
    {{- end }}
    {{- if .Values.services.api.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.api.autoscaling.targetMemoryUtilizationPercentage | default 90 | quote }}
    {{- end }}
  {{- end }}
{{- end }}
```
minReplicas, maxReplicas, and averageUtilization rendered as strings will break kubectl apply
The | quote filter forces the integers to YAML strings:

```yaml
minReplicas: "1"
```

Kubernetes rejects this with cannot unmarshal string into Go value of type int32. Same for averageUtilization.
```diff
- minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | quote }}
- maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | quote }}
+ minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | int }}
+ maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | int }}
- averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | quote }}
+ averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
```

Repeat the | int change for the memory metric block as well.
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/api.deployment.yaml between lines
85 and 121, the minReplicas, maxReplicas, and averageUtilization fields are
incorrectly rendered as strings due to the use of the | quote filter, which
causes kubectl apply to fail. Remove the | quote filter from these fields and
instead use the | int filter to ensure they are rendered as integers. Apply this
fix consistently to both the CPU and memory averageUtilization values.
…uests, limits, and autoscaling configurations for various services. Update default values for autoscaling parameters to improve deployment efficiency.
…ault values for minReplicas, maxReplicas, and averageUtilization settings across multiple services, enhancing clarity and consistency in autoscaling configurations.
Actionable comments posted: 2
♻️ Duplicate comments (2)
charts/plane-enterprise/templates/workloads/api.deployment.yaml (2)
50-55: Aggressive resource bump previously flagged, still present
Same concern as earlier review: defaults of 1000 m / 1000 Mi requests will break light clusters; please reconsider.
100-121: Ensure HPA numeric fields are rendered as integers
minReplicas, maxReplicas, and both averageUtilization fields lack an explicit | int cast; any string values in values.yaml will surface as invalid YAML for the autoscaling API.

```diff
- minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 }}
- maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 }}
+ minReplicas: {{ .Values.services.api.autoscaling.minReplicas | default 1 | int }}
+ maxReplicas: {{ .Values.services.api.autoscaling.maxReplicas | default 5 | int }}
  ...
- averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+ averageUtilization: {{ .Values.services.api.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
```

Apply the same to the memory metric section.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/api.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/email.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/live.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/space.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/web.deployment.yaml (2 hunks)
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- charts/plane-enterprise/templates/workloads/email.deployment.yaml
- charts/plane-enterprise/templates/workloads/web.deployment.yaml
- charts/plane-enterprise/templates/workloads/worker.deployment.yaml
- charts/plane-enterprise/templates/workloads/silo.deployment.yaml
- charts/plane-enterprise/templates/workloads/beat-worker.deployment.yaml
- charts/plane-enterprise/templates/workloads/space.deployment.yaml
- charts/plane-enterprise/templates/workloads/admin.deployment.yaml
```diff
       cpu: {{ .Values.services.live.cpuRequest | default "500m" | quote }}
       memory: {{ .Values.services.live.memoryRequest | default "1000Mi" | quote }}
     limits:
-      memory: {{ .Values.services.live.memoryLimit | default "1000Mi" | quote }}
-      cpu: {{ .Values.services.live.cpuLimit | default "500m" | quote}}
+      cpu: {{ .Values.services.live.cpuLimit | default "1000m" | quote}}
+      memory: {{ .Values.services.live.memoryLimit | default "2000Mi" | quote }}
     command:
```
🛠️ Refactor suggestion
Default requests/limits multiplied by 10×–20× risk unschedulable pods on dev clusters
Jumping from 50 m / 50 Mi to 500 m / 1000 Mi requests (and 1 CPU / 2 Gi limits) makes the live pod impossible to schedule on single-vCPU KIND/k3d nodes and many default CI runners. Consider keeping the previous conservative defaults and letting operators override via values.yaml, or at least document the implication very clearly.
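Conversely, if the larger defaults stay, a small-cluster override along these lines (file name and numbers illustrative) keeps the live pod schedulable on single-vCPU nodes:

```yaml
# values-dev.yaml (hypothetical override file)
services:
  live:
    cpuRequest: "50m"
    memoryRequest: "50Mi"
    cpuLimit: "250m"
    memoryLimit: "256Mi"
```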
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/live.deployment.yaml around lines
50 to 55, the default CPU and memory requests and limits are set too high,
causing pods to be unschedulable on small dev clusters. Reduce the default
values for cpuRequest and memoryRequest to more conservative levels similar to
the previous defaults (e.g., 50m CPU and 50Mi memory) and update cpuLimit and
memoryLimit accordingly. Also, ensure these defaults can be overridden via
values.yaml and add clear documentation about the resource implications for dev
environments.
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-live-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-live-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-live-wl
  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
  {{- if or .Values.services.live.autoscaling.targetCPUUtilizationPercentage .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if .Values.services.live.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
    {{- end }}
    {{- if .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetMemoryUtilizationPercentage | default 90 }}
    {{- end }}
  {{- end }}
{{- end }}
```
Cast numeric knobs to int to avoid “cannot unmarshal string into int32”
If a user sets these values as strings in values.yaml (common Helm pattern) the rendered YAML will emit quoted scalars, and kubectl apply will fail. Pipe through | int to guarantee correct type:
```diff
- minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 }}
- maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 }}
+ minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 | int }}
+ maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 | int }}
- averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 }}
+ averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
```

Repeat the | int addition for the memory metric block as well.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
---
{{- if eq (include "enable.hpa" . | trim) "true" }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-live-hpa
  namespace: {{ .Release.Namespace }}
  labels:
    app.name: {{ .Release.Namespace }}-{{ .Release.Name }}-live-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .Release.Name }}-live-wl
  minReplicas: {{ .Values.services.live.autoscaling.minReplicas | default 1 | int }}
  maxReplicas: {{ .Values.services.live.autoscaling.maxReplicas | default 5 | int }}
  {{- if or .Values.services.live.autoscaling.targetCPUUtilizationPercentage .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
  metrics:
    {{- if .Values.services.live.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetCPUUtilizationPercentage | default 90 | int }}
    {{- end }}
    {{- if .Values.services.live.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.services.live.autoscaling.targetMemoryUtilizationPercentage | default 90 }}
    {{- end }}
  {{- end }}
{{- end }}
```
🤖 Prompt for AI Agents
In charts/plane-enterprise/templates/workloads/live.deployment.yaml between
lines 69 and 105, the minReplicas, maxReplicas, and averageUtilization values
for CPU and memory metrics may be rendered as strings if set as strings in
values.yaml, causing kubectl apply to fail. Fix this by piping these values
through the int function (| int) to ensure they are rendered as integers. Apply
this change to minReplicas, maxReplicas, targetCPUUtilizationPercentage, and
targetMemoryUtilizationPercentage fields.