feature:verl #173
Conversation
Pull Request Overview
This PR introduces a new Helm chart for deploying VERL (Volcano Engine Reinforcement Learning) on TKE (Tencent Kubernetes Engine) and updates an existing LLaMA-Factory chart configuration. The changes create a complete Kubernetes deployment structure for VERL with GPU support and shared memory configuration.
- Adds a complete VERL-on-TKE Helm chart with deployment, service, ingress, and autoscaling templates
- Updates LLaMA-Factory chart to use Tencent mirrors and improved resource allocation
- Configures both charts with optimized GPU resource requests and shared memory volumes
Reviewed Changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 2 comments.
Summary per file:
File | Description
---|---
incubator/verl-on-tke/values.yaml | Configuration values for VERL deployment with GPU resources and environment variables
incubator/verl-on-tke/templates/serviceaccount.yaml | ServiceAccount template for VERL deployment
incubator/verl-on-tke/templates/service.yaml | Service template exposing the VERL application
incubator/verl-on-tke/templates/ingress.yaml | Ingress template for external access to VERL
incubator/verl-on-tke/templates/hpa.yaml | Horizontal Pod Autoscaler template for VERL scaling
incubator/verl-on-tke/templates/deployment.yaml | Main deployment template for VERL containers
incubator/verl-on-tke/templates/_helpers.tpl | Helm template helpers for naming and labeling
incubator/verl-on-tke/templates/NOTES.txt | Post-installation notes for accessing the VERL application
incubator/verl-on-tke/Chart.yaml | Chart metadata and version information for VERL
incubator/verl-on-tke/.helmignore | File patterns to ignore during chart packaging
incubator/llama-factory-on-tke/values.yaml | Updated resource limits and PIP index configuration
incubator/llama-factory-on-tke/Chart.yaml | Improved chart description and removed unused sources
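The chart files themselves are not reproduced in this summary, but given the description above (GPU resource requests, a shared-memory volume, and environment variables), the verl-on-tke values.yaml likely resembles the sketch below. All keys and values here are illustrative assumptions, not the actual file contents:

```yaml
# Hypothetical excerpt of incubator/verl-on-tke/values.yaml (assumed structure)
image:
  repository: verlai/verl          # assumed image name; the real chart may pull from a Tencent mirror
  tag: "latest"
  pullPolicy: IfNotPresent

resources:
  limits:
    nvidia.com/gpu: 1              # GPU limit, per the "optimized GPU resource requests" note
  requests:
    nvidia.com/gpu: 1

# Shared memory is commonly provided via a memory-backed emptyDir mounted at /dev/shm
volumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: 16Gi

volumeMounts:
  - name: dshm
    mountPath: /dev/shm

env:
  - name: PIP_INDEX_URL            # Tencent PyPI mirror, analogous to the LLaMA-Factory change
    value: https://mirrors.cloud.tencent.com/pypi/simple
```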
Diff excerpt from the deployment template:

```yaml
{{- end }}
ports:
  - name: http
    containerPort: {{ .Values.service.port }}
```
Copilot AI commented on Oct 8, 2025:
The containerPort is using the service port value, but these should be independent. The container should expose its own port (typically 8080 or 3000), and the service should map to that container port. This creates a tight coupling between service and container configuration.
Suggested change:

```diff
-    containerPort: {{ .Values.service.port }}
+    containerPort: {{ .Values.containerPort }}
```
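One way to act on this suggestion, sketched below under the assumption that a top-level `containerPort` value is added to values.yaml (the actual key name may differ), is to give the container its own port and have the Service target the named container port, so the two can vary independently:

```yaml
# values.yaml (assumed keys)
containerPort: 8080
service:
  type: ClusterIP
  port: 80

# templates/deployment.yaml -- the container exposes its own port
ports:
  - name: http
    containerPort: {{ .Values.containerPort }}
    protocol: TCP

# templates/service.yaml -- targetPort refers to the named container port,
# so changing service.port no longer requires touching the container spec
ports:
  - port: {{ .Values.service.port }}
    targetPort: http
    protocol: TCP
    name: http
```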
Diff excerpt from the deployment template:

```yaml
args:
  - trap 'exit 0' TERM; sleep infinity & wait
```
Copilot AI commented on Oct 8, 2025:
The current configuration runs an indefinite sleep, which keeps the pod alive but prevents the container from serving actual traffic. This setup appears to be intended for development/debugging purposes and may not be suitable for production deployments.
Suggested change:

```diff
-args:
-  - trap 'exit 0' TERM; sleep infinity & wait
+# args:
+#   - <your-app-command> # Specify the actual application command here if needed
```
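If the keep-alive placeholder is intentional for interactive use, a common pattern (a sketch, assuming the chart does not already expose these keys) is to make `command` and `args` overridable from values.yaml, so a real entrypoint can be supplied at install time without editing the template:

```yaml
# values.yaml (assumed keys; these defaults preserve the current keep-alive behavior)
command:
  - /bin/sh
  - -c
args:
  - trap 'exit 0' TERM; sleep infinity & wait

# templates/deployment.yaml (relevant fragment)
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          {{- with .Values.command }}
          command:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.args }}
          args:
            {{- toYaml . | nindent 12 }}
          {{- end }}
```

An operator could then override the defaults with a custom values file (e.g. `helm install verl ./incubator/verl-on-tke -f my-values.yaml`) that sets `command`/`args` to the actual application command, addressing the production concern while keeping the development default.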
lgtm
No description provided.