diff --git a/src/lib/static/blogcontent/blogcontent.json b/src/lib/static/blogcontent/blogcontent.json
index 9969fb8..49317ff 100644
--- a/src/lib/static/blogcontent/blogcontent.json
+++ b/src/lib/static/blogcontent/blogcontent.json
@@ -4,7 +4,7 @@
"description": null,
"date": "2025-05-23",
"path": "articles/introduction",
- "body": "We started building **Lifecycle** at GoodRx in 2019 because managing our lower environments like staging, development, QA had become a daily headache. As our architecture shifted from a monolith to microservices, our internal channels were flooded with messages like \"Is anyone using staging?\" \"Staging is broken again,\" and \"Who just overwrote my changes?\" Waiting in line for hours (sometimes days) to test code in a real-world-like environment was the norm.\n\nWe simply couldn't scale with our engineering growth. So, as a proof of concept, we spun up **Lifecycle**: a tool that lets you create on-demand, ephemeral environments off of github pull request.\n\nAt first, only a handful of services were onboarded, but our engineers immediately saw the difference, no more static staging servers, no more pipeline gymnastics, and no more accidental overwrites. They wanted Lifecycle wherever they touched code, so we built a simple lifecycle.yaml configuration, replaced our manual database entries, and baked Lifecycle support into every new service template. Before long.\n\nAfter ironing out early scaling kinks, we realized Lifecycle had become more than an internal convenience, it was a game-changer for us.\n\nToday (June 5, 2025), we're thrilled to open-source five years of collective effort under the Apache 2.0 license. This project represents countless late-night brainstorming sessions, pull requests, and \"aha\" moments, and we can't wait to see how you'll make it your own: adding integrations, optimizing performance, or finding novel workflows we never imagined.\n\nBy sharing Lifecycle, we hope to help teams stuck in the same limited environment limbo we once were and build a community of passionate likeminded developers who'll shape the the future of Lifecycle.\n\nWe look forward to learning from you, growing together, and making shipping high-quality software faster and more enjoyable for everyone!\n\nJoin our Discord server [here](https://discord.gg/TEtKgCs8T8) to connect!!"
+ "body": "We started building **Lifecycle** at GoodRx in 2019 because managing our lower environments like staging, development, QA had become a daily headache. As our architecture shifted from a monolith to microservices, our internal channels were flooded with messages like \"Is anyone using staging?\" \"Staging is broken again,\" and \"Who just overwrote my changes?\" Waiting in line for hours (sometimes days) to test code in a real-world-like environment was the norm.\n\nWe simply couldn't scale with our engineering growth. So, as a proof of concept, we spun up **Lifecycle**: a tool that lets you create on-demand, ephemeral environments off of github pull request.\n\nAt first, only a handful of services were onboarded, but our engineers immediately saw the difference, no more static staging servers, no more pipeline gymnastics, and no more accidental overwrites. They wanted Lifecycle wherever they touched code, so we built a simple lifecycle.yaml configuration, replaced our manual database entries, and baked Lifecycle support into every new service template.\n\nAfter ironing out early scaling kinks, we realized Lifecycle had become more than an internal convenience, it was a game-changer for us.\n\nToday (June 5, 2025), we're thrilled to open-source five years of collective effort under the Apache 2.0 license. This project represents countless late-night brainstorming sessions, pull requests, and \"aha\" moments, and we can't wait to see how you'll make it your own: adding integrations, optimizing performance, or finding novel workflows we never imagined.\n\nBy sharing Lifecycle, we hope to help teams stuck in the same limited environment limbo we once were and build a community of passionate likeminded developers who'll shape the the future of Lifecycle.\n\nWe look forward to learning from you, growing together, and making shipping high-quality software faster and more enjoyable for everyone!\n\nJoin our Discord server [here](https://discord.gg/TEtKgCs8T8) to connect!!"
},
{
"title": "What is Lifecycle?",
@@ -27,6 +27,13 @@
"path": "docs/features/webhooks",
"body": "Lifecycle can invoke **third-party services** when a build state changes. Currently, only **Codefresh pipeline triggers** are supported. Webhooks allow users to automate external processes such as running tests or performing cleanup tasks based on environment build states.\n\n## Common Use Cases\n\n- When a build status is `deployed`, trigger **end-to-end tests**.\n- When a build status is `error`, trigger **infrastructure cleanup** or alert the team.\n\n## Webhook Configuration\n\nWebhooks are defined in the `lifecycle.yaml` under the `environment.webhooks` section.\n\nBelow is an example configuration for triggering end-to-end tests when the `deployed` state is reached.\n\n### **Examples**\n\n```yaml\n# Trigger End-to-End Tests on Deployment\nenvironment:\n # ...\n defaultServices:\n - name: \"frontend\"\n optionalServices:\n - name: \"backend\"\n repository: \"lifecycle/backend\"\n branch: \"main\"\n webhooks:\n - state: deployed\n type: codefresh\n name: \"End to End Tests\"\n pipelineId: 64598362453cc650c0c9cd4d\n trigger: tests\n env:\n branch: \"{{frontend_branchName}}\"\n TEST_URL: \"https://{{frontend_publicUrl}}\"\n # ...\n```\n\n- **`state: deployed`** → Triggers the webhook when the build reaches the `deployed` state.\n- **`type: codefresh`** → Specifies that this webhook triggers a **Codefresh pipeline**.\n- **`name`** → A human-readable name for the webhook.\n- **`pipelineId`** → The unique Codefresh pipeline ID.\n- **`trigger`** → Codefresh pipeline's trigger to execute.\n- **`env`** → Passes relevant environment variables (e.g., `branch` and `TEST_URL`).\n\n---\n\n```yaml\n# Trigger Cleanup on Build Error\nenvironment:\n # ...\n webhooks:\n - state: error\n type: codefresh\n name: \"Cleanup Failed Deployment\"\n pipelineId: 74283905723ff650c0d9ab7e\n trigger: cleanup\n env:\n branch: \"{{frontend_branchName}}\"\n CLEANUP_TARGET: \"frontend\"\n # ...\n```\n\n- **`state: error`** → Triggers the webhook when the build fails.\n- **`type: codefresh`** → Invokes a Codefresh cleanup pipeline.\n- **`trigger: cleanup`** → Codefresh pipeline's trigger to execute.\n- **`env`** → Includes necessary variables, such as `branch` and `CLEANUP_TARGET`.\n\n## Limitations\n\n- **Currently, Lifecycle only supports Codefresh pipeline triggers.**\n- In need of support for other webhook types? please **submit a pull request or an issue**.\n\nBy leveraging webhooks, teams can **automate workflows, run tests, and clean up failed deployments** seamlessly within Lifecycle."
},
+ {
+ "title": "Native Helm Deployment",
+ "description": "Deploy services using Helm directly in Kubernetes without external CI/CD dependencies",
+ "date": "2025-01-29",
+ "path": "docs/features/native-helm-deployment",
+ "body": "This feature is still in alpha and might change with breaking changes.\n\n\n**Native Helm** is an alternative deployment method that runs Helm deployments directly within Kubernetes jobs, eliminating the need for external CI/CD systems. This provides a more self-contained and portable deployment solution.\n\n\n Native Helm deployment is an opt-in feature that can be enabled globally or\n per-service.\n\n\n## Overview\n\nWhen enabled, Native Helm:\n\n- Creates Kubernetes jobs to execute Helm deployments\n- Runs in ephemeral namespaces with proper RBAC\n- Provides real-time deployment logs via WebSocket\n- Handles concurrent deployments automatically\n- Supports all standard Helm chart types\n\n## Quickstart\n\nWant to try native Helm deployment? Here's the fastest way to get started:\n\n```yaml filename=\"lifecycle.yaml\" {5}\nservices:\n - name: my-api\n defaultUUID: \"dev-0\"\n helm:\n deploymentMethod: \"native\" # That's it!\n chart:\n name: \"local\"\n valueFiles:\n - \"./helm/values.yaml\"\n```\n\nThis configuration:\n\n1. Enables native Helm for the `my-api` service\n2. Uses a local Helm chart from your repository\n3. Applies values from `./helm/values.yaml`\n4. Runs deployment as a Kubernetes job\n\n\n To enable native Helm for all services at once, see [Global\n Configuration](#enabling-native-helm).\n\n\n## Configuration\n\n### Enabling Native Helm\n\nThere are two ways to enable native Helm deployment:\n\n\n \n ```yaml {4} filename=\"lifecycle.yaml\" services: - name: my-service helm:\n deploymentMethod: \"native\" # Enable for this service only chart: name:\n my-chart ```\n \n \n ```yaml {3} filename=\"lifecycle.yaml\" helm: nativeHelm: enabled: true #\n Enable for all services ```\n \n\n\n### Configuration Precedence\n\nLifecycle uses a hierarchical configuration system with three levels of precedence:\n\n1. **helmDefaults** - Base defaults for all deployments (database: `global_config` table)\n2. **Chart-specific config** - Per-chart defaults (database: `global_config` table)\n3. **Service YAML config** - Service-specific overrides (highest priority)\n\n\n Service-level configuration always takes precedence over global defaults.\n\n\n### Global Configuration (Database)\n\nGlobal configurations are stored in the `global_config` table in the database. 
Each configuration is stored as a row with:\n\n- **key**: The configuration name (e.g., 'helmDefaults', 'postgresql', 'redis')\n- **config**: JSON object containing the configuration\n\n#### helmDefaults Configuration\n\nStored in database with key `helmDefaults`:\n\n```json\n{\n \"nativeHelm\": {\n \"enabled\": true,\n \"defaultArgs\": \"--wait --timeout 30m\",\n \"defaultHelmVersion\": \"3.12.0\"\n }\n}\n```\n\n**Field Descriptions**:\n\n- `enabled`: When `true`, enables native Helm deployment for all services unless they explicitly set `deploymentMethod: \"ci\"`\n- `defaultArgs`: Arguments automatically appended to every Helm command (appears before service-specific args)\n- `defaultHelmVersion`: The Helm version to use when not specified at the service or chart level\n\n#### Chart-specific Configuration\n\nExample: PostgreSQL configuration stored with key `postgresql`:\n\n```json\n{\n \"version\": \"3.13.0\",\n \"args\": \"--force --timeout 60m0s --wait\",\n \"chart\": {\n \"name\": \"postgresql\",\n \"repoUrl\": \"https://charts.bitnami.com/bitnami\",\n \"version\": \"12.9.0\",\n \"values\": [\"auth.username=postgres_user\", \"auth.database=postgres_db\"]\n }\n}\n```\n\n\n These global configurations are managed by administrators and stored in the\n database. They provide consistent defaults across all environments and can be\n overridden at the service level.\n\n\n## Usage Examples\n\n### Quick Experiment: Deploy Jenkins!\n\nWant to see native Helm in action? Let's deploy everyone's favorite CI/CD tool - Jenkins! This example shows how easy it is to deploy popular applications using native Helm.\n\n```yaml filename=\"lifecycle.yaml\"\nenvironment:\n defaultServices:\n - name: \"my-app\"\n - name: \"jenkins\" # Add Jenkins to your default services\n\nservices:\n - name: \"jenkins\"\n helm:\n deploymentMethod: \"native\"\n repository: \"myorg/apps\"\n branchName: \"main\"\n chart:\n name: \"jenkins\"\n repoUrl: \"https://charts.bitnami.com/bitnami\"\n version: \"13.6.8\"\n values:\n - \"service.type=NodePort\" # Override default LoadBalancer to avoid cloud resource costs\n```\n\n\n 🎉 That's it! With just a few lines of configuration, you'll have Jenkins\n running in your Kubernetes cluster. The `service.type=NodePort` override is\n important - the Bitnami Jenkins chart defaults to `LoadBalancer` which would\n consume cloud resources like an AWS ELB or GCP Load Balancer.\n\n\nTo access your Jenkins instance:\n\n1. Check the deployment status in your PR comment\n2. Click the **Deploy Logs** link to monitor the deployment\n3. 
Once deployed, Jenkins will be available at the internal hostname\n\n\n For more Jenkins configuration options and values, check out the [Bitnami\n Jenkins chart\n documentation](https://github.com/bitnami/charts/tree/main/bitnami/jenkins).\n This same pattern works for any Bitnami chart (PostgreSQL, Redis, MongoDB) or\n any other public Helm chart!\n\n\n### Basic Service Deployment\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: web-api\n helm:\n deploymentMethod: \"native\"\n chart:\n name: web-app\n version: \"1.2.0\"\n```\n\n### PostgreSQL with Overrides\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: database\n helm:\n deploymentMethod: \"native\"\n version: \"3.14.0\" # Override Helm version\n args: \"--atomic\" # Override deployment args\n chart:\n name: postgresql\n values: # Additional values merged with defaults\n - \"persistence.size=20Gi\"\n - \"replicaCount=2\"\n```\n\n### Custom Environment Variables\n\nLifecycle supports flexible environment variable formatting through the `envMapping` configuration. This feature allows you to control how environment variables from your service configuration are passed to your Helm chart.\n\n\n **Why envMapping?** Different Helm charts expect environment variables in\n different formats. Some expect an array of objects with `name` and `value`\n fields (Kubernetes standard), while others expect a simple key-value map. The\n `envMapping` feature lets you adapt to your chart's requirements.\n\n\n#### Default envMapping Configuration\n\nYou can define default `envMapping` configurations in the `global_config` database table. These defaults apply to all services using that chart unless overridden at the service level.\n\n**Example: Setting defaults for your organization's chart**\n\n```json\n// In global_config table, key: \"myorg-web-app\"\n{\n \"chart\": {\n \"name\": \"myorg-web-app\",\n \"repoUrl\": \"https://charts.myorg.com\"\n },\n \"envMapping\": {\n \"app\": {\n \"format\": \"array\",\n \"path\": \"deployment.containers[0].env\"\n }\n }\n}\n```\n\nWith this configuration, any service using the `myorg-web-app` chart will automatically use array format for environment variables:\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: api\n helm:\n deploymentMethod: \"native\"\n chart:\n name: \"myorg-web-app\" # Inherits envMapping from global_config\n docker:\n app:\n env:\n API_KEY: \"secret\"\n # These will be formatted as array automatically\n```\n\n\n Setting `envMapping` in global_config is particularly useful when: - You have\n a standard organizational chart used by many services - You want consistent\n environment variable handling across services - You're migrating multiple\n services and want to reduce configuration duplication\n\n\n\n \n#### Array Format\nBest for charts that expect Kubernetes-style env arrays.\n\n```yaml {7-9} filename=\"lifecycle.yaml\"\nservices:\n - name: api\n helm:\n deploymentMethod: \"native\"\n chart:\n name: local\n envMapping:\n app:\n format: \"array\"\n path: \"env\"\n docker:\n app:\n env:\n DATABASE_URL: \"postgres://localhost:5432/mydb\"\n API_KEY: \"secret-key-123\"\n NODE_ENV: \"production\"\n```\n\n**This produces the following Helm values:**\n\n```bash\n--set env[0].name=DATABASE_URL\n--set env[0].value=postgres://localhost:5432/mydb\n--set env[1].name=API_KEY\n--set env[1].value=secret-key-123\n--set env[2].name=NODE_ENV\n--set env[2].value=production\n```\n\n**Your chart's values.yaml would use it like:**\n\n```yaml\nenv:\n - name: DATABASE_URL\n value: 
postgres://localhost:5432/mydb\n - name: API_KEY\n value: secret-key-123\n - name: NODE_ENV\n value: production\n```\n\n \n \n#### Map Format\nBest for charts that expect a simple key-value object.\n\n```yaml {7-9} filename=\"lifecycle.yaml\"\nservices:\n - name: api\n helm:\n deploymentMethod: \"native\"\n chart:\n name: local\n envMapping:\n app:\n format: \"map\"\n path: \"envVars\"\n docker:\n app:\n env:\n DATABASE_URL: \"postgres://localhost:5432/mydb\"\n API_KEY: \"secret-key-123\"\n NODE_ENV: \"production\"\n```\n\n**This produces the following Helm values:**\n\n```bash\n--set envVars.DATABASE__URL=postgres://localhost:5432/mydb\n--set envVars.API__KEY=secret-key-123\n--set envVars.NODE__ENV=production\n```\n\n\n Note: Underscores in environment variable names are converted to double\n underscores (`__`) in map format to avoid Helm parsing issues.\n\n\n**Your chart's values.yaml would use it like:**\n\n```yaml\nenvVars:\n DATABASE__URL: postgres://localhost:5432/mydb\n API__KEY: secret-key-123\n NODE__ENV: production\n```\n\n \n\n\n#### Complete Example with Multiple Services\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n # Service using array format (common for standard Kubernetes deployments)\n - name: frontend\n helm:\n deploymentMethod: \"native\"\n repository: \"myorg/apps\"\n branchName: \"main\"\n envMapping:\n app:\n format: \"array\"\n path: \"deployment.env\"\n chart:\n name: \"./charts/web-app\"\n docker:\n app:\n dockerfilePath: \"frontend/Dockerfile\"\n env:\n REACT_APP_API_URL: \"https://api.example.com\"\n REACT_APP_VERSION: \"{{build.uuid}}\"\n\n # Service using map format (common for custom charts)\n - name: backend\n helm:\n deploymentMethod: \"native\"\n repository: \"myorg/apps\"\n branchName: \"main\"\n envMapping:\n app:\n format: \"map\"\n path: \"config.environment\"\n chart:\n name: \"./charts/api\"\n docker:\n builder:\n engine: \"buildkit\"\n defaultTag: \"main\"\n app:\n dockerfilePath: \"docker/backend.dockerfile\"\n ports:\n - 3000\n env:\n NODE_ENV: \"production\"\n SERVICE_NAME: \"backend\"\n\n - name: \"mysql-database\"\n helm:\n deploymentMethod: \"native\"\n repository: \"myorg/api-services\"\n branchName: \"main\"\n chart:\n name: \"mysql\" # Using public Helm chart\n version: \"9.14.1\"\n repoUrl: \"https://charts.bitnami.com/bitnami\"\n valueFiles:\n - \"deploy/helm/mysql-values.yaml\"\n```\n\n## Templated Variables\n\nLifecycle supports template variables in Helm values that are resolved at deployment time. 
These variables allow you to reference dynamic values like build UUIDs, docker tags, and internal hostnames.\n\n### Available Variables\n\nTemplate variables use the format `{{{variableName}}}` and are replaced with actual values during deployment:\n\n| Variable | Description | Example Value |\n| ------------------------------------ | ------------------------- | ---------------------------------------- |\n| `{{{serviceName_dockerTag}}}` | Docker tag for a service | `main-abc123` |\n| `{{{serviceName_dockerImage}}}` | Full docker image path | `registry.com/org/repo:main-abc123` |\n| `{{{serviceName_internalHostname}}}` | Internal service hostname | `api-service.env-uuid.svc.cluster.local` |\n| `{{{build.uuid}}}` | Build UUID | `env-12345` |\n| `{{{build.namespace}}}` | Kubernetes namespace | `env-12345` |\n\n### Usage in Values\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: web-api\n helm:\n deploymentMethod: \"native\"\n chart:\n name: \"./charts/app\"\n values:\n - \"image.tag={{{web-api_dockerTag}}}\"\n - \"backend.url=http://{{{backend-service_internalHostname}}}:8080\"\n - \"env.BUILD_ID={{{build.uuid}}}\"\n```\n\n\n**Docker Image Mapping**: When using custom charts, you'll need to map `{{{serviceName_dockerImage}}}` or `{{{serviceName_dockerTag}}}` to your chart's expected value path. Common patterns include:\n- `image.repository` and `image.tag` (most common)\n- `deployment.image` (single image string)\n- `app.image` or `application.image`\n- Custom paths specific to your chart\n\nCheck your chart's `values.yaml` to determine the correct path.\n\n\n\n#### Image Mapping Examples\n\n```yaml filename=\"lifecycle.yaml\"\n# Example 1: Separate repository and tag (most common)\nservices:\n - name: web-api\n helm:\n chart:\n name: \"./charts/standard\"\n values:\n - \"image.repository=registry.com/org/web-api\" # Static repository\n - \"image.tag={{{web-api_dockerTag}}}\" # Dynamic tag only\n\n# Example 2: Combined image string\nservices:\n - name: worker\n helm:\n chart:\n name: \"./charts/custom\"\n values:\n - \"deployment.image={{{worker_dockerImage}}}\" # Full image with tag\n\n# Example 3: Nested structure\nservices:\n - name: backend\n helm:\n chart:\n name: \"./charts/microservice\"\n values:\n - \"app.container.image={{{backend_dockerImage}}}\" # Full image with tag\n```\n\n\n**Important**: Always use triple braces `{{{variable}}}` instead of double braces `{{variable}}` for Lifecycle template variables. This prevents Helm from trying to process them as Helm template functions and ensures they are passed through correctly for Lifecycle to resolve.\n\n\n### Template Resolution Order\n\n1. Lifecycle resolves `{{{variables}}}` before passing values to Helm\n2. The resolved values are then passed to Helm using `--set` flags\n3. Helm processes its own template functions (if any) after receiving the resolved values\n\n### Example with Service Dependencies\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: api-gateway\n helm:\n chart:\n name: \"./charts/gateway\"\n values:\n - \"config.authServiceUrl=http://{{{auth-service_internalHostname}}}:3000\"\n - \"config.userServiceUrl=http://{{{user-service_internalHostname}}}:3000\"\n - \"image.tag={{{api-gateway_dockerTag}}}\"\n\n - name: auth-service\n helm:\n chart:\n name: \"./charts/microservice\"\n values:\n - \"image.tag={{{auth-service_dockerTag}}}\"\n - \"database.host={{{postgres-db_internalHostname}}}\"\n```\n\n## Deployment Process\n\n\n 1. 
**Job Creation**: A Kubernetes job is created in the ephemeral namespace 2.\n **RBAC Setup**: Service account with namespace-scoped permissions is created\n 3. **Git Clone**: Init container clones the repository 4. **Helm Deploy**:\n Main container executes the Helm deployment 5. **Monitoring**: Logs are\n streamed in real-time via WebSocket\n\n\n### Concurrent Deployment Handling\n\nNative Helm automatically handles concurrent deployments by:\n\n- Detecting existing deployment jobs\n- Force-deleting the old job\n- Starting the new deployment\n\nThis ensures the newest deployment always takes precedence.\n\n## Monitoring Deployments\n\n### Deploy Logs Access\n\nFor services using native Helm deployment, you can access deployment logs through the Lifecycle PR comment:\n\n1. Add the `lifecycle-status-comments!` label to your PR\n2. In the status comment that appears, you'll see a **Deploy Logs** link for each service using native Helm\n3. Click the link to view real-time deployment logs\n\n### Log Contents\n\nThe deployment logs show:\n\n- Git repository cloning progress (`clone-repo` container)\n- Helm deployment execution (`helm-deploy` container)\n- Real-time streaming of all deployment output\n- Success or failure status\n\n## Chart Types\n\nLifecycle automatically detects and handles three chart types:\n\n| Type | Detection | Features |\n| ------------- | -------------------------------------------- | ---------------------------------------------- |\n| **ORG_CHART** | Matches `orgChartName` AND has `helm.docker` | Docker image injection, env var transformation |\n| **LOCAL** | Name is \"local\" or starts with \"./\" or \"../\" | Flexible `envMapping` support |\n| **PUBLIC** | Everything else | Standard labels and tolerations |\n\n\n The `orgChartName` is configured in the database's `global_config` table with\n key `orgChart`. This allows organizations to define their standard internal\n Helm chart.\n\n\n## Troubleshooting\n\n### Deployment Fails with \"Another Operation in Progress\"\n\n**Symptom**: Helm reports an existing operation is blocking deployment\n\n**Solution**: Native Helm automatically handles this by killing existing jobs. If the issue persists:\n\n```bash\n# Check for stuck jobs\nkubectl get jobs -n env-{uuid} -l service={serviceName}\n\n# Force delete if needed\nkubectl delete job {jobName} -n env-{uuid} --force --grace-period=0\n```\n\n### Environment Variables Not Working\n\n**Symptom**: Environment variables not passed to the deployment\n\n**Common Issues**:\n\n1. `envMapping` placed under `chart` instead of directly under `helm`\n2. Incorrect format specification (array vs map)\n3. 
Missing path configuration\n\n**Correct Configuration**:\n\n```yaml {4-7}\nhelm:\n deploymentMethod: \"native\"\n chart:\n name: local\n envMapping: # Correct: directly under helm\n app:\n format: \"array\"\n path: \"env\"\n```\n\n## Migration Example\n\nHere's a complete example showing how to migrate from GitHub-type services to Helm-type services:\n\n### Before: GitHub-type Services\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: \"api-gateway\"\n github:\n repository: \"myorg/api-services\"\n branchName: \"main\"\n docker:\n builder:\n engine: \"buildkit\"\n defaultTag: \"main\"\n app:\n dockerfilePath: \"docker/api.dockerfile\"\n env:\n BACKEND_URL: \"{{backend-service_internalHostname}}:3000\"\n LOG_LEVEL: \"info\"\n ENV_NAME: \"production\"\n ports:\n - 8080\n deployment:\n public: true\n resource:\n cpu:\n request: \"100m\"\n memory:\n request: \"256Mi\"\n readiness:\n tcpSocketPort: 8080\n hostnames:\n host: \"example.com\"\n defaultInternalHostname: \"api-gateway-prod\"\n defaultPublicUrl: \"api.example.com\"\n\n - name: \"backend-service\"\n github:\n repository: \"myorg/api-services\"\n branchName: \"main\"\n docker:\n builder:\n engine: \"buildkit\"\n defaultTag: \"main\"\n app:\n dockerfilePath: \"docker/backend.dockerfile\"\n ports:\n - 3000\n env:\n NODE_ENV: \"production\"\n SERVICE_NAME: \"backend\"\n deployment:\n public: false\n resource:\n cpu:\n request: \"50m\"\n memory:\n request: \"128Mi\"\n readiness:\n tcpSocketPort: 3000\n\n - name: \"mysql-database\"\n docker:\n dockerImage: \"mysql\"\n defaultTag: \"8.0-debian\"\n ports:\n - 3306\n env:\n MYSQL_ROOT_PASSWORD: \"strongpassword123\"\n MYSQL_DATABASE: \"app_database\"\n MYSQL_USER: \"app_user\"\n MYSQL_PASSWORD: \"apppassword456\"\n deployment:\n public: false\n resource:\n cpu:\n request: \"100m\"\n memory:\n request: \"512Mi\"\n readiness:\n tcpSocketPort: 3306\n serviceDisks:\n - name: \"mysql-data\"\n mountPath: \"/var/lib/mysql\"\n accessModes: \"ReadWriteOnce\"\n storageSize: \"10Gi\"\n```\n\n### After: Helm-type Services with Native Deployment\n\n```yaml filename=\"lifecycle.yaml\"\nservices:\n - name: \"api-gateway\"\n helm:\n deploymentMethod: \"native\" # Enable native Helm\n version: \"3.14.0\"\n repository: \"myorg/api-services\"\n branchName: \"main\"\n args: \"--wait --timeout 10m\"\n envMapping:\n app:\n format: \"array\"\n path: \"containers.api.env\"\n chart:\n name: \"./charts/microservices\"\n values:\n - 'image.tag=\"{{{api-gateway_dockerTag}}}\"'\n - \"service.type=LoadBalancer\"\n - \"ingress.enabled=true\"\n valueFiles:\n - \"deploy/helm/base-values.yaml\"\n - \"deploy/helm/api-gateway-values.yaml\"\n docker:\n builder:\n engine: \"buildkit\"\n defaultTag: \"main\"\n app:\n dockerfilePath: \"docker/api.dockerfile\"\n env:\n BACKEND_URL: \"{{backend-service_internalHostname}}:3000\"\n LOG_LEVEL: \"info\"\n ENV_NAME: \"production\"\n ports:\n - 8080\n\n - name: \"backend-service\"\n helm:\n deploymentMethod: \"native\"\n version: \"3.14.0\"\n repository: \"myorg/api-services\"\n branchName: \"main\"\n envMapping:\n app:\n format: \"map\" # Using map format for this service\n path: \"env\"\n chart:\n name: \"./charts/microservices\"\n values:\n - 'image.tag=\"{{{backend-service_dockerTag}}}\"'\n - \"replicaCount=2\"\n valueFiles:\n - \"deploy/helm/base-values.yaml\"\n - \"deploy/helm/backend-values.yaml\"\n docker:\n builder:\n engine: \"buildkit\"\n defaultTag: \"main\"\n app:\n dockerfilePath: \"docker/backend.dockerfile\"\n ports:\n - 3000\n env:\n NODE_ENV: \"production\"\n 
SERVICE_NAME: \"backend\"\n\n - name: \"mysql-database\"\n helm:\n deploymentMethod: \"native\"\n repository: \"myorg/api-services\"\n branchName: \"main\"\n chart:\n name: \"mysql\" # Using public Helm chart\n version: \"9.14.1\"\n repoUrl: \"https://charts.bitnami.com/bitnami\"\n valueFiles:\n - \"deploy/helm/mysql-values.yaml\"\n```\n\n### Key Migration Points\n\n1. **Service Type Change**: Changed from `github:` to `helm:` configuration\n2. **Repository Location**: `repository` and `branchName` move from under `github:` to directly under `helm:`\n3. **Deployment Method**: Added `deploymentMethod: \"native\"` to enable native Helm\n4. **Chart Configuration**: Added `chart:` section with local or public charts\n5. **Environment Mapping**: Added `envMapping:` to control how environment variables are passed\n6. **Helm Arguments**: Added `args:` for Helm command customization\n7. **Docker Configuration**: Kept existing `docker:` config for build process\n\n\n Note that when converting from GitHub-type to Helm-type services, the\n `repository` and `branchName` fields move from being nested under `github:` to\n being directly under `helm:`.\n\n\n\n Many configuration options (like Helm version, args, and chart details) can be\n defined in the `global_config` database table, making the service YAML\n cleaner. Only override when needed."
+ },
{
"title": "Template Variables",
"description": null,
diff --git a/src/lib/static/blogcontent/blogcontent.ts b/src/lib/static/blogcontent/blogcontent.ts
index 58c7530..9b715d3 100644
--- a/src/lib/static/blogcontent/blogcontent.ts
+++ b/src/lib/static/blogcontent/blogcontent.ts
@@ -4,7 +4,7 @@ export const blogContent = [
description: null,
date: "2025-05-23",
path: "articles/introduction",
- body: 'We started building **Lifecycle** at GoodRx in 2019 because managing our lower environments like staging, development, QA had become a daily headache. As our architecture shifted from a monolith to microservices, our internal channels were flooded with messages like "Is anyone using staging?" "Staging is broken again," and "Who just overwrote my changes?" Waiting in line for hours (sometimes days) to test code in a real-world-like environment was the norm.\n\nWe simply couldn\'t scale with our engineering growth. So, as a proof of concept, we spun up **Lifecycle**: a tool that lets you create on-demand, ephemeral environments off of github pull request.\n\nAt first, only a handful of services were onboarded, but our engineers immediately saw the difference, no more static staging servers, no more pipeline gymnastics, and no more accidental overwrites. They wanted Lifecycle wherever they touched code, so we built a simple lifecycle.yaml configuration, replaced our manual database entries, and baked Lifecycle support into every new service template. Before long.\n\nAfter ironing out early scaling kinks, we realized Lifecycle had become more than an internal convenience, it was a game-changer for us.\n\nToday (June 5, 2025), we\'re thrilled to open-source five years of collective effort under the Apache 2.0 license. This project represents countless late-night brainstorming sessions, pull requests, and "aha" moments, and we can\'t wait to see how you\'ll make it your own: adding integrations, optimizing performance, or finding novel workflows we never imagined.\n\nBy sharing Lifecycle, we hope to help teams stuck in the same limited environment limbo we once were and build a community of passionate likeminded developers who\'ll shape the the future of Lifecycle.\n\nWe look forward to learning from you, growing together, and making shipping high-quality software faster and more enjoyable for everyone!\n\nJoin our Discord server [here](https://discord.gg/TEtKgCs8T8) to connect!!',
+ body: 'We started building **Lifecycle** at GoodRx in 2019 because managing our lower environments (staging, development, QA) had become a daily headache. As our architecture shifted from a monolith to microservices, our internal channels were flooded with messages like "Is anyone using staging?", "Staging is broken again," and "Who just overwrote my changes?" Waiting in line for hours (sometimes days) to test code in a real-world-like environment was the norm.\n\nWe simply couldn\'t scale with our engineering growth. So, as a proof of concept, we spun up **Lifecycle**: a tool that lets you create on-demand, ephemeral environments from GitHub pull requests.\n\nAt first, only a handful of services were onboarded, but our engineers immediately saw the difference: no more static staging servers, no more pipeline gymnastics, and no more accidental overwrites. They wanted Lifecycle wherever they touched code, so we built a simple lifecycle.yaml configuration, replaced our manual database entries, and baked Lifecycle support into every new service template.\n\nAfter ironing out early scaling kinks, we realized Lifecycle had become more than an internal convenience; it was a game-changer for us.\n\nToday (June 5, 2025), we\'re thrilled to open-source five years of collective effort under the Apache 2.0 license. This project represents countless late-night brainstorming sessions, pull requests, and "aha" moments, and we can\'t wait to see how you\'ll make it your own: adding integrations, optimizing performance, or finding novel workflows we never imagined.\n\nBy sharing Lifecycle, we hope to help teams stuck in the same limited-environment limbo we once were in and to build a community of passionate, like-minded developers who\'ll shape the future of Lifecycle.\n\nWe look forward to learning from you, growing together, and making shipping high-quality software faster and more enjoyable for everyone!\n\nJoin our Discord server [here](https://discord.gg/TEtKgCs8T8) to connect!',
},
{
title: "What is Lifecycle?",
@@ -29,6 +29,14 @@ export const blogContent = [
path: "docs/features/webhooks",
body: 'Lifecycle can invoke **third-party services** when a build state changes. Currently, only **Codefresh pipeline triggers** are supported. Webhooks allow users to automate external processes such as running tests or performing cleanup tasks based on environment build states.\n\n## Common Use Cases\n\n- When a build status is `deployed`, trigger **end-to-end tests**.\n- When a build status is `error`, trigger **infrastructure cleanup** or alert the team.\n\n## Webhook Configuration\n\nWebhooks are defined in the `lifecycle.yaml` under the `environment.webhooks` section.\n\nBelow is an example configuration for triggering end-to-end tests when the `deployed` state is reached.\n\n### **Examples**\n\n```yaml\n# Trigger End-to-End Tests on Deployment\nenvironment:\n # ...\n defaultServices:\n - name: "frontend"\n optionalServices:\n - name: "backend"\n repository: "lifecycle/backend"\n branch: "main"\n webhooks:\n - state: deployed\n type: codefresh\n name: "End to End Tests"\n pipelineId: 64598362453cc650c0c9cd4d\n trigger: tests\n env:\n branch: "{{frontend_branchName}}"\n TEST_URL: "https://{{frontend_publicUrl}}"\n # ...\n```\n\n- **`state: deployed`** → Triggers the webhook when the build reaches the `deployed` state.\n- **`type: codefresh`** → Specifies that this webhook triggers a **Codefresh pipeline**.\n- **`name`** → A human-readable name for the webhook.\n- **`pipelineId`** → The unique Codefresh pipeline ID.\n- **`trigger`** → Codefresh pipeline\'s trigger to execute.\n- **`env`** → Passes relevant environment variables (e.g., `branch` and `TEST_URL`).\n\n---\n\n```yaml\n# Trigger Cleanup on Build Error\nenvironment:\n # ...\n webhooks:\n - state: error\n type: codefresh\n name: "Cleanup Failed Deployment"\n pipelineId: 74283905723ff650c0d9ab7e\n trigger: cleanup\n env:\n branch: "{{frontend_branchName}}"\n CLEANUP_TARGET: "frontend"\n # ...\n```\n\n- **`state: error`** → Triggers the webhook when the build fails.\n- **`type: codefresh`** → Invokes a Codefresh cleanup pipeline.\n- **`trigger: cleanup`** → Codefresh pipeline\'s trigger to execute.\n- **`env`** → Includes necessary variables, such as `branch` and `CLEANUP_TARGET`.\n\n## Limitations\n\n- **Currently, Lifecycle only supports Codefresh pipeline triggers.**\n- In need of support for other webhook types? please **submit a pull request or an issue**.\n\nBy leveraging webhooks, teams can **automate workflows, run tests, and clean up failed deployments** seamlessly within Lifecycle.',
},
+ {
+ title: "Native Helm Deployment",
+ description:
+ "Deploy services using Helm directly in Kubernetes without external CI/CD dependencies",
+ date: "2025-01-29",
+ path: "docs/features/native-helm-deployment",
+ body: 'This feature is still in alpha and might change with breaking changes.\n\n\n**Native Helm** is an alternative deployment method that runs Helm deployments directly within Kubernetes jobs, eliminating the need for external CI/CD systems. This provides a more self-contained and portable deployment solution.\n\n\n Native Helm deployment is an opt-in feature that can be enabled globally or\n per-service.\n\n\n## Overview\n\nWhen enabled, Native Helm:\n\n- Creates Kubernetes jobs to execute Helm deployments\n- Runs in ephemeral namespaces with proper RBAC\n- Provides real-time deployment logs via WebSocket\n- Handles concurrent deployments automatically\n- Supports all standard Helm chart types\n\n## Quickstart\n\nWant to try native Helm deployment? Here\'s the fastest way to get started:\n\n```yaml filename="lifecycle.yaml" {5}\nservices:\n - name: my-api\n defaultUUID: "dev-0"\n helm:\n deploymentMethod: "native" # That\'s it!\n chart:\n name: "local"\n valueFiles:\n - "./helm/values.yaml"\n```\n\nThis configuration:\n\n1. Enables native Helm for the `my-api` service\n2. Uses a local Helm chart from your repository\n3. Applies values from `./helm/values.yaml`\n4. Runs deployment as a Kubernetes job\n\n\n To enable native Helm for all services at once, see [Global\n Configuration](#enabling-native-helm).\n\n\n## Configuration\n\n### Enabling Native Helm\n\nThere are two ways to enable native Helm deployment:\n\n\n \n ```yaml {4} filename="lifecycle.yaml" services: - name: my-service helm:\n deploymentMethod: "native" # Enable for this service only chart: name:\n my-chart ```\n \n \n ```yaml {3} filename="lifecycle.yaml" helm: nativeHelm: enabled: true #\n Enable for all services ```\n \n\n\n### Configuration Precedence\n\nLifecycle uses a hierarchical configuration system with three levels of precedence:\n\n1. **helmDefaults** - Base defaults for all deployments (database: `global_config` table)\n2. **Chart-specific config** - Per-chart defaults (database: `global_config` table)\n3. **Service YAML config** - Service-specific overrides (highest priority)\n\n\n Service-level configuration always takes precedence over global defaults.\n\n\n### Global Configuration (Database)\n\nGlobal configurations are stored in the `global_config` table in the database. Each configuration is stored as a row with:\n\n- **key**: The configuration name (e.g., \'helmDefaults\', \'postgresql\', \'redis\')\n- **config**: JSON object containing the configuration\n\n#### helmDefaults Configuration\n\nStored in database with key `helmDefaults`:\n\n```json\n{\n "nativeHelm": {\n "enabled": true,\n "defaultArgs": "--wait --timeout 30m",\n "defaultHelmVersion": "3.12.0"\n }\n}\n```\n\n**Field Descriptions**:\n\n- `enabled`: When `true`, enables native Helm deployment for all services unless they explicitly set `deploymentMethod: "ci"`\n- `defaultArgs`: Arguments automatically appended to every Helm command (appears before service-specific args)\n- `defaultHelmVersion`: The Helm version to use when not specified at the service or chart level\n\n#### Chart-specific Configuration\n\nExample: PostgreSQL configuration stored with key `postgresql`:\n\n```json\n{\n "version": "3.13.0",\n "args": "--force --timeout 60m0s --wait",\n "chart": {\n "name": "postgresql",\n "repoUrl": "https://charts.bitnami.com/bitnami",\n "version": "12.9.0",\n "values": ["auth.username=postgres_user", "auth.database=postgres_db"]\n }\n}\n```\n\n\n These global configurations are managed by administrators and stored in the\n database. 
They provide consistent defaults across all environments and can be\n overridden at the service level.\n\n\n## Usage Examples\n\n### Quick Experiment: Deploy Jenkins!\n\nWant to see native Helm in action? Let\'s deploy everyone\'s favorite CI/CD tool - Jenkins! This example shows how easy it is to deploy popular applications using native Helm.\n\n```yaml filename="lifecycle.yaml"\nenvironment:\n defaultServices:\n - name: "my-app"\n - name: "jenkins" # Add Jenkins to your default services\n\nservices:\n - name: "jenkins"\n helm:\n deploymentMethod: "native"\n repository: "myorg/apps"\n branchName: "main"\n chart:\n name: "jenkins"\n repoUrl: "https://charts.bitnami.com/bitnami"\n version: "13.6.8"\n values:\n - "service.type=NodePort" # Override default LoadBalancer to avoid cloud resource costs\n```\n\n\n 🎉 That\'s it! With just a few lines of configuration, you\'ll have Jenkins\n running in your Kubernetes cluster. The `service.type=NodePort` override is\n important - the Bitnami Jenkins chart defaults to `LoadBalancer` which would\n consume cloud resources like an AWS ELB or GCP Load Balancer.\n\n\nTo access your Jenkins instance:\n\n1. Check the deployment status in your PR comment\n2. Click the **Deploy Logs** link to monitor the deployment\n3. Once deployed, Jenkins will be available at the internal hostname\n\n\n For more Jenkins configuration options and values, check out the [Bitnami\n Jenkins chart\n documentation](https://github.com/bitnami/charts/tree/main/bitnami/jenkins).\n This same pattern works for any Bitnami chart (PostgreSQL, Redis, MongoDB) or\n any other public Helm chart!\n\n\n### Basic Service Deployment\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: web-api\n helm:\n deploymentMethod: "native"\n chart:\n name: web-app\n version: "1.2.0"\n```\n\n### PostgreSQL with Overrides\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: database\n helm:\n deploymentMethod: "native"\n version: "3.14.0" # Override Helm version\n args: "--atomic" # Override deployment args\n chart:\n name: postgresql\n values: # Additional values merged with defaults\n - "persistence.size=20Gi"\n - "replicaCount=2"\n```\n\n### Custom Environment Variables\n\nLifecycle supports flexible environment variable formatting through the `envMapping` configuration. This feature allows you to control how environment variables from your service configuration are passed to your Helm chart.\n\n\n **Why envMapping?** Different Helm charts expect environment variables in\n different formats. Some expect an array of objects with `name` and `value`\n fields (Kubernetes standard), while others expect a simple key-value map. The\n `envMapping` feature lets you adapt to your chart\'s requirements.\n\n\n#### Default envMapping Configuration\n\nYou can define default `envMapping` configurations in the `global_config` database table. 
These defaults apply to all services using that chart unless overridden at the service level.\n\n**Example: Setting defaults for your organization\'s chart**\n\n```json\n// In global_config table, key: "myorg-web-app"\n{\n "chart": {\n "name": "myorg-web-app",\n "repoUrl": "https://charts.myorg.com"\n },\n "envMapping": {\n "app": {\n "format": "array",\n "path": "deployment.containers[0].env"\n }\n }\n}\n```\n\nWith this configuration, any service using the `myorg-web-app` chart will automatically use array format for environment variables:\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: api\n helm:\n deploymentMethod: "native"\n chart:\n name: "myorg-web-app" # Inherits envMapping from global_config\n docker:\n app:\n env:\n API_KEY: "secret"\n # These will be formatted as array automatically\n```\n\n\n Setting `envMapping` in global_config is particularly useful when: - You have\n a standard organizational chart used by many services - You want consistent\n environment variable handling across services - You\'re migrating multiple\n services and want to reduce configuration duplication\n\n\n\n \n#### Array Format\nBest for charts that expect Kubernetes-style env arrays.\n\n```yaml {7-9} filename="lifecycle.yaml"\nservices:\n - name: api\n helm:\n deploymentMethod: "native"\n chart:\n name: local\n envMapping:\n app:\n format: "array"\n path: "env"\n docker:\n app:\n env:\n DATABASE_URL: "postgres://localhost:5432/mydb"\n API_KEY: "secret-key-123"\n NODE_ENV: "production"\n```\n\n**This produces the following Helm values:**\n\n```bash\n--set env[0].name=DATABASE_URL\n--set env[0].value=postgres://localhost:5432/mydb\n--set env[1].name=API_KEY\n--set env[1].value=secret-key-123\n--set env[2].name=NODE_ENV\n--set env[2].value=production\n```\n\n**Your chart\'s values.yaml would use it like:**\n\n```yaml\nenv:\n - name: DATABASE_URL\n value: postgres://localhost:5432/mydb\n - name: API_KEY\n value: secret-key-123\n - name: NODE_ENV\n value: production\n```\n\n \n \n#### Map Format\nBest for charts that expect a simple key-value object.\n\n```yaml {7-9} filename="lifecycle.yaml"\nservices:\n - name: api\n helm:\n deploymentMethod: "native"\n chart:\n name: local\n envMapping:\n app:\n format: "map"\n path: "envVars"\n docker:\n app:\n env:\n DATABASE_URL: "postgres://localhost:5432/mydb"\n API_KEY: "secret-key-123"\n NODE_ENV: "production"\n```\n\n**This produces the following Helm values:**\n\n```bash\n--set envVars.DATABASE__URL=postgres://localhost:5432/mydb\n--set envVars.API__KEY=secret-key-123\n--set envVars.NODE__ENV=production\n```\n\n\n Note: Underscores in environment variable names are converted to double\n underscores (`__`) in map format to avoid Helm parsing issues.\n\n\n**Your chart\'s values.yaml would use it like:**\n\n```yaml\nenvVars:\n DATABASE__URL: postgres://localhost:5432/mydb\n API__KEY: secret-key-123\n NODE__ENV: production\n```\n\n \n\n\n#### Complete Example with Multiple Services\n\n```yaml filename="lifecycle.yaml"\nservices:\n # Service using array format (common for standard Kubernetes deployments)\n - name: frontend\n helm:\n deploymentMethod: "native"\n repository: "myorg/apps"\n branchName: "main"\n envMapping:\n app:\n format: "array"\n path: "deployment.env"\n chart:\n name: "./charts/web-app"\n docker:\n app:\n dockerfilePath: "frontend/Dockerfile"\n env:\n REACT_APP_API_URL: "https://api.example.com"\n REACT_APP_VERSION: "{{build.uuid}}"\n\n # Service using map format (common for custom charts)\n - name: backend\n helm:\n deploymentMethod: 
"native"\n repository: "myorg/apps"\n branchName: "main"\n envMapping:\n app:\n format: "map"\n path: "config.environment"\n chart:\n name: "./charts/api"\n docker:\n builder:\n engine: "buildkit"\n defaultTag: "main"\n app:\n dockerfilePath: "docker/backend.dockerfile"\n ports:\n - 3000\n env:\n NODE_ENV: "production"\n SERVICE_NAME: "backend"\n\n - name: "mysql-database"\n helm:\n deploymentMethod: "native"\n repository: "myorg/api-services"\n branchName: "main"\n chart:\n name: "mysql" # Using public Helm chart\n version: "9.14.1"\n repoUrl: "https://charts.bitnami.com/bitnami"\n valueFiles:\n - "deploy/helm/mysql-values.yaml"\n```\n\n## Templated Variables\n\nLifecycle supports template variables in Helm values that are resolved at deployment time. These variables allow you to reference dynamic values like build UUIDs, docker tags, and internal hostnames.\n\n### Available Variables\n\nTemplate variables use the format `{{{variableName}}}` and are replaced with actual values during deployment:\n\n| Variable | Description | Example Value |\n| ------------------------------------ | ------------------------- | ---------------------------------------- |\n| `{{{serviceName_dockerTag}}}` | Docker tag for a service | `main-abc123` |\n| `{{{serviceName_dockerImage}}}` | Full docker image path | `registry.com/org/repo:main-abc123` |\n| `{{{serviceName_internalHostname}}}` | Internal service hostname | `api-service.env-uuid.svc.cluster.local` |\n| `{{{build.uuid}}}` | Build UUID | `env-12345` |\n| `{{{build.namespace}}}` | Kubernetes namespace | `env-12345` |\n\n### Usage in Values\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: web-api\n helm:\n deploymentMethod: "native"\n chart:\n name: "./charts/app"\n values:\n - "image.tag={{{web-api_dockerTag}}}"\n - "backend.url=http://{{{backend-service_internalHostname}}}:8080"\n - "env.BUILD_ID={{{build.uuid}}}"\n```\n\n\n**Docker Image Mapping**: When using custom charts, you\'ll need to map `{{{serviceName_dockerImage}}}` or `{{{serviceName_dockerTag}}}` to your chart\'s expected value path. Common patterns include:\n- `image.repository` and `image.tag` (most common)\n- `deployment.image` (single image string)\n- `app.image` or `application.image`\n- Custom paths specific to your chart\n\nCheck your chart\'s `values.yaml` to determine the correct path.\n\n\n\n#### Image Mapping Examples\n\n```yaml filename="lifecycle.yaml"\n# Example 1: Separate repository and tag (most common)\nservices:\n - name: web-api\n helm:\n chart:\n name: "./charts/standard"\n values:\n - "image.repository=registry.com/org/web-api" # Static repository\n - "image.tag={{{web-api_dockerTag}}}" # Dynamic tag only\n\n# Example 2: Combined image string\nservices:\n - name: worker\n helm:\n chart:\n name: "./charts/custom"\n values:\n - "deployment.image={{{worker_dockerImage}}}" # Full image with tag\n\n# Example 3: Nested structure\nservices:\n - name: backend\n helm:\n chart:\n name: "./charts/microservice"\n values:\n - "app.container.image={{{backend_dockerImage}}}" # Full image with tag\n```\n\n\n**Important**: Always use triple braces `{{{variable}}}` instead of double braces `{{variable}}` for Lifecycle template variables. This prevents Helm from trying to process them as Helm template functions and ensures they are passed through correctly for Lifecycle to resolve.\n\n\n### Template Resolution Order\n\n1. Lifecycle resolves `{{{variables}}}` before passing values to Helm\n2. The resolved values are then passed to Helm using `--set` flags\n3. 
Helm processes its own template functions (if any) after receiving the resolved values\n\n### Example with Service Dependencies\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: api-gateway\n helm:\n chart:\n name: "./charts/gateway"\n values:\n - "config.authServiceUrl=http://{{{auth-service_internalHostname}}}:3000"\n - "config.userServiceUrl=http://{{{user-service_internalHostname}}}:3000"\n - "image.tag={{{api-gateway_dockerTag}}}"\n\n - name: auth-service\n helm:\n chart:\n name: "./charts/microservice"\n values:\n - "image.tag={{{auth-service_dockerTag}}}"\n - "database.host={{{postgres-db_internalHostname}}}"\n```\n\n## Deployment Process\n\n\n 1. **Job Creation**: A Kubernetes job is created in the ephemeral namespace 2.\n **RBAC Setup**: Service account with namespace-scoped permissions is created\n 3. **Git Clone**: Init container clones the repository 4. **Helm Deploy**:\n Main container executes the Helm deployment 5. **Monitoring**: Logs are\n streamed in real-time via WebSocket\n\n\n### Concurrent Deployment Handling\n\nNative Helm automatically handles concurrent deployments by:\n\n- Detecting existing deployment jobs\n- Force-deleting the old job\n- Starting the new deployment\n\nThis ensures the newest deployment always takes precedence.\n\n## Monitoring Deployments\n\n### Deploy Logs Access\n\nFor services using native Helm deployment, you can access deployment logs through the Lifecycle PR comment:\n\n1. Add the `lifecycle-status-comments!` label to your PR\n2. In the status comment that appears, you\'ll see a **Deploy Logs** link for each service using native Helm\n3. Click the link to view real-time deployment logs\n\n### Log Contents\n\nThe deployment logs show:\n\n- Git repository cloning progress (`clone-repo` container)\n- Helm deployment execution (`helm-deploy` container)\n- Real-time streaming of all deployment output\n- Success or failure status\n\n## Chart Types\n\nLifecycle automatically detects and handles three chart types:\n\n| Type | Detection | Features |\n| ------------- | -------------------------------------------- | ---------------------------------------------- |\n| **ORG_CHART** | Matches `orgChartName` AND has `helm.docker` | Docker image injection, env var transformation |\n| **LOCAL** | Name is "local" or starts with "./" or "../" | Flexible `envMapping` support |\n| **PUBLIC** | Everything else | Standard labels and tolerations |\n\n\n The `orgChartName` is configured in the database\'s `global_config` table with\n key `orgChart`. This allows organizations to define their standard internal\n Helm chart.\n\n\n## Troubleshooting\n\n### Deployment Fails with "Another Operation in Progress"\n\n**Symptom**: Helm reports an existing operation is blocking deployment\n\n**Solution**: Native Helm automatically handles this by killing existing jobs. If the issue persists:\n\n```bash\n# Check for stuck jobs\nkubectl get jobs -n env-{uuid} -l service={serviceName}\n\n# Force delete if needed\nkubectl delete job {jobName} -n env-{uuid} --force --grace-period=0\n```\n\n### Environment Variables Not Working\n\n**Symptom**: Environment variables not passed to the deployment\n\n**Common Issues**:\n\n1. `envMapping` placed under `chart` instead of directly under `helm`\n2. Incorrect format specification (array vs map)\n3. 
Missing path configuration\n\n**Correct Configuration**:\n\n```yaml {4-7}\nhelm:\n deploymentMethod: "native"\n chart:\n name: local\n envMapping: # Correct: directly under helm\n app:\n format: "array"\n path: "env"\n```\n\n## Migration Example\n\nHere\'s a complete example showing how to migrate from GitHub-type services to Helm-type services:\n\n### Before: GitHub-type Services\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: "api-gateway"\n github:\n repository: "myorg/api-services"\n branchName: "main"\n docker:\n builder:\n engine: "buildkit"\n defaultTag: "main"\n app:\n dockerfilePath: "docker/api.dockerfile"\n env:\n BACKEND_URL: "{{backend-service_internalHostname}}:3000"\n LOG_LEVEL: "info"\n ENV_NAME: "production"\n ports:\n - 8080\n deployment:\n public: true\n resource:\n cpu:\n request: "100m"\n memory:\n request: "256Mi"\n readiness:\n tcpSocketPort: 8080\n hostnames:\n host: "example.com"\n defaultInternalHostname: "api-gateway-prod"\n defaultPublicUrl: "api.example.com"\n\n - name: "backend-service"\n github:\n repository: "myorg/api-services"\n branchName: "main"\n docker:\n builder:\n engine: "buildkit"\n defaultTag: "main"\n app:\n dockerfilePath: "docker/backend.dockerfile"\n ports:\n - 3000\n env:\n NODE_ENV: "production"\n SERVICE_NAME: "backend"\n deployment:\n public: false\n resource:\n cpu:\n request: "50m"\n memory:\n request: "128Mi"\n readiness:\n tcpSocketPort: 3000\n\n - name: "mysql-database"\n docker:\n dockerImage: "mysql"\n defaultTag: "8.0-debian"\n ports:\n - 3306\n env:\n MYSQL_ROOT_PASSWORD: "strongpassword123"\n MYSQL_DATABASE: "app_database"\n MYSQL_USER: "app_user"\n MYSQL_PASSWORD: "apppassword456"\n deployment:\n public: false\n resource:\n cpu:\n request: "100m"\n memory:\n request: "512Mi"\n readiness:\n tcpSocketPort: 3306\n serviceDisks:\n - name: "mysql-data"\n mountPath: "/var/lib/mysql"\n accessModes: "ReadWriteOnce"\n storageSize: "10Gi"\n```\n\n### After: Helm-type Services with Native Deployment\n\n```yaml filename="lifecycle.yaml"\nservices:\n - name: "api-gateway"\n helm:\n deploymentMethod: "native" # Enable native Helm\n version: "3.14.0"\n repository: "myorg/api-services"\n branchName: "main"\n args: "--wait --timeout 10m"\n envMapping:\n app:\n format: "array"\n path: "containers.api.env"\n chart:\n name: "./charts/microservices"\n values:\n - \'image.tag="{{{api-gateway_dockerTag}}}"\'\n - "service.type=LoadBalancer"\n - "ingress.enabled=true"\n valueFiles:\n - "deploy/helm/base-values.yaml"\n - "deploy/helm/api-gateway-values.yaml"\n docker:\n builder:\n engine: "buildkit"\n defaultTag: "main"\n app:\n dockerfilePath: "docker/api.dockerfile"\n env:\n BACKEND_URL: "{{backend-service_internalHostname}}:3000"\n LOG_LEVEL: "info"\n ENV_NAME: "production"\n ports:\n - 8080\n\n - name: "backend-service"\n helm:\n deploymentMethod: "native"\n version: "3.14.0"\n repository: "myorg/api-services"\n branchName: "main"\n envMapping:\n app:\n format: "map" # Using map format for this service\n path: "env"\n chart:\n name: "./charts/microservices"\n values:\n - \'image.tag="{{{backend-service_dockerTag}}}"\'\n - "replicaCount=2"\n valueFiles:\n - "deploy/helm/base-values.yaml"\n - "deploy/helm/backend-values.yaml"\n docker:\n builder:\n engine: "buildkit"\n defaultTag: "main"\n app:\n dockerfilePath: "docker/backend.dockerfile"\n ports:\n - 3000\n env:\n NODE_ENV: "production"\n SERVICE_NAME: "backend"\n\n - name: "mysql-database"\n helm:\n deploymentMethod: "native"\n repository: "myorg/api-services"\n branchName: "main"\n 
chart:\n name: "mysql" # Using public Helm chart\n version: "9.14.1"\n repoUrl: "https://charts.bitnami.com/bitnami"\n valueFiles:\n - "deploy/helm/mysql-values.yaml"\n```\n\n### Key Migration Points\n\n1. **Service Type Change**: Changed from `github:` to `helm:` configuration\n2. **Repository Location**: `repository` and `branchName` move from under `github:` to directly under `helm:`\n3. **Deployment Method**: Added `deploymentMethod: "native"` to enable native Helm\n4. **Chart Configuration**: Added `chart:` section with local or public charts\n5. **Environment Mapping**: Added `envMapping:` to control how environment variables are passed\n6. **Helm Arguments**: Added `args:` for Helm command customization\n7. **Docker Configuration**: Kept existing `docker:` config for build process\n\n\n Note that when converting from GitHub-type to Helm-type services, the\n `repository` and `branchName` fields move from being nested under `github:` to\n being directly under `helm:`.\n\n\n\n Many configuration options (like Helm version, args, and chart details) can be\n defined in the `global_config` database table, making the service YAML\n cleaner. Only override when needed.',
+ },
{
title: "Template Variables",
description: null,
diff --git a/src/pages/docs/features/native-helm-deployment.mdx b/src/pages/docs/features/native-helm-deployment.mdx
new file mode 100644
index 0000000..fd94419
--- /dev/null
+++ b/src/pages/docs/features/native-helm-deployment.mdx
@@ -0,0 +1,821 @@
+---
+title: Native Helm Deployment
+description: Deploy services using Helm directly in Kubernetes without external CI/CD dependencies
+tags:
+ - helm
+ - deployment
+ - kubernetes
+ - native
+date: "2025-01-29"
+---
+
+import { Callout, Steps, Tabs } from "nextra/components";
+import { Image } from "@lifecycle-docs/components";
+
+
+  This feature is still in alpha and may introduce breaking changes.
+
+
+**Native Helm** is an alternative deployment method that runs Helm deployments directly within Kubernetes jobs, eliminating the need for external CI/CD systems. This provides a more self-contained and portable deployment solution.
+
+
+ Native Helm deployment is an opt-in feature that can be enabled globally or
+ per-service.
+
+
+## Overview
+
+When enabled, Native Helm:
+
+- Creates Kubernetes jobs to execute Helm deployments
+- Runs in ephemeral namespaces with proper RBAC
+- Provides real-time deployment logs via WebSocket
+- Handles concurrent deployments automatically
+- Supports all standard Helm chart types
+
+## Quickstart
+
+Want to try native Helm deployment? Here's the fastest way to get started:
+
+```yaml filename="lifecycle.yaml" {5}
+services:
+ - name: my-api
+ defaultUUID: "dev-0"
+ helm:
+ deploymentMethod: "native" # That's it!
+ chart:
+ name: "local"
+ valueFiles:
+ - "./helm/values.yaml"
+```
+
+This configuration:
+
+1. Enables native Helm for the `my-api` service
+2. Uses a local Helm chart from your repository
+3. Applies values from `./helm/values.yaml`
+4. Runs deployment as a Kubernetes job
+
+
+ To enable native Helm for all services at once, see [Global
+ Configuration](#enabling-native-helm).
+
+
+## Configuration
+
+### Enabling Native Helm
+
+There are two ways to enable native Helm deployment:
+
+
+
+  ```yaml {4} filename="lifecycle.yaml"
+  services:
+    - name: my-service
+      helm:
+        deploymentMethod: "native" # Enable for this service only
+        chart:
+          name: my-chart
+  ```
+
+
+  ```yaml {3} filename="lifecycle.yaml"
+  helm:
+    nativeHelm:
+      enabled: true # Enable for all services
+  ```
+
+
+
+### Configuration Precedence
+
+Lifecycle uses a hierarchical configuration system with three levels of precedence:
+
+1. **helmDefaults** - Base defaults for all deployments (database: `global_config` table)
+2. **Chart-specific config** - Per-chart defaults (database: `global_config` table)
+3. **Service YAML config** - Service-specific overrides (highest priority)
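+
+For example, a minimal sketch of how the three levels might interact for a PostgreSQL service (version numbers and values here are illustrative):
+
+```yaml filename="lifecycle.yaml"
+services:
+  - name: database
+    helm:
+      deploymentMethod: "native"
+      version: "3.14.0" # overrides defaultHelmVersion from helmDefaults
+      chart:
+        name: postgresql # chart-specific defaults from global_config still apply
+        values:
+          - "persistence.size=20Gi" # merged on top of the chart's default values
+```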
+
+
+ Service-level configuration always takes precedence over global defaults.
+
+
+### Global Configuration (Database)
+
+Global configurations are stored in the `global_config` table in the database. Each configuration is stored as a row with:
+
+- **key**: The configuration name (e.g., 'helmDefaults', 'postgresql', 'redis')
+- **config**: JSON object containing the configuration
+
+#### helmDefaults Configuration
+
+Stored in database with key `helmDefaults`:
+
+```json
+{
+ "nativeHelm": {
+ "enabled": true,
+ "defaultArgs": "--wait --timeout 30m",
+ "defaultHelmVersion": "3.12.0"
+ }
+}
+```
+
+**Field Descriptions**:
+
+- `enabled`: When `true`, enables native Helm deployment for all services unless they explicitly set `deploymentMethod: "ci"`
+- `defaultArgs`: Arguments automatically appended to every Helm command (appears before service-specific args)
+- `defaultHelmVersion`: The Helm version to use when not specified at the service or chart level
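+
+For example, when `nativeHelm.enabled` is `true` globally, an individual service can still opt back into the CI-based flow (a minimal sketch; `legacy-worker` is a hypothetical service name):
+
+```yaml filename="lifecycle.yaml"
+services:
+  - name: legacy-worker
+    helm:
+      deploymentMethod: "ci" # explicit per-service opt-out of native Helm
+      chart:
+        name: my-chart
+```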
+
+#### Chart-specific Configuration
+
+Example: PostgreSQL configuration stored with key `postgresql`:
+
+```json
+{
+ "version": "3.13.0",
+ "args": "--force --timeout 60m0s --wait",
+ "chart": {
+ "name": "postgresql",
+ "repoUrl": "https://charts.bitnami.com/bitnami",
+ "version": "12.9.0",
+ "values": ["auth.username=postgres_user", "auth.database=postgres_db"]
+ }
+}
+```
+
+
+ These global configurations are managed by administrators and stored in the
+ database. They provide consistent defaults across all environments and can be
+ overridden at the service level.
+
+
+## Usage Examples
+
+### Quick Experiment: Deploy Jenkins!
+
+Want to see native Helm in action? Let's deploy Jenkins, everyone's favorite CI/CD tool! This example shows how easy it is to deploy popular applications using native Helm.
+
+```yaml filename="lifecycle.yaml"
+environment:
+ defaultServices:
+ - name: "my-app"
+ - name: "jenkins" # Add Jenkins to your default services
+
+services:
+ - name: "jenkins"
+ helm:
+ deploymentMethod: "native"
+ repository: "myorg/apps"
+ branchName: "main"
+ chart:
+ name: "jenkins"
+ repoUrl: "https://charts.bitnami.com/bitnami"
+ version: "13.6.8"
+ values:
+ - "service.type=NodePort" # Override default LoadBalancer to avoid cloud resource costs
+```
+
+
+  🎉 That's it! With just a few lines of configuration, you'll have Jenkins
+  running in your Kubernetes cluster. The `service.type=NodePort` override is
+  important: the Bitnami Jenkins chart defaults to `LoadBalancer`, which would
+  consume cloud resources like an AWS ELB or GCP Load Balancer.
+
+
+To access your Jenkins instance:
+
+1. Check the deployment status in your PR comment
+2. Click the **Deploy Logs** link to monitor the deployment
+3. Once deployed, Jenkins will be available at the internal hostname
+
+
+ For more Jenkins configuration options and values, check out the [Bitnami
+ Jenkins chart
+ documentation](https://github.com/bitnami/charts/tree/main/bitnami/jenkins).
+ This same pattern works for any Bitnami chart (PostgreSQL, Redis, MongoDB) or
+ any other public Helm chart!
+
+
+### Basic Service Deployment
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: web-api
+ helm:
+ deploymentMethod: "native"
+ chart:
+ name: web-app
+ version: "1.2.0"
+```
+
+### PostgreSQL with Overrides
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: database
+ helm:
+ deploymentMethod: "native"
+ version: "3.14.0" # Override Helm version
+ args: "--atomic" # Override deployment args
+ chart:
+ name: postgresql
+ values: # Additional values merged with defaults
+ - "persistence.size=20Gi"
+ - "replicaCount=2"
+```
+
+### Custom Environment Variables
+
+Lifecycle supports flexible environment variable formatting through the `envMapping` configuration. This feature allows you to control how environment variables from your service configuration are passed to your Helm chart.
+
+
+ **Why envMapping?** Different Helm charts expect environment variables in
+ different formats. Some expect an array of objects with `name` and `value`
+ fields (Kubernetes standard), while others expect a simple key-value map. The
+ `envMapping` feature lets you adapt to your chart's requirements.
+
+
+#### Default envMapping Configuration
+
+You can define default `envMapping` configurations in the `global_config` database table. These defaults apply to all services using that chart unless overridden at the service level.
+
+**Example: Setting defaults for your organization's chart**
+
+```json
+// In global_config table, key: "myorg-web-app"
+{
+ "chart": {
+ "name": "myorg-web-app",
+ "repoUrl": "https://charts.myorg.com"
+ },
+ "envMapping": {
+ "app": {
+ "format": "array",
+ "path": "deployment.containers[0].env"
+ }
+ }
+}
+```
+
+With this configuration, any service using the `myorg-web-app` chart will automatically use array format for environment variables:
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: api
+ helm:
+ deploymentMethod: "native"
+ chart:
+ name: "myorg-web-app" # Inherits envMapping from global_config
+ docker:
+ app:
+ env:
+ API_KEY: "secret"
+ # These will be formatted as array automatically
+```
+
+
+  Setting `envMapping` in global_config is particularly useful when:
+
+  - You have a standard organizational chart used by many services
+  - You want consistent environment variable handling across services
+  - You're migrating multiple services and want to reduce configuration duplication
+
+
+
+
+#### Array Format
+Best for charts that expect Kubernetes-style env arrays.
+
+```yaml {7-9} filename="lifecycle.yaml"
+services:
+ - name: api
+ helm:
+ deploymentMethod: "native"
+ chart:
+ name: local
+ envMapping:
+ app:
+ format: "array"
+ path: "env"
+ docker:
+ app:
+ env:
+ DATABASE_URL: "postgres://localhost:5432/mydb"
+ API_KEY: "secret-key-123"
+ NODE_ENV: "production"
+```
+
+**This produces the following Helm values:**
+
+```bash
+--set env[0].name=DATABASE_URL
+--set env[0].value=postgres://localhost:5432/mydb
+--set env[1].name=API_KEY
+--set env[1].value=secret-key-123
+--set env[2].name=NODE_ENV
+--set env[2].value=production
+```
+
+**Your chart's values.yaml would use it like:**
+
+```yaml
+env:
+ - name: DATABASE_URL
+ value: postgres://localhost:5432/mydb
+ - name: API_KEY
+ value: secret-key-123
+ - name: NODE_ENV
+ value: production
+```
+
+
+
+#### Map Format
+Best for charts that expect a simple key-value object.
+
+```yaml {7-9} filename="lifecycle.yaml"
+services:
+ - name: api
+ helm:
+ deploymentMethod: "native"
+ chart:
+ name: local
+ envMapping:
+ app:
+ format: "map"
+ path: "envVars"
+ docker:
+ app:
+ env:
+ DATABASE_URL: "postgres://localhost:5432/mydb"
+ API_KEY: "secret-key-123"
+ NODE_ENV: "production"
+```
+
+**This produces the following Helm values:**
+
+```bash
+--set envVars.DATABASE__URL=postgres://localhost:5432/mydb
+--set envVars.API__KEY=secret-key-123
+--set envVars.NODE__ENV=production
+```
+
+
+ Note: Underscores in environment variable names are converted to double
+ underscores (`__`) in map format to avoid Helm parsing issues.
+
+
+**Your chart's values.yaml would use it like:**
+
+```yaml
+envVars:
+ DATABASE__URL: postgres://localhost:5432/mydb
+ API__KEY: secret-key-123
+ NODE__ENV: production
+```
+
+
+
+
+#### Complete Example with Multiple Services
+
+```yaml filename="lifecycle.yaml"
+services:
+ # Service using array format (common for standard Kubernetes deployments)
+ - name: frontend
+ helm:
+ deploymentMethod: "native"
+ repository: "myorg/apps"
+ branchName: "main"
+ envMapping:
+ app:
+ format: "array"
+ path: "deployment.env"
+ chart:
+ name: "./charts/web-app"
+ docker:
+ app:
+ dockerfilePath: "frontend/Dockerfile"
+ env:
+ REACT_APP_API_URL: "https://api.example.com"
+ REACT_APP_VERSION: "{{build.uuid}}"
+
+ # Service using map format (common for custom charts)
+ - name: backend
+ helm:
+ deploymentMethod: "native"
+ repository: "myorg/apps"
+ branchName: "main"
+ envMapping:
+ app:
+ format: "map"
+ path: "config.environment"
+ chart:
+ name: "./charts/api"
+ docker:
+ builder:
+ engine: "buildkit"
+ defaultTag: "main"
+ app:
+ dockerfilePath: "docker/backend.dockerfile"
+ ports:
+ - 3000
+ env:
+ NODE_ENV: "production"
+ SERVICE_NAME: "backend"
+
+ - name: "mysql-database"
+ helm:
+ deploymentMethod: "native"
+ repository: "myorg/api-services"
+ branchName: "main"
+ chart:
+ name: "mysql" # Using public Helm chart
+ version: "9.14.1"
+ repoUrl: "https://charts.bitnami.com/bitnami"
+ valueFiles:
+ - "deploy/helm/mysql-values.yaml"
+```
+
+## Template Variables
+
+Lifecycle supports template variables in Helm values that are resolved at deployment time. These variables allow you to reference dynamic values like build UUIDs, docker tags, and internal hostnames.
+
+### Available Variables
+
+Template variables use the format `{{{variableName}}}` and are replaced with actual values during deployment:
+
+| Variable | Description | Example Value |
+| ------------------------------------ | ------------------------- | ---------------------------------------- |
+| `{{{serviceName_dockerTag}}}` | Docker tag for a service | `main-abc123` |
+| `{{{serviceName_dockerImage}}}` | Full docker image path | `registry.com/org/repo:main-abc123` |
+| `{{{serviceName_internalHostname}}}` | Internal service hostname | `api-service.env-uuid.svc.cluster.local` |
+| `{{{build.uuid}}}` | Build UUID | `env-12345` |
+| `{{{build.namespace}}}` | Kubernetes namespace | `env-12345` |
+
+### Usage in Values
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: web-api
+ helm:
+ deploymentMethod: "native"
+ chart:
+ name: "./charts/app"
+ values:
+ - "image.tag={{{web-api_dockerTag}}}"
+ - "backend.url=http://{{{backend-service_internalHostname}}}:8080"
+ - "env.BUILD_ID={{{build.uuid}}}"
+```
+
+
+**Docker Image Mapping**: When using custom charts, you'll need to map `{{{serviceName_dockerImage}}}` or `{{{serviceName_dockerTag}}}` to your chart's expected value path. Common patterns include:
+- `image.repository` and `image.tag` (most common)
+- `deployment.image` (single image string)
+- `app.image` or `application.image`
+- Custom paths specific to your chart
+
+Check your chart's `values.yaml` to determine the correct path.
+
+
+
+#### Image Mapping Examples
+
+```yaml filename="lifecycle.yaml"
+# Example 1: Separate repository and tag (most common)
+services:
+ - name: web-api
+ helm:
+ chart:
+ name: "./charts/standard"
+ values:
+ - "image.repository=registry.com/org/web-api" # Static repository
+ - "image.tag={{{web-api_dockerTag}}}" # Dynamic tag only
+
+# Example 2: Combined image string
+services:
+ - name: worker
+ helm:
+ chart:
+ name: "./charts/custom"
+ values:
+ - "deployment.image={{{worker_dockerImage}}}" # Full image with tag
+
+# Example 3: Nested structure
+services:
+ - name: backend
+ helm:
+ chart:
+ name: "./charts/microservice"
+ values:
+ - "app.container.image={{{backend_dockerImage}}}" # Full image with tag
+```
+
+
+**Important**: Always use triple braces `{{{variable}}}` instead of double braces `{{variable}}` for Lifecycle template variables. This prevents Helm from trying to process them as Helm template functions and ensures they are passed through correctly for Lifecycle to resolve.
+
+
+### Template Resolution Order
+
+1. Lifecycle resolves `{{{variables}}}` before passing values to Helm
+2. The resolved values are then passed to Helm using `--set` flags
+3. Helm processes its own template functions (if any) after receiving the resolved values
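+
+A small sketch of that order in practice (the resolved tag `main-abc123` is a hypothetical value):
+
+```yaml
+# Step 1 - what you write in lifecycle.yaml:
+values:
+  - "image.tag={{{web-api_dockerTag}}}"
+# Step 2 - what Lifecycle passes to Helm after resolution:
+#   --set image.tag=main-abc123
+# Step 3 - the chart's own templates (e.g. {{ .Values.image.tag }}) then render with that value.
+```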
+
+### Example with Service Dependencies
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: api-gateway
+ helm:
+ chart:
+ name: "./charts/gateway"
+ values:
+ - "config.authServiceUrl=http://{{{auth-service_internalHostname}}}:3000"
+ - "config.userServiceUrl=http://{{{user-service_internalHostname}}}:3000"
+ - "image.tag={{{api-gateway_dockerTag}}}"
+
+ - name: auth-service
+ helm:
+ chart:
+ name: "./charts/microservice"
+ values:
+ - "image.tag={{{auth-service_dockerTag}}}"
+ - "database.host={{{postgres-db_internalHostname}}}"
+```
+
+## Deployment Process
+
+
+  1. **Job Creation**: A Kubernetes job is created in the ephemeral namespace
+  2. **RBAC Setup**: A service account with namespace-scoped permissions is created
+  3. **Git Clone**: An init container clones the repository
+  4. **Helm Deploy**: The main container executes the Helm deployment
+  5. **Monitoring**: Logs are streamed in real-time via WebSocket
+
+
+### Concurrent Deployment Handling
+
+Native Helm automatically handles concurrent deployments by:
+
+- Detecting existing deployment jobs
+- Force-deleting the old job
+- Starting the new deployment
+
+This ensures the newest deployment always takes precedence.
+
+## Monitoring Deployments
+
+### Deploy Logs Access
+
+For services using native Helm deployment, you can access deployment logs through the Lifecycle PR comment:
+
+1. Add the `lifecycle-status-comments!` label to your PR
+2. In the status comment that appears, you'll see a **Deploy Logs** link for each service using native Helm
+3. Click the link to view real-time deployment logs
+
+### Log Contents
+
+The deployment logs show:
+
+- Git repository cloning progress (`clone-repo` container)
+- Helm deployment execution (`helm-deploy` container)
+- Real-time streaming of all deployment output
+- Success or failure status
+
+## Chart Types
+
+Lifecycle automatically detects and handles three chart types:
+
+| Type | Detection | Features |
+| ------------- | -------------------------------------------- | ---------------------------------------------- |
+| **ORG_CHART** | Matches `orgChartName` AND has `helm.docker` | Docker image injection, env var transformation |
+| **LOCAL** | Name is "local" or starts with "./" or "../" | Flexible `envMapping` support |
+| **PUBLIC** | Everything else | Standard labels and tolerations |
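+
+For example, a brief sketch of how the `chart.name` value drives detection (service and chart names are illustrative):
+
+```yaml filename="lifecycle.yaml"
+services:
+  - name: app-local
+    helm:
+      deploymentMethod: "native"
+      chart:
+        name: "./charts/app" # LOCAL: "local" or a relative path ("./", "../")
+  - name: app-public
+    helm:
+      deploymentMethod: "native"
+      chart:
+        name: "nginx" # PUBLIC: any other name, fetched from repoUrl
+        repoUrl: "https://charts.bitnami.com/bitnami"
+```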
+
+
+ The `orgChartName` is configured in the database's `global_config` table with
+ key `orgChart`. This allows organizations to define their standard internal
+ Helm chart.
+
+
+## Troubleshooting
+
+### Deployment Fails with "Another Operation in Progress"
+
+**Symptom**: Helm reports an existing operation is blocking deployment
+
+**Solution**: Native Helm automatically handles this by killing existing jobs. If the issue persists:
+
+```bash
+# Check for stuck jobs
+kubectl get jobs -n env-{uuid} -l service={serviceName}
+
+# Force delete if needed
+kubectl delete job {jobName} -n env-{uuid} --force --grace-period=0
+```
+
+### Environment Variables Not Working
+
+**Symptom**: Environment variables not passed to the deployment
+
+**Common Issues**:
+
+1. `envMapping` placed under `chart` instead of directly under `helm`
+2. Incorrect format specification (array vs map)
+3. Missing path configuration
+
+**Correct Configuration**:
+
+```yaml {4-7}
+helm:
+ deploymentMethod: "native"
+ chart:
+ name: local
+ envMapping: # Correct: directly under helm
+ app:
+ format: "array"
+ path: "env"
+```
+
+## Migration Example
+
+Here's a complete example showing how to migrate from GitHub-type services to Helm-type services:
+
+### Before: GitHub-type Services
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: "api-gateway"
+ github:
+ repository: "myorg/api-services"
+ branchName: "main"
+ docker:
+ builder:
+ engine: "buildkit"
+ defaultTag: "main"
+ app:
+ dockerfilePath: "docker/api.dockerfile"
+ env:
+ BACKEND_URL: "{{backend-service_internalHostname}}:3000"
+ LOG_LEVEL: "info"
+ ENV_NAME: "production"
+ ports:
+ - 8080
+ deployment:
+ public: true
+ resource:
+ cpu:
+ request: "100m"
+ memory:
+ request: "256Mi"
+ readiness:
+ tcpSocketPort: 8080
+ hostnames:
+ host: "example.com"
+ defaultInternalHostname: "api-gateway-prod"
+ defaultPublicUrl: "api.example.com"
+
+ - name: "backend-service"
+ github:
+ repository: "myorg/api-services"
+ branchName: "main"
+ docker:
+ builder:
+ engine: "buildkit"
+ defaultTag: "main"
+ app:
+ dockerfilePath: "docker/backend.dockerfile"
+ ports:
+ - 3000
+ env:
+ NODE_ENV: "production"
+ SERVICE_NAME: "backend"
+ deployment:
+ public: false
+ resource:
+ cpu:
+ request: "50m"
+ memory:
+ request: "128Mi"
+ readiness:
+ tcpSocketPort: 3000
+
+ - name: "mysql-database"
+ docker:
+ dockerImage: "mysql"
+ defaultTag: "8.0-debian"
+ ports:
+ - 3306
+ env:
+ MYSQL_ROOT_PASSWORD: "strongpassword123"
+ MYSQL_DATABASE: "app_database"
+ MYSQL_USER: "app_user"
+ MYSQL_PASSWORD: "apppassword456"
+ deployment:
+ public: false
+ resource:
+ cpu:
+ request: "100m"
+ memory:
+ request: "512Mi"
+ readiness:
+ tcpSocketPort: 3306
+ serviceDisks:
+ - name: "mysql-data"
+ mountPath: "/var/lib/mysql"
+ accessModes: "ReadWriteOnce"
+ storageSize: "10Gi"
+```
+
+### After: Helm-type Services with Native Deployment
+
+```yaml filename="lifecycle.yaml"
+services:
+ - name: "api-gateway"
+ helm:
+ deploymentMethod: "native" # Enable native Helm
+ version: "3.14.0"
+ repository: "myorg/api-services"
+ branchName: "main"
+ args: "--wait --timeout 10m"
+ envMapping:
+ app:
+ format: "array"
+ path: "containers.api.env"
+ chart:
+ name: "./charts/microservices"
+ values:
+ - 'image.tag="{{{api-gateway_dockerTag}}}"'
+ - "service.type=LoadBalancer"
+ - "ingress.enabled=true"
+ valueFiles:
+ - "deploy/helm/base-values.yaml"
+ - "deploy/helm/api-gateway-values.yaml"
+ docker:
+ builder:
+ engine: "buildkit"
+ defaultTag: "main"
+ app:
+ dockerfilePath: "docker/api.dockerfile"
+ env:
+ BACKEND_URL: "{{backend-service_internalHostname}}:3000"
+ LOG_LEVEL: "info"
+ ENV_NAME: "production"
+ ports:
+ - 8080
+
+ - name: "backend-service"
+ helm:
+ deploymentMethod: "native"
+ version: "3.14.0"
+ repository: "myorg/api-services"
+ branchName: "main"
+ envMapping:
+ app:
+ format: "map" # Using map format for this service
+ path: "env"
+ chart:
+ name: "./charts/microservices"
+ values:
+ - 'image.tag="{{{backend-service_dockerTag}}}"'
+ - "replicaCount=2"
+ valueFiles:
+ - "deploy/helm/base-values.yaml"
+ - "deploy/helm/backend-values.yaml"
+ docker:
+ builder:
+ engine: "buildkit"
+ defaultTag: "main"
+ app:
+ dockerfilePath: "docker/backend.dockerfile"
+ ports:
+ - 3000
+ env:
+ NODE_ENV: "production"
+ SERVICE_NAME: "backend"
+
+ - name: "mysql-database"
+ helm:
+ deploymentMethod: "native"
+ repository: "myorg/api-services"
+ branchName: "main"
+ chart:
+ name: "mysql" # Using public Helm chart
+ version: "9.14.1"
+ repoUrl: "https://charts.bitnami.com/bitnami"
+ valueFiles:
+ - "deploy/helm/mysql-values.yaml"
+```
+
+### Key Migration Points
+
+1. **Service Type Change**: Changed from `github:` to `helm:` configuration
+2. **Repository Location**: `repository` and `branchName` move from under `github:` to directly under `helm:`
+3. **Deployment Method**: Added `deploymentMethod: "native"` to enable native Helm
+4. **Chart Configuration**: Added `chart:` section with local or public charts
+5. **Environment Mapping**: Added `envMapping:` to control how environment variables are passed
+6. **Helm Arguments**: Added `args:` for Helm command customization
+7. **Docker Configuration**: Kept existing `docker:` config for build process
+
+
+ Note that when converting from GitHub-type to Helm-type services, the
+ `repository` and `branchName` fields move from being nested under `github:` to
+ being directly under `helm:`.
+
+
+
+ Many configuration options (like Helm version, args, and chart details) can be
+ defined in the `global_config` database table, making the service YAML
+ cleaner. Only override when needed.
+