feat(KFLUXVNGD-416): deploy squid to development and staging #8013
Conversation
avi-biton
commented
Sep 8, 2025
- Deploy squid to development and staging environments
- Exclude from production clusters
- Should be rebased and merged only after feat(KFLUXVNGD-469): add trust-manager to development and staging #7918 is merged
Code Review by Gemini

## Code Review
The changes introduce `squid` and `trust-manager` applications, update several component versions, modify Kueue configurations, and adjust banner content.
### Potential Issues and Suggestions:
1. **Squid ApplicationSet `environment` Mismatch**
* **Issue**: The `squid.yaml` ApplicationSet explicitly sets the `environment` to `development`. However, the commit message states that Squid should be deployed to "development and staging environments". This configuration will only deploy to development.
* **File**: `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml`
* **Suggestion**: To deploy to both development and staging, the `environment` value should be dynamic, typically derived from the cluster's environment label or an overlay. A common pattern is to use a `values.environment` placeholder that is then patched by environment-specific overlays.
```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -13,7 +13,7 @@
matchLabels:
appstudio.redhat.com/member-cluster: "true"
values:
sourceRoot: components/squid
- environment: development
+ environment: '{{environment}}' # Or a more specific dynamic value
clusterDir: ""
- list:
elements: []
```
And then ensure the `development` and `staging` overlays patch this `environment` value accordingly.
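For illustration, such an overlay patch could look roughly like this (a sketch only; the file name follows the `development-overlay-patch.yaml` pattern referenced later in this PR, but the exact JSON-pointer path is an assumption about the ApplicationSet layout):

```yaml
# argo-cd-apps/overlays/development/development-overlay-patch.yaml (hypothetical content)
# Replaces the placeholder environment value in the squid ApplicationSet.
- op: replace
  path: /spec/generators/0/merge/generators/0/clusters/values/environment
  value: development
```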
2. **Trust Manager ApplicationSet `environment` Mismatch**
* **Issue**: The `trust-manager.yaml` ApplicationSet sets `environment: staging`. However, the `argo-cd-apps/overlays/development/kustomization.yaml` file includes a patch for `trust-manager`, indicating it should also be deployed in the `development` environment. With `environment: staging` hardcoded, the development deployment will incorrectly source its configuration from `components/trust-manager/staging/`.
* **File**: `argo-cd-apps/base/member/infra-deployments/trust-manager/trust-manager.yaml`
* **Suggestion**: Similar to Squid, the `environment` value should be dynamic.
```diff
--- a/argo-cd-apps/base/member/infra-deployments/trust-manager/trust-manager.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/trust-manager/trust-manager.yaml
@@ -13,7 +13,7 @@
matchLabels:
appstudio.redhat.com/member-cluster: "true"
values:
sourceRoot: components/trust-manager
- environment: staging
+ environment: '{{environment}}' # Or a more specific dynamic value
clusterDir: ""
- list:
elements: []
```
And then ensure the `development` and `staging` overlays patch this `environment` value accordingly.
3. **Squid Helm Generator `environment` Mismatch**
* **Issue**: Both `development/squid-helm-generator.yaml` and `staging/squid-helm-generator.yaml` set `valuesInline.environment: release`. This means the Squid Helm chart will always use "release" specific configurations, even when deployed in development or staging. This is likely unintended.
* **File**: `components/squid/development/squid-helm-generator.yaml`
* **Suggestion**: The `environment` value should reflect the actual environment.
```diff
--- a/components/squid/development/squid-helm-generator.yaml
+++ b/components/squid/development/squid-helm-generator.yaml
@@ -13,4 +13,4 @@
test:
enabled: false
cert-manager:
enabled: false
- environment: release
+ environment: development
```
* **File**: `components/squid/staging/squid-helm-generator.yaml`
* **Suggestion**: The `environment` value should reflect the actual environment.
```diff
--- a/components/squid/staging/squid-helm-generator.yaml
+++ b/components/squid/staging/squid-helm-generator.yaml
@@ -13,4 +13,4 @@
test:
enabled: false
cert-manager:
enabled: false
- environment: release
+ environment: staging
```
4. **Kueue Test Coverage Regression**
* **Issue**: The `hack/test-tekton-kueue-config.py` script has been modified to remove testing of `production-kflux-ocp-p01` and now *only* tests the `development` Kueue configuration. This significantly reduces test coverage for staging and production Kueue configurations, which might have different CEL expressions or priorities.
* **File**: `hack/test-tekton-kueue-config.py`
* **Suggestion**: Reintroduce testing for staging and production Kueue configurations. The script should be able to dynamically load and test different `config.yaml` files (e.g., `components/kueue/staging/base/tekton-kueue/config.yaml` and `components/kueue/production/base/tekton-kueue/config.yaml`) to ensure comprehensive coverage.
```diff
--- a/hack/test-tekton-kueue-config.py
+++ b/hack/test-tekton-kueue-config.py
@@ -1,101 +1,89 @@
#!/usr/bin/env python3
"""
Tekton-Kueue Configuration Test
A comprehensive test suite that validates the CEL expressions in the tekton-kueue configuration by:
-1. **Reading configuration dynamically** from specified config files
-2. **Getting the image** from specified kustomization files
+1. **Reading configuration dynamically** from `components/kueue/development/tekton-kueue/config.yaml`
+2. **Getting the image** from `components/kueue/staging/base/tekton-kueue/kustomization.yaml`
3. **Running mutations** using the actual tekton-kueue container via podman
4. **Validating results** against expected annotations, labels, and priority classes
Usage:
# Check if all prerequisites are met
python hack/test-tekton-kueue-config.py --check-setup
# Run all tests
python hack/test-tekton-kueue-config.py
# Run tests with verbose output
python hack/test-tekton-kueue-config.py --verbose
Test Scenarios:
- Each test can now specify its own config file and kustomization file,
- allowing testing of multiple configurations and images.
+ The test covers all CEL expressions in the configuration:
+
+ 1. **Multi-platform Resource Requests**:
+ - New style: `build-platforms` parameter → `kueue.konflux-ci.dev/requests-*` annotations
+ - Old style: `PLATFORM` parameters in tasks → `kueue.konflux-ci.dev/requests-*` annotations
+
+ 2. **AWS IP Resource Requests**:
+ - New style: `build-platforms` parameter → `kueue.konflux-ci.dev/requests-aws-ip` annotations
+ for platforms NOT in the excluded list (linux/ppc64le, linux/s390x, linux-x86-64, local, localhost, linux/amd64)
+ - Old style: `PLATFORM` parameters in tasks → `kueue.konflux-ci.dev/requests-aws-ip` annotations
+ for platforms NOT in the excluded list
+
+ 3. **Priority Assignment Logic**:
+ - Push events → `konflux-post-merge-build`
+ - Pull requests → `konflux-pre-merge-build`
+ - Integration test push → `konflux-post-merge-test`
+ - Integration test PR → `konflux-pre-merge-test`
+ - Release managed → `konflux-release`
+ - Release tenant → `konflux-tenant-release`
+ - Mintmaker namespace → `konflux-dependency-update`
+ - Default → `konflux-default`
+
+ 4. **Queue Assignment**: All PipelineRuns get `kueue.x-k8s.io/queue-name: pipelines-queue`
Prerequisites:
- Python 3 with PyYAML
- Podman (for running the tekton-kueue container)
- - Access to the tekton-kueue images specified in the kustomizations
+ - Access to the tekton-kueue image specified in the kustomization
CI/CD Integration:
The test runs automatically on pull requests via the GitHub action
`.github/workflows/test-tekton-kueue-config.yaml` when:
- Changes are made to `components/kueue/**`
- The test script itself is modified
- The workflow file is modified
The test will **FAIL** (not skip) if any prerequisites are missing, ensuring
issues are caught early in CI/CD pipelines.
"""
import subprocess
import tempfile
import os
import yaml
import unittest
from pathlib import Path
-from typing import Dict, Any, TypedDict
+from typing import Dict
from dataclasses import dataclass
import sys
@dataclass
-class TestConfig:
+class Prerequisites:
+ image: str
+ podman_version: str
+ config_file: Path
+ kustomization_file: Path
- config_file: Path
- kustomization_file: Path
- image: str
-
-
-class ConfigCombination(TypedDict):
- config_file: str
- kustomization_file: str
-
-
-class TestCombination(TypedDict):
- pipelinerun_key: str
- config_key: str
-
-
-class PipelineRunMetadata(TypedDict, total=False):
- name: str
- namespace: str
- labels: Dict[str, str]
- annotations: Dict[str, str]
-
-
-class PipelineRunDefinition(TypedDict):
- apiVersion: str
- kind: str
- metadata: PipelineRunMetadata
- spec: Dict[str, Any] # More flexible since PipelineRun specs can vary
-
-
-class ExpectedResults(TypedDict):
- annotations: Dict[str, str]
- labels: Dict[str, str]
-
-
-class PipelineRunTestData(TypedDict):
- pipelinerun: PipelineRunDefinition
- expected: ExpectedResults
def get_tekton_kueue_image(kustomization_file: Path) -> str:
"""Read the tekton-kueue image from the given kustomization file."""
try:
with open(kustomization_file, 'r') as f:
kustomization = yaml.safe_load(f)
# Look for the tekton-kueue image in the images section
images = kustomization.get('images', [])
for image in images:
if image.get('name') == 'tekton-kueue':
new_name = image.get('newName', '')
new_tag = image.get('newTag', '')
if new_name and new_tag:
return f"{new_name}:{new_tag}"
raise ValueError("tekton-kueue image not found in kustomization")
except Exception as e:
raise RuntimeError(f"Failed to read tekton-kueue image from {kustomization_file}: {e}")
-def resolve_path(path_str: str, repo_root: Path) -> Path:
- """Resolve a path string to an absolute Path, handling both relative and absolute paths."""
- if Path(path_str).is_absolute():
- return Path(path_str)
- return repo_root / path_str
-
-
-def validate_config_combination(config_key: str, repo_root: Path) -> TestConfig:
- """Validate and resolve config and kustomization files for a config combination."""
- config_data = CONFIG_COMBINATIONS[config_key]
-
- config_file = resolve_path(config_data["config_file"], repo_root)
- kustomization_file = resolve_path(config_data["kustomization_file"], repo_root)
-
- # Validate files exist
- if not config_file.exists():
- raise FileNotFoundError(f"Config file not found for config '{config_key}': {config_file}")
-
- if not kustomization_file.exists():
- raise FileNotFoundError(f"Kustomization file not found for config '{config_key}': {kustomization_file}")
-
- # Get image from kustomization
- image = get_tekton_kueue_image(kustomization_file)
-
- return TestConfig(
- config_file=config_file,
- kustomization_file=kustomization_file,
- image=image
- )
-
-
def check_prerequisites(should_print: bool = True) -> Prerequisites:
"""Check that all prerequisites are available.
Returns a Prerequisites object with discovered info (image, podman_version)
on success. Raises an exception on failure.
"""
messages = ["Checking prerequisites..."]
# Compute repo paths locally
repo_root = Path(__file__).parent.parent
- config_file = repo_root / "components/kueue/development/tekton-kueue/config.yaml"
- kustomization_file = repo_root / "components/kueue/development/tekton-kueue/kustomization.yaml"
+ # The test script should be updated to test all relevant configurations,
+ # or at least the base ones for staging and production.
+ # For now, we'll use the development config as a default for the test setup.
+ config_file = repo_root / "components/kueue/development/tekton-kueue/config.yaml" # This should be dynamic
+ kustomization_file = repo_root / "components/kueue/development/tekton-kueue/kustomization.yaml" # This should be dynamic
# Config file
if not config_file.exists():
raise FileNotFoundError(f"Config file not found: {config_file}")
messages.append(f"\u2713 Config file found: {config_file}")
# Kustomization file
if not kustomization_file.exists():
raise FileNotFoundError(f"Kustomization file not found: {kustomization_file}")
messages.append(f"\u2713 Kustomization file found: {kustomization_file}")
# Image from kustomization
image = get_tekton_kueue_image(kustomization_file)
messages.append(f"\u2713 Tekton-kueue image: {image}")
# Podman availability
result = subprocess.run(["podman", "--version"], capture_output=True, check=True, text=True)
podman_version = result.stdout.strip()
messages.append(f"\u2713 Podman available: {podman_version}")
if should_print:
for line in messages:
print(line)
return Prerequisites(
image=image,
podman_version=podman_version,
config_file=config_file,
kustomization_file=kustomization_file,
)
-# Test PipelineRun definitions (reusable across different configs)
-PIPELINERUN_DEFINITIONS: Dict[str, PipelineRunTestData] = {
+# Test PipelineRun definitions
+TEST_PIPELINERUNS = {
"multiplatform_new": {
"name": "Multi-platform pipeline (new style with build-platforms parameter)",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-multiplatform-new",
"namespace": "default",
"labels": {
"pipelinesascode.tekton.dev/event-type": "push"
}
},
"spec": {
"pipelineRef": {"name": "build-pipeline"},
"params": [
{
"name": "build-platforms",
"value": ["linux/amd64", "linux/s390x", "linux/ppc64le", "linux/arm64", "darwin/amd64"]
}
],
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {
"kueue.konflux-ci.dev/requests-linux-amd64": "1",
"kueue.konflux-ci.dev/requests-linux-s390x": "1",
"kueue.konflux-ci.dev/requests-linux-ppc64le": "1",
"kueue.konflux-ci.dev/requests-linux-arm64": "1",
"kueue.konflux-ci.dev/requests-darwin-amd64": "1",
"kueue.konflux-ci.dev/requests-aws-ip": "3"
},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-post-merge-build"
}
}
},
"multiplatform_old": {
"name": "Multi-platform pipeline (old style with PLATFORM parameter in tasks)",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-multiplatform-old",
"namespace": "default",
"labels": {
"pipelinesascode.tekton.dev/event-type": "pull_request"
}
},
"spec": {
"pipelineSpec": {
"tasks": [
{
"name": "build-amd64",
"taskRef": {"name": "build-task"},
"params": [{"name": "PLATFORM", "value": "linux/amd64"}]
},
{
"name": "build-s390x",
"taskRef": {"name": "build-task"},
"params": [{"name": "PLATFORM", "value": "linux/s390x"}]
},
{
"name": "build-ppc64le",
"taskRef": {"name": "build-task"},
"params": [{"name": "PLATFORM", "value": "linux/ppc64le"}]
},
{
"name": "build-arm64",
"taskRef": {"name": "build-task"},
"params": [{"name": "PLATFORM", "value": "linux/arm64"}]
},
{
"name": "build-darwin",
"taskRef": {"name": "build-task"},
"params": [{"name": "PLATFORM", "value": "darwin/amd64"}]
}
]
},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {
"kueue.konflux-ci.dev/requests-linux-amd64": "1",
"kueue.konflux-ci.dev/requests-linux-s390x": "1",
"kueue.konflux-ci.dev/requests-linux-ppc64le": "1",
"kueue.konflux-ci.dev/requests-linux-arm64": "1",
"kueue.konflux-ci.dev/requests-darwin-amd64": "1",
"kueue.konflux-ci.dev/requests-aws-ip": "3"
},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-pre-merge-build"
}
}
},
"release_managed": {
"name": "Release managed pipeline",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-release-managed",
"namespace": "default",
"labels": {
"appstudio.openshift.io/service": "release",
"pipelines.appstudio.openshift.io/type": "managed"
}
},
"spec": {
"pipelineRef": {"name": "release-pipeline"},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-release"
}
}
},
"release_tenant": {
"name": "Release tenant pipeline",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-release-tenant",
"namespace": "default",
"labels": {
"appstudio.openshift.io/service": "release",
"pipelines.appstudio.openshift.io/type": "tenant"
}
},
"spec": {
"pipelineRef": {"name": "release-pipeline"},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
}
,
"expected": {
"annotations": {},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-tenant-release"
}
}
},
"mintmaker": {
"name": "Mintmaker pipeline",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-mintmaker",
"namespace": "mintmaker"
},
"spec": {
"pipelineRef": {"name": "mintmaker-pipeline"},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-dependency-update"
}
}
},
"integration_test_push": {
"name": "Integration test pipeline (push event)",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-integration-push",
"namespace": "default",
"labels": {
"pipelinesascode.tekton.dev/event-type": "push",
"test.appstudio.openshift.io/scenario": "integration-test"
}
},
"spec": {
"pipelineRef": {"name": "integration-pipeline"},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-post-merge-test"
}
}
},
"integration_test_pr": {
"name": "Integration test pipeline (pull_request event)",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-integration-pr",
"namespace": "default",
"labels": {
"pipelinesascode.tekton.dev/event-type": "pull_request",
"test.appstudio.openshift.io/scenario": "integration-test"
}
},
"spec": {
"pipelineRef": {"name": "integration-pipeline"},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-pre-merge-test"
}
}
},
"default_priority": {
"name": "Pipeline with default priority",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-default-priority",
"namespace": "default"
},
"spec": {
"pipelineRef": {"name": "default-pipeline"},
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-default"
}
}
},
"aws_platforms_only": {
"name": "Pipeline with only AWS-based platforms",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-aws-platforms-only",
"namespace": "default",
"labels": {
"pipelinesascode.tekton.dev/event-type": "push"
}
},
"spec": {
"pipelineRef": {"name": "build-pipeline"},
"params": [
{
"name": "build-platforms",
"value": ["linux/arm64", "darwin/amd64"]
}
],
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {
"kueue.konflux-ci.dev/requests-linux-arm64": "1",
"kueue.konflux-ci.dev/requests-darwin-amd64": "1",
"kueue.konflux-ci.dev/requests-aws-ip": "2"
},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-post-merge-build"
}
}
},
"mixed_platforms_excluded_included": {
"name": "Pipeline with mixed excluded and included platforms for AWS IP",
"pipelinerun": {
"apiVersion": "tekton.dev/v1",
"kind": "PipelineRun",
"metadata": {
"name": "test-mixed-platforms",
"namespace": "default",
"labels": {
"pipelinesascode.tekton.dev/event-type": "push"
}
},
"spec": {
"pipelineRef": {"name": "build-pipeline"},
"params": [
{
"name": "build-platforms",
"value": ["linux/amd64", "linux/s390x", "linux/ppc64le", "linux/arm64", "darwin/amd64"]
}
],
"workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
}
},
"expected": {
"annotations": {
"kueue.konflux-ci.dev/requests-linux-amd64": "1",
"kueue.konflux-ci.dev/requests-linux-s390x": "1",
"kueue.konflux-ci.dev/requests-linux-ppc64le": "1",
"kueue.konflux-ci.dev/requests-linux-arm64": "1",
"kueue.konflux-ci.dev/requests-darwin-amd64": "1",
"kueue.konflux-ci.dev/requests-aws-ip": "3"
},
"labels": {
"kueue.x-k8s.io/queue-name": "pipelines-queue",
"kueue.x-k8s.io/priority-class": "konflux-post-merge-build"
}
}
- },
-
- "user-specific-priority": {
- "name": "Multi-platform pipeline with user-specific priority (new style)",
- "pipelinerun": {
- "apiVersion": "tekton.dev/v1",
- "kind": "PipelineRun",
- "metadata": {
- "name": "test-user-specific-priority",
- "namespace": "default",
- "labels": {
- "pipelinesascode.tekton.dev/event-type": "push",
- "kueue.x-k8s.io/priority-class": "konflux-user-specific"
- }
- },
- "spec": {
- "pipelineRef": {"name": "build-pipeline"},
- "params": [
- {
- "name": "build-platforms",
- "value": ["linux/amd64", "linux/s390x", "linux/ppc64le", "linux/arm64", "darwin/amd64"]
- }
- ],
- "workspaces": [{"name": "shared-workspace", "emptyDir": {}}]
- }
- },
- "expected": {
- "annotations": {
- "kueue.konflux-ci.dev/requests-linux-amd64": "1",
- "kueue.konflux-ci.dev/requests-linux-s390x": "1",
- "kueue.konflux-ci.dev/requests-linux-ppc64le": "1",
- "kueue.konflux-ci.dev/requests-linux-arm64": "1",
- "kueue.konflux-ci.dev/requests-darwin-amd64": "1",
- "kueue.konflux-ci.dev/requests-aws-ip": "3"
- },
- "labels": {
- "kueue.x-k8s.io/queue-name": "pipelines-queue",
- "kueue.x-k8s.io/priority-class": "konflux-user-specific"
- }
- }
}
-# Configuration combinations that can be applied to any PipelineRun
-CONFIG_COMBINATIONS: Dict[str, ConfigCombination] = {
- "development": {
- "name": "Development config",
- "config_file": "components/kueue/development/tekton-kueue/config.yaml",
- "kustomization_file": "components/kueue/development/tekton-kueue/kustomization.yaml"
- },
- "staging": {
- "name": "Staging config",
- "config_file": "components/kueue/staging/base/tekton-kueue/config.yaml",
- "kustomization_file": "components/kueue/staging/base/tekton-kueue/kustomization.yaml"
- },
- "production": {
- "name": "Production config",
- "config_file": "components/kueue/production/base/tekton-kueue/config.yaml",
- "kustomization_file": "components/kueue/production/base/tekton-kueue/kustomization.yaml"
- },
- "production-kflux-ocp-p01": {
- "name": "Production config",
- "config_file": "components/kueue/production/kflux-ocp-p01/config.yaml",
- "kustomization_file": "components/kueue/production/base/tekton-kueue/kustomization.yaml"
- }
-}
-
-# Test combinations: which PipelineRuns to test with which configs
-# This creates a cartesian product of PipelineRuns and configs
-TEST_COMBINATIONS: Dict[str, TestCombination] = {
- # Test all PipelineRuns with development config (default)
- "multiplatform_new_dev": {
- "pipelinerun_key": "multiplatform_new",
- "config_key": "development"
- },
- "multiplatform_old_dev": {
- "pipelinerun_key": "multiplatform_old",
- "config_key": "development"
- },
- "release_managed_dev": {
- "pipelinerun_key": "release_managed",
- "config_key": "development"
- },
- "release_tenant_dev": {
- "pipelinerun_key": "release_tenant",
- "config_key": "development"
- },
- "mintmaker_dev": {
- "pipelinerun_key": "mintmaker",
- "config_key": "development"
- },
- "integration_test_push_dev": {
- "pipelinerun_key": "integration_test_push",
- "config_key": "development"
- },
- "integration_test_pr_dev": {
- "pipelinerun_key": "integration_test_pr",
- "config_key": "development"
- },
- "default_priority_dev": {
- "pipelinerun_key": "default_priority",
- "config_key": "development"
- },
- "aws_platforms_only_dev": {
- "pipelinerun_key": "aws_platforms_only",
- "config_key": "development"
- },
- "mixed_platforms_excluded_included_dev": {
- "pipelinerun_key": "mixed_platforms_excluded_included",
- "config_key": "development"
- },
-
- # Test key PipelineRuns with staging config
- "multiplatform_new_staging": {
- "pipelinerun_key": "multiplatform_new",
- "config_key": "staging"
- },
- "release_managed_staging": {
- "pipelinerun_key": "release_managed",
- "config_key": "staging"
- },
- "integration_test_push_staging": {
- "pipelinerun_key": "integration_test_push",
- "config_key": "staging"
- },
-
- # Test key PipelineRuns with production config
- "multiplatform_new_production": {
- "pipelinerun_key": "multiplatform_new",
- "config_key": "production"
- },
- "release_managed_production": {
- "pipelinerun_key": "release_managed",
- "config_key": "production"
- },
-
- # Example: Test the same PipelineRun with different configs to show reusability
- "user-specific-priority_and_mixed_platforms_production-kflux-ocp-p01": {
- "pipelinerun_key": "user-specific-priority",
- "config_key": "production-kflux-ocp-p01"
- }
-}
-
-
class TektonKueueMutationTest(unittest.TestCase):
"""Test suite for tekton-kueue CEL expression mutations."""
@classmethod
def setUpClass(cls):
-"""Set up test class - check prerequisites and pre-process configs."""
-cls.processed_configs = check_prerequisites(should_print=False)
-cls.repo_root = Path(__file__).parent.parent
-print("Prerequisites validated for all tests.")
+"""Set up test class - check prerequisites."""
+info = check_prerequisites(should_print=False)
+cls.tekton_kueue_image = info.image
+cls.config_file = info.config_file
+print(f"Using tekton-kueue image: {cls.tekton_kueue_image}")
-def run_mutation_test(self, test_combination: TestCombination) -> Dict[str, Any]:
+def run_mutation_test(self, test_data: Dict) -> Dict:
"""Run a single mutation test and return results."""
- # Get pre-processed configuration
- config_key = test_combination["config_key"]
- test_config = self.processed_configs[config_key]
-
- # Get the PipelineRun definition
- pipelinerun_key = test_combination["pipelinerun_key"]
- pipelinerun_data = PIPELINERUN_DEFINITIONS[pipelinerun_key]
- pipelinerun = pipelinerun_data["pipelinerun"]
+ pipelinerun = test_data["pipelinerun"]
with tempfile.TemporaryDirectory() as temp_dir:
# Write the config file
config_path = Path(temp_dir) / "config.yaml"
pipelinerun_path = Path(temp_dir) / "pipelinerun.yaml"
- # Copy the test-specific config file
+ # Copy the config file
import shutil
- shutil.copy2(test_config.config_file, config_path)
+ shutil.copy2(self.config_file, config_path)
# Write the PipelineRun
with open(pipelinerun_path, 'w') as f:
yaml.dump(pipelinerun, f, default_flow_style=False)
# Set proper permissions
os.chmod(config_path, 0o644)
os.chmod(pipelinerun_path, 0o644)
os.chmod(temp_dir, 0o755)
- # Run the mutation with test-specific image
+ # Run the mutation
cmd = [
"podman", "run", "--rm",
"-v", f"{temp_dir}:/workspace:z",
-test_config.image,
+self.tekton_kueue_image,
"mutate",
"--pipelinerun-file", "/workspace/pipelinerun.yaml",
"--config-dir", "/workspace"
]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
self.fail(f"Mutation failed: {result.stderr}")
# Parse the mutated PipelineRun
try:
mutated = yaml.safe_load(result.stdout)
except yaml.YAMLError as e:
self.fail(f"Failed to parse mutated YAML: {e}")
return mutated
- def validate_mutation_result(self, test_key: str, test_combination: TestCombination) -> None:
+ def validate_mutation_result(self, test_key: str, test_data: Dict):
"""Helper method to validate mutation results."""
with self.subTest(test=test_key):
- # Get pre-processed configuration for logging
- config_key = test_combination["config_key"]
- test_config = self.processed_configs[config_key]
- print(f"Running test '{test_key}' with config: {test_config.config_file}, image: {test_config.image}")
-
- mutated = self.run_mutation_test(test_combination)
-
- # Get expected results from the PipelineRun definition
- pipelinerun_key = test_combination["pipelinerun_key"]
- pipelinerun_data = PIPELINERUN_DEFINITIONS[pipelinerun_key]
- expected = pipelinerun_data["expected"]
+ mutated = self.run_mutation_test(test_data)
+ expected = test_data["expected"]
- original_metadata = pipelinerun_data["pipelinerun"].get("metadata", {})
+ original_metadata = test_data["pipelinerun"].get("metadata", {})
original_annotations = original_metadata.get("annotations", {}) or {}
original_labels = original_metadata.get("labels", {}) or {}
# Check annotations (full equality vs original + expected)
annotations = mutated.get("metadata", {}).get("annotations", {})
expected_annotations = expected["annotations"]
expected_annotations_full = {**original_annotations, **expected_annotations}
self.assertDictEqual(
annotations,
expected_annotations_full,
f"Annotations mismatch; expected {expected_annotations_full}, got {annotations}"
)
# Check labels (full equality vs original + expected)
labels = mutated.get("metadata", {}).get("labels", {})
expected_labels = expected["labels"]
expected_labels_full = {**original_labels, **expected_labels}
self.assertDictEqual(
labels,
expected_labels_full,
f"Labels mismatch; expected {expected_labels_full}, got {labels}"
)
def test_all_mutations(self):
"""Test all tekton-kueue mutation scenarios."""
- for test_key, test_combination in TEST_COMBINATIONS.items():
- self.validate_mutation_result(test_key, test_combination)
+ for test_key, test_data in TEST_PIPELINERUNS.items():
+ self.validate_mutation_result(test_key, test_data)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="Test tekton-kueue CEL expressions")
parser.add_argument("--check-setup", action="store_true",
help="Check if prerequisites are met and show configuration")
parser.add_argument("--verbose", "-v", action="store_true",
help="Run tests with verbose output")
# Parse known args to allow unittest args to pass through
args, unknown = parser.parse_known_args()
if args.check_setup:
try:
- processed_configs = check_prerequisites(should_print=True)
+ info = check_prerequisites(should_print=True)
except Exception as e:
print(f"✗ {e}")
sys.exit(1)
print("\n✅ All prerequisites met! Ready to run tests.")
print("Run: python hack/test-tekton-kueue-config.py")
print("\nNote: Tests will FAIL (not skip) if any prerequisites are missing.")
else:
# Run unittest with remaining args
```
5. **Kueue `config.yaml` Deletion for `kflux-ocp-p01`**
* **Issue**: The `components/kueue/production/kflux-ocp-p01/config.yaml` file has been deleted, and its `configMapGenerator` removed from `kustomization.yaml`. This means `kflux-ocp-p01` will now implicitly use the `config.yaml` from `components/kueue/production/base/tekton-kueue/config.yaml`. While this might be intentional, it's a significant change in configuration for a specific production cluster.
* **File**: `components/kueue/production/kflux-ocp-p01/config.yaml` (deleted) and `components/kueue/production/kflux-ocp-p01/kustomization.yaml`
* **Suggestion**: Confirm that the `production/base` Kueue configuration is indeed the desired and fully tested configuration for `kflux-ocp-p01`. If there were specific requirements for `kflux-ocp-p01` that were handled by its dedicated `config.yaml`, ensure they are now covered by the base or are no longer needed.
### Minor Observations:
* **Component Version Bumps**: Numerous components (`build-service`, `image-controller`, `integration`, `konflux-ui`, `mintmaker`, `monitoring/grafana/release`, `multi-platform-controller`, `project-controller`, `release`) have their image tags and Git references updated. These are standard updates, assuming the new versions have been tested.
* **Kueue ClusterQueue Changes**: The `components/kueue/production/kflux-rhel-p01/queue-config/cluster-queue.yaml` has significant changes to `coveredResources` and `nominalQuota`, including the removal of `linux-d160-*` resources and a reduction in `linux-s390x` quota. These changes align with the `multi-platform-controller` host configuration updates and seem consistent.
* **Multi-Platform Controller S390X Host Reduction**: The `components/multi-platform-controller/production-downstream/kflux-rhel-p01/host-config.yaml` shows a substantial reduction in S390X static hosts and the removal of `linux-large/s390x` platforms. This is a significant change in builder capacity for S390X. Ensure this reduction is intentional and has been communicated to relevant teams.
Please address the identified issues, especially the `environment` mismatches for Squid and Trust Manager, and the regression in Kueue test coverage.
612f0a1
to
639e3f3
Compare
Code Review by Gemini

The following issues and suggestions have been identified in the provided diff:
639e3f3
to
cebca37
Compare
Code Review by Gemini

The pull request introduces several changes, many of which appear unrelated to the stated goal of deploying Squid to development and staging environments. This makes the review process difficult and increases the risk of unintended side effects. Here's a breakdown of the issues and suggested improvements:

1. Scope Creep / Unrelated Changes
   Recommendation: These unrelated changes should be split into separate pull requests to allow for focused review and reduce the risk of introducing regressions.
2. Squid ApplicationSet Configuration Issues
3. Trust Manager ApplicationSet Configuration Issues
4. Kueue Test Coverage Regression
5.
6. Multi-platform Controller Host Configuration Changes
cebca37
to
f2b0c34
Compare
Code Review by Gemini

### `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml`
**Issue:** The `ApplicationSet` hardcodes the `environment` to `development`, which prevents deployment to `staging` environments as stated in the commit message.
**Suggestion:** Dynamically set the `environment` based on a label on the target cluster. This assumes that member clusters are labeled with `appstudio.redhat.com/environment` (e.g., `development`, `staging`).
```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -12,7 +12,7 @@
appstudio.redhat.com/member-cluster: "true"
values:
sourceRoot: components/squid
- environment: development
+ environment: '{{metadata.labels.appstudio.redhat.com/environment}}'
clusterDir: ""
- list:
elements: []
```

**Issue:** The `list` generator under the `merge` generator is empty.

**Suggestion:** Remove the empty `list` generator:

```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -14,8 +14,6 @@
values:
sourceRoot: components/squid
environment: '{{metadata.labels.appstudio.redhat.com/environment}}'
clusterDir: ""
- - list:
- elements: []
template:
metadata:
name: squid-{{nameNormalized}}
```
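For reference, the cluster label this suggestion relies on would live on the Argo CD cluster `Secret` registered for each member cluster. A minimal sketch (the secret name, namespace, and label values here are assumptions, not taken from this repository):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: member-cluster-example          # hypothetical cluster secret
  namespace: argocd                     # wherever Argo CD keeps cluster secrets
  labels:
    argocd.argoproj.io/secret-type: cluster
    appstudio.redhat.com/member-cluster: "true"
    appstudio.redhat.com/environment: staging
type: Opaque
```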
f2b0c34
to
0a08397
Compare
Code Review by Gemini

Here are some observations and suggested improvements for the changes:
**1. `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml` - ApplicationSet configuration**
The current `ApplicationSet` configuration in `squid.yaml` hardcodes `environment: staging` for all member clusters. This means the `source.path` will always resolve to `components/squid/staging/`, effectively preventing the `components/squid/development/` configuration from being used, even for clusters intended for development. This contradicts the commit message "Deploy squid to development and staging environments".
To correctly deploy to both development and staging environments, the `ApplicationSet` needs to dynamically select the environment based on cluster labels.
**Suggested change:**
```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -9,14 +9,24 @@
mergeKeys:
- nameNormalized
generators:
- clusters:
selector:
matchLabels:
appstudio.redhat.com/member-cluster: "true"
- values:
- sourceRoot: components/squid
- environment: staging
+ appstudio.redhat.com/environment: development
+ values:
+ sourceRoot: components/squid
+ environment: development
+ clusterDir: ""
+ - clusters:
+ selector:
+ matchLabels:
+ appstudio.redhat.com/member-cluster: "true"
+ appstudio.redhat.com/environment: staging
+ values:
+ sourceRoot: components/squid
+ environment: staging
clusterDir: ""
- list:
elements: []
template:
```

**2.** The `components/squid/development/squid-helm-generator.yaml` generator sets `environment: release` instead of `development`.

**Suggested change:**

```diff
--- a/components/squid/development/squid-helm-generator.yaml
+++ b/components/squid/development/squid-helm-generator.yaml
@@ -15,7 +15,7 @@
test:
enabled: false
cert-manager:
enabled: false
- environment: release
+ environment: development
resources:
requests:
cpu: 100m
```

**3.** Similar to the development environment, the staging generator sets `environment: release` instead of `staging`.

**Suggested change:**

```diff
--- a/components/squid/staging/squid-helm-generator.yaml
+++ b/components/squid/staging/squid-helm-generator.yaml
@@ -15,7 +15,7 @@
test:
enabled: false
cert-manager:
enabled: false
- environment: release
+ environment: staging
resources:
requests:
cpu: 100m
```

**4.** With the ApplicationSet changes above, the development overlay no longer needs its dedicated patch for the `squid` ApplicationSet.

**Suggested change:**

```diff
--- a/argo-cd-apps/overlays/development/kustomization.yaml
+++ b/argo-cd-apps/overlays/development/kustomization.yaml
@@ -227,7 +227,4 @@
target:
kind: ApplicationSet
version: v1alpha1
name: trust-manager
- - path: development-overlay-patch.yaml
- target:
- kind: ApplicationSet
- version: v1alpha1
- name: squid
```
@yftacherzog @ifireball @gbenhaim @amisstea PTAL :)

/test appstudio-e2e-tests
0a08397
to
76b4850
Compare
76b4850
to
b53db7d
Compare
Code Review by Gemini

The following issues and improvements have been identified:

**Bug**

File: `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml`

The ApplicationSet hardcodes `environment: staging` for all member clusters.

**Suggested change:** To correctly deploy different configurations based on the cluster's environment, split the `clusters` generator by environment:

```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -6,15 +6,26 @@
- merge:
mergeKeys:
- nameNormalized
generators:
- clusters:
selector:
matchLabels:
appstudio.redhat.com/member-cluster: "true"
- values:
- sourceRoot: components/squid
- environment: staging
+ environment: development # Target development clusters
+ values:
+ sourceRoot: components/squid
+ environment: development # Use development configuration
clusterDir: ""
+ - clusters:
+ selector:
+ matchLabels:
+ appstudio.redhat.com/member-cluster: "true"
+ environment: staging # Target staging clusters
+ values:
+ sourceRoot: components/squid
+ environment: staging # Use staging configuration
+ clusterDir: ""
- list:
elements: []
template:
metadata:
```

**Improvement**

Files: `components/squid/development/squid-helm-generator.yaml` and `components/squid/staging/squid-helm-generator.yaml`

Both Helm generators set `environment: release` instead of the actual environment.

**Suggested changes:**

File: `components/squid/development/squid-helm-generator.yaml`

```diff
--- a/components/squid/development/squid-helm-generator.yaml
+++ b/components/squid/development/squid-helm-generator.yaml
@@ -14,4 +14,4 @@
cert-manager:
enabled: false
- environment: release
+ environment: development
resources:
requests:
```

File: `components/squid/staging/squid-helm-generator.yaml`

```diff
--- a/components/squid/staging/squid-helm-generator.yaml
+++ b/components/squid/staging/squid-helm-generator.yaml
@@ -14,4 +14,4 @@
cert-manager:
enabled: false
- environment: release
+ environment: staging
resources:
requests:
```
/retest
/lgtm
repoURL: https://github.com/redhat-appstudio/infra-deployments.git
targetRevision: main
destination:
  namespace: proxy
Maybe better to call the namespace konflux-caching-proxy, just so it will be clear what the purpose of this proxy is.
The ADR mentions the proxy will be available at `http.proxy.svc.cluster.local` (https://github.com/konflux-ci/architecture/blob/3f21424a9a089347fee6185e6763c62f1c3f51d8/ADR/0047-caching-for-container-build-layers.md#L28), which means deploying to the `proxy` namespace. On the other hand, our service name is currently `squid` and not `http`. We should reconcile these one way or another. @ifireball do you have a strong preference for this?
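If the chart keeps installing the Service as `squid`, one way to also satisfy the ADR's `http.proxy.svc.cluster.local` address would be an alias Service (purely a sketch; nothing here was agreed on in this thread, and the namespace and names are assumptions):

```yaml
# Hypothetical alias so clients can use http.proxy.svc.cluster.local
# while the chart keeps installing the Service as "squid".
apiVersion: v1
kind: Service
metadata:
  name: http
  namespace: proxy
spec:
  type: ExternalName
  externalName: squid.proxy.svc.cluster.local
```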
We might update the namespace before we bump to production.
Code Review by Gemini

The following issues have been identified in the provided code changes:

### Issue 1: Hardcoded environment in ApplicationSet

The ApplicationSet hardcodes `environment: staging` for every member cluster.

File: `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml`

```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -12,7 +12,7 @@
matchLabels:
appstudio.redhat.com/member-cluster: "true"
values:
sourceRoot: components/squid
- environment: staging
+ environment: '{{ (index .metadata.labels "appstudio.redhat.com/environment") | default "staging" }}'
clusterDir: ""
- list:
elements: []
```

This change assumes that member clusters are labeled with `appstudio.redhat.com/environment`.

### Issue 2: Incorrect environment in HelmChartInflationGenerator for development

The development Helm generator sets `environment: release` instead of `development`.

File: `components/squid/development/squid-helm-generator.yaml`

```diff
--- a/components/squid/development/squid-helm-generator.yaml
+++ b/components/squid/development/squid-helm-generator.yaml
@@ -15,7 +15,7 @@
enabled: false
cert-manager:
enabled: false
- environment: release
+ environment: development
resources:
requests:
cpu: 100m
```

### Issue 3: Incorrect environment in HelmChartInflationGenerator for staging

Similar to the development environment, the staging Helm generator sets `environment: release` instead of `staging`.

File: `components/squid/staging/squid-helm-generator.yaml`

```diff
--- a/components/squid/staging/squid-helm-generator.yaml
+++ b/components/squid/staging/squid-helm-generator.yaml
@@ -15,7 +15,7 @@
enabled: false
cert-manager:
enabled: false
- environment: release
+ environment: staging
resources:
requests:
cpu: 100m
```
34f3fb5
to
b736760
Compare
Code Review by Gemini

The following issues and improvements are identified in the provided code changes:

**Bug**

File: `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml`

The ApplicationSet hardcodes `environment: staging` for all member clusters.

Exact lines to change:

```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -4,19 +4,29 @@
generators:
- merge:
mergeKeys:
- nameNormalized
generators:
- clusters:
selector:
matchLabels:
appstudio.redhat.com/member-cluster: "true"
- values:
- sourceRoot: components/squid
- environment: staging
+ environment: development # Assuming clusters are labeled with 'environment: development'
+ values:
+ sourceRoot: components/squid
+ environment: development
+ clusterDir: ""
+ - clusters:
+ selector:
+ matchLabels:
+ appstudio.redhat.com/member-cluster: "true"
+ environment: staging # Assuming clusters are labeled with 'environment: staging'
+ values:
+ sourceRoot: components/squid
+ environment: staging
clusterDir: ""
- list:
elements: []
template:
metadata:
name: squid-{{nameNormalized}}
annotations:
```

**Improvement**

Files: Both `components/squid/development/squid-helm-generator.yaml` and `components/squid/staging/squid-helm-generator.yaml` set `environment: release` instead of the actual environment.

Exact lines to change for `components/squid/development/squid-helm-generator.yaml`:

```diff
--- a/components/squid/development/squid-helm-generator.yaml
+++ b/components/squid/development/squid-helm-generator.yaml
@@ -14,4 +14,4 @@
cert-manager:
enabled: false
- environment: release
+ environment: development
resources:
requests:
Exact lines to change for --- a/components/squid/staging/squid-helm-generator.yaml
+++ b/components/squid/staging/squid-helm-generator.yaml
@@ -14,4 +14,4 @@
cert-manager:
enabled: false
- environment: release
+ environment: staging
resources:
requests:
|
b736760
to
9901705
Compare
Code Review by Gemini

The following issues and improvements have been identified:

Issue 1: ApplicationSet hardcodes `environment`
Code Review by Gemini

### `argo-cd-apps/base/member/infra-deployments/squid/squid.yaml`
**Bug:** The `merge` generator will not produce any applications because one of its sub-generators, the `list` generator, is empty. For the `merge` generator to produce output, all its sub-generators must produce at least one element.
```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -3,19 +3,14 @@
name: squid
spec:
generators:
- - merge:
- mergeKeys:
- - nameNormalized
- generators:
- - clusters:
- selector:
- matchLabels:
- appstudio.redhat.com/member-cluster: "true"
- values:
- sourceRoot: components/squid
- environment: staging
- clusterDir: ""
- - list:
- elements: []
+ - clusters:
+ selector:
+ matchLabels:
+ appstudio.redhat.com/member-cluster: "true"
+ values:
+ sourceRoot: components/squid
+ environment: staging
+ clusterDir: ""
template:
metadata:
name: squid-{{nameNormalized}}
```
996d306
to
c0bb5ed
Compare
c0bb5ed
to
fa76ad6
Compare
ERROR: # Code Review by Gemini
fa76ad6
to
0e24bab
Compare
Code Review by Gemini

The changes introduce several new components related to the "sandbox" and "toolchain" operators. Here are the identified issues and suggested improvements:

1. Inconsistent Deployment Scope for New "Sandbox" Components
2. Inconsistent Namespace Definitions for Monitoring Resources
3. Missing Namespace Creation for New Operators
Code Review by Gemini

```diff
--- a/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
+++ b/argo-cd-apps/base/member/infra-deployments/squid/squid.yaml
@@ -10,7 +10,7 @@
matchLabels:
appstudio.redhat.com/member-cluster: "true"
values:
sourceRoot: components/squid
- environment: staging
+ environment: '{{labels.environment}}'
clusterDir: ""
- list:
elements: []
@@ -27,7 +27,7 @@
syncPolicy:
automated:
prune: true
- selfHeal: false
+ selfHeal: true
syncOptions:
- CreateNamespace=true
retry:
```

**Issue:** The ApplicationSet hardcodes `environment: staging` and disables `selfHeal`.

**Suggestion:** Derive the environment from a cluster label and enable `selfHeal`, as shown in the diff above.

```diff
--- a/components/release/development/kustomization.yaml
@@ -13,10 +13,6 @@
namespace: release-service
patches:
- target:
group: apps
version: v1
kind: Deployment
name: controller-manager
path: sealights-token-patch.yaml
- - target:
- group: apps
- version: v1
- kind: Deployment
- name: release-service-controller-manager
- path: manager-memory-patch.yaml
--- /dev/null
+++ b/components/release/development/manager-memory-patch.yaml
@@ -0,0 +1,13 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: release-service-controller-manager
+ namespace: system
+spec:
+ template:
+ spec:
+ containers:
+ - name: manager
+ resources:
+ limits:
+ memory: 256Mi
```

**Issue:** The changes related to `components/release/development` appear unrelated to the Squid deployment this PR is about.

**Suggestion:** Revert these changes and submit them in a separate pull request focused solely on the release-service change.

```diff
--- a/components/squid/development/squid-helm-generator.yaml
+++ b/components/squid/development/squid-helm-generator.yaml
@@ -0,0 +1,23 @@
+apiVersion: builtin
+kind: HelmChartInflationGenerator
+metadata:
+ name: squid-helm
+name: squid-helm
+repo: oci://quay.io/konflux-ci/caching
+version: 0.1.275+b74f6fd
+valuesInline:
+ installCertManagerComponents: false
+ mirrord:
+ enabled: false
+ test:
+ enabled: false
+ cert-manager:
+ enabled: false
+ environment: release
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 200m
+ memory: 256Mi
--- a/components/squid/staging/squid-helm-generator.yaml
+++ b/components/squid/staging/squid-helm-generator.yaml
@@ -0,0 +1,23 @@
+apiVersion: builtin
+kind: HelmChartInflationGenerator
+metadata:
+ name: squid-helm
+name: squid-helm
+repo: oci://quay.io/konflux-ci/caching
+version: 0.1.275+b74f6fd
+valuesInline:
+ installCertManagerComponents: false
+ mirrord:
+ enabled: false
+ test:
+ enabled: false
+ cert-manager:
+ enabled: false
+ environment: release
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 200m
+ memory: 256Mi
```

**Issue:** Both new Squid Helm generators set `environment: release`.

**Suggestion:** Set `environment` to `development` and `staging` in the respective generators.
bc4136b
to
5497c24
Compare
/retest
- Add squid applicationset
- Add squid to development and staging environments
- Exclude squid from production
- Add OWNERS file

Signed-off-by: Avi Biton <[email protected]>
2352029
to
40b2b44
Compare
@avi-biton: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/retest-required
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: amisstea, avi-biton, psturc, yftacherzog

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
b1de394 into redhat-appstudio:main