
Conversation

@JAVGan
Contributor

@JAVGan JAVGan commented Nov 28, 2025

Handling of merge-index-image for containerized IIB

Refers to CLOUDDST-29408

Signed-off-by: Jonathan Gangi [email protected]

Assisted-by: Cursor/Gemini

Summary by Sourcery

Introduce a containerized merge-index-image workflow and refactor bundle and deprecation handling to be reusable across legacy and containerized flows.

New Features:

  • Add a containerized merge task that orchestrates merging source and target index images via Git-backed FBC catalogs and external build pipelines.

Enhancements:

  • Extract reusable logic for computing bundles missing in the source index and for deprecating bundles directly in an index DB, and reuse it in both legacy and containerized merge flows.
  • Wire the merge-index-image API endpoint and worker configuration to the new containerized merge task, replacing the legacy merge handler.
  • Adjust index DB and imagestream artifact helpers to read configuration from worker config for improved flexibility.

Tests:

  • Add a comprehensive test suite for the containerized merge task covering success, MR-based flows, deprecations, invalid bundles, pipeline failures, and tag handling.
  • Update existing tests for artifact pullspec construction and manifest list creation to use mocked worker configuration and the new containerized merge entrypoint.

@sourcery-ai

sourcery-ai bot commented Nov 28, 2025

Reviewer's Guide

Introduces a new containerized merge-index-image workflow wired into the existing merge_index_image API, refactors bundle-diff/deprecation helpers for reuse, and adjusts tests/config to support the new containerized path and configuration-driven behavior.

Sequence diagram for the new containerized merge-index-image workflow

sequenceDiagram
    actor Client
    participant API as IIB_API
    participant Celery as Celery_worker
    participant Utils as build_containerized_merge
    participant OpmOps as opm_operations
    participant Git as Git_repository
    participant Konflux as Konflux_pipeline
    participant Registry as Container_registry
    participant ArtifactStore as Index_db_artifact_store

    Client->>API: POST /merge-index-image
    API->>Celery: handle_containerized_merge_request.apply_async
    activate Celery
    Celery->>Utils: handle_containerized_merge_request(...)
    activate Utils

    Utils->>Utils: reset_docker_config
    Utils->>Utils: set_request_state(preparing request)
    Utils->>Utils: prepare_request_for_build(RequestConfigMerge)
    Utils->>Utils: Opm.set_opm_version(target_index_resolved)
    Utils->>Utils: _update_index_image_build_state

    %% Git repository preparation
    Utils->>Git: prepare_git_repository_for_build
    Git-->>Utils: index_git_repo, local_git_repo_path, localized_git_catalog_path

    %% Fetch index.db artifacts
    Utils->>ArtifactStore: fetch_and_verify_index_db_artifact(source_from_index_resolved)
    ArtifactStore-->>Utils: source_index_db_path
    Utils->>ArtifactStore: fetch_and_verify_index_db_artifact(target_index_resolved)
    ArtifactStore-->>Utils: target_index_db_path

    %% Read bundles from index.db
    Utils->>Utils: _get_present_bundles(source_index_db_path)
    Utils->>Utils: _get_present_bundles(target_index_db_path)

    %% Validate bundles' pullspecs
    Utils->>Utils: set_request_state(validating bundles)
    Utils->>Registry: validate_bundles_in_parallel(source+target bundles)
    Registry-->>Utils: validation results

    %% Compute missing and invalid bundles
    Utils->>Utils: set_request_state(adding bundles missing in source)
    Utils->>Utils: get_missing_bundles_from_target_to_source
    Utils-->>Utils: missing_bundles, invalid_bundles
    Utils->>Utils: add_max_ocp_version_property(missing_bundle_paths)

    %% Update intermediary index.db
    Utils->>OpmOps: _get_or_create_temp_index_db_file
    OpmOps-->>Utils: intermediary_db
    Utils->>OpmOps: _opm_registry_add(intermediary_db, missing_bundle_paths)

    %% Deprecations
    Utils->>Utils: get_bundles_from_deprecation_list
    Utils->>Utils: get_bundles_latest_version
    Utils->>OpmOps: deprecate_bundles_db(intermediary_db, deprecation_bundles)

    %% Migrate to FBC and merge catalogs
    Utils->>OpmOps: get_list_bundles(intermediary_db)
    OpmOps-->>Utils: bundles_in_db
    Utils->>OpmOps: opm_migrate(intermediary_db)
    OpmOps-->>Utils: fbc_dir
    Utils->>Utils: merge_catalogs_dirs(migrated FBC, Git catalog)
    Utils->>OpmOps: opm_validate(fbc_dir_path)

    %% Commit and trigger pipeline
    Utils->>Utils: write_build_metadata
    Utils->>Git: git_commit_and_create_mr_or_push
    Git-->>Utils: mr_details, last_commit_sha

    Utils->>Konflux: monitor_pipeline_and_extract_image(last_commit_sha)
    Konflux-->>Utils: image_url

    %% Replicate image and push index.db
    Utils->>Registry: replicate_image_to_tagged_destinations(image_url, build_tags)
    Registry-->>Utils: output_pull_specs
    Utils->>Utils: _update_index_image_pull_spec(output_pull_spec,...)

    Utils->>ArtifactStore: push_index_db_artifact(intermediary_db, operators_in_db)
    ArtifactStore-->>Utils: original_index_db_digest

    %% Cleanup and completion
    Utils->>Git: cleanup_merge_request_if_exists(mr_details)
    Utils->>Utils: set_request_state(complete, success message)
    deactivate Utils

    Celery-->>API: async task completion
    deactivate Celery
    API-->>Client: 202 Accepted

File-Level Changes

Change Details Files
Extract reusable helper to compute bundles missing from source vs target and reuse it in both legacy and new containerized merge flows.
  • Refactor bundle comparison logic into get_missing_bundles_from_target_to_source which returns missing bundles and invalid OCP-version bundles without performing any build work.
  • Update _add_bundles_missing_in_source to delegate bundle computation to the new helper and only handle request state, registry token, and opm operations.
  • Adjust logging and construction of missing_bundle_paths accordingly.
iib/workers/tasks/build_merge_index_image.py
Refactor deprecation logic to operate directly on an index.db file and reuse it for both FBC and containerized workflows.
  • Introduce deprecate_bundles_db which chunks bundles and calls opm_registry_deprecatetruncate on a provided index.db file.
  • Rewrite deprecate_bundles_fbc to obtain a temp index.db via _get_or_create_temp_index_db_file, call deprecate_bundles_db, and then run FBC migration and Dockerfile generation as before.
iib/workers/tasks/opm_operations.py
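The chunking behavior described for `deprecate_bundles_db` can be sketched as follows. This is a simplified illustration, not the actual implementation: `run_deprecatetruncate` stands in for `opm_registry_deprecatetruncate`, and the chunk size of 2 is purely illustrative.

```python
def chunked(items, chunk_size):
    # Yield successive fixed-size chunks; the last chunk may be shorter.
    for start in range(0, len(items), chunk_size):
        yield items[start:start + chunk_size]

def deprecate_bundles_db(index_db_file, bundles, run_deprecatetruncate, chunk_size=2):
    # run_deprecatetruncate stands in for opm_registry_deprecatetruncate;
    # chunk_size=2 is illustrative, not the value IIB uses.
    for chunk in chunked(bundles, chunk_size):
        run_deprecatetruncate(index_db_file, chunk)
```

Operating on the index.db file directly (rather than a rendered index image) is what lets both the FBC and containerized workflows share this helper.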
Introduce the containerized merge task that drives a Git/Konflux-based FBC merge pipeline and integrates with existing helpers.
  • Add new Celery task handle_containerized_merge_request in build_containerized_merge.py that prepares the request, fetches source/target index.db artifacts, validates bundles in parallel, computes missing and invalid bundles, and updates an intermediary index.db via _opm_registry_add and deprecate_bundles_db.
  • Integrate FBC migration (opm_migrate), catalog merge (merge_catalogs_dirs), and FBC validation (opm_validate), then commit changes to Git with git_commit_and_create_mr_or_push and trigger/monitor the Konflux pipeline via monitor_pipeline_and_extract_image.
  • Replicate the built image to final tags with replicate_image_to_tagged_destinations, update the IIB request’s pullspec and metadata using _update_index_image_pull_spec/_update_index_image_build_state, push the updated index.db artifact with push_index_db_artifact, and handle cleanup/rollback via cleanup_on_failure and cleanup_merge_request_if_exists.
  • Support overwrite vs MR-based flows, deprecation lists, automatic deprecation of invalid bundles, configurable build tags, parallel validation threads, and error propagation via IIBError with appropriate request state updates.
iib/workers/tasks/build_containerized_merge.py
Wire the public merge_index_image API to the new containerized merge task and register it in worker config.
  • Change merge_index_image API handler to call handle_containerized_merge_request.apply_async instead of handle_merge_request, preserving error callbacks and queue selection.
  • Add the new build_containerized_merge task module to Config.celery_task_routes so Celery can dispatch it.
  • Update merge_index_image API tests to patch handle_containerized_merge_request and validate behavior remains unchanged from a client perspective.
iib/web/api_v1.py
iib/workers/config.py
tests/test_web/test_api_v1.py
Make artifact-pullspec helper tests configuration-driven by mocking get_worker_config.
  • Update get_indexdb_artifact_pullspec and get_imagestream_artifact_pullspec tests in both test_utils and test_oras_utils to mock get_worker_config and supply registry/template settings explicitly.
  • Ensure invalid-pullspec tests also use mocked configuration, keeping behavior focused on input parsing rather than environment config.
tests/test_workers/test_tasks/test_utils.py
tests/test_workers/test_tasks/test_oras_utils.py
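The configuration-mocking pattern described above can be sketched like this. The helper below is hypothetical (it only stands in for `get_indexdb_artifact_pullspec`); the `iib_registry` and `iib_image_push_template` setting names come from the change description, while the template format is an assumption.

```python
from unittest import mock

def build_artifact_pullspec(tag, get_worker_config):
    # Hypothetical stand-in for the real helper: builds a pullspec from
    # the registry and push template supplied by worker config.
    conf = get_worker_config()
    return conf['iib_image_push_template'].format(
        registry=conf['iib_registry'], request_id=tag
    )

def test_build_artifact_pullspec_uses_mocked_config():
    # Supplying the config explicitly keeps the test focused on input
    # parsing rather than the environment's real configuration.
    fake_config = {
        'iib_registry': 'registry.example.com',
        'iib_image_push_template': '{registry}/iib-build:{request_id}',
    }
    get_worker_config = mock.Mock(return_value=fake_config)
    assert (
        build_artifact_pullspec('42', get_worker_config)
        == 'registry.example.com/iib-build:42'
    )
    get_worker_config.assert_called_once()
```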
Adjust manifest-list creation tests to rely on mocked worker configuration.
  • Patch containerized_utils.get_worker_config in test_create_and_push_manifest_list to return a minimal iib_registry/iib_image_push_template configuration.
  • Preserve existing behavior of falling back when a manifest list is missing locally while isolating tests from real config.
tests/test_workers/test_tasks/test_build.py
Add comprehensive unit tests for the containerized merge workflow covering success, MR-based flows, edge cases, and failures.
  • Introduce test_build_containerized_merge.py with end-to-end style tests that mock all external interactions (git, Konflux, opm, registry, artifact push) and exercise: successful merges with overwrite, MR creation/closure, cases with no missing bundles, merges with deprecation lists, pipeline failures triggering cleanup, missing output_pull_spec errors, build_tags handling, and invalid bundles being auto-deprecated.
  • Verify that helper calls (e.g., get_missing_bundles_from_target_to_source, deprecate_bundles_db, validate_bundles_in_parallel) are invoked with expected arguments and that request state transitions and cleanup/reset_docker_config are performed appropriately.
tests/test_workers/test_tasks/test_build_containerized_merge.py


@JAVGan JAVGan force-pushed the containerized_mr branch 5 times, most recently from a67129b to d105667 Compare December 1, 2025 19:45
@JAVGan JAVGan force-pushed the containerized_mr branch 4 times, most recently from e5f2275 to 2200ea4 Compare December 2, 2025 21:10
@JAVGan JAVGan changed the title WIP: Handling of merge-index-image for containerized IIB Handling of merge-index-image for containerized IIB Dec 2, 2025
@JAVGan JAVGan marked this pull request as ready for review December 2, 2025 21:20

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • In handle_containerized_merge_request, output_pull_specs[0] is accessed unconditionally, so when replicate_image_to_tagged_destinations returns an empty list this will raise an IndexError before the intended IIBError; consider checking for an empty list before indexing and raising a clear error in that branch.
  • handle_containerized_merge_request has grown quite large and is handling multiple distinct concerns (prebuild setup, bundle diffing, deprecation processing, FBC migration, Git/MR handling, Konflux pipeline, replication, and index DB artifact push); factoring these into smaller helper functions would improve readability and make future changes easier to reason about.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `handle_containerized_merge_request`, `output_pull_specs[0]` is accessed unconditionally, so when `replicate_image_to_tagged_destinations` returns an empty list this will raise an `IndexError` before the intended `IIBError`; consider checking for an empty list before indexing and raising a clear error in that branch.
- `handle_containerized_merge_request` has grown quite large and is handling multiple distinct concerns (prebuild setup, bundle diffing, deprecation processing, FBC migration, Git/MR handling, Konflux pipeline, replication, and index DB artifact push); factoring these into smaller helper functions would improve readability and make future changes easier to reason about.

## Individual Comments

### Comment 1
<location> `iib/workers/tasks/build_containerized_merge.py:185-186` </location>
<code_context>
+            'in_progress',
+            'Validating whether the bundles have their pullspecs present in the registry',
+        )
+        validate_bundles_in_parallel(
+            bundles=source_index_bundles + target_index_bundles,
+            threads=parallel_threads,
+            wait=True,
</code_context>

<issue_to_address>
**suggestion (performance):** Avoid validating the same bundle more than once when source/target overlap.

`validate_bundles_in_parallel` currently receives `source_index_bundles + target_index_bundles`, so bundles present in both with the same pullspec are revalidated. Consider deduplicating the list first (e.g., by pullspec) to avoid redundant remote calls and speed up large merges.

Suggested implementation:

```python
        # Validate the bundles from source and target have their pullspecs present in the registry
        set_request_state(
            request_id,
            'in_progress',
            'Validating whether the bundles have their pullspecs present in the registry',
        )

        # Avoid validating the same bundle more than once when source/target overlap.
        # Deduplicate by pullspec (when available) to reduce redundant remote calls.
        all_bundles = source_index_bundles + target_index_bundles
        unique_bundles_by_pullspec = []
        seen_pullspecs = set()

        for bundle in all_bundles:
            # Prefer an explicit pullspec-like attribute if present; otherwise fall back
            # to the bundle object itself for deduplication.
            pullspec = getattr(bundle, 'pull_spec', None)
            if pullspec is None:
                pullspec = getattr(bundle, 'bundle_path', None)
            if pullspec is None:
                pullspec = getattr(bundle, 'image', None)
            if pullspec is None:
                pullspec = bundle

            if pullspec in seen_pullspecs:
                continue

            seen_pullspecs.add(pullspec)
            unique_bundles_by_pullspec.append(bundle)

        validate_bundles_in_parallel(
            bundles=unique_bundles_by_pullspec,
            threads=parallel_threads,
            wait=True,
        )

```

If the bundle objects in this codebase expose the pullspec under a specific, known attribute name (for example `bundle.pull_spec` or `bundle['pullspec']`), you should simplify the deduplication loop to use that attribute directly instead of trying multiple fallbacks. For instance, replace the `pullspec = getattr(...` block with `pullspec = bundle.pull_spec` (or the appropriate accessor) to keep the logic clearer and stricter.
</issue_to_address>

### Comment 2
<location> `iib/workers/tasks/build_containerized_merge.py:99` </location>
<code_context>
+        The format of the token must be in the format "user:password".
+    :param str distribution_scope: the scope for distribution of the index image, defaults to
+        ``None``.
+    :param build_tags: list of extra tag to use for intermetdiate index image
+    :param str graph_update_mode: Graph update mode that defines how channel graphs are updated
+        in the index.
</code_context>

<issue_to_address>
**nitpick (typo):** Fix the typo in the `build_tags` parameter description.

`intermetdiate` in the `build_tags` parameter description appears to be a typo; please change it to `intermediate`.

```suggestion
    :param build_tags: list of extra tag to use for intermediate index image
```
</issue_to_address>

### Comment 3
<location> `iib/workers/tasks/build_containerized_merge.py:145` </location>
<code_context>
@app.task
@request_logger
@instrument_tracing(
    span_name="workers.tasks.build.handle_containerized_merge_request",
    attributes=get_binary_versions(),
)
def handle_containerized_merge_request(
    source_from_index: str,
    deprecation_list: List[str],
    request_id: int,
    binary_image: Optional[str] = None,
    target_index: Optional[str] = None,
    overwrite_target_index: bool = False,
    overwrite_target_index_token: Optional[str] = None,
    distribution_scope: Optional[str] = None,
    binary_image_config: Optional[str] = None,
    build_tags: Optional[List[str]] = None,
    graph_update_mode: Optional[str] = None,
    ignore_bundle_ocp_version: Optional[bool] = False,
    index_to_gitlab_push_map: Optional[Dict[str, str]] = None,
    parallel_threads: int = 5,
) -> None:
    """
    Coordinate the work needed to merge old (N) index image with new (N+1) index image.

    :param str source_from_index: pull specification to be used as the base for building the new
        index image.
    :param str target_index: pull specification of content stage index image for the
        corresponding target index image.
    :param list deprecation_list: list of deprecated bundles for the target index image.
    :param int request_id: the ID of the IIB build request.
    :param str binary_image: the pull specification of the container image where the opm binary
        gets copied from.
    :param bool overwrite_target_index: if True, overwrite the input ``target_index`` with
        the built index image.
    :param str overwrite_target_index_token: the token used for overwriting the input
        ``target_index`` image. This is required to use ``overwrite_target_index``.
        The format of the token must be in the format "user:password".
    :param str distribution_scope: the scope for distribution of the index image, defaults to
        ``None``.
    :param build_tags: list of extra tag to use for intermetdiate index image
    :param str graph_update_mode: Graph update mode that defines how channel graphs are updated
        in the index.
    :param bool ignore_bundle_ocp_version: When set to `true` and image set as target_index is
        listed in `iib_no_ocp_label_allow_list` config then bundles without
        "com.redhat.openshift.versions" label set will be added in the result `index_image`.
    :param dict index_to_gitlab_push_map: the dict mapping index images (keys) to GitLab repos
        (values) in order to push their catalogs into GitLab.
    :param int parallel_threads: the number of parallel threads to use for validating the bundles
    :raises IIBError: if the index image merge fails.
    """
    reset_docker_config()
    set_request_state(request_id, 'in_progress', 'Preparing request for merge')

    # Prepare request
    with set_registry_token(overwrite_target_index_token, target_index, append=True):
        prebuild_info = prepare_request_for_build(
            request_id,
            RequestConfigMerge(
                _binary_image=binary_image,
                overwrite_target_index_token=overwrite_target_index_token,
                source_from_index=source_from_index,
                target_index=target_index,
                distribution_scope=distribution_scope,
                binary_image_config=binary_image_config,
            ),
        )

    source_from_index_resolved = prebuild_info['source_from_index_resolved']
    target_index_resolved = prebuild_info['target_index_resolved']

    # Set OPM version
    Opm.set_opm_version(target_index_resolved)
    opm_version = Opm.opm_version

    _update_index_image_build_state(request_id, prebuild_info)

    mr_details: Optional[Dict[str, str]] = None
    local_git_repo_path: Optional[str] = None
    index_git_repo: Optional[str] = None
    last_commit_sha: Optional[str] = None
    output_pull_spec: Optional[str] = None
    original_index_db_digest: Optional[str] = None

    with tempfile.TemporaryDirectory(prefix=f'iib-{request_id}-') as temp_dir:
        # Setup and clone Git repository
        branch = prebuild_info['ocp_version']
        (
            index_git_repo,
            local_git_repo_path,
            localized_git_catalog_path,
        ) = prepare_git_repository_for_build(
            request_id=request_id,
            from_index=source_from_index,
            temp_dir=temp_dir,
            branch=branch,
            index_to_gitlab_push_map=index_to_gitlab_push_map or {},
        )

        # Pull both source and target index.db artifacts and read present bundles
        source_index_db_path = fetch_and_verify_index_db_artifact(
            source_from_index_resolved, temp_dir
        )
        target_index_db_path = fetch_and_verify_index_db_artifact(target_index_resolved, temp_dir)

        # Get the bundles from the index.db file
        with set_registry_token(overwrite_target_index_token, target_index, append=True):
            source_index_bundles, source_index_bundles_pull_spec = _get_present_bundles(
                source_index_db_path, temp_dir
            )
            log.debug("Source index bundles %s", source_index_bundles)
            log.debug("Source index bundles pull spec %s", source_index_bundles_pull_spec)

            target_index_bundles, target_index_bundles_pull_spec = _get_present_bundles(
                target_index_db_path, temp_dir
            )
            log.debug("Target index bundles %s", target_index_bundles)
            log.debug("Target index bundles pull spec %s", target_index_bundles_pull_spec)

        # Validate the bundles from source and target have their pullspecs present in the registry
        set_request_state(
            request_id,
            'in_progress',
            'Validating whether the bundles have their pullspecs present in the registry',
        )
        validate_bundles_in_parallel(
            bundles=source_index_bundles + target_index_bundles,
            threads=parallel_threads,
            wait=True,
        )

        set_request_state(request_id, 'in_progress', 'Adding bundles missing in source index image')
        log.info('Adding bundles from target index image which are missing from source index image')

        missing_bundles, invalid_bundles = get_missing_bundles_from_target_to_source(
            source_index_bundles=source_index_bundles,
            target_index_bundles=target_index_bundles,
            source_from_index=source_from_index_resolved,
            ocp_version=prebuild_info['target_ocp_version'],
            target_index=target_index_resolved,
            ignore_bundle_ocp_version=ignore_bundle_ocp_version,
        )

        missing_bundle_paths = [bundle['bundlePath'] for bundle in missing_bundles]
        if missing_bundle_paths:
            add_max_ocp_version_property(missing_bundle_paths, temp_dir)

        # Add the missing bundles to the index.db file
        set_request_state(
            request_id, 'in_progress', 'Adding the missing bundles to the index.db file'
        )
        intermediary_db = _get_or_create_temp_index_db_file(temp_dir, source_from_index_resolved)
        _opm_registry_add(temp_dir, intermediary_db, missing_bundle_paths)

        # Process the deprecation list
        set_request_state(request_id, 'in_progress', 'Processing the deprecation list')
        intermediate_bundles = missing_bundle_paths + source_index_bundles_pull_spec
        deprecation_bundles = get_bundles_from_deprecation_list(
            intermediate_bundles, deprecation_list
        )
        deprecation_bundles = deprecation_bundles + [
            bundle['bundlePath'] for bundle in invalid_bundles
        ]

        # process the deprecation list into the intermediary index.db file
        if deprecation_bundles:
            # We need to get the latest pullspecs from bundles in order to avoid failures
            # on "opm deprecatetruncate" due to versions already removed before.
            # Once we give the latest versions all lower ones get automatically deprecated by OPM.
            all_bundles = source_index_bundles + target_index_bundles
            deprecation_bundles = get_bundles_latest_version(deprecation_bundles, all_bundles)

            deprecate_bundles_db(
                base_dir=temp_dir, index_db_file=intermediary_db, bundles=deprecation_bundles
            )

        # Retrieve the operators from the intermediary index.db file
        # This will be required for pushing the updated index.db file to the IIB registry
        bundles_in_db = get_list_bundles(intermediary_db, temp_dir)
        operators_in_db = [bundle['packageName'] for bundle in bundles_in_db]

        # Migrate the intermediary index.db file to FBC and generate the Dockerfile
        set_request_state(
            request_id,
            'in_progress',
            'Migrating the intermediary index.db file to FBC and generating the Dockerfile',
        )
        fbc_dir, _ = opm_migrate(intermediary_db, temp_dir)

        # rename `catalog` directory because we need to use this name for
        # final destination of catalog (defined in Dockerfile)
        catalog_from_db = os.path.join(temp_dir, 'from_db')
        os.rename(fbc_dir, catalog_from_db)

        # Merge migrated FBC with existing FBC in Git repo
        # overwrite data in `catalog_from_index` by data from `catalog_from_db`
        # this adds changes on not opted in operators to final FBC
        log.info('Merging migrated catalog with Git catalog')
        merge_catalogs_dirs(catalog_from_db, localized_git_catalog_path)

        # We need to regenerate file-based catalog because we merged changes
        fbc_dir_path = os.path.join(temp_dir, 'catalog')
        if os.path.exists(fbc_dir_path):
            shutil.rmtree(fbc_dir_path)
        shutil.move(catalog_from_db, fbc_dir_path)

        # Validate the FBC config
        set_request_state(request_id, 'in_progress', 'Validating the FBC config')
        opm_validate(fbc_dir_path)

        # Generate the Dockerfile
        set_request_state(request_id, 'in_progress', 'Generating the Dockerfile')

        # Write build metadata to a file to be added with the commit
        set_request_state(request_id, 'in_progress', 'Writing build metadata')
        write_build_metadata(
            local_git_repo_path,
            opm_version,
            prebuild_info['target_ocp_version'],
            prebuild_info['distribution_scope'],
            prebuild_info['binary_image_resolved'],
            request_id,
        )

        try:
            # Commit changes and create PR or push directly
            mr_details, last_commit_sha = git_commit_and_create_mr_or_push(
                request_id=request_id,
                local_git_repo_path=local_git_repo_path,
                index_git_repo=index_git_repo,
                branch=branch,
                commit_message=(
                    f"IIB: Merge operators for request {request_id}\n\n"
                    f"Missing bundles: {', '.join(missing_bundle_paths)}"
                ),
                overwrite_from_index=overwrite_target_index,
            )

            # Wait for Konflux pipeline and extract built image URL
            image_url = monitor_pipeline_and_extract_image(
                request_id=request_id,
                last_commit_sha=last_commit_sha,
            )

            # Copy built index to all output pull specs
            output_pull_specs = replicate_image_to_tagged_destinations(
                request_id=request_id,
                image_url=image_url,
                build_tags=build_tags,
            )

            # Use the first output_pull_spec as the primary one for request updates
            output_pull_spec = output_pull_specs[0]
            # Update request with final output
            if not output_pull_spec:
                raise IIBError(
                    "output_pull_spec was not set. "
                    "This should not happen if the pipeline completed successfully."
                )

            _update_index_image_pull_spec(
                output_pull_spec=output_pull_spec,
                request_id=request_id,
                arches=prebuild_info['arches'],
                from_index=source_from_index,
                overwrite_from_index=overwrite_target_index,
                overwrite_from_index_token=overwrite_target_index_token,
                resolved_prebuild_from_index=source_from_index_resolved,
                add_or_rm=True,
                is_image_fbc=True,
                # Passing an empty index_repo_map is intentional. In IIB 1.0, if
                # the overwrite_from_index token is given, we push to git by default
                # at the end of a request. In IIB 2.0, the commit is pushed earlier to trigger
                # a Konflux pipelinerun. So the old workflow isn't needed.
                index_repo_map={},
            )

            # Push updated index.db if overwrite_target_index_token is provided
            # We can push it directly from temp_dir since we're still inside the
            # context manager. Do it as the last step to avoid rolling back the
            # index.db file if the pipeline fails.
            original_index_db_digest = push_index_db_artifact(
                request_id=request_id,
                from_index=source_from_index,
                index_db_path=intermediary_db,
                operators=operators_in_db,
                operators_in_db=set(operators_in_db),
                overwrite_from_index=overwrite_target_index,
                request_type='merge',
            )

            # Close MR if it was opened
            cleanup_merge_request_if_exists(mr_details, index_git_repo)

            # Update request with final output
            set_request_state(
                request_id,
                'complete',
                f"The operator(s) {operators_in_db} were successfully merged "
                "from the target index image into the source index image",
            )
        except Exception as e:
            cleanup_on_failure(
                mr_details=mr_details,
                last_commit_sha=last_commit_sha,
                index_git_repo=index_git_repo,
                overwrite_from_index=overwrite_target_index,
                request_id=request_id,
                from_index=source_from_index,
                index_repo_map={},
                original_index_db_digest=original_index_db_digest,
                reason=f"error: {e}",
            )
            # Reset Docker config for the next request. This is a fail safe.
            reset_docker_config()
            raise IIBError(f"Failed to merge operators: {e}")

</code_context>

<issue_to_address>
**issue (code-quality):** We've found these issues:

- Use named expression to simplify assignment and conditional ([`use-named-expression`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-named-expression/))
- Explicitly raise from a previous error ([`raise-from-previous-error`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/raise-from-previous-error/))
</issue_to_address>
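Both Sourcery findings can be shown in a hedged sketch. `IIBError` and `replicate_image_to_tagged_destinations` below are simplified stand-ins for the PR's real code, not the actual IIB implementation:

```python
# Minimal sketch of the two suggested refactorings, using stand-ins
# (not the real iib.exceptions.IIBError or worker helpers).

class IIBError(Exception):
    """Stand-in for iib.exceptions.IIBError."""

def replicate_image_to_tagged_destinations(request_id, image_url, build_tags):
    # Stand-in: the real function copies the built index to tagged destinations.
    return [f"registry.example.com/index:{request_id}"]

def finish_request(request_id, image_url, build_tags):
    output_pull_specs = replicate_image_to_tagged_destinations(
        request_id, image_url, build_tags
    )
    # use-named-expression: bind and test output_pull_spec in one statement
    if not (output_pull_spec := output_pull_specs[0]):
        raise IIBError("output_pull_spec was not set.")
    return output_pull_spec

def handle_merge(request_id):
    try:
        return finish_request(request_id, "quay.io/img:latest", [])
    except Exception as e:
        # raise-from-previous-error: chain the original exception explicitly
        raise IIBError(f"Failed to merge operators: {e}") from e
```

The `from e` form preserves the original traceback via `__cause__`, which makes pipeline failures easier to debug than an implicitly re-raised error.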


@JAVGan force-pushed the containerized_mr branch 3 times, most recently from 5dbcf53 to 9d449b1 on December 2, 2025 at 21:53
Handling of merge-index-image for containerized IIB

Refers to CLOUDDST-29408

Signed-off-by: Jonathan Gangi <[email protected]>

Assisted-by: Cursor/Gemini
Update `api_v2` to use the `handle_containerized_merge_request`

Refers to CLOUDDST-29408

Signed-off-by: Jonathan Gangi <[email protected]>

Assisted-by: Cursor/Gemini
@JAVGan (Contributor Author) commented Dec 5, 2025:

@yashvardhannanavati PTAL again

@yashvardhannanavati (Collaborator) left a comment:

Couple of minor suggestions. Looks great!

target_index=target_index_resolved,
ignore_bundle_ocp_version=ignore_bundle_ocp_version,
)
missing_bundle_paths = [bundle['bundlePath'] for bundle in missing_bundles]
@yashvardhannanavati (Collaborator):

If you pass source_index_bundles_pull_spec and target_index_bundles_pull_spec in the function call on line 193, you can get rid of this line.

@JAVGan (Contributor Author):

I don't get it... The function get_missing_bundles_from_target_to_source seems to return Bundle and not bundle_path... Am I missing something?
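The disagreement hinges on the helper's return type. Assuming, as this thread suggests, that it returns bundle dicts keyed by `bundlePath`, the extraction step looks like this sketch (the helper body here is a hypothetical stand-in):

```python
# Illustrative only: get_missing_bundles_from_target_to_source is assumed to
# return bundle dicts with a 'bundlePath' key, per the review discussion.
def get_missing_bundles_from_target_to_source(source_index, target_index):
    # Stand-in data; the real helper diffs the two index images.
    return [
        {'bundlePath': 'registry.example.com/op/bundle:v1.0', 'csvName': 'op.v1.0'},
        {'bundlePath': 'registry.example.com/op/bundle:v1.1', 'csvName': 'op.v1.1'},
    ]

missing_bundles = get_missing_bundles_from_target_to_source('src-index', 'tgt-index')
# The line under discussion: project each bundle dict down to its pull spec.
missing_bundle_paths = [bundle['bundlePath'] for bundle in missing_bundles]
```

If the helper returned plain pull specs instead, the comprehension would indeed be redundant, which is the reviewer's point.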

@mock.patch('iib.workers.tasks.build_containerized_merge.shutil.move')
@mock.patch('iib.workers.tasks.build_containerized_merge.os.path.exists')
@mock.patch('builtins.set', side_effect=_mock_set_for_bundles)
def test_handle_containerized_merge_request_with_deprecation(
@yashvardhannanavati (Collaborator):

Can this be covered in the success use case? Or do you think a separate test case is necessary?

@JAVGan (Contributor Author) commented Dec 8, 2025:

Sure, I can adjust it!

I did the feature implementation, but the unit tests were vibe-coded, so Gemini probably "forgot" about that. Yes, I can add it.

@JAVGan (Contributor Author):

Added test_handle_containerized_merge_request_success_with_deprecations


# Write build metadata to a file to be added with the commit
set_request_state(request_id, 'in_progress', 'Writing build metadata')
write_build_metadata(
@yashvardhannanavati (Collaborator):

Please also include arches when calling this function. The change to the function signature is here: 5d59cd7

@JAVGan (Contributor Author):

I will include it after your merge of #1274; then I can rebase and easily add it.

Add unit-tests for containerized merge request

Refers to CLOUDDST-29408

Signed-off-by: Jonathan Gangi <[email protected]>

Assisted-by: Cursor/Gemini
This commit fixes the parallel validation class to validate the
bundlePath from the bundles dictionary

Refers to CLOUDDST-29408

Signed-off-by: Jonathan Gangi <[email protected]>

Assisted-by: Cursor/Gemini
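The commit above concerns validating each bundle's `bundlePath` in parallel. A hedged sketch of that pattern, assuming a hypothetical per-bundle check (`validate_pullspec` is a stand-in, not the worker's real validator):

```python
# Sketch of parallel bundlePath validation with a thread pool.
# validate_pullspec is a hypothetical stand-in for the real per-bundle check.
from concurrent.futures import ThreadPoolExecutor, as_completed

def validate_pullspec(pull_spec):
    # Stand-in: a real implementation would inspect the image in a registry.
    if '://' in pull_spec:
        raise ValueError(f"invalid pull spec: {pull_spec}")
    return pull_spec

def validate_bundles_in_parallel(bundles, max_workers=4):
    """Validate the bundlePath of each bundle dict; return the invalid ones."""
    invalid = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Map each future back to the pull spec it validates.
        futures = {
            executor.submit(validate_pullspec, b['bundlePath']): b['bundlePath']
            for b in bundles
        }
        for future in as_completed(futures):
            try:
                future.result()
            except ValueError:
                invalid.append(futures[future])
    return invalid
```

Iterating the bundle dicts and submitting `b['bundlePath']` (rather than the whole dict) is the fix the commit message describes.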