Handling of merge-index-image for containerized IIB #1263
base: main
Conversation
Reviewer's Guide
Introduces a new containerized merge-index-image workflow wired into the existing merge_index_image API, refactors bundle-diff/deprecation helpers for reuse, and adjusts tests/config to support the new containerized path and configuration-driven behavior.
Sequence diagram for the new containerized merge-index-image workflow
sequenceDiagram
actor Client
participant API as IIB_API
participant Celery as Celery_worker
participant Utils as build_containerized_merge
participant OpmOps as opm_operations
participant Git as Git_repository
participant Konflux as Konflux_pipeline
participant Registry as Container_registry
participant ArtifactStore as Index_db_artifact_store
Client->>API: POST /merge-index-image
API->>Celery: handle_containerized_merge_request.apply_async
activate Celery
Celery->>Utils: handle_containerized_merge_request(...)
activate Utils
Utils->>Utils: reset_docker_config
Utils->>Utils: set_request_state(preparing request)
Utils->>Utils: prepare_request_for_build(RequestConfigMerge)
Utils->>Utils: Opm.set_opm_version(target_index_resolved)
Utils->>Utils: _update_index_image_build_state
%% Git repository preparation
Utils->>Git: prepare_git_repository_for_build
Git-->>Utils: index_git_repo, local_git_repo_path, localized_git_catalog_path
%% Fetch index.db artifacts
Utils->>ArtifactStore: fetch_and_verify_index_db_artifact(source_from_index_resolved)
ArtifactStore-->>Utils: source_index_db_path
Utils->>ArtifactStore: fetch_and_verify_index_db_artifact(target_index_resolved)
ArtifactStore-->>Utils: target_index_db_path
%% Read bundles from index.db
Utils->>Utils: _get_present_bundles(source_index_db_path)
Utils->>Utils: _get_present_bundles(target_index_db_path)
%% Validate bundles' pullspecs
Utils->>Utils: set_request_state(validating bundles)
Utils->>Registry: validate_bundles_in_parallel(source+target bundles)
Registry-->>Utils: validation results
%% Compute missing and invalid bundles
Utils->>Utils: set_request_state(adding bundles missing in source)
Utils->>Utils: get_missing_bundles_from_target_to_source
Utils-->>Utils: missing_bundles, invalid_bundles
Utils->>Utils: add_max_ocp_version_property(missing_bundle_paths)
%% Update intermediary index.db
Utils->>OpmOps: _get_or_create_temp_index_db_file
OpmOps-->>Utils: intermediary_db
Utils->>OpmOps: _opm_registry_add(intermediary_db, missing_bundle_paths)
%% Deprecations
Utils->>Utils: get_bundles_from_deprecation_list
Utils->>Utils: get_bundles_latest_version
Utils->>OpmOps: deprecate_bundles_db(intermediary_db, deprecation_bundles)
%% Migrate to FBC and merge catalogs
Utils->>OpmOps: get_list_bundles(intermediary_db)
OpmOps-->>Utils: bundles_in_db
Utils->>OpmOps: opm_migrate(intermediary_db)
OpmOps-->>Utils: fbc_dir
Utils->>Utils: merge_catalogs_dirs(migrated FBC, Git catalog)
Utils->>OpmOps: opm_validate(fbc_dir_path)
%% Commit and trigger pipeline
Utils->>Utils: write_build_metadata
Utils->>Git: git_commit_and_create_mr_or_push
Git-->>Utils: mr_details, last_commit_sha
Utils->>Konflux: monitor_pipeline_and_extract_image(last_commit_sha)
Konflux-->>Utils: image_url
%% Replicate image and push index.db
Utils->>Registry: replicate_image_to_tagged_destinations(image_url, build_tags)
Registry-->>Utils: output_pull_specs
Utils->>Utils: _update_index_image_pull_spec(output_pull_spec,...)
Utils->>ArtifactStore: push_index_db_artifact(intermediary_db, operators_in_db)
ArtifactStore-->>Utils: original_index_db_digest
%% Cleanup and completion
Utils->>Git: cleanup_merge_request_if_exists(mr_details)
Utils->>Utils: set_request_state(complete, success message)
deactivate Utils
Celery-->>API: async task completion
deactivate Celery
API-->>Client: 202 Accepted
Hey there - I've reviewed your changes - here's some feedback:
- In `handle_containerized_merge_request`, `output_pull_specs[0]` is accessed unconditionally, so when `replicate_image_to_tagged_destinations` returns an empty list this will raise an `IndexError` before the intended `IIBError`; consider checking for an empty list before indexing and raising a clear error in that branch.
- `handle_containerized_merge_request` has grown quite large and is handling multiple distinct concerns (prebuild setup, bundle diffing, deprecation processing, FBC migration, Git/MR handling, Konflux pipeline, replication, and index DB artifact push); factoring these into smaller helper functions would improve readability and make future changes easier to reason about.
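A minimal sketch of the suggested guard. The names here (`IIBError`, the helper name) are assumptions taken from the surrounding diff, not the PR's actual code:

```python
from typing import List


class IIBError(Exception):
    """Stand-in for IIB's error type (assumed from the surrounding diff)."""


def pick_primary_pull_spec(output_pull_specs: List[str]) -> str:
    # Guard before indexing so an empty replication result surfaces as a
    # clear IIBError instead of an IndexError.
    if not output_pull_specs:
        raise IIBError(
            "replicate_image_to_tagged_destinations returned no pull specs; "
            "the pipeline may not have produced an image."
        )
    return output_pull_specs[0]
```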
## Individual Comments
### Comment 1
<location> `iib/workers/tasks/build_containerized_merge.py:185-186` </location>
<code_context>
+ 'in_progress',
+ 'Validating whether the bundles have their pullspecs present in the registry',
+ )
+ validate_bundles_in_parallel(
+ bundles=source_index_bundles + target_index_bundles,
+ threads=parallel_threads,
+ wait=True,
</code_context>
<issue_to_address>
**suggestion (performance):** Avoid validating the same bundle more than once when source/target overlap.
`validate_bundles_in_parallel` currently receives `source_index_bundles + target_index_bundles`, so bundles present in both with the same pullspec are revalidated. Consider deduplicating the list first (e.g., by pullspec) to avoid redundant remote calls and speed up large merges.
Suggested implementation:
```python
# Validate the bundles from source and target have their pullspecs present in the registry
set_request_state(
    request_id,
    'in_progress',
    'Validating whether the bundles have their pullspecs present in the registry',
)
# Avoid validating the same bundle more than once when source/target overlap.
# Deduplicate by pullspec to reduce redundant remote calls.
all_bundles = source_index_bundles + target_index_bundles
unique_bundles_by_pullspec = []
seen_pullspecs = set()
for bundle in all_bundles:
    # Bundles in this codebase are dicts (see bundle['bundlePath'] elsewhere
    # in the diff); prefer the known 'bundlePath' key, with fallbacks.
    pullspec = bundle.get('bundlePath') or bundle.get('pull_spec') or bundle.get('image')
    if pullspec is None:
        # No pullspec available; keep the bundle rather than silently dropping it.
        unique_bundles_by_pullspec.append(bundle)
        continue
    if pullspec in seen_pullspecs:
        continue
    seen_pullspecs.add(pullspec)
    unique_bundles_by_pullspec.append(bundle)
validate_bundles_in_parallel(
    bundles=unique_bundles_by_pullspec,
    threads=parallel_threads,
    wait=True,
)
```
If the bundle dicts always carry the pullspec under a single known key (for example `bundle['bundlePath']`), the fallback chain should be dropped in favor of a strict `bundle['bundlePath']` lookup to keep the logic clearer.
</issue_to_address>
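The deduplication suggested above can also be packaged as a small standalone helper. This is a sketch assuming the bundles are plain dicts keyed by `'bundlePath'`, as the rest of the diff suggests:

```python
from typing import Dict, List


def dedupe_bundles_by_pullspec(bundles: List[Dict[str, str]]) -> List[Dict[str, str]]:
    # Keep the first bundle seen for each pull spec; later duplicates
    # (e.g. bundles present in both source and target) are dropped.
    seen: Dict[str, Dict[str, str]] = {}
    for bundle in bundles:
        seen.setdefault(bundle['bundlePath'], bundle)
    return list(seen.values())
```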
### Comment 2
<location> `iib/workers/tasks/build_containerized_merge.py:99` </location>
<code_context>
+ The format of the token must be in the format "user:password".
+ :param str distribution_scope: the scope for distribution of the index image, defaults to
+ ``None``.
+ :param build_tags: list of extra tag to use for intermetdiate index image
+ :param str graph_update_mode: Graph update mode that defines how channel graphs are updated
+ in the index.
</code_context>
<issue_to_address>
**nitpick (typo):** Fix the typo in the `build_tags` parameter description.
`intermetdiate` in the `build_tags` parameter description appears to be a typo; please change it to `intermediate`.
```suggestion
:param build_tags: list of extra tag to use for intermediate index image
```
</issue_to_address>
### Comment 3
<location> `iib/workers/tasks/build_containerized_merge.py:145` </location>
<code_context>
@app.task
@request_logger
@instrument_tracing(
span_name="workers.tasks.build.handle_containerized_merge_request",
attributes=get_binary_versions(),
)
def handle_containerized_merge_request(
source_from_index: str,
deprecation_list: List[str],
request_id: int,
binary_image: Optional[str] = None,
target_index: Optional[str] = None,
overwrite_target_index: bool = False,
overwrite_target_index_token: Optional[str] = None,
distribution_scope: Optional[str] = None,
binary_image_config: Optional[str] = None,
build_tags: Optional[List[str]] = None,
graph_update_mode: Optional[str] = None,
ignore_bundle_ocp_version: Optional[bool] = False,
index_to_gitlab_push_map: Optional[Dict[str, str]] = None,
parallel_threads: int = 5,
) -> None:
"""
Coordinate the work needed to merge old (N) index image with new (N+1) index image.
:param str source_from_index: pull specification to be used as the base for building the new
index image.
:param str target_index: pull specification of content stage index image for the
corresponding target index image.
:param list deprecation_list: list of deprecated bundles for the target index image.
:param int request_id: the ID of the IIB build request.
:param str binary_image: the pull specification of the container image where the opm binary
gets copied from.
:param bool overwrite_target_index: if True, overwrite the input ``target_index`` with
the built index image.
:param str overwrite_target_index_token: the token used for overwriting the input
``target_index`` image. This is required to use ``overwrite_target_index``.
The format of the token must be in the format "user:password".
:param str distribution_scope: the scope for distribution of the index image, defaults to
``None``.
:param build_tags: list of extra tag to use for intermetdiate index image
:param str graph_update_mode: Graph update mode that defines how channel graphs are updated
in the index.
:param bool ignore_bundle_ocp_version: When set to `true` and image set as target_index is
listed in `iib_no_ocp_label_allow_list` config then bundles without
"com.redhat.openshift.versions" label set will be added in the result `index_image`.
:param dict index_to_gitlab_push_map: the dict mapping index images (keys) to GitLab repos
(values) in order to push their catalogs into GitLab.
:param int parallel_threads: the number of parallel threads to use for validating the bundles
:raises IIBError: if the index image merge fails.
"""
reset_docker_config()
set_request_state(request_id, 'in_progress', 'Preparing request for merge')
# Prepare request
with set_registry_token(overwrite_target_index_token, target_index, append=True):
prebuild_info = prepare_request_for_build(
request_id,
RequestConfigMerge(
_binary_image=binary_image,
overwrite_target_index_token=overwrite_target_index_token,
source_from_index=source_from_index,
target_index=target_index,
distribution_scope=distribution_scope,
binary_image_config=binary_image_config,
),
)
source_from_index_resolved = prebuild_info['source_from_index_resolved']
target_index_resolved = prebuild_info['target_index_resolved']
# Set OPM version
Opm.set_opm_version(target_index_resolved)
opm_version = Opm.opm_version
_update_index_image_build_state(request_id, prebuild_info)
mr_details: Optional[Dict[str, str]] = None
local_git_repo_path: Optional[str] = None
index_git_repo: Optional[str] = None
last_commit_sha: Optional[str] = None
output_pull_spec: Optional[str] = None
original_index_db_digest: Optional[str] = None
with tempfile.TemporaryDirectory(prefix=f'iib-{request_id}-') as temp_dir:
# Setup and clone Git repository
branch = prebuild_info['ocp_version']
(
index_git_repo,
local_git_repo_path,
localized_git_catalog_path,
) = prepare_git_repository_for_build(
request_id=request_id,
from_index=source_from_index,
temp_dir=temp_dir,
branch=branch,
index_to_gitlab_push_map=index_to_gitlab_push_map or {},
)
# Pull both source and target index.db artifacts and read present bundle
source_index_db_path = fetch_and_verify_index_db_artifact(
source_from_index_resolved, temp_dir
)
target_index_db_path = fetch_and_verify_index_db_artifact(target_index_resolved, temp_dir)
# Get the bundles from the index.db file
with set_registry_token(overwrite_target_index_token, target_index, append=True):
source_index_bundles, source_index_bundles_pull_spec = _get_present_bundles(
source_index_db_path, temp_dir
)
log.debug("Source index bundles %s", source_index_bundles)
log.debug("Source index bundles pull spec %s", source_index_bundles_pull_spec)
target_index_bundles, target_index_bundles_pull_spec = _get_present_bundles(
target_index_db_path, temp_dir
)
log.debug("Target index bundles %s", target_index_bundles)
log.debug("Target index bundles pull spec %s", target_index_bundles_pull_spec)
# Validate the bundles from source and target have their pullspecs present in the registry
set_request_state(
request_id,
'in_progress',
'Validating whether the bundles have their pullspecs present in the registry',
)
validate_bundles_in_parallel(
bundles=source_index_bundles + target_index_bundles,
threads=parallel_threads,
wait=True,
)
set_request_state(request_id, 'in_progress', 'Adding bundles missing in source index image')
log.info('Adding bundles from target index image which are missing from source index image')
missing_bundles, invalid_bundles = get_missing_bundles_from_target_to_source(
source_index_bundles=source_index_bundles,
target_index_bundles=target_index_bundles,
source_from_index=source_from_index_resolved,
ocp_version=prebuild_info['target_ocp_version'],
target_index=target_index_resolved,
ignore_bundle_ocp_version=ignore_bundle_ocp_version,
)
missing_bundle_paths = [bundle['bundlePath'] for bundle in missing_bundles]
if missing_bundle_paths:
add_max_ocp_version_property(missing_bundle_paths, temp_dir)
# Add the missing bundles to the index.db file
set_request_state(
request_id, 'in_progress', 'Adding the missing bundles to the index.db file'
)
intermediary_db = _get_or_create_temp_index_db_file(temp_dir, source_from_index_resolved)
_opm_registry_add(temp_dir, intermediary_db, missing_bundle_paths)
# Process the deprecation list
set_request_state(request_id, 'in_progress', 'Processing the deprecation list')
intermediate_bundles = missing_bundle_paths + source_index_bundles_pull_spec
deprecation_bundles = get_bundles_from_deprecation_list(
intermediate_bundles, deprecation_list
)
deprecation_bundles = deprecation_bundles + [
bundle['bundlePath'] for bundle in invalid_bundles
]
# process the deprecation list into the intermediary index.db file
if deprecation_bundles:
# We need to get the latest pullspecs from bundles in order to avoid failures
# on "opm deprecatetruncate" due to versions already removed before.
# Once we give the latest versions all lower ones get automatically deprecated by OPM.
all_bundles = source_index_bundles + target_index_bundles
deprecation_bundles = get_bundles_latest_version(deprecation_bundles, all_bundles)
deprecate_bundles_db(
base_dir=temp_dir, index_db_file=intermediary_db, bundles=deprecation_bundles
)
# Retrieve the operators from the intermediary index.db file
# This will be required for pushing the updated index.db file to the IIB registry
bundles_in_db = get_list_bundles(intermediary_db, temp_dir)
operators_in_db = [bundle['packageName'] for bundle in bundles_in_db]
# Migrate the intermediary index.db file to FBC and generate the Dockerfile
set_request_state(
request_id,
'in_progress',
'Migrating the intermediary index.db file to FBC and generating the Dockerfile',
)
fbc_dir, _ = opm_migrate(intermediary_db, temp_dir)
# rename `catalog` directory because we need to use this name for
# final destination of catalog (defined in Dockerfile)
catalog_from_db = os.path.join(temp_dir, 'from_db')
os.rename(fbc_dir, catalog_from_db)
# Merge migrated FBC with existing FBC in Git repo
# overwrite data in `catalog_from_index` by data from `catalog_from_db`
# this adds changes on not opted in operators to final FBC
log.info('Merging migrated catalog with Git catalog')
merge_catalogs_dirs(catalog_from_db, localized_git_catalog_path)
# We need to regenerate file-based catalog because we merged changes
fbc_dir_path = os.path.join(temp_dir, 'catalog')
if os.path.exists(fbc_dir_path):
shutil.rmtree(fbc_dir_path)
shutil.move(catalog_from_db, fbc_dir_path)
# Validate the FBC config
set_request_state(request_id, 'in_progress', 'Validating the FBC config')
opm_validate(fbc_dir_path)
# Generate the Dockerfile
set_request_state(request_id, 'in_progress', 'Generating the Dockerfile')
# Write build metadata to a file to be added with the commit
set_request_state(request_id, 'in_progress', 'Writing build metadata')
write_build_metadata(
local_git_repo_path,
opm_version,
prebuild_info['target_ocp_version'],
prebuild_info['distribution_scope'],
prebuild_info['binary_image_resolved'],
request_id,
)
try:
# Commit changes and create PR or push directly
mr_details, last_commit_sha = git_commit_and_create_mr_or_push(
request_id=request_id,
local_git_repo_path=local_git_repo_path,
index_git_repo=index_git_repo,
branch=branch,
commit_message=(
f"IIB: Merge operators for request {request_id}\n\n"
f"Missing bundles: {', '.join(missing_bundle_paths)}"
),
overwrite_from_index=overwrite_target_index,
)
# Wait for Konflux pipeline and extract built image URL
image_url = monitor_pipeline_and_extract_image(
request_id=request_id,
last_commit_sha=last_commit_sha,
)
# Copy built index to all output pull specs
output_pull_specs = replicate_image_to_tagged_destinations(
request_id=request_id,
image_url=image_url,
build_tags=build_tags,
)
# Use the first output_pull_spec as the primary one for request updates
output_pull_spec = output_pull_specs[0]
# Update request with final output
if not output_pull_spec:
raise IIBError(
"output_pull_spec was not set. "
"This should not happen if the pipeline completed successfully."
)
_update_index_image_pull_spec(
output_pull_spec=output_pull_spec,
request_id=request_id,
arches=prebuild_info['arches'],
from_index=source_from_index,
overwrite_from_index=overwrite_target_index,
overwrite_from_index_token=overwrite_target_index_token,
resolved_prebuild_from_index=source_from_index_resolved,
add_or_rm=True,
is_image_fbc=True,
# Passing an empty index_repo_map is intentional. In IIB 1.0, if
# the overwrite_from_index token is given, we push to git by default
# at the end of a request. In IIB 2.0, the commit is pushed earlier to trigger
# a Konflux pipelinerun. So the old workflow isn't needed.
index_repo_map={},
)
# Push updated index.db if overwrite_target_index_token is provided
# We can push it directly from temp_dir since we're still inside the
# context manager. Do it as the last step to avoid rolling back the
# index.db file if the pipeline fails.
original_index_db_digest = push_index_db_artifact(
request_id=request_id,
from_index=source_from_index,
index_db_path=intermediary_db,
operators=operators_in_db,
operators_in_db=set(operators_in_db),
overwrite_from_index=overwrite_target_index,
request_type='merge',
)
# Close MR if it was opened
cleanup_merge_request_if_exists(mr_details, index_git_repo)
# Update request with final output
set_request_state(
request_id,
'complete',
f"The operator(s) {operators_in_db} were successfully merged "
"from the target index image into the source index image",
)
except Exception as e:
cleanup_on_failure(
mr_details=mr_details,
last_commit_sha=last_commit_sha,
index_git_repo=index_git_repo,
overwrite_from_index=overwrite_target_index,
request_id=request_id,
from_index=source_from_index,
index_repo_map={},
original_index_db_digest=original_index_db_digest,
reason=f"error: {e}",
)
# Reset Docker config for the next request. This is a fail safe.
reset_docker_config()
raise IIBError(f"Failed to merge operators: {e}")
</code_context>
<issue_to_address>
**issue (code-quality):** We've found these issues:
- Use named expression to simplify assignment and conditional ([`use-named-expression`](https://docs.sourcery.ai/Reference/Default-Rules/refactorings/use-named-expression/))
- Explicitly raise from a previous error ([`raise-from-previous-error`](https://docs.sourcery.ai/Reference/Default-Rules/suggestions/raise-from-previous-error/))
</issue_to_address>
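Both code-quality findings can be illustrated with a small self-contained sketch; the exception type and field names here are placeholders, not the PR's actual code:

```python
class IIBError(Exception):
    """Placeholder for IIB's error type."""


def run_merge(data: dict) -> str:
    try:
        spec = data['output_pull_spec']  # may raise KeyError
    except KeyError as e:
        # Explicit `raise ... from e` preserves the original traceback as
        # __cause__ instead of losing it inside the new exception.
        raise IIBError(f"Failed to merge operators: {e}") from e
    # Named expression: assign and test in one step instead of a separate
    # assignment followed by `if result:`.
    if result := spec.strip():
        return result
    raise IIBError("output_pull_spec was empty")
```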
Handling of merge-index-image for containerized IIB Refers to CLOUDDST-29408 Signed-off-by: Jonathan Gangi <[email protected]> Assisted-by: Cursor/Gemini
Update `api_v2` to use the `handle_containerized_merge_request` Refers to CLOUDDST-29408 Signed-off-by: Jonathan Gangi <[email protected]> Assisted-by: Cursor/Gemini
4857f4c to
ebb8496
Compare
@yashvardhannanavati PTAL again
yashvardhannanavati
left a comment
couple of minor suggestions. Looks great!
target_index=target_index_resolved,
ignore_bundle_ocp_version=ignore_bundle_ocp_version,
)
missing_bundle_paths = [bundle['bundlePath'] for bundle in missing_bundles]
If you pass source_index_bundles_pull_spec and target_index_bundles_pull_spec in the function call on line 193, you can get rid of this line.
I don't get it... The function get_missing_bundles_from_target_to_source seems to return Bundle and not bundle_path... Am I missing something?
@mock.patch('iib.workers.tasks.build_containerized_merge.shutil.move')
@mock.patch('iib.workers.tasks.build_containerized_merge.os.path.exists')
@mock.patch('builtins.set', side_effect=_mock_set_for_bundles)
def test_handle_containerized_merge_request_with_deprecation(
can this be covered in the success use case? Or do you think a separate test case is necessary?
Sure, I can adjust it!
I did the feature implementation but the unit-tests were vibe-coded, so probably Gemini "forgot" about that, but yes, I can add it
Added test_handle_containerized_merge_request_success_with_deprecations
# Write build metadata to a file to be added with the commit
set_request_state(request_id, 'in_progress', 'Writing build metadata')
write_build_metadata(
Please also include arches when calling this function. The change to the function signature is here: 5d59cd7
I will include it after you merge #1274; then I can rebase and easily add it.
Add unit-tests for containerized merge request Refers to CLOUDDST-29408 Signed-off-by: Jonathan Gangi <[email protected]> Assisted-by: Cursor/Gemini
This commit fixes the parallel validation class to validate the bundlePath from the bundles dictionary Refers to CLOUDDST-29408 Signed-off-by: Jonathan Gangi <[email protected]> Assisted-by: Cursor/Gemini
Handling of merge-index-image for containerized IIB
Refers to CLOUDDST-29408
Signed-off-by: Jonathan Gangi [email protected]
Assisted-by: Cursor/Gemini
Summary by Sourcery
Introduce a containerized merge-index-image workflow and refactor bundle and deprecation handling to be reusable across legacy and containerized flows.