
Conversation

@Iroy30 Iroy30 (Member) commented Oct 30, 2025

Description

Issue

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

Summary by CodeRabbit

  • New Features

    • Comprehensive regression testing framework supporting routing, LP, and MIP solver benchmarks
    • Multi-GPU parallel benchmark execution
    • Automated HTML report generation for test results
    • Slack integration for regression notifications
    • Dataset downloading and management utilities
  • Documentation

    • Configuration guides and templates for setting up benchmark tests
    • Instructions for dataset preparation and test creation


copy-pr-bot bot commented Oct 30, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@rg20 rg20 added this to the 25.12 milestone Oct 30, 2025
@rg20 rg20 added feature request New feature or request non-breaking Introduces a non-breaking change labels Oct 30, 2025
@rgsl888prabhu rgsl888prabhu (Collaborator) commented

@CodeRabbit review


coderabbitai bot commented Oct 30, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

GPUS_PER_NODE=8

# Path to the squashfs file containing the container image
IMAGE="nvidia/cuopt:25.10.0a-cuda12.9-py3.12"
Collaborator commented:

Let's try to get the branch name from a git tag or something else so we don't need to change this in each release.
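For instance, a rough sketch of deriving the tag from git instead of hardcoding it (the tag format, the sed expression, and the cuda/python suffix here are assumptions, not the repository's actual scheme):

```bash
# Derive the cuOpt version from the most recent git tag so config.sh does
# not need to be edited for each release. Assumes tags look like "v25.10.0a".
CUOPT_VERSION=$(git describe --tags --abbrev=0 2>/dev/null | sed 's/^v//')
CUOPT_VERSION=${CUOPT_VERSION:-25.10.0a}   # fallback if no tag is reachable
IMAGE="nvidia/cuopt:${CUOPT_VERSION}-cuda12.9-py3.12"
```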




## Getting started
Collaborator commented:

I think this file is more of a template; maybe we need to populate it with more details.


coderabbitai bot commented Oct 30, 2025

Walkthrough

Adds a comprehensive regression testing and benchmarking framework for optimization solvers. Introduces Python utilities for building models, executing benchmarks, recording metrics, downloading datasets, and updating databases; shell scripts for orchestration, environment setup, and reporting; and configuration files defining test parameters and result metrics across LP, MIP, and routing problem types.

Changes

Cohort / File(s) and summary:

  • Core Benchmarking Python Framework (regression/benchmark_scripts/benchmark.py, regression/benchmark_scripts/run_test_and_record.py): Multi-GPU benchmarking execution with per-test and per-GPU worker orchestration; result recording with CSV output and regression markdown generation; baseline-change (bks_change) metric computation and threshold-based regression detection. (A minimal sketch of the per-GPU worker pattern follows this list.)
  • Benchmark Utilities & Configuration (regression/benchmark_scripts/utils.py, regression/benchmark_scripts/configs/README.md, regression/benchmark_scripts/configs/example_test_creation.py): Helper functions for building routing and LP/MIP data models from JSON/MPS; metric schema definitions (RoutingMetrics, LPMetrics); configuration loading; example test creation script for CVRP scenarios.
  • Test Configuration Templates (regression/benchmark_scripts/configs/test_name_config.json, regression/benchmark_scripts/configs/test_name_data.json): JSON test configuration with metrics, thresholds, and units; sample optimization problem data (cost matrices, task/fleet definitions, solver settings).
  • Dataset & Results Management (regression/get_datasets.py, regression/save_benchmark_results.py, regression/update_asv_database.py): LP/MIP dataset downloading and extraction from Mittelmann/MIPLIB sources; benchmark result CSV aggregation per commit; ASV database updates with system metadata and metric results.
  • Orchestration & Execution Scripts (regression/config.sh, regression/functions.sh, regression/cronjob.sh, regression/run_regression.sh): Environment configuration and working directory setup; shell utility functions (logging, tee redirection, directory management, Slurm integration); multi-GPU regression test orchestration with optional ASV and Slack reporting.
  • Test & Benchmark Entry Points (regression/lp_regression_test.sh, regression/mip_regression_test.sh, regression/routing_regression_test.sh, regression/test-container.sh): Per-test-type entry scripts invoking benchmark.py with appropriate dataset paths, GPU counts, and result directories; container import verification.
  • Reporting & Metadata (regression/create-html-reports.sh, regression/send-slack-report.sh, regression/write-meta-data.sh, regression/save_benchmarks.sh): HTML report generation from result artifacts with per-run and per-log formatting; Slack webhook integration for result notifications; metadata collection (version, commit, timestamps); result commit and push to repository.
  • Configuration & Messages (regression/config.sh, regression/lp_config.json, regression/mip_config.json, regression/slack_msg.json): Shell environment defaults (paths, GPU settings); LP/MIP test metric thresholds; Slack message template with result links and status.
  • Documentation (regression/README.md): GitLab-focused README template covering project setup, collaboration, testing, and contributing workflows.
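Regarding the per-GPU worker orchestration noted for benchmark.py above, a minimal sketch of that pattern, assuming round-robin sharding of test configs and a caller-supplied run_one_test callable (both are illustrative, not the PR's actual API):

```python
import os
from multiprocessing import Process


def gpu_worker(gpu_id, test_configs, run_one_test):
    # Pin this process to a single device before any CUDA work starts,
    # then run this worker's share of the tests sequentially.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    for cfg in test_configs:
        run_one_test(cfg)


def run_all(test_configs, num_gpus, run_one_test):
    # Shard the configs round-robin across GPUs, one worker process per GPU.
    shards = [test_configs[i::num_gpus] for i in range(num_gpus)]
    workers = [
        Process(target=gpu_worker, args=(i, shard, run_one_test))
        for i, shard in enumerate(shards)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()


if __name__ == "__main__":
    # Toy usage: "run" eight placeholder configs across two GPUs.
    run_all([f"cfg_{i}" for i in range(8)], num_gpus=2, run_one_test=print)
```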

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Areas requiring extra attention:

  • regression/benchmark_scripts/benchmark.py & run_test_and_record.py: Multi-GPU worker orchestration logic, regression detection thresholds, CSV/markdown output formatting; verify baseline-change (bks_change) computation and threshold handling for special-prefixed metrics (a rough sketch of this kind of threshold check follows this list).
  • regression/cronjob.sh: Multi-step orchestration with background process management, conditional ASV/Slack reporting, container and benchmark execution sequencing; ensure proper error propagation and cleanup.
  • regression/create-html-reports.sh: Complex HTML generation from artifact files with conditional metadata sourcing and optional ASV integration; validate path handling and report link generation.
  • regression/get_datasets.py: External dataset downloads and archive extraction with compilation step (emps.c); verify SSL bypass scope and extraction robustness for corrupt/missing datasets.
  • Data model building (utils.py): MPS parsing and routing data model construction; ensure proper handling of invalid inputs and edge cases (empty fleets, missing matrices).
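For the bks_change computation flagged in the first bullet above, the general shape of a threshold-based check is roughly as follows (a sketch only; the metric layout, sign convention, and threshold source are assumptions rather than the PR's exact logic):

```python
def detect_regressions(results, thresholds):
    """Flag metrics whose relative change versus the best-known solution
    (bks) exceeds the configured threshold.

    results:    {metric_name: {"value": float, "bks": float or None}}
    thresholds: {metric_name: float}  # allowed relative change, e.g. 0.05
    """
    regressions = []
    for name, entry in results.items():
        bks = entry.get("bks")
        if bks is None or bks == 0:
            continue
        # For minimization-style metrics a positive change means "worse".
        bks_change = (entry["value"] - bks) / abs(bks)
        if bks_change > thresholds.get(name, 0.0):
            regressions.append((name, bks_change))
    return regressions


# Toy example: solver_time is 10% worse than its baseline, threshold is 5%.
print(detect_regressions(
    {"solver_time": {"value": 1.10, "bks": 1.00}},
    {"solver_time": 0.05},
))
```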

Poem

🐰 A framework so vast, with benchmarks so fine,
Regression tests marching, one by one in a line,
With datasets and metrics, and GPUs to spare,
We'll measure the solvers with mathemat'cal flair!
From LP to routing, the results do appear—
Hop forward with confidence, the data is clear! 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 25.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title "Add regression testing" directly and clearly relates to the primary objective of the changeset. The PR introduces a comprehensive regression testing framework including Python benchmarking scripts (benchmark.py, run_test_and_record.py, utils.py, get_datasets.py), shell orchestration scripts (cronjob.sh, create-html-reports.sh, functions.sh, config.sh), configuration files (lp_config.json, mip_config.json), test definitions, and supporting documentation. The title is concise, uses plain language without noise or emojis, and accurately captures the main change at an appropriate level of abstraction. A teammate scanning project history would immediately understand that this PR introduces regression testing capabilities.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 25

🧹 Nitpick comments (19)
regression/save_benchmarks.sh (2)

8-8: Quote command substitutions to prevent word splitting.

Line 8 uses unquoted command substitution which could break if paths contain spaces.

Apply this diff:

-export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)}
+export PROJECT_DIR=${PROJECT_DIR:-$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)}

30-32: Add descriptive labels to echo statements for clarity.

The echo statements output variable values without context, making debugging harder.

Apply this diff:

-echo $1
-echo ${RESULTS_DIR}
-echo ${PROJECT_VERSION}
+echo "Commit hash: $1"
+echo "Results dir: ${RESULTS_DIR}"
+echo "Project version: ${PROJECT_VERSION}"
regression/benchmark_scripts/configs/example_test_creation.py (1)

64-70: Remove redundant explicit file close() calls.

The with statement (context manager) automatically closes files, making the explicit fp.close() calls redundant.

Apply this diff:

 with open(test_config_file_name, 'w') as fp:
     json.dump(test_config, fp)
-    fp.close()
 
 with open(model_data_file_name, 'w') as fp:
     json.dump(model_dict, fp)
-    fp.close()
regression/save_benchmark_results.py (1)

7-8: Validate and create output directory if needed.

The function assumes the output directory exists but doesn't validate or create it, which could lead to file write errors.

Add directory creation after line 8:

 def create_update_benchmark_db(benchmark_path, output_path, commit_hash):
     bench_path = Path(benchmark_path)/"benchmarks"
     out_path = output_path + "/benchmarks/"
+    
+    # Ensure output directory exists
+    Path(out_path).mkdir(parents=True, exist_ok=True)
regression/test-container.sh (1)

18-18: Quote command substitutions to prevent word splitting.

Line 18 uses unquoted command substitution which could break if paths contain spaces.

Apply this diff:

-export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)}
+export PROJECT_DIR=${PROJECT_DIR:-$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)}
regression/config.sh (2)

10-10: Quote command substitution to prevent word splitting.

While this file is meant to be sourced (so the lack of shebang is acceptable), the command substitution should still be quoted for safety.

Apply this diff:

-THIS_DIR=$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)
+THIS_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)

13-13: Document or validate the SCRATCH_DIR environment variable.

Line 13 references $SCRATCH_DIR which is not defined in this file. This variable must be set externally, but there's no validation or documentation indicating this requirement.

Add validation and documentation:

+# SCRATCH_DIR must be defined externally (e.g., in environment or calling script)
+if [ -z "$SCRATCH_DIR" ]; then
+    echo "ERROR: SCRATCH_DIR environment variable must be set" >&2
+    return 1
+fi
+
 CUOPT_SCRIPTS_DIR=$THIS_DIR
 OUTPUT_DIR=$SCRATCH_DIR/benchmark_runs/
regression/benchmark_scripts/configs/README.md (1)

10-11: Specify language for fenced code block.

Code blocks should include a language identifier for syntax highlighting and consistency with markdown best practices.

-   ```
+   ```bash
    aws s3 cp /path/to/files s3://cuopt-datasets/regression_datasets/
-   ```
+   ```
regression/mip_regression_test.sh (2)

15-15: Remove unused variable or clarify its purpose.

The DELAY variable is defined but never used. If it's intended for future use, consider adding a comment; otherwise, remove it.


18-18: Quote variable expansion to prevent word splitting.

The ${BASH_SOURCE[0]} expansion should be quoted to safely handle paths with spaces.

- export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)}
+ export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)}
regression/lp_regression_test.sh (2)

15-15: Remove unused variable or clarify its purpose.

The DELAY variable is defined but never used. If it's intended for future use, consider adding a comment; otherwise, remove it.


18-18: Quote variable expansion to prevent word splitting.

The ${BASH_SOURCE[0]} expansion should be quoted to safely handle paths with spaces.

- export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)}
+ export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)}
regression/write-meta-data.sh (2)

15-15: Remove unused variable or clarify its purpose.

The DELAY variable is defined but never used. If it's intended for future use, consider adding a comment; otherwise, remove it.


18-18: Quote variable expansion to prevent word splitting.

The ${BASH_SOURCE[0]} expansion should be quoted to safely handle paths with spaces.

- export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)}
+ export PROJECT_DIR=${PROJECT_DIR:-$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)}
regression/run_regression.sh (1)

12-12: Validate path before destructive rm -rf operation.

The rm -rf command operates on $SCRATCH_DIR/routing_configs/* without checking if SCRATCH_DIR is set or is an expected value. If SCRATCH_DIR is undefined or misconfigured, this could delete unintended files. Consider adding a safety check.

+ if [[ -z "$SCRATCH_DIR" ]]; then
+   echo "Error: SCRATCH_DIR is not set" >&2
+   exit 1
+ fi
  rm -rf $SCRATCH_DIR/routing_configs/*
regression/functions.sh (1)

101-109: Consider adding error handling for squeue failures.

The function polls squeue without checking if the command succeeds. If squeue fails or is unavailable, the loop may behave unexpectedly.

Consider adding basic error detection:

 function waitForSlurmJobsToComplete {
     ids=$*
     jobs=$(python -c "print(\",\".join(\"$ids\".split()))") # make a comma-separated list
-    jobsInQueue=$(squeue --noheader --jobs=$jobs)
+    jobsInQueue=$(squeue --noheader --jobs=$jobs 2>/dev/null)
+    if [[ $? -ne 0 ]]; then
+        logger "Warning: squeue command failed"
+        return 1
+    fi
     while [[ $jobsInQueue != "" ]]; do
         sleep 2
-        jobsInQueue=$(squeue --noheader --jobs=$jobs)
+        jobsInQueue=$(squeue --noheader --jobs=$jobs 2>/dev/null)
     done
 }
regression/benchmark_scripts/utils.py (2)

100-119: Add spaces after colons in type annotations.

The NamedTuple definitions are correct but lack spaces after colons, which is the standard Python style.

Apply this diff:

 class RoutingMetrics(NamedTuple):

-    total_objective_value:float = -1
-    vehicle_count:int = -1
-    cost:float = -1
-    prize:float = -1
-    travel_time:float = -1
-    solver_time:float = -1
-    gpu_memory_usage:float = -1
+    total_objective_value: float = -1
+    vehicle_count: int = -1
+    cost: float = -1
+    prize: float = -1
+    travel_time: float = -1
+    solver_time: float = -1
+    gpu_memory_usage: float = -1
     git_commit: str = ""
     date_time: str = ""

 class LPMetrics(NamedTuple):

-    primal_objective_value:float = -1
-    solver_time:float = -1
-    gpu_memory_usage:float = -1
+    primal_objective_value: float = -1
+    solver_time: float = -1
+    gpu_memory_usage: float = -1
     git_commit: str = ""
     date_time: str = ""

169-187: Consider using os.path.join for path construction.

The function logic is correct, but string concatenation for paths (lines 176, 185) is less robust than using os.path.join.

Consider this improvement:

     if d_type == "lp" or d_type == "mip":
-        with open(data_file_path+"/"+d_type+"_config.json") as f:
+        with open(os.path.join(data_file_path, f"{d_type}_config.json")) as f:
             data = json.load(f)
         test_name = data_file.split('/')[-1].split('.')[0]
         data_model, solver_settings = build_datamodel_from_mps(data_file)
         requested_metrics = data["metrics"]
     else:
         with open(data_file) as f:
             data = json.load(f)
         test_name = data["test_name"]
-        data_model, solver_settings = build_routing_datamodel_from_json(data_file_path+"/"+data["file_name"])
+        data_model, solver_settings = build_routing_datamodel_from_json(
+            os.path.join(data_file_path, data["file_name"])
+        )
         requested_metrics = data["metrics"]
regression/benchmark_scripts/benchmark.py (1)

72-89: Use identity comparison for None check.

Line 79 uses == None which should be is None per Python style guidelines.

Apply this diff:

         if "bks" in value.keys():
             bks = value["bks"]["value"]
-            if bks == None:
+            if bks is None:
                 continue
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2248e0d and 3b6be07.

⛔ Files ignored due to path filters (1)
  • regression/benchmark_scripts/results/test_name.csv is excluded by !**/*.csv
📒 Files selected for processing (26)
  • regression/README.md (1 hunks)
  • regression/benchmark_scripts/benchmark.py (1 hunks)
  • regression/benchmark_scripts/configs/README.md (1 hunks)
  • regression/benchmark_scripts/configs/example_test_creation.py (1 hunks)
  • regression/benchmark_scripts/configs/test_name_config.json (1 hunks)
  • regression/benchmark_scripts/configs/test_name_data.json (1 hunks)
  • regression/benchmark_scripts/run_test_and_record.py (1 hunks)
  • regression/benchmark_scripts/utils.py (1 hunks)
  • regression/config.sh (1 hunks)
  • regression/create-html-reports.sh (1 hunks)
  • regression/cronjob.sh (1 hunks)
  • regression/functions.sh (1 hunks)
  • regression/get_datasets.py (1 hunks)
  • regression/lp_config.json (1 hunks)
  • regression/lp_regression_test.sh (1 hunks)
  • regression/mip_config.json (1 hunks)
  • regression/mip_regression_test.sh (1 hunks)
  • regression/routing_regression_test.sh (1 hunks)
  • regression/run_regression.sh (1 hunks)
  • regression/save_benchmark_results.py (1 hunks)
  • regression/save_benchmarks.sh (1 hunks)
  • regression/send-slack-report.sh (1 hunks)
  • regression/slack_msg.json (1 hunks)
  • regression/test-container.sh (1 hunks)
  • regression/update_asv_database.py (1 hunks)
  • regression/write-meta-data.sh (1 hunks)
🧰 Additional context used
🪛 Biome (2.1.2)
regression/lp_config.json

[error] 7-7: Expected a property but instead found '}'.

Expected a property here.

(parse)

🪛 Flake8 (7.3.0)
regression/benchmark_scripts/run_test_and_record.py

[error] 12-12: 'utils.get_configuration' imported but unused

(F401)


[error] 15-15: 'cuopt.utilities' imported but unused

(F401)


[error] 19-19: 'glob' imported but unused

(F401)


[error] 20-20: 'logging as log' imported but unused

(F401)


[error] 23-23: 'argparse' imported but unused

(F401)


[error] 76-76: comparison to None should be 'if cond is None:'

(E711)

regression/get_datasets.py

[error] 86-326: continuation line missing indentation or outdented (E122); reported individually for every line in this range

[error] 820-820: block comment should start with '# ' (E265)

[error] 848-848: block comment should start with '# ' (E265)

[error] 892-892, 902-902, 905-905, 907-907, 910-910: continuation line over-indented for visual indent (E127)

regression/benchmark_scripts/benchmark.py

[error] 13-13: block comment should start with '# '

(E265)


[error] 14-14: block comment should start with '# '

(E265)


[error] 15-15: block comment should start with '# '

(E265)


[error] 16-16: block comment should start with '# '

(E265)


[error] 20-20: 'glob' imported but unused

(F401)


[error] 23-23: redefinition of unused 'os' from line 11

(F811)


[error] 79-79: comparison to None should be 'if cond is None:'

(E711)


[error] 202-202: redefinition of unused 'glob' from line 20

(F811)


[error] 214-214: 'rmm._cuda.gpu.CUDARuntimeError' imported but unused

(F401)


[error] 214-214: 'rmm._cuda.gpu.getDevice' imported but unused

(F401)


[error] 214-214: 'rmm._cuda.gpu.setDevice' imported but unused

(F401)


[error] 223-223: continuation line under-indented for visual indent

(E128)

regression/benchmark_scripts/utils.py

[error] 12-12: 'cuopt.routing' imported but unused

(F401)


[error] 36-36: redefinition of unused 'json' from line 16

(F811)


[error] 37-37: redefinition of unused 'os' from line 15

(F811)


[error] 48-48: redefinition of unused 'build_routing_datamodel_from_json' from line 11

(F811)


[error] 124-124: continuation line missing indentation or outdented

(E122)


[error] 128-128: continuation line missing indentation or outdented

(E122)


[error] 132-132: continuation line missing indentation or outdented

(E122)


[error] 136-136: continuation line missing indentation or outdented

(E122)


[error] 140-140: continuation line missing indentation or outdented

(E122)


[error] 144-144: continuation line missing indentation or outdented

(E122)


[error] 148-148: continuation line missing indentation or outdented

(E122)


[error] 149-149: undefined name 'dtype'

(F821)


[error] 151-151: continuation line missing indentation or outdented

(E122)


[error] 159-159: continuation line missing indentation or outdented

(E122)


[error] 163-163: continuation line missing indentation or outdented

(E122)


[error] 167-167: continuation line missing indentation or outdented

(E122)

regression/benchmark_scripts/configs/example_test_creation.py

[error] 4-4: 'cuopt' imported but unused

(F401)


[error] 5-5: 'cuopt.routing' imported but unused

(F401)

regression/update_asv_database.py

[error] 14-14: 'asvdb.utils' imported but unused

(F401)


[error] 17-17: 'time' imported but unused

(F401)


[error] 68-68: continuation line over-indented for hanging indent

(E126)


[error] 69-69: continuation line unaligned for hanging indent

(E131)


[error] 70-70: continuation line unaligned for hanging indent

(E131)


[error] 90-90: block comment should start with '# '

(E265)


[error] 95-95: block comment should start with '# '

(E265)

🪛 LanguageTool
regression/README.md

[style] ~49-~49: Consider using a less common alternative to make your writing sound more unique and professional.
Context: ...le and use the handy template below (or feel free to structure it however you want - this is...

(FEEL_FREE_TO_STYLE_ME)


[grammar] ~52-~52: Use a hyphen to join words.
Context: ...e template are suggestions for most open source projects. Also keep in mind that ...

(QB_NEW_EN_HYPHEN)


[style] ~61-~61: Consider shortening this phrase to just ‘whether’, unless you mean ‘regardless of whether’.
Context: ...ll images that convey metadata, such as whether or not all the tests are passing for the proje...

(WHETHER)


[style] ~81-~81: Consider shortening or rephrasing this to strengthen your wording.
Context: ...accepting them. For people who want to make changes to your project, it's helpful to have some...

(MAKE_CHANGES)


[grammar] ~89-~89: Use a hyphen to join words.
Context: ...ted to the project. ## License For open source projects, say how it is licensed....

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
regression/README.md

16-16: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

regression/benchmark_scripts/configs/README.md

10-10: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.2)
regression/benchmark_scripts/run_test_and_record.py

76-76: Comparison to None should be cond is None

Replace with cond is None

(E711)


99-99: Consider iterable unpacking instead of concatenation

Replace with iterable unpacking

(RUF005)


120-120: Unused function argument: test_status_file

(ARG001)

regression/get_datasets.py

1-1: Unused blanket noqa directive

Remove unused noqa directive

(RUF100)


871-871: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


873-873: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


885-885: subprocess call with shell=True identified, security issue

(S602)


889-889: subprocess call with shell=True identified, security issue

(S602)


891-891: subprocess call with shell=True identified, security issue

(S602)


894-894: Create your own exception

(TRY002)


894-894: Avoid specifying long messages outside the exception class

(TRY003)


901-901: subprocess call with shell=True identified, security issue

(S602)


904-904: subprocess call with shell=True identified, security issue

(S602)


906-906: subprocess call with shell=True identified, security issue

(S602)


909-909: subprocess call with shell=True identified, security issue

(S602)


914-914: Create your own exception

(TRY002)


914-914: Avoid specifying long messages outside the exception class

(TRY003)

regression/save_benchmark_results.py

39-39: Avoid specifying long messages outside the exception class

(TRY003)

regression/benchmark_scripts/benchmark.py

23-23: Redefinition of unused os from line 11

Remove definition: os

(F811)


79-79: Comparison to None should be cond is None

Replace with cond is None

(E711)


102-102: Consider iterable unpacking instead of concatenation

Replace with iterable unpacking

(RUF005)


123-123: Unused function argument: test_status_file

(ARG001)


202-202: Redefinition of unused glob from line 20

Remove definition: glob

(F811)


236-236: Do not catch blind exception: Exception

(BLE001)


237-237: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

regression/benchmark_scripts/utils.py

21-21: Unused blanket noqa directive

Remove unused noqa directive

(RUF100)


36-36: Redefinition of unused json from line 16

Remove definition: json

(F811)


37-37: Redefinition of unused os from line 15

Remove definition: os

(F811)


48-48: Redefinition of unused build_routing_datamodel_from_json from line 11

(F811)


60-63: Avoid specifying long messages outside the exception class

(TRY003)


91-94: Avoid specifying long messages outside the exception class

(TRY003)


149-149: Undefined name dtype

(F821)

🪛 Shellcheck (0.11.0)
regression/test-container.sh

[warning] 18-18: Quote this to prevent word splitting.

(SC2046)


[warning] 27-27: Use single quotes, otherwise this expands now rather than when signalled.

(SC2064)

regression/lp_regression_test.sh

[warning] 15-15: DELAY appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 18-18: Quote this to prevent word splitting.

(SC2046)

regression/routing_regression_test.sh

[warning] 15-15: DELAY appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 18-18: Quote this to prevent word splitting.

(SC2046)

regression/cronjob.sh

[warning] 25-25: Quote this to prevent word splitting.

(SC2046)


[warning] 30-30: BUILD_FAILED appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 41-41: RUN_TESTS appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 144-144: ShellCheck can't follow non-constant source. Use a directive to specify location.

(SC1090)

regression/run_regression.sh

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

(SC2148)

regression/send-slack-report.sh

[warning] 9-9: Quote this to prevent word splitting.

(SC2046)


[warning] 13-13: ShellCheck can't follow non-constant source. Use a directive to specify location.

(SC1090)


[warning] 13-13: Quote this to prevent word splitting.

(SC2046)


[warning] 63-63: Quote this to prevent word splitting.

(SC2046)


[warning] 79-79: ShellCheck can't follow non-constant source. Use a directive to specify location.

(SC1090)


[warning] 96-96: Declare and assign separately to avoid masking return values.

(SC2155)

regression/functions.sh

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

(SC2148)


[error] 26-26: Argument mixes string and array. Use * or separate argument.

(SC2145)


[warning] 122-122: Use 'pushd ... || exit' or 'pushd ... || return' in case pushd fails.

(SC2164)


[warning] 132-132: Use 'popd ... || exit' or 'popd ... || return' in case popd fails.

(SC2164)

regression/config.sh

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive. (SC2148)

[warning] 10-10: Quote this to prevent word splitting. (SC2046)

[warning] SC2034 (appears unused; verify use, or export if used externally) reported for: CUOPT_SCRIPTS_DIR (12), ACCOUNT (15), PARTITION (16), GPUS_PER_NODE (17), IMAGE (20), ALL_CONFIGS_PATH (23), ROUTING_CONFIGS_PATH (24), ROUTING_DATASETS_PATH (25), LP_DATASETS_PATH (26), MIP_DATASETS_PATH (27), STATUS_FILE (29), DATASETS_DIR (32), REPO_DIR_NAME (34), RESULT_DIR_NAME (35), SSH_CREDS (36), S3_FILE_PREFIX (41), S3_URL_PREFIX (42), METADATA_FILE (53), TESTING_DIR (55), BENCHMARK_DIR (56), SCRIPTS_DIR (57), BUILD_LOG_FILE (59), TESTING_RESULTS_DIR (67), BENCHMARK_RESULTS_DIR (68)

regression/mip_regression_test.sh

[warning] 15-15: DELAY appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 18-18: Quote this to prevent word splitting.

(SC2046)

regression/write-meta-data.sh

[warning] 15-15: DELAY appears unused. Verify use (or export if used externally).

(SC2034)


[warning] 18-18: Quote this to prevent word splitting.

(SC2046)

regression/create-html-reports.sh

[warning] 16-16: Quote this to prevent word splitting.

(SC2046)


[warning] 31-31: ShellCheck can't follow non-constant source. Use a directive to specify location.

(SC1090)

regression/save_benchmarks.sh

[warning] 8-8: Quote this to prevent word splitting.

(SC2046)


[warning] 12-12: ShellCheck can't follow non-constant source. Use a directive to specify location.

(SC1090)


[warning] 12-12: Quote this to prevent word splitting.

(SC2046)

🔇 Additional comments (14)
regression/benchmark_scripts/configs/test_name_data.json (1)

1-117: LGTM!

The JSON configuration is well-formed and represents a valid routing problem with consistent dimensions across all data structures.

regression/mip_config.json (1)

1-29: LGTM!

The JSON configuration is well-formed with consistent structure across all metric definitions. No syntax errors detected.

regression/mip_regression_test.sh (1)

29-29: Verify hardcoded GIT_COMMIT value.

GIT_COMMIT is set to the literal string "abc". Confirm this is intentional for development/testing, or ensure the commented production code is restored before merging.

regression/lp_regression_test.sh (1)

29-29: Verify hardcoded GIT_COMMIT value.

GIT_COMMIT is set to the literal string "abc". Confirm this is intentional for development/testing, or ensure the commented production code is restored before merging.

regression/write-meta-data.sh (1)

22-25: Verify hardcoded environment paths and values are appropriate for production.

Lines 22 and 25 contain hardcoded paths (/opt/cuopt/COMMIT_SHA) and values (repository URL, branch, timestamp). Confirm these are suitable for all deployment environments or whether they should be made configurable via environment variables or configuration files.

regression/slack_msg.json (1)

1-68: Slack message payload structure looks good.

The JSON is well-formed, follows Slack Block Kit conventions, and provides a clean layout with header, divider, and three actionable sections (Results Report, ASV Dashboard, Logs). The use of placeholder variables is consistent and clear.

regression/functions.sh (3)

29-66: LGTM: Output redirection implementation is sound.

The setTee/unsetTee functions correctly manage file descriptor redirection using named pipes. The approach of saving original descriptors (3, 4) and restoring them later is appropriate.
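For context, the save/redirect/restore pattern being described looks roughly like this (a sketch using process substitution rather than the script's actual named-pipe plumbing; descriptor numbers and the log path are illustrative):

```bash
setTee() {
    local logfile=$1
    exec 3>&1 4>&2                     # save the original stdout/stderr
    exec > >(tee -a "$logfile") 2>&1   # mirror all further output to the log
}

unsetTee() {
    exec 1>&3 2>&4                     # restore the saved descriptors
    exec 3>&- 4>&-                     # close the temporary descriptors
}
```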


68-85: LGTM: Results directory setup is well-structured.

The function appropriately creates the required directory structure, manages symlinks, and preserves prior ASV results when available.


135-144: LGTM: Conditional function definition pattern is appropriate.

The approach of defining activateCondaEnv only when not already present allows project-specific overrides while providing a sensible default implementation.

regression/benchmark_scripts/utils.py (1)

48-97: LGTM: Model building functions are well-structured.

Both build_routing_datamodel_from_json and build_datamodel_from_mps correctly validate input types, parse data, and return appropriate tuples of (data_model, solver_settings).

regression/benchmark_scripts/benchmark.py (4)

28-70: LGTM: Regression reporting functions are well-designed.

The create_regression_markdown and record_regressions functions appropriately generate regression reports with a reasonable limit on displayed entries and correct threshold-based regression detection.


91-113: LGTM: Result recording logic is sound.

The function appropriately combines BKS metrics with required metrics, handles both new and existing result files, and detects regressions.


186-196: LGTM: RMM initialization is correctly configured.

The reinitialize_rmm function appropriately sets up RMM with a pool allocator and statistics adaptor for GPU memory tracking.
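A minimal version of that setup might look like the following (a sketch based on the public RMM Python API; the pool sizing and exact call site in benchmark.py may differ):

```python
import rmm


def reinitialize_rmm():
    # Pool allocator for fast repeated device allocations, wrapped in a
    # statistics adaptor so peak GPU memory can be read back after a solve.
    rmm.reinitialize(pool_allocator=True)
    stats_mr = rmm.mr.StatisticsResourceAdaptor(rmm.mr.get_current_device_resource())
    rmm.mr.set_current_device_resource(stats_mr)
    return stats_mr
```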


320-346: LGTM: Main entry point is well-structured.

The argument parser and orchestration logic appropriately support multi-GPU benchmarking with clear parameter definitions.

Comment on lines +11 to +26
import os
from multiprocessing import Process
#from utils import get_configuration, LPMetrics, RoutingMetrics
#from cuopt import linear_programming
#from cuopt import routing
#from cuopt import utilities
import rmm
import time
import pandas as pd
import glob
import logging as log
from datetime import datetime
import os
import argparse

log.getLogger().setLevel(log.INFO)

🛠️ Refactor suggestion | 🟠 Major

Clean up commented and duplicate imports.

Several import issues need attention:

  1. Lines 13-16: Commented-out imports should be removed or uncommented if needed.
  2. Line 23: os is imported twice (lines 11 and 23).
  3. Line 20: glob is imported but only used inside the worker function where it's re-imported.

Apply this diff:

 import os
 from multiprocessing import Process
-#from utils import get_configuration, LPMetrics, RoutingMetrics
-#from cuopt import linear_programming
-#from cuopt import routing
-#from cuopt import utilities
 import rmm
 import time
 import pandas as pd
-import glob
 import logging as log
 from datetime import datetime
-import os
 import argparse
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

import os
from multiprocessing import Process
#from utils import get_configuration, LPMetrics, RoutingMetrics
#from cuopt import linear_programming
#from cuopt import routing
#from cuopt import utilities
import rmm
import time
import pandas as pd
import glob
import logging as log
from datetime import datetime
import os
import argparse
log.getLogger().setLevel(log.INFO)

After:

import os
from multiprocessing import Process
import rmm
import time
import pandas as pd
import logging as log
from datetime import datetime
import argparse
🧰 Tools
🪛 Flake8 (7.3.0)

[error] 13-13: block comment should start with '# '

(E265)


[error] 14-14: block comment should start with '# '

(E265)


[error] 15-15: block comment should start with '# '

(E265)


[error] 16-16: block comment should start with '# '

(E265)


[error] 20-20: 'glob' imported but unused

(F401)


[error] 23-23: redefinition of unused 'os' from line 11

(F811)

🪛 Ruff (0.14.2)

23-23: Redefinition of unused os from line 11

Remove definition: os

(F811)

Comment on lines +116 to +184
def run_benchmark(
test_name,
data_model,
solver_settings,
required_metrics,
csv_path,
git_commit,
test_status_file,
d_type
):
import rmm
mr = rmm.mr.get_current_device_resource()

from utils import LPMetrics, RoutingMetrics
from cuopt import linear_programming
from cuopt import routing

start_time = time.time()
if d_type=="lp" or d_type=="mip":
metrics = LPMetrics()._asdict()
solver_settings.set_parameter("infeasibility_detection", False)
solver_settings.set_parameter("time_limit", 60)
solution = linear_programming.Solve(data_model, solver_settings)
else:
metrics = RoutingMetrics()._asdict()
solution = routing.Solve(data_model)
end_time = time.time()

metrics["gpu_memory_usage"] = int(mr.allocation_counts.peak_bytes/(1024*1024))
metrics["date_time"] = datetime.now().strftime("%m_%d_%Y_%H_%M_%S")
metrics["git_commit"] = git_commit

success_status = False

if d_type=="lp" or d_type=="mip":
## Optimal solution
acceptable_termination = ["Optimal", "TimeLimit", "FeasibleFound"]
if solution.get_termination_reason() in acceptable_termination:
success_status = True
metrics["solver_time"] = solution.get_solve_time()
metrics["primal_objective_value"] = solution.get_primal_objective()
if d_type == "lp":
lp_stats = solution.get_lp_stats()
metrics["nb_iterations"] = lp_stats["nb_iterations"]
else:
milp_stats = solution.get_milp_stats()
metrics["mip_gap"] = milp_stats["mip_gap"]
metrics["max_constraint_violation"] = milp_stats["max_constraint_violation"]
metrics["max_int_violation"] = milp_stats["max_int_violation"]
metrics["max_variable_bound_violation"] = milp_stats["max_variable_bound_violation"]
record_result(test_name, metrics, required_metrics, csv_path, d_type)
else:
if solution.get_status() == 0:
success_status = True
metrics["solver_time"] = end_time - start_time
metrics["total_objective_value"] = solution.get_total_objective()
metrics["vehicle_count"] = solution.get_vehicle_count()

objectives = solution.get_objective_values()
if "prize" in required_metrics:
metrics["prize"] = objectives[routing.Objective.PRIZE]
if "cost" in required_metrics:
metrics["cost"] = objectives[routing.Objective.COST]
if "travel_time" in required_metrics:
metrics["travel_time"] = objectives[routing.Objective.TRAVEL_TIME]

record_result(test_name, metrics, required_metrics, csv_path, d_type)

return "SUCCESS" if success_status is True else "FAILED"

⚠️ Potential issue | 🟡 Minor

Remove unused parameter test_status_file.

The test_status_file parameter (line 123) is never used within the run_benchmark function. Test status is written by the caller in the worker function.

Apply this diff:

 def run_benchmark(
     test_name,
     data_model,
     solver_settings,
     required_metrics,
     csv_path,
     git_commit,
-    test_status_file,
     d_type
 ):

And update the call site in worker function (around line 225):

             status = run_benchmark(
                 test_name,
                 data_model,
                 solver_settings,
                 requested_metrics,
                 csv_path,
                 git_commit,
-                test_status_file,
                 d_type
             )

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.14.2)

123-123: Unused function argument: test_status_file

(ARG001)

🤖 Prompt for AI Agents
In regression/benchmark_scripts/benchmark.py around lines 116 to 184, remove the
unused parameter test_status_file from the run_benchmark signature and any
references to it (it is never used inside the function); then update the caller
in the worker function (around line 225) to stop passing test_status_file
(adjust the positional argument list or named argument accordingly) so the call
matches the new signature. Ensure imports and calls still supply d_type and
other required args and run tests to verify no calls pass the removed parameter.

Comment on lines +198 to +248
def worker(gpu_id, dataset_file_path, csv_path, git_commit, log_path, test_status_file, n_gpus, d_type="routing"):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = gpu_id

import glob
from utils import get_configuration
data_files = []
if d_type == "lp" or d_type == "mip":
data_files = glob.glob(dataset_file_path + "/*.mps")
else:
data_files = glob.glob(dataset_file_path + "/*_config.json")
idx = int(gpu_id)
n_files = len(data_files)

while idx < n_files:
mr, stats_mr = reinitialize_rmm()
from rmm._cuda.gpu import CUDARuntimeError, getDevice, setDevice

data_file = data_files[idx]
test_name = str(data_file)
status = "FAILED"
try:

test_name, data_model, solver_settings, requested_metrics = get_configuration(data_file, dataset_file_path, d_type)
log.basicConfig(level=log.INFO, filename=log_path+"/"+test_name+"_log.txt", filemode="a+",
format="%(asctime)-15s %(levelname)-8s %(message)s")
log.info(f"------------- Test Start : {test_name} gpu id : {gpu_id} -------------------")
status = run_benchmark(
test_name,
data_model,
solver_settings,
requested_metrics,
csv_path,
git_commit,
test_status_file,
d_type
)

except Exception as e:
log.error(str(e))

with open(test_status_file, "a") as f:
f.write("\n")
f.write(test_name +": " + status)

# Delete instance of rmm
del mr
del stats_mr

log.info(f"------------- Test End : {test_name} gpu id : {gpu_id} -------------------")
idx = idx + n_gpus

⚠️ Potential issue | 🟠 Major

Improve error handling and remove unused imports.

Several issues in the worker function:

  1. Line 214: Imports CUDARuntimeError, getDevice, and setDevice but never uses them.
  2. Lines 236-237: Catches bare Exception and uses log.error instead of log.exception, which loses stack trace information.
  3. Line 202: glob is re-imported here but already imported at module level (though the module-level import should be removed per earlier comment).

Apply this diff:

     while idx < n_files:
         mr, stats_mr = reinitialize_rmm()
-        from rmm._cuda.gpu import CUDARuntimeError, getDevice, setDevice

         data_file = data_files[idx]
         test_name = str(data_file)
         status = "FAILED"
         try:

             test_name, data_model, solver_settings, requested_metrics = get_configuration(data_file, dataset_file_path, d_type)
             log.basicConfig(level=log.INFO, filename=log_path+"/"+test_name+"_log.txt", filemode="a+",
-                        format="%(asctime)-15s %(levelname)-8s %(message)s")
+                            format="%(asctime)-15s %(levelname)-8s %(message)s")
             log.info(f"------------- Test Start : {test_name} gpu id : {gpu_id} -------------------")
             status = run_benchmark(
                 test_name,
                 data_model,
                 solver_settings,
                 requested_metrics,
                 csv_path,
                 git_commit,
-                test_status_file,
                 d_type
             )

         except Exception as e:
-            log.error(str(e))
+            log.exception("Test failed with exception")

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Flake8 (7.3.0)

[error] 202-202: redefinition of unused 'glob' from line 20

(F811)


[error] 214-214: 'rmm._cuda.gpu.CUDARuntimeError' imported but unused

(F401)


[error] 214-214: 'rmm._cuda.gpu.getDevice' imported but unused

(F401)


[error] 214-214: 'rmm._cuda.gpu.setDevice' imported but unused

(F401)


[error] 223-223: continuation line under-indented for visual indent

(E128)

🪛 Ruff (0.14.2)

202-202: Redefinition of unused glob from line 20

Remove definition: glob

(F811)


236-236: Do not catch blind exception: Exception

(BLE001)


237-237: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
In regression/benchmark_scripts/benchmark.py around lines 198 to 248, remove the
unused imports on line 214 (CUDARuntimeError, getDevice, setDevice), delete the
duplicate local "import glob" (rely on a single top-level import instead), and
replace the bare except Exception: log.error(str(e)) with an except Exception:
log.exception("Error while running test %s", test_name) so the stack trace is
preserved; keep the rest of the flow (status write, rmm cleanup, idx increment)
unchanged.

Comment on lines +250 to +318
def run(dataset_file_path, csv_path, git_commit, log_path, test_status_file, n_gpus, d_type):

"""def worker(gpu_id, n_gpus):
import os
#log.info(f"------------- GPU id : {gpu_id} -------------------")
os.environ["CUDA_VISIBLE_DEVICES"] = gpu_id
import rmm
pool = rmm.mr.PoolMemoryResource(
rmm.mr.CudaMemoryResource()
)

rmm.mr.set_current_device_resource(pool)
idx = int(gpu_id)
n_files = len(config_files)

def reinitialize_rmm():

pool = rmm.mr.PoolMemoryResource(
rmm.mr.CudaMemoryResource()
)

rmm.mr.set_current_device_resource(pool)
#rmm.reinitialize(pool_allocator=True, initial_pool_size=pool_size)

#base_mr = rmm.mr.get_current_device_resource()
#stats_mr = rmm.mr.StatisticsResourceAdaptor(base_mr)
#rmm.mr.set_current_device_resource(stats_mr)

return "", ""

while idx < n_files:
config = config_files[idx]

test_name = str(config)
status = "FAILED"
try:

test_name, data_model, solver_settings, requested_metrics = get_configuration(config, config_file_path)

log.basicConfig(level=log.INFO, filename=log_path+"/"+test_name+"_log.txt", filemode="a+",
format="%(asctime)-15s %(levelname)-8s %(message)s")
log.info(f"------------- Test Start : {test_name} -------------------")
log.info(f"------------- GPU id : {gpu_id} -------------------")
#status = run_benchmark(
# test_name,
# data_model,
# solver_settings,
# requested_metrics,
# csv_path,
# git_commit,
# test_status_file
#)

except Exception as e:
log.error(str(e))

with open(test_status_file, "a") as f:
f.write("\n")
f.write(test_name +": " + status)"""

procs = []
for gpu_id in range(int(n_gpus)):
p = Process(target=worker, args=(str(gpu_id), dataset_file_path, csv_path, git_commit, log_path, test_status_file, int(n_gpus), d_type))
p.start()
procs.append(p)

for p in procs:
p.join()
print("All processes finished.")

🛠️ Refactor suggestion | 🟠 Major

Remove commented-out code.

Lines 252-308 contain a large block of commented-out code that duplicates functionality already present in the worker function. This adds clutter and technical debt.

Remove the commented code:

 def run(dataset_file_path, csv_path, git_commit, log_path, test_status_file, n_gpus, d_type):

-    """def worker(gpu_id, n_gpus):
-        import os
-        #log.info(f"------------- GPU id : {gpu_id} -------------------")
-        os.environ["CUDA_VISIBLE_DEVICES"] = gpu_id
-        import rmm
-        pool = rmm.mr.PoolMemoryResource(
-            rmm.mr.CudaMemoryResource()
-        )
-
-        rmm.mr.set_current_device_resource(pool)
-        idx = int(gpu_id)
-        n_files = len(config_files)
-
-        def reinitialize_rmm():
-
-            pool = rmm.mr.PoolMemoryResource(
-                rmm.mr.CudaMemoryResource()
-            )
-
-            rmm.mr.set_current_device_resource(pool)
-            #rmm.reinitialize(pool_allocator=True, initial_pool_size=pool_size)
-
-            #base_mr = rmm.mr.get_current_device_resource()
-            #stats_mr = rmm.mr.StatisticsResourceAdaptor(base_mr)
-            #rmm.mr.set_current_device_resource(stats_mr)
-
-            return "", ""
-
-        while idx < n_files:
-            config = config_files[idx]
-
-            test_name = str(config)
-            status = "FAILED"
-            try:
-
-                test_name, data_model, solver_settings, requested_metrics = get_configuration(config, config_file_path)
-
-                log.basicConfig(level=log.INFO, filename=log_path+"/"+test_name+"_log.txt", filemode="a+",
-                            format="%(asctime)-15s %(levelname)-8s %(message)s")
-                log.info(f"------------- Test Start : {test_name} -------------------")
-                log.info(f"------------- GPU id : {gpu_id} -------------------")
-                #status = run_benchmark(
-                #    test_name,
-                #    data_model,
-                #    solver_settings,
-                #    requested_metrics,
-                #    csv_path,
-                #    git_commit,
-                #    test_status_file
-                #)
-
-            except Exception as e:
-                log.error(str(e))
-
-            with open(test_status_file, "a") as f:
-                f.write("\n")
-                f.write(test_name +": " + status)"""
-
     procs = []
🤖 Prompt for AI Agents
In regression/benchmark_scripts/benchmark.py around lines 250 to 318, there is a
large commented-out block (lines ~252-308) that duplicates the worker logic and
is cluttering the file; remove that entire commented-out code (including the
enclosing triple-quoted string) so only the actual worker function and the
subsequent process spawning code remain, leaving no leftover commented fragments
or stray quotes and keeping formatting/indentation intact.

Comment on lines +4 to +5
import cuopt
from cuopt import routing

🛠️ Refactor suggestion | 🟠 Major

Remove unused imports.

The imports cuopt and cuopt.routing are flagged as unused. Only cuopt.routing.utils is actually used in the script.

Apply this diff:

-import cuopt
-from cuopt import routing
 from cuopt.routing import utils
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import cuopt
from cuopt import routing
from cuopt.routing import utils
🧰 Tools
🪛 Flake8 (7.3.0)

[error] 4-4: 'cuopt' imported but unused

(F401)


[error] 5-5: 'cuopt.routing' imported but unused

(F401)

🤖 Prompt for AI Agents
In regression/benchmark_scripts/configs/example_test_creation.py around lines 4
to 5, the top-level imports "import cuopt" and "from cuopt import routing" are
unused; keep only the actually used module. Remove those two lines and replace
them with a single import for the used submodule (e.g., "from cuopt.routing
import utils" or "import cuopt.routing.utils as utils") so the file only imports
cuopt.routing.utils which is referenced elsewhere in the script.

@@ -0,0 +1,20 @@
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

⚠️ Potential issue | 🔴 Critical

Add missing shebang.

Shell scripts must begin with a shebang line to specify the interpreter. This script is missing #!/bin/bash.

+ #!/bin/bash
+ # SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ # SPDX-License-Identifier: LicenseRef-NvidiaProprietary
  
  # Get latest set of datasets

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Shellcheck (0.11.0)

[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.

(SC2148)

🤖 Prompt for AI Agents
In regression/run_regression.sh at the top of the file (line 1), the script is
missing a shebang; add an interpreter directive as the first line (e.g., insert
#!/bin/bash) so the system runs it with bash; after adding the shebang, ensure
the file has execute permissions (chmod +x) if it does not already.

import os
import argparse

def create_update_benchamrk_db(benchmark_path, output_path, commit_hash):

🛠️ Refactor suggestion | 🟠 Major

Fix typo in function name.

The function is named create_update_benchamrk_db but should be create_update_benchmark_db (note the misspelling of "benchmark").

Apply this diff:

-def create_update_benchamrk_db(benchmark_path, output_path, commit_hash):
+def create_update_benchmark_db(benchmark_path, output_path, commit_hash):

Also update the function call on line 37:

-    create_update_benchamrk_db(args.benchmark_path, args.output_path, args.commit_hash)
+    create_update_benchmark_db(args.benchmark_path, args.output_path, args.commit_hash)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In regression/save_benchmark_results.py around line 6, the function name is
misspelled as create_update_benchamrk_db; rename the function to
create_update_benchmark_db and also update the corresponding function call at
line 37 to use the corrected name so definition and invocation match.

Comment on lines +13 to +26
for file in benchmark_result_list:
with open(file, 'r') as openfile:
data = pd.read_csv(openfile, index_col="test")
data["commit_hash"] = commit_hash
for index, rows in data.iterrows():
out_file = index.split(".")[0] + ".csv"
out_file_path = out_path + "/" + out_file

if os.path.exists(out_file_path):
data = pd.read_csv(out_file_path)
data = pd.concat([data, rows.to_frame().T], ignore_index=True)
data.to_csv(out_file_path, index=False)
else:
rows.to_frame().T.to_csv(out_file_path, index=False)

⚠️ Potential issue | 🔴 Critical

Fix critical logic bug: variable data is overwritten inside the loop.

On line 22, the variable data is reassigned when reading the existing output file, which overwrites the original DataFrame loaded on line 15. This causes the loop to only process the first row correctly, with subsequent iterations working on corrupted data.

Apply this diff to use a distinct variable name:

     for file in benchmark_result_list:
         with open(file, 'r') as openfile:
             data = pd.read_csv(openfile, index_col="test")
             data["commit_hash"] = commit_hash
             for index, rows in data.iterrows():
                 out_file = index.split(".")[0] + ".csv"
                 out_file_path = out_path + "/" + out_file
 
                 if os.path.exists(out_file_path):
-                    data = pd.read_csv(out_file_path)
-                    data = pd.concat([data, rows.to_frame().T], ignore_index=True)
-                    data.to_csv(out_file_path, index=False)
+                    existing_data = pd.read_csv(out_file_path)
+                    updated_data = pd.concat([existing_data, rows.to_frame().T], ignore_index=True)
+                    updated_data.to_csv(out_file_path, index=False)
                 else:
                     rows.to_frame().T.to_csv(out_file_path, index=False)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    for file in benchmark_result_list:
        with open(file, 'r') as openfile:
            data = pd.read_csv(openfile, index_col="test")
            data["commit_hash"] = commit_hash
            for index, rows in data.iterrows():
                out_file = index.split(".")[0] + ".csv"
                out_file_path = out_path + "/" + out_file

                if os.path.exists(out_file_path):
                    data = pd.read_csv(out_file_path)
                    data = pd.concat([data, rows.to_frame().T], ignore_index=True)
                    data.to_csv(out_file_path, index=False)
                else:
                    rows.to_frame().T.to_csv(out_file_path, index=False)

    for file in benchmark_result_list:
        with open(file, 'r') as openfile:
            data = pd.read_csv(openfile, index_col="test")
            data["commit_hash"] = commit_hash
            for index, rows in data.iterrows():
                out_file = index.split(".")[0] + ".csv"
                out_file_path = out_path + "/" + out_file

                if os.path.exists(out_file_path):
                    existing_data = pd.read_csv(out_file_path)
                    updated_data = pd.concat([existing_data, rows.to_frame().T], ignore_index=True)
                    updated_data.to_csv(out_file_path, index=False)
                else:
                    rows.to_frame().T.to_csv(out_file_path, index=False)
🤖 Prompt for AI Agents
In regression/save_benchmark_results.py around lines 13 to 26, the DataFrame
variable data (loaded from each input file on line 15) is accidentally
overwritten on line 22 when reading an existing output file; change the
temporary variable used for the existing output file (e.g., existing_df or
out_df) so you do not reassign data, then concat the current row onto that
existing_df and write existing_df back to disk; ensure the loop continues to use
the original data variable for remaining rows.


python ${CUOPT_SCRIPTS_DIR}/save_benchmark_results.py -b ${RESULTS_DIR} -o ${CUOPT_SCRIPTS_DIR} -c $1

cd ${CUOPT_SCRIPTS_DIR}; git add benchmarks/*; git commit -m "update benchmarks"; git push; cd -

⚠️ Potential issue | 🟠 Major

Add validation and error handling for git operations.

The git operations lack safeguards and could fail silently or push incomplete results. Potential issues include:

  • No check whether there are changes to commit (git commit will fail if nothing changed)
  • No validation that the Python script completed successfully
  • Generic commit message provides no context
  • Push could fail due to network issues, conflicts, or permissions

Consider this improved approach:

-cd ${CUOPT_SCRIPTS_DIR}; git add benchmarks/*; git commit -m "update benchmarks"; git push; cd -
+cd "${CUOPT_SCRIPTS_DIR}" || exit 1
+
+# Check if there are changes to commit
+if git diff --quiet benchmarks/; then
+    echo "No benchmark changes to commit"
+else
+    git add benchmarks/*
+    git commit -m "Update benchmarks for commit $1 at ${DATE}"
+    
+    # Push with retry logic or explicit error handling
+    if ! git push; then
+        echo "ERROR: Failed to push benchmark updates"
+        exit 1
+    fi
+fi
+
+cd - || exit 1

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In regression/save_benchmarks.sh around line 36, the one-line sequence
performing git add/commit/push must be hardened: first validate the Python
benchmark step succeeded (check its exit status and abort on failure), then only
stage and commit if there are actual changes (use git diff --quiet or git status
check) to avoid failed empty commits; construct a descriptive commit message
including timestamp and benchmark identifier rather than a generic string; run
git push and detect failures (network/auth/conflict) and retry or exit non‑zero
with a clear error log; ensure the script returns a nonzero status on any of
these failures so CI/automation can detect issues.

logger "Testing container image $IMAGE"
python -c "import cuopt; print(cuopt)"

trap "${SCRIPTS_DIR}/write-meta-data.sh" EXIT

⚠️ Potential issue | 🔴 Critical

Fix path typo and use single quotes in trap.

Two issues on this line:

  1. The variable reference appears to point to "regession/write-meta-data.sh" based on the comment context, but it should be "regression/write-meta-data.sh" (typo: "regession" → "regression")
  2. The trap uses double quotes, so variable expansion happens immediately rather than when the trap is triggered

Apply this diff:

-trap "${SCRIPTS_DIR}/write-meta-data.sh" EXIT
+trap '${SCRIPTS_DIR}/write-meta-data.sh' EXIT

Note: Verify the actual path used in SCRIPTS_DIR points to the correct "regression" directory.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
trap "${SCRIPTS_DIR}/write-meta-data.sh" EXIT
trap '${SCRIPTS_DIR}/write-meta-data.sh' EXIT
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 27-27: Use single quotes, otherwise this expands now rather than when signalled.

(SC2064)

🤖 Prompt for AI Agents
In regression/test-container.sh around line 27, fix the trap invocation: change
the path typo if present so the referenced script is
regression/write-meta-data.sh (not regession/...), use single quotes for the
trap command so variable expansion happens at trap time (trap
'${SCRIPTS_DIR}/write-meta-data.sh' EXIT), and verify SCRIPTS_DIR actually
points to the regression directory before committing.
