[TRTC-121] [feat] Add recipe selector UI to complement the recipe database #10125
base: main
Conversation
Signed-off-by: Venky Ganesh <[email protected]>
/bot run --disable-fail-fast

PR_Github #28986 [ run ] triggered by Bot. Commit:

PR_Github #28986 [ run ] completed with state
@@ -44,7 +44,7 @@ TensorRT LLM distributes the pre-built container on [NGC Catalog](https://catalo

You can launch the container using the following command:

```bash
-docker run --rm -it --ipc host -p 8000:8000 --gpus all --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/tensorrt-llm/release:x.y.z
+docker run --rm -it --ipc host -p 8000:8000 --gpus all --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/tensorrt-llm/release:1.2.0rc6
```
need to revert these changes that got applied unintentionally due to `make docs`
Signed-off-by: Venky Ganesh <[email protected]>
📝 Walkthrough

This PR introduces an interactive configuration selector component for the TensorRT-LLM documentation, including a Sphinx directive, a JavaScript-based UI widget, and refactored config-generation tooling. It updates deployment guides with concrete Docker image tags and integrates a new recipe selector interface that dynamically filters configurations based on user selections.

Changes

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (3)
tests/unittest/tools/test_generate_config_table.py (1)
26-26: Remove unused `noqa` directive.

Static analysis indicates that the `# noqa: E402` comment is no longer needed since the imports appear at module level after the `sys.path` manipulation. The E402 rule (module level import not at top of file) may not be triggering.

🔎 Proposed fix

```diff
-from generate_config_table import generate_json, generate_rst  # noqa: E402
+from generate_config_table import generate_json, generate_rst
```

docs/source/_ext/trtllm_config_selector.py (1)
23-29: Consider escaping HTML attribute values.

While the `models` and `config_db` values come from RST source files (not user input), it's good practice to escape values inserted into HTML attributes to prevent any edge cases with special characters.

🔎 Proposed fix using `html.escape`

```diff
+import html
+
 class TRTLLMConfigSelector(Directive):
     """Embed the interactive config selector widget."""

     has_content = False
     option_spec = {
         "models": directives.unchanged,
         "config_db": directives.unchanged,
     }

     def run(self):
         models = (self.options.get("models") or "").strip()
         config_db = (self.options.get("config_db") or "").strip()
         attrs = ['data-trtllm-config-selector="1"']
         if models:
-            attrs.append(f'data-models="{models}"')
+            attrs.append(f'data-models="{html.escape(models, quote=True)}"')
         if config_db:
-            attrs.append(f'data-config-db="{config_db}"')
+            attrs.append(f'data-config-db="{html.escape(config_db, quote=True)}"')
         html_str = f"<div {' '.join(attrs)}></div>"
         return [nodes.raw("", html_str, format="html")]
```

docs/source/_static/config_selector.js (1)
458-463: Review the copy button disabled logic.

At line 462, `cmdCopyBtn.disabled` is set based on `!e.command`, but `formatCommand()` (lines 157-165) generates the command from `model` and `config_path`, not from `e.command`. If an entry has `model` and `config_path` but no `command` field, the button would be disabled even though there's displayable content.

Consider aligning the disabled logic with what's actually displayed:

🔎 Proposed fix

```diff
 if (finalEntries.length === 1) {
   const e = finalEntries[0];
-  code.textContent = formatCommand(e);
-  cmdCopyBtn.disabled = !e.command;
+  const cmdText = formatCommand(e);
+  code.textContent = cmdText;
+  cmdCopyBtn.disabled = !cmdText;
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- docs/source/_ext/trtllm_config_selector.py (1 hunks)
- docs/source/_static/config_selector.css (1 hunks)
- docs/source/_static/config_selector.js (1 hunks)
- docs/source/commands/trtllm-serve/run-benchmark-with-trtllm-serve.md (1 hunks)
- docs/source/conf.py (6 hunks)
- docs/source/deployment-guide/config_table.rst (5 hunks)
- docs/source/deployment-guide/deployment-guide-for-deepseek-r1-on-trtllm.md (2 hunks)
- docs/source/deployment-guide/deployment-guide-for-gpt-oss-on-trtllm.md (2 hunks)
- docs/source/deployment-guide/deployment-guide-for-llama3.3-70b-on-trtllm.md (1 hunks)
- docs/source/deployment-guide/deployment-guide-for-llama4-scout-on-trtllm.md (1 hunks)
- docs/source/deployment-guide/index.rst (1 hunks)
- docs/source/deployment-guide/note_sections.rst (1 hunks)
- docs/source/quick-start-guide.md (1 hunks)
- scripts/generate_config_table.py (5 hunks)
- tests/unittest/tools/test_generate_config_table.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used
Python files should use snake_case naming: `some_file.py`
Python classes should use PascalCase naming: `class SomeClass`
Python functions and methods should use snake_case naming: `def my_awesome_function():`
Python local variables should use snake_case naming: `my_variable = ...`
Python variable names that start with a number should be prefixed with 'k': `k_99th_percentile = ...`
Python global variables should use upper snake_case with prefix 'G': `G_MY_GLOBAL = ...`
Python constants should use upper snake_case naming: `MY_CONSTANT = ...`
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings in Python for classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible, using the else block for logic
Files:
- docs/source/conf.py
- docs/source/_ext/trtllm_config_selector.py
- scripts/generate_config_table.py
- tests/unittest/tools/test_generate_config_table.py
**/*.{cpp,h,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the year of its latest meaningful modification
Files:
- docs/source/conf.py
- docs/source/_ext/trtllm_config_selector.py
- scripts/generate_config_table.py
- tests/unittest/tools/test_generate_config_table.py
🧠 Learnings (13)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
docs/source/deployment-guide/config_table.rst
📚 Learning: 2025-08-21T00:16:56.457Z
Learnt from: farshadghodsian
Repo: NVIDIA/TensorRT-LLM PR: 7101
File: docs/source/blogs/tech_blog/blog9_Deploying_GPT_OSS_on_TRTLLM.md:36-36
Timestamp: 2025-08-21T00:16:56.457Z
Learning: TensorRT-LLM container release tags in documentation should only reference published NGC container images. The README badge version may be ahead of the actual published container versions.
Applied to files:
- docs/source/deployment-guide/deployment-guide-for-llama4-scout-on-trtllm.md
- docs/source/quick-start-guide.md
- docs/source/deployment-guide/deployment-guide-for-llama3.3-70b-on-trtllm.md
- docs/source/commands/trtllm-serve/run-benchmark-with-trtllm-serve.md
- docs/source/deployment-guide/deployment-guide-for-gpt-oss-on-trtllm.md
- docs/source/deployment-guide/deployment-guide-for-deepseek-r1-on-trtllm.md
📚 Learning: 2025-11-27T09:23:18.742Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 9511
File: tests/integration/defs/examples/serve/test_serve.py:136-186
Timestamp: 2025-11-27T09:23:18.742Z
Learning: In TensorRT-LLM testing, when adding test cases based on RCCA commands, the command format should be copied exactly as it appears in the RCCA case, even if it differs from existing tests. For example, some RCCA commands for trtllm-serve may omit the "serve" subcommand while others include it.
Applied to files:
- docs/source/deployment-guide/deployment-guide-for-llama4-scout-on-trtllm.md
- docs/source/quick-start-guide.md
- docs/source/deployment-guide/deployment-guide-for-llama3.3-70b-on-trtllm.md
- docs/source/commands/trtllm-serve/run-benchmark-with-trtllm-serve.md
- docs/source/deployment-guide/deployment-guide-for-gpt-oss-on-trtllm.md
- docs/source/deployment-guide/deployment-guide-for-deepseek-r1-on-trtllm.md
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
- docs/source/deployment-guide/deployment-guide-for-llama4-scout-on-trtllm.md
- docs/source/quick-start-guide.md
- docs/source/deployment-guide/deployment-guide-for-llama3.3-70b-on-trtllm.md
- docs/source/deployment-guide/deployment-guide-for-gpt-oss-on-trtllm.md
- docs/source/deployment-guide/deployment-guide-for-deepseek-r1-on-trtllm.md
📚 Learning: 2025-09-23T15:12:38.312Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/thop/allreduceOp.cpp:352-446
Timestamp: 2025-09-23T15:12:38.312Z
Learning: In TensorRT-LLM NCCL device implementation, NCCL version 2.28+ requirements are handled at runtime in the nccl_device/config layer rather than with compile-time guards. This allows the allreduceOp to remain version-agnostic and delegates version compatibility validation to the appropriate lower-level components that can gracefully handle unsupported configurations.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-09-16T09:30:09.716Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7763
File: cpp/tensorrt_llm/CMakeLists.txt:297-301
Timestamp: 2025-09-16T09:30:09.716Z
Learning: In the TensorRT-LLM project, NCCL libraries are loaded earlier by PyTorch libraries or the bindings library, so the main shared library doesn't need NCCL paths in its RPATH - the libraries will already be available in the process address space when needed.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-08-27T14:23:55.566Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/modules/rms_norm.py:17-17
Timestamp: 2025-08-27T14:23:55.566Z
Learning: The TensorRT-LLM project requires Python 3.10+ as evidenced by the use of TypeAlias from typing module, match/case statements, and union type | syntax throughout the codebase, despite some documentation still mentioning Python 3.8+.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-08-11T20:09:24.389Z
Learnt from: achartier
Repo: NVIDIA/TensorRT-LLM PR: 6763
File: tests/integration/defs/triton_server/conftest.py:16-22
Timestamp: 2025-08-11T20:09:24.389Z
Learning: In the TensorRT-LLM test infrastructure, the team prefers simple, direct solutions (like hard-coding directory traversal counts) over more complex but robust approaches when dealing with stable directory structures. They accept the maintenance cost of updating tests if the layout changes.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-08-18T09:08:07.687Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 6984
File: cpp/tensorrt_llm/CMakeLists.txt:297-299
Timestamp: 2025-08-18T09:08:07.687Z
Learning: In the TensorRT-LLM project, artifacts are manually copied rather than installed via `cmake --install`, so INSTALL_RPATH properties are not needed - only BUILD_RPATH affects the final artifacts.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
docs/source/quick-start-guide.md
📚 Learning: 2025-08-27T17:50:13.264Z
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Applied to files:
docs/source/deployment-guide/deployment-guide-for-gpt-oss-on-trtllm.md
🧬 Code graph analysis (4)
docs/source/conf.py (1)
docs/source/_ext/trtllm_config_selector.py (1)
run(19-30)
docs/source/_ext/trtllm_config_selector.py (1)
docs/source/_static/config_selector.js (1)
html(134-134)
scripts/generate_config_table.py (1)
examples/configs/database/database.py (2)
`RecipeList` (52-64), `from_yaml` (54-58)
tests/unittest/tools/test_generate_config_table.py (1)
scripts/generate_config_table.py (2)
`generate_json` (263-289), `generate_rst` (186-260)
🪛 Ruff (0.14.8)
docs/source/conf.py
221-221: subprocess call: check for execution of untrusted input
(S603)
221-221: Starting a process with a partial executable path
(S607)
235-235: subprocess call: check for execution of untrusted input
(S603)
235-235: Starting a process with a partial executable path
(S607)
docs/source/_ext/trtllm_config_selector.py
14-17: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
tests/unittest/tools/test_generate_config_table.py
26-26: Unused noqa directive (non-enabled: E402)
Remove unused noqa directive
(RUF100)
🔇 Additional comments (30)
docs/source/commands/trtllm-serve/run-benchmark-with-trtllm-serve.md (1)
47-47: Verify this change is intentional.

A past review comment at this line indicated these changes were unintentionally applied due to `make docs` and should be reverted. However, this PR includes the same Docker tag update.

Based on past review comments, please confirm this change to `release:1.2.0rc6` is now intentional and aligns with the PR objectives.

docs/source/conf.py (5)
18-18: LGTM - Extension directory added to path.

The addition of the `_ext` directory to `sys.path` correctly enables the import of custom Sphinx extensions like `trtllm_config_selector`.
47-52: LGTM - Conditional C++ documentation handling.

The conditional logic properly detects the presence of C++ XML documentation and gracefully handles its absence by excluding `_cpp_gen/**` from the build. This prevents build failures when C++ docs aren't generated.
69-73: LGTM - Conditional extension loading.

The `trtllm_config_selector` extension is unconditionally loaded (which is correct for the new feature), while `breathe` is only loaded when C++ XML documentation is available. This prevents errors when building docs without C++ components.
154-158: LGTM - Breathe configuration properly guarded.

The breathe configuration is correctly wrapped in the `HAS_CPP_XML` guard with an empty fallback, preventing errors when C++ documentation is unavailable.
219-237: Static analysis false positives on subprocess calls.

The static analysis tool flags the `subprocess.run(['mkdir', '-p', ...])` calls as potential security issues (S603, S607). These are false positives because:

- The command is hardcoded (`mkdir -p`) with no user input
- All paths are derived from `SCRIPT_DIR` and internal constants
- No external or untrusted data flows into these calls
The implementation is safe and appropriate for documentation generation.
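If silencing the lint entirely is ever preferred over arguing the false positive, the same `mkdir -p` behavior is available in the standard library without any subprocess. A minimal sketch — the directory names below are illustrative stand-ins, not the actual values `conf.py` derives from `SCRIPT_DIR`:

```python
import os
import tempfile

# Illustrative stand-ins for the paths conf.py derives from SCRIPT_DIR.
script_dir = tempfile.mkdtemp()
output_dir = os.path.join(script_dir, "_cpp_gen", "xml")

# os.makedirs(..., exist_ok=True) behaves like `mkdir -p`: it creates
# intermediate directories and is a no-op when they already exist.
# No process is spawned, so S603/S607 cannot fire.
os.makedirs(output_dir, exist_ok=True)
os.makedirs(output_dir, exist_ok=True)  # idempotent second call

print(os.path.isdir(output_dir))  # → True
```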
docs/source/deployment-guide/note_sections.rst (1)
34-34: LGTM - Cross-reference updated for renamed section.

The reference to "Preconfigured Recipes" correctly aligns with the section rename in `docs/source/deployment-guide/index.rst`.

docs/source/deployment-guide/config_table.rst (2)
1-5: LGTM - Markers added for content inclusion.

The `start-config-table-note` and `end-config-table-note` markers enable selective inclusion of the traffic patterns note in `docs/source/deployment-guide/index.rst`, supporting the new documentation structure.
12-12: Verify heading level change is intentional.

The section header underline changed from `^` (subsection) to `~` (sub-subsection), which lowers the heading level. This may affect the document hierarchy and table of contents.

Please confirm this heading level change is intentional and produces the desired document structure.
docs/source/deployment-guide/index.rst (3)
103-105: LGTM - Section renamed for clarity.

The section rename from "Comprehensive Configuration Database" to "Preconfigured Recipes" improves clarity and aligns with the new structure that distinguishes between the interactive selector and the comprehensive table.
106-115: LGTM - Recipe selector integration.

The new `trtllm_config_selector` directive correctly integrates the interactive configuration selector UI. The directive renders a container that will be populated by the JavaScript component defined in `docs/source/_static/config_selector.js`.

The embedded traffic patterns note provides important context about ISL/OSL limits and chunked prefill support.
117-125: LGTM - Recipe database section restructured.

The existing configuration table is now properly sectioned under "Recipe database" with an appropriate reference label. The `:start-after: .. end-config-table-note` directive ensures the traffic patterns note isn't duplicated (since it's already shown in the Recipe selector section above).

docs/source/_static/config_selector.css (1)
1-130: LGTM - Well-structured CSS for the config selector widget.

The styling follows a consistent BEM-like naming convention, includes proper responsive grid layout, and handles interactive states (hover, disabled) appropriately. The copy button disabled state with `cursor: not-allowed` is a good accessibility touch.

tests/unittest/tools/test_generate_config_table.py (1)
29-81: LGTM - Test correctly validates synchronization of generated files.

The test appropriately:
- Checks for function availability before proceeding
- Validates existence of all required files
- Generates fresh content to temporary files for comparison
- Provides clear error messages with remediation steps
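The "generated artifact is in sync" pattern described above — regenerate into a temp location, compare with the committed file, and fail with a remediation hint — can be sketched roughly as follows. The `generate` function and file contents here are illustrative placeholders, not the actual test's code:

```python
# Sketch of a sync check between a committed artifact and a freshly
# generated one. `generate` stands in for the real generator
# (e.g. generate_rst / generate_json).
import pathlib
import tempfile

def generate(path: pathlib.Path) -> None:
    # Placeholder for deterministic generator output.
    path.write_text("row-a\nrow-b\n")

def check_in_sync(committed: pathlib.Path) -> bool:
    with tempfile.TemporaryDirectory() as tmp:
        fresh = pathlib.Path(tmp) / "fresh.out"
        generate(fresh)
        if fresh.read_text() != committed.read_text():
            # A clear message tells the developer how to fix the drift.
            raise AssertionError(
                f"{committed} is stale; re-run the generator and commit the result"
            )
    return True

# Usage: a committed copy that matches the generator output passes.
with tempfile.TemporaryDirectory() as tmp:
    committed = pathlib.Path(tmp) / "committed.out"
    committed.write_text("row-a\nrow-b\n")
    print(check_in_sync(committed))  # → True
```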
docs/source/deployment-guide/deployment-guide-for-deepseek-r1-on-trtllm.md (2)
50-50: Verify Docker image tag consistency.

Same as the GPT-OSS deployment guide - ensure `1.2.0rc6` is a published NGC container image tag.
433-460: Recipe selector and database structure is correct.

Both model variants (`deepseek-ai/DeepSeek-R1-0528` and `nvidia/DeepSeek-R1-0528-FP4-v2`) are properly referenced with all required anchors present in `config_table.rst` (lines 7, 146, 148, and 335).

docs/source/_ext/trtllm_config_selector.py (1)
33-37: LGTM - Standard Sphinx extension setup.

The setup function correctly registers the CSS/JS assets and the directive with appropriate parallel safety flags.
docs/source/_static/config_selector.js (4)
1-68: LGTM - Well-designed utility functions and DB loading.

Good patterns observed:

- Promise caching in `loadDb` prevents redundant fetches and handles race conditions correctly
- The `el()` helper cleanly handles various attribute types
- `defaultDbUrl()` has sensible fallback logic for script path detection
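The promise-caching idea in `loadDb` (the first caller starts the fetch; every later caller awaits the same in-flight result) translates to Python with asyncio roughly as follows — `fetch_json` is a stand-in for the real network call:

```python
# Sketch of "cache the in-flight load": concurrent callers share one
# task, so the underlying fetch runs at most once.
import asyncio

_db_task = None

async def fetch_json():
    await asyncio.sleep(0)          # pretend network fetch
    return {"recipes": ["a", "b"]}

async def load_db():
    global _db_task
    if _db_task is None:
        # First caller starts the fetch; later callers await the same
        # task, so concurrent calls never trigger duplicate requests.
        _db_task = asyncio.ensure_future(fetch_json())
    return await _db_task

async def main():
    r1, r2 = await asyncio.gather(load_db(), load_db())
    return r1 is r2  # same cached object

print(asyncio.run(main()))  # → True
```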
69-91: LGTM - Clipboard and HTML escaping utilities are correctly implemented.

The clipboard fallback using `execCommand` handles older browsers gracefully, and `escapeHtml` covers all necessary characters for XSS prevention.
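For comparison, the character set a correct `escapeHtml` must cover matches what Python's stdlib escapes — the five HTML-significant characters `&`, `<`, `>`, `"`, `'`:

```python
# html.escape with quote=True replaces all five HTML-significant
# characters with entities, making the result safe in text nodes
# and in quoted attribute values.
import html

sample = '<img src=x onerror="alert(\'&\')">'
escaped = html.escape(sample, quote=True)
print(escaped)
```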
93-155: LGTM - YAML syntax highlighting with proper HTML escaping.

The function correctly escapes all output via `escapeHtml` before inserting into the DOM, preventing XSS vulnerabilities even with malformed YAML content.
554-578: LGTM - Main initialization and event handling.

The initialization correctly:

- Handles multiple selector containers on a page
- Uses the first container's `data-config-db` for all widgets (shared DB)
- Provides error feedback if DB loading fails
- Handles both `loading` and already-loaded document states

docs/source/deployment-guide/deployment-guide-for-gpt-oss-on-trtllm.md (2)
381-404: Recipe selector and database sections are correct.

The new structure provides both an interactive selector and a static database reference. The required anchors `.. start-openai/gpt-oss-120b` and `.. end-openai/gpt-oss-120b` are present in `config_table.rst` (lines 337 and 1076 respectively), so the include directives will work as intended.
46-46: Update Docker image tag to a published NGC version or confirm the tag is available.

The referenced Docker image tag `1.2.0rc6` was not found in current NGC catalog documentation. Consider using a verified published tag such as `1.2.0rc5` or checking the NGC catalog directly to confirm availability before merging.
1-25: LGTM!

License header is present, imports are appropriate, and constants follow the required naming conventions.

28-44: LGTM!

Helper functions are well-structured with proper naming conventions. The dynamic import pattern is appropriate for avoiding circular dependencies while keeping the module importable in isolation.

47-63: LGTM!

Constants follow proper naming conventions and provide clean configuration for model metadata and performance thresholds.

92-112: LGTM!

The profile assignment logic correctly handles edge cases. The priority of checks ensures that for small lists (n=1 or n=2), profiles are assigned appropriately before the middle-position check is evaluated.

115-183: LGTM!

The `build_rows` function correctly groups, sorts, and transforms recipe data into structured `RecipeRow` objects. The nested defaultdict pattern and sorting logic are well-implemented.
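The nested-defaultdict grouping praised here (model, then GPU, with entries sorted by concurrency) can be sketched as below. The field names are illustrative, not the script's actual schema:

```python
# Sketch of grouping flat recipe entries into model -> gpu -> sorted
# rows using nested defaultdicts. Field names are illustrative.
from collections import defaultdict

recipes = [
    {"model": "m1", "gpu": "H100", "concurrency": 64},
    {"model": "m1", "gpu": "H100", "concurrency": 8},
    {"model": "m1", "gpu": "B200", "concurrency": 32},
]

grouped = defaultdict(lambda: defaultdict(list))
for r in recipes:
    grouped[r["model"]][r["gpu"]].append(r)

# Sort each (model, gpu) bucket by concurrency for stable table output.
for gpus in grouped.values():
    for entries in gpus.values():
        entries.sort(key=lambda r: r["concurrency"])

print([e["concurrency"] for e in grouped["m1"]["H100"]])  # → [8, 64]
```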
186-260: LGTM!

RST generation is well-structured with proper list-table formatting, correct include directives, and appropriate column widths.

292-300: LGTM!

The main block properly handles the missing YAML file case with a clear error message and generates both RST and JSON outputs.
```python
def _model_display_and_url(model: str) -> tuple[str, str]:
    if model in MODEL_INFO:
        info = MODEL_INFO[model]
        return info["display_name"], info["url"]
    return model, ""
```
Python 3.8 compatibility issue with type hints.
The type hint `tuple[str, str]` uses PEP 585 syntax available in Python 3.9+. Per coding guidelines, code should conform to Python 3.8+. The same issue applies to `list[int]` on line 92 and `list[RecipeRow]` on line 115.
🔎 Proposed fix for Python 3.8 compatibility
Add a from __future__ import annotations at the top of the file (after the license header) to enable postponed evaluation of annotations, which allows using these type hints in Python 3.8:
```diff
 # limitations under the License.
+from __future__ import annotations
+
 import json
 import os
```

Alternatively, use the typing module:
```diff
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass
+from typing import List, Tuple
 from pathlib import Path

-def _model_display_and_url(model: str) -> tuple[str, str]:
+def _model_display_and_url(model: str) -> Tuple[str, str]:

-def _profile_from_sorted_entries(concurrencies: list[int], idx: int) -> str:
+def _profile_from_sorted_entries(concurrencies: List[int], idx: int) -> str:

-def build_rows(yaml_path) -> list[RecipeRow]:
+def build_rows(yaml_path) -> List[RecipeRow]:
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In scripts/generate_config_table.py around lines 85-115, the function and other
annotations use PEP 585 generics like tuple[str, str], list[int], and
list[RecipeRow] which break on Python 3.8; fix by adding a single line "from
__future__ import annotations" right after the file license/header so
annotations are postponed (allowing PEP 585 syntax on 3.8), or if you prefer not
to use future imports, replace occurrences with typing.Tuple[str, str],
typing.List[int], and typing.List["RecipeRow"] and add the appropriate imports
from typing.
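The `from __future__ import annotations` route works because annotations are then stored as strings and never evaluated at runtime, so PEP 585 generics in annotations parse fine even on older interpreters. A quick standalone check (the `pair` function is a hypothetical example, not code from the script):

```python
# With postponed evaluation, annotations are stored as strings and never
# executed, so PEP 585 generics in annotations don't require Python 3.9+.
from __future__ import annotations

def pair(model: str) -> tuple[str, str]:
    return model, ""

print(pair.__annotations__["return"])  # → 'tuple[str, str]'
print(pair("m"))                       # → ('m', '')
```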
```python
source_path = Path(yaml_path)
source = (
    str(source_path.relative_to(REPO_ROOT))
    if source_path.is_relative_to(REPO_ROOT)
    else str(source_path)
)
```
Python 3.8 compatibility issue with Path.is_relative_to().
The Path.is_relative_to() method was added in Python 3.9. Per coding guidelines, code should conform to Python 3.8+.
🔎 Proposed fix for Python 3.8 compatibility
```diff
+def _is_relative_to(path: Path, parent: Path) -> bool:
+    """Check if path is relative to parent (Python 3.8 compatible)."""
+    try:
+        path.relative_to(parent)
+        return True
+    except ValueError:
+        return False
+
+
 def generate_json(yaml_path, output_file):
     rows = build_rows(yaml_path)
     source_path = Path(yaml_path)
     source = (
         str(source_path.relative_to(REPO_ROOT))
-        if source_path.is_relative_to(REPO_ROOT)
+        if _is_relative_to(source_path, REPO_ROOT)
         else str(source_path)
     )
```

📝 Committable suggestion
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _is_relative_to(path: Path, parent: Path) -> bool:
    """Check if path is relative to parent (Python 3.8 compatible)."""
    try:
        path.relative_to(parent)
        return True
    except ValueError:
        return False


source_path = Path(yaml_path)
source = (
    str(source_path.relative_to(REPO_ROOT))
    if _is_relative_to(source_path, REPO_ROOT)
    else str(source_path)
)
```
🤖 Prompt for AI Agents
In scripts/generate_config_table.py around lines 266 to 271,
Path.is_relative_to() is used which exists only in Python 3.9+; replace it with
a Python-3.8-compatible pattern: attempt to call
source_path.relative_to(REPO_ROOT) inside a try/except and on success use that
relative path string, and on ValueError fall back to str(source_path). This
preserves the original behavior without using is_relative_to().
Description
Example snapshot:
Summary by CodeRabbit
New Features
Documentation
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user friendly way for developers to interact with a Jenkins server.
Run
`/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`: Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`: Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`: Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.