
Conversation

@dhansen-nvidia (Collaborator) commented Oct 30, 2025

Description

New Feature: NUMA-aware CPU affinity autoconfiguration:

  • Introduces the ability for TRT-LLM to query NVML for the optimal CPU affinity for each worker, based on NUMA topology and CUDA device ID, and to set it accordingly
  • This feature is active only if the user/environment has not otherwise constrained CPU affinity and the user has not explicitly disabled it (by setting TLLM_NUMA_AWARE_WORKER_AFFINITY=0 or any value other than 1), or if the user has explicitly opted in by setting TLLM_NUMA_AWARE_WORKER_AFFINITY=1 (a sketch of this gating logic follows the list). Thus, if the user chooses to set affinity by way of mpirun, numactl, bindpcie, etc., that choice is still honored. If the user is unaware of such subtleties, TRT-LLM still gets optimal out-of-the-box CPU affinity
  • TRT-LLM has for some time contained code that explicitly cleared affinity, added because a Docker image update set CPU affinity in a problematic way (all workers affined to a single core). While such misconfiguration is still possible, preserving the user's intent is more important, so that code has been whittled down to just emitting a warning. If the warning shows unexpected CPU affinity, this should be treated as a misconfiguration issue and resolved accordingly (i.e., fix the Docker image)
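A minimal sketch of this gating logic, for illustration only (this is not the code in this PR): the function name maybe_set_numa_affinity is hypothetical, the exact precedence between the environment variable and an already-constrained affinity is paraphrased from the description above, and get_numa_aware_cpu_affinity refers to the NVML helper described in the walkthrough below.

```python
import os

import psutil


def maybe_set_numa_affinity(device_id, get_numa_aware_cpu_affinity):
    """Apply NUMA-aware CPU affinity unless the user constrained or disabled it."""
    env = os.environ.get("TLLM_NUMA_AWARE_WORKER_AFFINITY")
    if env is not None and env != "1":
        return  # explicitly disabled (0 or any non-1 value)

    proc = psutil.Process()
    current = set(proc.cpu_affinity())
    constrained = current != set(range(psutil.cpu_count()))

    if constrained and env != "1":
        # Affinity was already set (mpirun, numactl, bindpcie, container, ...):
        # honor it and only warn, so misconfigurations remain visible.
        print(f"Warning: worker CPU affinity is already constrained to {sorted(current)}")
        return

    cpu_ids = get_numa_aware_cpu_affinity(device_id)
    if cpu_ids:
        os.sched_setaffinity(0, cpu_ids)  # pin this worker to NUMA-local CPUs
```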

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force-run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enables access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
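For example, a minimal invocation and one that restricts the run to a single stage might look like this (illustrative only; the stage name is taken from the examples above):

```
/bot run
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
```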

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

Summary by CodeRabbit

  • Bug Fixes
    • Enhanced worker CPU affinity management with NUMA-aware scheduling for improved resource allocation.
    • Workers now detect and warn when CPU affinity is constrained.
    • Replaced legacy CPU affinity clearing with intelligent configuration during worker initialization.
    • CPU/GPU affinity is now consistently applied during inter-process communication setup for more reliable performance.

@coderabbitai bot (Contributor) commented Oct 30, 2025

📝 Walkthrough

Adds NUMA-aware CPU affinity configuration: new utility to derive CPU sets from NVML and a BaseWorker method that inspects current affinity and applies NUMA-aware affinity during worker rank/device setup; removes prior affinity-clearing logic.

Changes

| Cohort | File(s) | Change Summary |
|---|---|---|
| NUMA-aware CPU affinity utility | tensorrt_llm/llmapi/utils.py | Replaces previous affinity helpers with get_numa_aware_cpu_affinity(device_id), which uses NVML and bit-mask logic (ctypes, math) to return a Python list of CPU IDs. Removes set_sched_setaffinity and clear_sched_affinity. |
| Base worker affinity configuration | tensorrt_llm/executor/base_worker.py | Adds private _configure_affinity(self, device_id), which queries current affinity via psutil, logs if constrained, and applies NUMA-aware affinity (unless disabled). Imports os, psutil, and get_numa_aware_cpu_affinity. Invoked during _get_comm_ranks_device_id. |
| GPU worker integration | tensorrt_llm/executor/ray_gpu_worker.py | Calls self._configure_affinity(self.device_id) inside _get_comm_ranks_device_id after collecting communication ranks and device IDs. |
| Legacy affinity removal | tensorrt_llm/executor/worker.py | Removes the import and usage of clear_sched_affinity and deletes the startup affinity-clearing block; adjusts imports accordingly. |
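A rough sketch of what such a helper could look like, based only on the summary above (this is not the reviewed code): it assumes pynvml is installed and that nvmlDeviceGetCpuAffinity fills an array of unsigned-long-sized words with one bit per logical CPU.

```python
import ctypes
import math
import os

import pynvml


def get_numa_aware_cpu_affinity(device_id):
    """Return the CPU IDs that NVML reports as NUMA-local to the given CUDA device."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_id)
        # Size the mask array so that every logical CPU gets a bit.
        bits_per_word = ctypes.sizeof(ctypes.c_ulong) * 8
        num_words = math.ceil(os.cpu_count() / bits_per_word)
        mask_words = pynvml.nvmlDeviceGetCpuAffinity(handle, num_words)
        # Convert the bit mask into a flat list of CPU IDs.
        return [
            word_index * bits_per_word + bit
            for word_index, word in enumerate(mask_words)
            for bit in range(bits_per_word)
            if word & (1 << bit)
        ]
    finally:
        pynvml.nvmlShutdown()
```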

Sequence Diagram

sequenceDiagram
    participant Worker as Worker Process
    participant Base as BaseWorker._get_comm_ranks_device_id
    participant GPU as RayGPUWorker._get_comm_ranks_device_id
    participant Aff as BaseWorker._configure_affinity
    participant Ps as psutil
    participant NVML as NVML
    participant OS as os.sched_setaffinity

    Worker->>GPU: start/init
    GPU->>GPU: determine comm_ranks & device_id
    GPU->>Aff: _configure_affinity(device_id)
    Aff->>Ps: inspect current affinity
    Aff->>Aff: is affinity constrained?
    alt not constrained and NUMA enabled
        Aff->>NVML: get device CPU affinity mask
        NVML-->>Aff: mask
        Aff->>Aff: convert mask → CPU id list
        Aff->>OS: set process affinity to CPU ids
    else constrained or NUMA disabled
        Aff-->>Aff: log warning / skip setting
    end
    Aff-->>GPU: done
    GPU-->>Worker: continue startup

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Areas that need extra attention:

  • NVML usage and mask-to-CPU-id conversion in llmapi/utils.py (bit/word-size correctness across architectures).
  • Proper NVML init/shutdown and error handling when NVML is unavailable.
  • Race conditions or ordering impacts from configuring affinity inside _get_comm_ranks_device_id (multi-process/Ray scenarios).
  • Removal of clear_sched_affinity: verify no unintended affinity states remain for existing deployments.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 40.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check (⚠️ Warning): The PR description includes a well-written Description section that clearly explains the NUMA-aware CPU affinity feature, its activation logic, and the transition from clearing affinity to emitting warnings. However, the Test Coverage section is completely empty, which is an explicitly required section in the repository's template. The template specifically asks authors to "list clearly what are the relevant test(s) that can safeguard the changes in the PR," and this is particularly important for changes affecting core executor and worker logic with new dependencies (NVML, psutil). While the PR checklist mentions that test cases are provided for new code paths, no specific tests are described in the Test Coverage section, leaving a significant gap in the documentation. Resolution: complete the Test Coverage section by describing the relevant tests that validate the new NUMA-aware CPU affinity functionality and the changes to worker initialization, specifically the tests that verify the _configure_affinity() method, environment variable handling (TLLM_NUMA_AWARE_WORKER_AFFINITY), affinity constraint detection, NVML integration, and backward compatibility with existing affinity configurations set by tools like mpirun or numactl.

✅ Passed checks (1 passed)

  • Title Check (✅ Passed): The pull request title "[https://nvbugs/5527655][feat] Add NUMA-aware CPU affinity autoconfig" is directly related to the main changes in the changeset. It follows the required template format with a valid NVBugs ticket ID, a clear type indicator "[feat]", and a concise, specific summary of the primary change. The title accurately reflects the core objective of adding NUMA-aware CPU affinity management for workers, matching the file modifications across base_worker.py, ray_gpu_worker.py, worker.py, and llmapi/utils.py.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
tensorrt_llm/llmapi/utils.py (1)

532-532: Consider moving pynvml import to module level.

The pynvml import is placed inside the function, which is unconventional. While this might be intentional to avoid import errors on systems without NVIDIA drivers, it's typically better to import at module level and handle ImportError at the call site if needed.

If the lazy-import pattern is not required, apply this diff:

At the top of the file (after line 27):

import pynvml

Then remove line 532 from the function.

Alternatively, if lazy loading is desired, keep the current pattern but add a comment explaining why.
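If lazy loading is not required, one common way to keep the import at module level while still degrading gracefully on machines without NVML is a guarded import. This is an illustrative pattern rather than the code in this PR; the empty-list fallback is an assumption about how a caller might want to react.

```python
# Module-level import with graceful fallback when NVML bindings are absent.
try:
    import pynvml
except ImportError:  # no NVIDIA driver / NVML bindings installed
    pynvml = None


def get_numa_aware_cpu_affinity(device_id):
    if pynvml is None:
        return []  # caller keeps the default affinity
    ...  # NVML-based lookup as in the helper described above
```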

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ec31363 and ba2021a.

📒 Files selected for processing (4)
  • tensorrt_llm/executor/base_worker.py (3 hunks)
  • tensorrt_llm/executor/ray_gpu_worker.py (1 hunks)
  • tensorrt_llm/executor/worker.py (1 hunks)
  • tensorrt_llm/llmapi/utils.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
  • tensorrt_llm/executor/ray_gpu_worker.py
  • tensorrt_llm/llmapi/utils.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
  • tensorrt_llm/executor/ray_gpu_worker.py
  • tensorrt_llm/llmapi/utils.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
  • tensorrt_llm/executor/ray_gpu_worker.py
  • tensorrt_llm/llmapi/utils.py
🧠 Learnings (2)
📚 Learning: 2025-07-17T09:01:27.402Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.

Applied to files:

  • tensorrt_llm/executor/worker.py
  • tensorrt_llm/executor/base_worker.py
  • tensorrt_llm/executor/ray_gpu_worker.py
📚 Learning: 2025-09-24T03:31:28.908Z
Learnt from: tongyuantongyu
PR: NVIDIA/TensorRT-LLM#7520
File: tensorrt_llm/_torch/pyexecutor/resource_manager.py:605-613
Timestamp: 2025-09-24T03:31:28.908Z
Learning: In TensorRT-LLM Ray orchestrator mode, ProcessGroups are initialized with both Gloo and NCCL backends (e.g., "cuda:nccl,cpu:gloo"), allowing PyTorch distributed to automatically route CPU tensors through Gloo and GPU tensors through NCCL. This eliminates the need for manual device placement when performing allreduce operations on base types.

Applied to files:

  • tensorrt_llm/executor/ray_gpu_worker.py
🧬 Code graph analysis (3)
tensorrt_llm/executor/worker.py (1)
tensorrt_llm/llmapi/utils.py (4)
  • AsyncQueue (361-445)
  • ManagedThread (279-329)
  • _SyncQueue (448-515)
  • logger_debug (105-118)
tensorrt_llm/executor/base_worker.py (2)
tensorrt_llm/llmapi/utils.py (3)
  • get_numa_aware_cpu_affinity (518-549)
  • get (415-445)
  • get (498-515)
tensorrt_llm/executor/ray_gpu_worker.py (1)
  • _get_comm_ranks_device_id (188-200)
tensorrt_llm/executor/ray_gpu_worker.py (1)
tensorrt_llm/executor/base_worker.py (1)
  • _configure_affinity (97-123)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (5)
tensorrt_llm/executor/worker.py (1)

21-22: LGTM!

The import changes correctly remove the deprecated clear_sched_affinity and add the necessary utilities for the new queue and logging infrastructure.

tensorrt_llm/executor/ray_gpu_worker.py (1)

197-199: LGTM!

The affinity configuration is correctly placed after the device IDs are gathered and shared across workers, ensuring each worker can set its affinity based on the correct device.

tensorrt_llm/executor/base_worker.py (2)

5-5: LGTM!

The new imports (os, psutil, and get_numa_aware_cpu_affinity) are all used appropriately in the _configure_affinity method.

Also applies to: 11-11, 24-24


132-134: LGTM!

The affinity configuration is correctly invoked after the device ID is determined and inter-process coordination is complete.

tensorrt_llm/llmapi/utils.py (1)

3-3: LGTM!

The new imports (ctypes and math) are used appropriately for computing the affinity mask array size and bit manipulation.

Also applies to: 8-8

@pcastonguay pcastonguay requested a review from kaiyux October 30, 2025 18:12
@litaotju (Collaborator):

Do we have any performance numbers to show it actually helps?

@achartier (Collaborator) left a comment:

LGTM overall. Just need to address handling of NVML failures, as pointed out by CodeRabbit.

@dhansen-nvidia (Collaborator, Author) commented Oct 31, 2025

Do we have any performance numbers to show it actually helps?

I've collected some preliminary numbers, but I'd like to collect numbers from many different launches/runs, since the biggest impact is a reduction in variability, which is hard to assess from comparing even multiple iterations of a benchmark from a single launch of trtllm-serve, for example. I'll update the review with the numbers later today.

@dhansen-nvidia (Collaborator, Author) commented Oct 31, 2025

Here are the performance results. I launched trtllm-serve 3 separate times with each configuration (running sudo sysctl vm.drop_caches=3 between launches). The test case here is together.ai's original configuration from the bug (no piecewise CUDA graph, since the host overhead is more pronounced without it). Overall, I see a ~9% reduction in TTFT and a modest decrease in run-to-run variation (~1%):

TTFT per prompt, in ms; Launch 1-3 are three separate trtllm-serve launches.

| Prompt | NUMA-aware, Launch 1 | NUMA-aware, Launch 2 | NUMA-aware, Launch 3 | Unconstrained, Launch 1 | Unconstrained, Launch 2 | Unconstrained, Launch 3 | % Change, Launch 1 | % Change, Launch 2 | % Change, Launch 3 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 285.42 | 286.49 | 286.79 | 304.36 | 317.44 | 323.93 | | | |
| 2 | 260.08 | 259.16 | 257.25 | 274.77 | 289.26 | 295.2 | | | |
| 3 | 262.21 | 260.14 | 261.39 | 275.48 | 290.72 | 296.45 | | | |
| 4 | 270.69 | 265.09 | 268.18 | 291.68 | 297.68 | 304.47 | | | |
| 5 | 279.22 | 269.64 | 286.55 | 293.4 | 298.16 | 305.77 | | | |
| 6 | 262.88 | 262.98 | 265.46 | 279.08 | 293.43 | 299.81 | | | |
| 7 | 269.44 | 266.87 | 268.26 | 284.02 | 297.82 | 305.04 | | | |
| 8 | 275.69 | 273.44 | 274.42 | 291.87 | 306.3 | 310.64 | | | |
| 9 | 270.62 | 267.07 | 266.01 | 283.66 | 300.23 | 303.33 | | | |
| 10 | 272.98 | 269.36 | 268.71 | 285.29 | 299.68 | 305.69 | | | |
| Per-launch average | 270.92 | 268.02 | 270.30 | 286.361 | 299.072 | 305.033 | -5.39% | -10.38% | -11.39% |
| Per-launch coefficient of variation | 2.93% | 2.92% | 3.61% | 3.19% | 2.72% | 2.66% | -0.26% | 0.21% | 0.95% |

Overall average (all launches): NUMA-aware 269.75 ms vs. unconstrained 296.82 ms (-9.12%)
Overall coefficient of variation (all launches): NUMA-aware 3.09% vs. unconstrained 3.83% (-0.74%)

@dhansen-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #23398 [ run ] triggered by Bot. Commit: a12b774

@Superjomn (Collaborator):

In the default mode, can you add the trtllm-bench comparison between this PR and the main base? Llama 3.1 8B FP8 on H100 HBM3 TP1 should be sufficient. Just hope to double-confirm the default performance just works as expected.

@kaiyux kaiyux requested a review from dongxuy04 November 4, 2025 02:13
@dhansen-nvidia (Collaborator, Author):

In the default mode, can you add the trtllm-bench comparison between this PR and the main base? Llama 3.1 8B FP8 on H100 HBM3 TP1 should be sufficient. Just hope to double-confirm the default performance just works as expected.

Here are the results:

| ISL | OSL | Main | NUMA-aware affinity (TLLM_NUMA_AWARE_WORKER_AFFINITY unset) | Difference |
|---|---|---|---|---|
| 128 | 128 | 27754.3592 | 27877.8312 | 0.44% |
| 128 | 2048 | 21071.6126 | 21072.8544 | 0.01% |
| 128 | 4096 | 12893.7385 | 12899.3162 | 0.04% |
| 500 | 2000 | 17926.6645 | 17912.8494 | -0.08% |
| 1000 | 1000 | 15432.7683 | 15394.7287 | -0.25% |
| 1000 | 2000 | 13308.4586 | 13322.0467 | 0.10% |
| 2048 | 128 | 3375.8275 | 3380.3266 | 0.13% |
| 2048 | 2048 | 9389.6718 | 9395.3492 | 0.06% |
| 5000 | 500 | 3291.7497 | 3288.9687 | -0.08% |
| 20000 | 2000 | 1306.3732 | 1306.716 | 0.03% |

@dhansen-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #23671 [ run ] triggered by Bot. Commit: cf4b5f0

@tensorrt-cicd (Collaborator):

PR_Github #23671 [ run ] completed with state SUCCESS. Commit: cf4b5f0
/LLM/main/L0_MergeRequest_PR pipeline #17810 completed with status: 'FAILURE'

@Superjomn (Collaborator) left a comment:

LGTM

@dhansen-nvidia (Collaborator, Author):

/bot run

@tensorrt-cicd (Collaborator):

PR_Github #23688 [ run ] triggered by Bot. Commit: fb42c36

@tensorrt-cicd (Collaborator):

PR_Github #23688 [ run ] completed with state SUCCESS. Commit: fb42c36
/LLM/main/L0_MergeRequest_PR pipeline #17823 completed with status: 'SUCCESS'

@dhansen-nvidia (Collaborator, Author):

/bot skip --comment "CI already passed"

@tensorrt-cicd (Collaborator):

PR_Github #23770 [ skip ] triggered by Bot. Commit: 3c9f68a

@tensorrt-cicd (Collaborator):

PR_Github #23770 [ skip ] completed with state SUCCESS. Commit: 3c9f68a
Skipping testing for commit 3c9f68a

@pcastonguay pcastonguay merged commit ada93f1 into NVIDIA:main Nov 6, 2025
5 checks passed
dhansen-nvidia added a commit to dhansen-nvidia/TensorRT-LLM that referenced this pull request Nov 13, 2025
dhansen-nvidia added a commit to dhansen-nvidia/TensorRT-LLM that referenced this pull request Nov 13, 2025