Conversation

@yuxianq (Collaborator) commented Oct 27, 2025

Summary by CodeRabbit

  • Tests

    • Added new throughput benchmarking test configurations for DeepSeekR1 model with multi-GPU pipeline parallelism setup.
  • Chores

    • Enhanced performance profiling and monitoring instrumentation across pipeline execution stages for improved observability.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
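
For example, to re-run a single stage without fail-fast while reusing the last pipeline's build artifacts (the stage name below is illustrative), the options above combine as:

    /bot run --disable-fail-fast --stage-list "A10-PyTorch-1"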

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing a pipeline without care and validation can break the top of tree.

Signed-off-by: Yuxian Qiu <[email protected]>
@yuxianq requested review from jiaganc and kaiyux October 27, 2025 11:34
@yuxianq requested a review from a team as a code owner October 27, 2025 11:34
@yuxianq requested a review from amukkara October 27, 2025 11:37
@yuxianq (Collaborator, Author) commented Oct 27, 2025

/bot run --disable-fail-fast

@coderabbitai bot (Contributor) commented Oct 27, 2025

📝 Walkthrough

The changes add NVTX profiling instrumentation to request queue and pipeline parallel executor operations, reorganize microbatch synchronization timing in the executor loop to perform sampler event synchronization before inter-PP communication, and expand test coverage by adding new multi-GPU throughput test configurations.
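
For readers unfamiliar with the pattern, here is a minimal sketch of the decorator/context-manager instrumentation described above, built on torch.cuda.nvtx as a stand-in for tensorrt_llm._utils.nvtx_range; the helper and function names below are illustrative, not the PR's exact code.

    # Illustrative sketch only: an NVTX helper usable as a context manager or,
    # via the decorator factory, as a method annotation (ranges show up in
    # Nsight Systems timelines).
    from contextlib import contextmanager
    import functools

    import torch

    @contextmanager
    def nvtx_range(name: str):
        """Open an NVTX range for the enclosed block and close it on exit."""
        torch.cuda.nvtx.range_push(name)
        try:
            yield
        finally:
            torch.cuda.nvtx.range_pop()

    def nvtx_range_decorator(name: str):
        """Decorator form, mirroring how executor methods are annotated."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                with nvtx_range(name):
                    return fn(*args, **kwargs)
            return inner
        return wrap

    @nvtx_range_decorator("broadcast_new_requests")
    def broadcast_new_requests(requests):
        ...  # request broadcasting would be instrumented like this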

Changes

  • NVTX Instrumentation in Executors (tensorrt_llm/_torch/pyexecutor/executor_request_queue.py, tensorrt_llm/_torch/pyexecutor/py_executor.py): Added @nvtx_range decorators and context managers around request broadcasting, inter-PP payload receive/send operations, and sample state transmission for runtime profiling. Also modified sampler event synchronization timing in the pipeline parallel executor loop: cross-microbatch synchronization now occurs before Stage 2 inter-PP communication, and the immediate synchronization in the inter-PP forward path was removed.
  • Test Configuration Expansion (tests/integration/defs/accuracy/test_llm_api_pytorch.py): Added a new parameterization for test_nvfp4_multi_gpus with tp_size=1, pp_size=4, ep_size=1, mtp_nextn=1, fp8kv=True, attention_dp=False, enable_lm_head_tp_in_adp=True, cuda_graph=True, overlap_scheduler=True, max_batch_size=32, moe_backend="CUTLASS", under the identifier throughput_pp4_mtp (see the sketch after this list).
  • Test List Updates (tests/integration/test_lists/qa/llm_function_core.txt, tests/integration/test_lists/qa/llm_function_core_sanity.txt, tests/integration/test_lists/test-db/l0_dgx_b300.yml): Added the test identifiers nvfp4_multi_gpus[throughput_pp4_mtp], fp8_blockscale_chunked_prefill[latency], and fp8_blockscale_chunked_prefill[throughput] to the test lists and registered the corresponding entry in the DGX B300 test database with a 180-second timeout.
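
To make the new configuration concrete, a hypothetical pytest parameterization matching the summary above might look like the following; the real entry lives in tests/integration/defs/accuracy/test_llm_api_pytorch.py, and the tuple layout and test body here are assumptions, not the repo's exact code.

    # Hypothetical sketch of the added parameterization for the PP4 + MTP
    # throughput profile described above.
    import pytest

    @pytest.mark.parametrize(
        "tp_size,pp_size,ep_size,mtp_nextn,fp8kv,attention_dp,"
        "enable_lm_head_tp_in_adp,cuda_graph,overlap_scheduler,"
        "max_batch_size,moe_backend",
        [
            (1, 4, 1, 1, True, False, True, True, True, 32, "CUTLASS"),
        ],
        ids=["throughput_pp4_mtp"],
    )
    def test_nvfp4_multi_gpus(tp_size, pp_size, ep_size, mtp_nextn, fp8kv,
                              attention_dp, enable_lm_head_tp_in_adp,
                              cuda_graph, overlap_scheduler, max_batch_size,
                              moe_backend):
        ...  # build the LLM with these knobs and run the throughput benchmark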

Sequence Diagram

sequenceDiagram
    participant Executor as Executor Loop (PP)
    participant Stage2 as Stage 2:<br/>Inter-PP Comm
    participant Sampler as Sampler
    participant NextStage as Next PP Stage
    
    Note over Executor,Sampler: Microbatch N Processing
    Executor->>Sampler: Schedule microbatch N
    Sampler-->>Executor: sampler_event_N (async)
    
    Note over Executor,Sampler: Before Stage 2: Sync Previous Batch
    rect rgb(220, 240, 255)
        Note over Executor: NEW: synchronize<br/>sampler_event_(N-1)<br/>before Stage 2
        Executor->>Sampler: Wait for sampler_event_(N-1)
        Sampler-->>Executor: Complete
    end
    
    Note over Executor,Stage2: Stage 2: Inter-PP Communication
    rect rgb(240, 220, 255)
        Note over Stage2: Wrap send in nvtx_range<br/>("send_sample_state")
        Stage2->>NextStage: isend_object(sample_state_N)
        NextStage-->>Stage2: Sent
    end
    
    Note over Executor,Sampler: Continue with next microbatch
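
In code form, the reordering shown in the diagram amounts to something like the following schematic, reusing the nvtx_range helper sketched earlier. The micro-batch bookkeeping and the dist.isend_object signature are assumptions drawn from the summary, not the PR's exact implementation.

    # Schematic only: synchronize microbatch N-1's sampler event BEFORE Stage 2,
    # so its sample state is complete when shipped to the next PP rank.
    def run_stage2(micro_batches, microbatch_id, num_micro_batches, dist):
        prev_id = (microbatch_id - 1) % num_micro_batches
        previous_batch = micro_batches[prev_id]
        if previous_batch is not None:
            with nvtx_range("sync_previous_sampler_event"):
                # sampler_event is assumed to be a torch.cuda.Event
                previous_batch.sample_state.sampler_event.synchronize()

        # Stage 2: inter-PP communication, no longer stalled by an implicit
        # synchronization inside the forward path.
        batch = micro_batches[microbatch_id]
        if batch is not None and not dist.is_last_pp_rank:
            with nvtx_range("send_sample_state"):
                dist.isend_object(batch.sample_state, dist.next_pp_rank)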

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

  • Profiling instrumentation additions (@nvtx_range decorators, context managers): Low complexity, minimal logic impact
  • Synchronization reordering in py_executor.py: Requires verification that moving sampler event synchronization before Stage 2 doesn't introduce deadlocks or performance regressions; validate the removal of immediate synchronization in _forward_step_inter_pp is safe
  • Test configuration and list updates: Straightforward parameter additions and test identifier registrations with no logic changes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description Check (⚠️ Warning): The PR description is entirely unfilled and contains only the template structure with placeholder comments and the GitHub Bot Help section. None of the required sections from the description template have been completed: the "Description" section (which should explain the issue and solution) is empty, the "Test Coverage" section (which should list relevant tests) is empty, and the PR Checklist remains completely unchecked. Reviewers therefore have none of the content needed to understand the motivation, implementation, and testing approach for the changes.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 40.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

  • Title Check (✅ Passed): The PR title "[https://nvbugs/5599515][fix] Fix PP bubbles." is directly related to the main changes in the pull request. The raw summary indicates that the changes focus on reducing pipeline parallel (PP) execution inefficiencies through NVTX instrumentation, cross-microbatch synchronization points (sampler_event synchronization), and new test configurations for PP=4 scenarios. The title clearly identifies the core objective of fixing PP bubbles, follows the proper format with a valid NVBugs ticket reference and lowercase type "[fix]", and is concise and clear enough that a teammate scanning the history would understand the primary focus.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

927-933: Consider renaming the variable to clarify intent.

The variable prev_microbatch_id is calculated here and then recalculated with a different offset at lines 938-940. While the logic appears correct (synchronizing the sampler event for microbatch N-1 before processing Stage 2 for a different microbatch), reusing the variable name could be confusing for maintainability.

Consider using distinct variable names to clarify the different semantic purposes:

-                prev_microbatch_id = (microbatch_id -
-                                      1) % self.num_micro_batches
-                previous_batch = self.micro_batches[prev_microbatch_id]
+                sync_microbatch_id = (microbatch_id -
+                                      1) % self.num_micro_batches
+                previous_batch = self.micro_batches[sync_microbatch_id]
                 if previous_batch is not None:
                     with nvtx_range("sync_previous_sampler_event"):
                         previous_batch.sample_state.sampler_event.synchronize()
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1026069 and dc477dd.

📒 Files selected for processing (6)
  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (2 hunks)
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py (1 hunks)
  • tests/integration/test_lists/qa/llm_function_core.txt (1 hunks)
  • tests/integration/test_lists/qa/llm_function_core_sanity.txt (1 hunks)
  • tests/integration/test_lists/test-db/l0_dgx_b300.yml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.
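
A tiny illustration of a few of these rules (module-namespace imports, upper SNAKE_CASE constants, and a narrow try/except with the main logic in else); the tensorrt_llm.mapping module and its next_pp_rank attribute are real, but the function below is hypothetical.

    # Hypothetical example illustrating the guidelines above.
    from tensorrt_llm import mapping  # keep the module namespace when importing

    DEFAULT_RANK = -1  # constants in upper SNAKE_CASE

    def next_rank_or_default(world_size: int = 4, pp_size: int = 4) -> int:
        """Return the next pipeline-parallel rank, or DEFAULT_RANK on failure."""
        try:
            pp_mapping = mapping.Mapping(world_size=world_size, pp_size=pp_size)
        except ValueError:  # catch the most specific exception possible
            return DEFAULT_RANK
        else:
            return pp_mapping.next_pp_rank  # main logic lives in the else branch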

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/pyexecutor/executor_request_queue.py
  • tensorrt_llm/_torch/pyexecutor/py_executor.py
  • tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧠 Learnings (1)
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
PR: NVIDIA/TensorRT-LLM#7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/integration/test_lists/qa/llm_function_core.txt
  • tests/integration/test_lists/qa/llm_function_core_sanity.txt
🧬 Code graph analysis (2)
tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (3)
tensorrt_llm/_utils.py (1)
  • nvtx_range (904-923)
tensorrt_llm/_torch/distributed/communicator.py (9)
  • recv_object (376-377)
  • recv_object (592-605)
  • prev_pp_rank (95-96)
  • is_last_pp_rank (79-80)
  • isend_object (373-374)
  • isend_object (613-625)
  • next_pp_rank (91-92)
  • send_object (370-371)
  • send_object (608-610)
tensorrt_llm/mapping.py (3)
  • prev_pp_rank (268-272)
  • is_last_pp_rank (256-257)
  • next_pp_rank (274-278)
tensorrt_llm/_torch/pyexecutor/py_executor.py (3)
tensorrt_llm/_utils.py (1)
  • nvtx_range (904-923)
tensorrt_llm/_torch/distributed/communicator.py (3)
  • isend_object (373-374)
  • isend_object (613-625)
  • next_pp_rank (91-92)
tensorrt_llm/mapping.py (1)
  • next_pp_rank (274-278)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (7)
tests/integration/test_lists/qa/llm_function_core_sanity.txt (1)

42-42: Sanity list includes throughput_pp4_mtp — OK.

Duplication with core list is intentional for different testing contexts. Based on learnings.

tests/integration/test_lists/qa/llm_function_core.txt (1)

491-491: throughput_pp4_mtp entry verified — ready to merge.

The entry is correctly defined in the test parameters and consistently referenced across all configuration files (core list, sanity list, and test-db).

tests/integration/defs/accuracy/test_llm_api_pytorch.py (2)

2080-2082: Appended ID verified across all test definitions and lists—no inconsistencies detected.

The verification confirms throughput_pp4_mtp is correctly:

  • Defined in the test parameter list (line 2081)
  • Referenced consistently in all test configuration files with uniform naming and ordering
  • Properly integrated with other throughput variants

2064-2075: Configuration verified as correct and properly integrated.

All technical checks pass:

  • Param tuple correctly placed with tp=1, pp=4, ep=1, mtp_nextn=1 (line 2064–2075)
  • enable_lm_head_tp_in_adp=False is safe with tp=1 (no-op)
  • skip_less_mpi_world_size(4) matches pp_size=4
  • Corresponding ID "throughput_pp4_mtp" exists (line 2081)

The attention_dp=True with mtp_nextn=1 combination is a design choice; confirm with domain experts whether this is the intended NVFP4 throughput profile for PP4.

tests/integration/test_lists/test-db/l0_dgx_b300.yml (1)

76-76: Test entry is valid and correctly configured — approve.

The test ID throughput_pp4_mtp is confirmed in the parametrization at line 2080 of test_llm_api_pytorch.py. The timeout value (180s) matches adjacent test variants. The GB110 hardware filtering mentioned in the original review is already explicitly defined in the file-level condition block (gpu wildcard: *gb110*, system_gpu_count: 4), confirming this is intentional by design for the l0_dgx_b300.yml target configuration.

tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

961-966: LGTM! NVTX instrumentation improves observability.

The added profiling instrumentation for the inter-PP sample state communication will help identify communication bottlenecks in pipeline parallel execution.

tensorrt_llm/_torch/pyexecutor/executor_request_queue.py (1)

559-559: LGTM! NVTX instrumentation enhances profiling capabilities.

The added NVTX profiling around request broadcasting and inter-PP communication provides valuable observability for diagnosing pipeline parallel bottlenecks. The instrumentation is non-invasive and follows the same pattern used in py_executor.py.

Also applies to: 578-591

@tensorrt-cicd (Collaborator)

PR_Github #22643 [ run ] triggered by Bot. Commit: dc477dd

@amukkara (Collaborator) left a comment

LGTM

@tensorrt-cicd (Collaborator)

PR_Github #22643 [ run ] completed with state SUCCESS. Commit: dc477dd
/LLM/main/L0_MergeRequest_PR pipeline #17069 completed with status: 'FAILURE'

@yuxianq (Collaborator, Author) commented Oct 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22695 [ run ] triggered by Bot. Commit: 65f1d15

Signed-off-by: Yuxian Qiu <[email protected]>
@yuxianq (Collaborator, Author) commented Oct 28, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22699 [ run ] triggered by Bot. Commit: 2dc85b4

@tensorrt-cicd (Collaborator)

PR_Github #22695 [ run ] completed with state ABORTED. Commit: 65f1d15
LLM/main/L0_MergeRequest_PR #17113 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #22699 [ run ] completed with state SUCCESS. Commit: 2dc85b4
/LLM/main/L0_MergeRequest_PR pipeline #17116 completed with status: 'FAILURE'

@kaiyux (Member) commented Oct 28, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22792 [ run ] triggered by Bot. Commit: 2dc85b4

@yuxianq (Collaborator, Author) commented Oct 29, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22826 [ run ] triggered by Bot. Commit: ce45e46

@tensorrt-cicd (Collaborator)

PR_Github #22792 [ run ] completed with state ABORTED. Commit: 2dc85b4
LLM/main/L0_MergeRequest_PR #17189 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #22826 [ run ] completed with state SUCCESS. Commit: ce45e46
/LLM/main/L0_MergeRequest_PR pipeline #17217 completed with status: 'FAILURE'

@yuxianq (Collaborator, Author) commented Oct 29, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22863 [ run ] triggered by Bot. Commit: ce45e46

@tensorrt-cicd (Collaborator)

PR_Github #22863 [ run ] completed with state SUCCESS. Commit: ce45e46
/LLM/main/L0_MergeRequest_PR pipeline #17243 completed with status: 'FAILURE'

@yuxianq (Collaborator, Author) commented Oct 30, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22963 [ run ] triggered by Bot. Commit: d907eff

@tensorrt-cicd (Collaborator)

PR_Github #22963 [ run ] completed with state SUCCESS. Commit: d907eff
/LLM/main/L0_MergeRequest_PR pipeline #17313 completed with status: 'FAILURE'

@yuxianq (Collaborator, Author) commented Oct 30, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #23021 [ run ] triggered by Bot. Commit: a6f9ec6

@tensorrt-cicd (Collaborator)

PR_Github #23021 [ run ] completed with state SUCCESS. Commit: a6f9ec6
/LLM/main/L0_MergeRequest_PR pipeline #17359 completed with status: 'FAILURE'

@yuxianq (Collaborator, Author) commented Oct 31, 2025

/bot skip --comment "The failed test accuracy/test_llm_api_pytorch.py::TestDeepSeekV32::test_nvfp4_multi_gpus[baseline] is unrelated to this PR, skip CI"

@tensorrt-cicd (Collaborator)

PR_Github #23093 [ skip ] triggered by Bot. Commit: a6f9ec6

@tensorrt-cicd (Collaborator)

PR_Github #23093 [ skip ] completed with state SUCCESS. Commit: a6f9ec6
Skipping testing for commit a6f9ec6

@yuxianq merged commit 025d292 into NVIDIA:main on Oct 31, 2025
5 checks passed
fredricz-20070104 pushed a commit to fredricz-20070104/TensorRT-LLM that referenced this pull request Nov 5, 2025