
Conversation

@ixlmar
Collaborator

@ixlmar ixlmar commented Oct 20, 2025

Description

Restore the performance optimization introduced in #7730.

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break the top of tree.
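
For example, some illustrative invocations, composed only from the subcommands and flags documented above:

    /bot run
    /bot run --stage-list "A10-PyTorch-1"
    /bot run --gpu-type "A30, H100_PCIe" --disable-fail-fast
    /bot run --post-merge
    /bot skip --comment "Docs-only change"
    /bot kill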

Summary by CodeRabbit

  • Refactor
    • Optimized internal token handling and representation logic for improved efficiency and maintainability in the sampling system.

@ixlmar ixlmar requested review from a team as code owners October 20, 2025 10:46
@ixlmar ixlmar requested review from dcampora, lfr-0531 and yweng0828 and removed request for lfr-0531 October 20, 2025 10:46
@ixlmar
Collaborator Author

ixlmar commented Oct 20, 2025

/bot run

@tensorrt-cicd
Collaborator

PR_Github #21905 [ run ] triggered by Bot. Commit: d89df31

@coderabbitai
Contributor

coderabbitai bot commented Oct 20, 2025

📝 Walkthrough

The changes refactor token handling in the sampler system to use Python nested lists instead of PyTorch tensors for token parameters. Method signatures across add_token and draft token processing methods are updated, with conversion logic introduced to transition from tensor formats to list formats at ingestion points. Test files are updated to match the new calling conventions.

Changes

  • Sampler core implementation (tensorrt_llm/_torch/pyexecutor/sampler.py): Updated the add_token method signature to accept list[list[list[int]]] instead of torch.Tensor. Refactored the greedy, tree-based, and rejection-sampling draft token processing methods to accept both tensor and list forms (new_tokens_tensor and new_tokens_list) where applicable. Added conversion logic in update_requests and process_draft_tokens to call tolist() on host tokens. Internal logic updated to use the list-based token representation for extraction and insertion.

  • MTP sampler integration (tensorrt_llm/_torch/speculative/mtp.py): Updated the update_requests method to convert new_tokens to a Python list via tolist() before passing it to add_token and downstream processing.

  • Tree verification tests (tests/unittest/_torch/speculative/test_draft_token_tree_verification.py): Updated the test call to TorchSampler._process_draft_tokens_tree to pass both new_tokens_tensor (the original tensor) and new_tokens_list (the converted list).

  • Rejection sampling tests (tests/unittest/_torch/speculative/test_torch_rejection_sampling.py): Added casting of the torch.multinomial(...).item() result to int via typing.cast for type consistency.
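
To make the new flow concrete, here is a minimal, self-contained sketch of the conversion described above; the variable names and the [steps][seq_slots][beams] shape are assumptions taken from this summary (using Python 3.9+ annotation syntax for brevity), not the actual sampler code:

    import torch

    # Host-side tensor of sampled tokens; shape assumed to be [steps][seq_slots][beams].
    new_tokens = torch.zeros(2, 4, 1, dtype=torch.int64)

    # One bulk conversion at the ingestion point...
    new_tokens_list: list[list[list[int]]] = new_tokens.tolist()

    # ...so per-token reads are plain list indexing instead of Tensor.__getitem__.
    step, seq_slot, beam = 0, 1, 0
    new_token = new_tokens_list[step][seq_slot][beam]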

Sequence Diagram

sequenceDiagram
    participant Host
    participant Sampler as PyExecutor Sampler
    participant Token as Token Processor
    participant Draft as Draft Handler
    
    Note over Host,Sampler: Old flow (Tensor-based)
    Host->>Sampler: new_tokens: torch.Tensor
    Sampler->>Token: add_token(Tensor)
    
    Note over Host,Sampler: New flow (List-based)
    Host->>Sampler: new_tokens: torch.Tensor
    Sampler->>Sampler: new_tokens_list = new_tokens.tolist()
    Sampler->>Token: add_token(list[list[list[int]]])
    Note over Token: Extract tokens via direct indexing
    Token->>Draft: Token reference
    
    alt Drafting Strategy
        Sampler->>Draft: process_draft_tokens(tensor + list)
        rect rgb(200, 220, 240)
            Note over Draft: Greedy/Tree/Rejection-Sampling
            Draft->>Draft: Use list form for token access
            Draft->>Draft: Use tensor form for batch operations
        end
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description Check ⚠️ Warning: The PR description is largely incomplete relative to the template. The Description section contains only a single sentence ("Restore performance optimization introduced in #7730") without explaining what issue was addressed or why this solution is needed, and the Test Coverage section is empty even though multiple test files were modified. Resolution: expand the description to (1) explain what performance issue was introduced and why reverting to list[list[list[int]]] resolves it, and (2) list the specific tests affected (such as the updates in test_draft_token_tree_verification.py and test_torch_rejection_sampling.py), supplementing the bare reference to PR #7730 with concrete details about the optimization being restored.

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 9.09%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (1 passed)

  • Title Check ✅ Passed: The title "restore list[list[list[int]]] in add_token" accurately captures the main change: reverting add_token and related methods to a list-based token representation instead of torch tensors. It is concise, specific, uses technical notation appropriate to the change, and aligns with the stated PR objective of restoring a previously introduced performance optimization.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/speculative/mtp.py (1)

243-250: Python 3.8 compatibility: builtin generics in type hints

list[list[int]] requires Python 3.9+ unless from __future__ import annotations is enabled. Our guidelines target Python 3.8+. Use typing.List (or add the future import) to avoid runtime issues on 3.8.

Apply one of these fixes:

+from __future__ import annotations
 from dataclasses import dataclass

Or change the annotation, adding the import at the top of the module:

+from typing import List
 ...
-def _request_common_handling(self, request: LlmRequest, next_draft_tokens: list[list[int]]):
+def _request_common_handling(self, request: LlmRequest, next_draft_tokens: List[List[int]]):

As per coding guidelines.

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)

942-951: Use py_seq_slot consistently

These assignments index new_tokens_tensor with request.seq_slot. Elsewhere we use request.py_seq_slot for Python-side bookkeeping. Recommend unifying to py_seq_slot to avoid surprises if the underlying binding’s field diverges.

Apply:

-            new_tokens_tensor[i, request.seq_slot, self.BEAM] = new_token
+            new_tokens_tensor[i, request.py_seq_slot, self.BEAM] = new_token
...
-            new_tokens_tensor[num_accepted, request.seq_slot, self.BEAM] = new_token
+            new_tokens_tensor[num_accepted, request.py_seq_slot, self.BEAM] = new_token
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b818a91 and d89df31.

📒 Files selected for processing (4)
  • tensorrt_llm/_torch/pyexecutor/sampler.py (13 hunks)
  • tensorrt_llm/_torch/speculative/mtp.py (1 hunks)
  • tests/unittest/_torch/speculative/test_draft_token_tree_verification.py (1 hunks)
  • tests/unittest/_torch/speculative/test_torch_rejection_sampling.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/unittest/_torch/speculative/test_torch_rejection_sampling.py
  • tensorrt_llm/_torch/speculative/mtp.py
  • tests/unittest/_torch/speculative/test_draft_token_tree_verification.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/unittest/_torch/speculative/test_torch_rejection_sampling.py
  • tensorrt_llm/_torch/speculative/mtp.py
  • tests/unittest/_torch/speculative/test_draft_token_tree_verification.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/unittest/_torch/speculative/test_torch_rejection_sampling.py
  • tensorrt_llm/_torch/speculative/mtp.py
  • tests/unittest/_torch/speculative/test_draft_token_tree_verification.py
  • tensorrt_llm/_torch/pyexecutor/sampler.py
🧬 Code graph analysis (2)
tests/unittest/_torch/speculative/test_draft_token_tree_verification.py (1)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)
  • _process_draft_tokens_tree (725-822)
tensorrt_llm/_torch/pyexecutor/sampler.py (2)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)
  • LlmRequest (422-626)
tensorrt_llm/_torch/pyexecutor/sampling_utils.py (1)
  • sample_rejected (236-248)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tests/unittest/_torch/speculative/test_torch_rejection_sampling.py (1)

28-33: Typing-only cast is fine here

cast(int, torch.multinomial(...).item()) is a no-op at runtime but makes the type explicit for linters. The surrounding logic remains unchanged. LGTM.
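
For reference, a runnable illustration of the pattern (illustrative probabilities, not the test's actual inputs):

    from typing import cast

    import torch

    probs = torch.tensor([0.25, 0.75])
    # Tensor.item() is typed as returning a generic Number; the cast is a
    # runtime no-op that pins the type to int for static checkers.
    draft_token = cast(int, torch.multinomial(probs, num_samples=1).item())
    assert isinstance(draft_token, int)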

tensorrt_llm/_torch/speculative/mtp.py (1)

259-261: Converting host tensor to nested Python lists

Switching to tolist() aligns with the list-based add_token and avoids per-access tensor overhead. Just note this allocates a nested structure of size [steps][seq_slots][beam]; keep an eye on peak memory for large batches.

Please confirm state.host.new_tokens has shape (max_draft_len+1, max_num_requests, 1) so add_token(..., step=i) indexes correctly across both context and generation paths.

tests/unittest/_torch/speculative/test_draft_token_tree_verification.py (1)

48-53: Test updated to new API and list path

Passing both new_tokens_tensor and new_tokens_list matches the updated _process_draft_tokens_tree signature. Converting with .tolist() ensures the list index path is exercised. LGTM.

tensorrt_llm/_torch/pyexecutor/sampler.py (1)

1939-1975: Verify decoder_state tensor shape in TRTLLM implementation

The indexing pattern in add_token (line 299: new_tokens[step][seq_slot][beam]) expects shape [steps][sequences][beams], but state.host.new_tokens originates from self.store["decoder_state"].all_new_tokens, which is populated by the C++/CUDA backend. The shape of this tensor cannot be verified from the Python layer alone.

Review concern is valid: if the decoder populates the tensor with sequences multiplexed by beam (shape [steps][sequences*beam_width]) rather than separate dimensions, the indexing will misalign. Confirm the actual shape of decoder_state.all_new_tokens in the TRTLLM decoder implementation and verify it matches the indexing expectations in add_token.
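
A small, hypothetical sanity check of the expected layout; the shape and names here are assumptions drawn from this comment, not verified against the TRTLLM decoder:

    import torch

    # Stand-in for state.host.new_tokens with the assumed separate-beam layout:
    # [steps][sequences][beams], not [steps][sequences * beam_width].
    max_draft_len, max_num_requests, beam_width = 3, 8, 1
    new_tokens = torch.zeros(max_draft_len + 1, max_num_requests, beam_width,
                             dtype=torch.int64)

    new_tokens_list = new_tokens.tolist()
    step, seq_slot, beam = 0, 5, 0
    token = new_tokens_list[step][seq_slot][beam]  # the indexing add_token expects
    assert isinstance(token, int)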

@ixlmar ixlmar requested a review from kris1025 October 20, 2025 11:02
@dcampora dcampora enabled auto-merge (squash) October 20, 2025 11:34
Collaborator

@dcampora dcampora left a comment


LGTM

@ixlmar ixlmar changed the title [None][fix] restore list[list[list[int]]] in add_token [TRTLLM-8436][fix] restore list[list[list[int]]] in add_token Oct 20, 2025
Collaborator

@yweng0828 yweng0828 left a comment


Just curious, is there any difference in performance between accessing data using lists and tensors?

@ixlmar
Collaborator Author

ixlmar commented Oct 20, 2025

/bot run --add-multi-gpu-test

@tensorrt-cicd
Collaborator

PR_Github #21915 [ run ] triggered by Bot. Commit: d89df31

@tensorrt-cicd
Collaborator

PR_Github #21905 [ run ] completed with state ABORTED. Commit: d89df31
/LLM/main/L0_MergeRequest_PR pipeline #16513 completed with status: 'FAILURE'

@ixlmar
Collaborator Author

ixlmar commented Oct 20, 2025

> Just curious, is there any difference in performance between accessing data using lists and tensors?

Yes, this had been noticed in #7730. I could imagine that it has to do with C++ binding overheads in Tensor.__getitem__.
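
To illustrate the effect being discussed, a minimal micro-benchmark sketch (not from the PR; shapes and iteration counts are arbitrary):

    import timeit

    import torch

    t = torch.zeros(4, 256, 1, dtype=torch.int64)
    lst = t.tolist()

    # Per-element tensor indexing goes through Tensor.__getitem__ (a C++ binding
    # call that materializes intermediate tensor views); nested-list indexing is
    # plain Python and avoids that overhead.
    print(timeit.timeit(lambda: t[2][100][0].item(), number=100_000))
    print(timeit.timeit(lambda: lst[2][100][0], number=100_000))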

@tensorrt-cicd
Copy link
Collaborator

PR_Github #21915 [ run ] completed with state SUCCESS. Commit: d89df31
/LLM/main/L0_MergeRequest_PR pipeline #16520 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@dcampora dcampora merged commit 87eb508 into NVIDIA:main Oct 21, 2025
10 of 11 checks passed
@ixlmar ixlmar deleted the fix/revert-tensor-usage branch October 21, 2025 07:15
govind-ramnarayan pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Oct 21, 2025
yufeiwu-nv pushed a commit to yufeiwu-nv/TensorRT-LLM that referenced this pull request Oct 24, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
