
Conversation


@syuoni syuoni commented Dec 1, 2025

[TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP

Description

This PR enables CuteDSL MoE with Large EP (based on ConfigurableMoE #9486):

  • Alltoall comm
  • EPLB

It supports NVFP4 on B200/GB200.

cat > extra_llm_api_options.yaml <<EOF
enable_attention_dp: true
enable_lm_head_tp_in_adp: true
cuda_graph_config:
  max_batch_size: 128
  enable_padding: true
moe_config:
  backend: CUTEDSL
  max_num_tokens: 9216
  load_balancer:
    layer_updates_per_iter: 1
    num_slots: 288
speculative_config:
  decoding_type: MTP
  num_nextn_predict_layers: 3
EOF

trtllm-eval --model nvidia/DeepSeek-R1-FP4 \
    --tp_size 32 \
    --ep_size 32 \
    --max_num_tokens 6144 \
    --max_seq_len 6144 \
    --kv_cache_free_gpu_memory_fraction 0.8 \
    --extra_llm_api_options extra_llm_api_options.yaml \
    gsm8k

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since a lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

Release Notes

New Features

  • Enhanced MOE (Mixture of Experts) kernel optimization with dynamic per-SM block calculation for improved performance
  • Added output memory initialization capability for MOE operations

Improvements

  • Modernized CUDA synchronization from inline assembly to runtime APIs
  • Improved MOE backend selection logic and integration
  • Added input validation checks for MOE operations

Tests

  • New test coverage for MOE output memory operations


@syuoni syuoni force-pushed the cutedsl-wideep branch 2 times, most recently from 4b6b292 to 5b3f90b on December 2, 2025 06:31
@syuoni syuoni self-assigned this Dec 2, 2025
@syuoni syuoni marked this pull request as ready for review December 2, 2025 13:01
@syuoni syuoni requested review from a team as code owners December 2, 2025 13:01

syuoni commented Dec 2, 2025

/bot run --disable-fail-fast

@tensorrt-cicd

PR_Github #26618 [ run ] triggered by Bot. Commit: 54383d4


coderabbitai bot commented Dec 2, 2025

📝 Walkthrough

Walkthrough

This PR introduces dynamic CUDA occupancy-aware kernel launch sizing, adds MOE output memset functionality, replaces inline assembly synchronization with CUDA runtime APIs, and extends Torch bindings and MoE backend support for CuteDslFusedMoE with load-balancer integration.

Changes

Cohort / File(s) Summary
CUDA Occupancy Utility
cpp/include/tensorrt_llm/common/cudaUtils.h
Added getMaxActiveBlocksPerSM template function with static cache (unordered_map) to memoize occupancy results per kernel, reducing redundant cudaOccupancyMaxActiveBlocksPerMultiprocessor calls.
MOE Kernel Launch & Synchronization
cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu, cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
Replaced inline assembly grid-dependency synchronization (griddepcontrol.*) with CUDA runtime APIs (cudaGridDependencySynchronize, cudaTriggerProgrammaticLaunchCompletion). Replaced fixed per-SM block calculations with dynamic sizing using getMaxActiveBlocksPerSM in kernel launchers (moePermute, moeActivation, expandInputRowsKernelLauncher, finalizeMoeRoutingKernelLauncher).
MOE Output Memset Implementation
cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu, cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
Introduced moeOutputMemsetKernel (device kernel) and moeOutputMemset (launch wrapper) to allocate and zero-initialize MOE output tensors per token. Added instantiation macro and explicit templates for half and bf16 types.
Torch MOE Bindings
cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
Added moe_output_memset_inplace PyTorch operation with input validation (2D tensor, int32 indices). Enhanced moe_permute input validation to check permuted_idx_to_expanded_idx is int32. Registered operation in CUDA backend and Torch library.
Autotuner & Compilation Utilities
tensorrt_llm/_torch/autotuner.py, tensorrt_llm/_torch/compilation/utils.py
Updated fallback log message in autotuner to include operation name. Added inplace tensor mappings for moe_output_memset_inplace (input mutated) and cute_dsl_nvfp4_grouped_gemm_finalize_blackwell (output mutated).
Custom Ops & Fusion
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
Extended finalize and grouped GEMM entry points to accept optional output tensors; propagate output through fusion pipeline. Updated mutates_args to include output. Enhanced shape validations and tuning constraints to accommodate output tensor flow.
Blockscaled GEMM Fusion
tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py
Introduced intermediate-scale computation (scale_interm_size) and adjusted c_sf tensor layout dimensions to align with new scale-internal sizing while preserving A/B/C tensor shapes.
MoE Backend Selection & Configuration
tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py, tensorrt_llm/_torch/modules/fused_moe/create_moe.py
Replaced string-based backend checks with isinstance-based discrimination. Added CuteDslFusedMoE import and handling. Extended backend selection to route to CuteDslFusedMoE when quantization config supports fp8_block_scales or nvfp4. Expanded load-balancer compatibility to include CuteDslFusedMoE. Updated warning/fallback messaging.
CuteDsl MoE Implementation
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
Added init_load_balancer parameter to constructor. Consolidated forward path by introducing quantize_input, run_moe_nvfp4, run_moe_fp8_block_scales, and run_moe dispatcher methods. Removed specialized forward_chunk_* variants; unified path now performs routing, quantization, optional DP allgather, then dispatches via run_moe.
FP4 Weight Loading
tensorrt_llm/_torch/modules/fused_moe/quantization.py
Added load_expert_w3_w1_weight and load_expert_w3_w1_weight_scale_nvfp4 methods to NVFP4CuteDslFusedMoEMethod. These interleave and store W3/W1 weights and scales into destination tensors for SwiGLU fusion, modifying interleave dimension handling.
Eval Command
tensorrt_llm/commands/eval.py
Added four new BuildConfig parameters (max_batch_size, max_num_tokens, max_beam_width, max_seq_len) to llm_args dictionary; propagates config values from BuildConfig into public LLM construction.
Tests
tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
Added test_moe_output_memset_inplace test covering multiple configurations (tile_size, top_k, num_tokens, dtype). Updated test_nvfp4_grouped_gemm_finalize_blackwell call site to pass None for new output parameter.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Key areas requiring attention:

  • Kernel launch sizing logic (moeUtils.cu, moe_kernels.cu): Verify occupancy calculation correctness and that dynamic block counts do not exceed available GPU resources across different hardware SKUs.
  • Output memset kernel correctness: Validate the moeOutputMemsetKernel implementation and synchronization pattern (grid dependency synchronization replacement).
  • Torch tensor binding contracts: Verify shape/dtype validation logic in moe_output_memset_inplace and output tensor propagation through fusion pipeline in cute_dsl_custom_ops.py.
  • FP4 weight interleaving: Review interleave dimension changes (1 → 0) and scale layout transformations in quantization.py to ensure compatibility with SwiGLU fusion kernel expectations.
  • Backend selection logic: Confirm CuteDslFusedMoE routing conditions (fp8_block_scales/nvfp4 checks) and load-balancer compatibility constraints in create_moe.py.
  • Forward consolidation: Ensure the unified forward path and dispatcher methods (run_moe*) preserve behavior parity with previous specialized paths.

Suggested reviewers

  • limin2021
  • zongfeijing
  • hyukn
  • kaiyux
  • djns99
  • dongxuy04
  • xxi-nv
  • yuxianq

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 21.28%, which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title clearly and specifically describes the main feature being introduced: enabling CuteDSL MoE with Large EP support.
  • Description check ✅ Passed: The PR description is mostly complete, with a clear title, objectives, and a test coverage placeholder, but the Test Coverage and PR Checklist sections are incomplete or contain generic items.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tensorrt_llm/_torch/modules/fused_moe/create_moe.py (1)

381-385: Inconsistent error message.

The error message at line 385 only mentions TRTLLMGenFusedMoE, but ConfigurableMoE now also supports CuteDslFusedMoE (as shown at line 352 and in the warning at line 379). Update for consistency.

Apply this diff:

             else:
                 # For other incompatible backends, raise error
                 raise ValueError(
                     f"ENABLE_CONFIGURABLE_MOE is set but backend {moe_cls.__name__} is not supported. "
-                    f"ConfigurableMoE only supports TRTLLMGenFusedMoE backend.")
+                    f"ConfigurableMoE only supports TRTLLMGenFusedMoE and CuteDslFusedMoE backends.")
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (1)

1219-1246: Fake function ignores provided output parameter.

The fake function signature includes output: Optional[torch.Tensor] but always creates and returns a new tensor. Since mutates_args=("output",) is declared, the fake function should match the semantics by returning the provided output when non-None.

 def _(
     input: torch.Tensor,
     weight: torch.Tensor,
     input_scale: torch.Tensor,
     weight_scale: torch.Tensor,
     alpha: torch.Tensor,
     output: Optional[torch.Tensor],
     tile_idx_to_group_idx: torch.Tensor,
     tile_idx_to_mn_limit: torch.Tensor,
     permuted_idx_to_expanded_idx: torch.Tensor,
     num_non_exiting_tiles: torch.Tensor,
     token_final_scales: torch.Tensor,
     num_experts: int,
     top_k: int,
     num_local_experts: int,
     local_expert_offset: int,
     tile_size: int,
     output_dtype: torch.dtype,
     scaling_vector_size: int = 16,
 ) -> torch.Tensor:
+    if output is not None:
+        return output
     num_tokens = token_final_scales.size(0)
     n = weight.size(1)
     return torch.empty(num_tokens,
                        n,
                        dtype=output_dtype,
                        device=input.device)
🧹 Nitpick comments (4)
tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py (1)

50-55: Backend kwargs wiring looks good, but TRTLLMGen comment is misleading

  • The new isinstance-based dispatch in _get_backend_kwargs and the CuteDslFusedMoE branch that forwards enable_alltoall into backend kwargs are consistent with the updated CuteDslFusedMoE.run_moe(...) API and communication handling.
  • The TRTLLMGen branch correctly passes router_logits only when _supports_load_balancer() is False (fused routing), which matches _forward_chunk_impl where routing is skipped in that case.

The inline comment around router_logits_arg currently says “If backend doesn't support load balancer, routing is done before communication; in that case, router_logits should be None”, which contradicts the actual behavior (for fused routing backends, routing is handled inside the backend and needs router_logits). It would be clearer to rephrase, e.g.:

-            # If backend doesn't support load balancer, routing is done before communication
-            # In that case, router_logits should be None (routing already done)
             router_logits_arg = None
             if not self.backend._supports_load_balancer():
-                # For fused routing backends, router_logits is only needed if routing hasn't been done yet
+                # Fused-routing backend: routing is performed inside the backend,
+                # so we must pass router_logits through.
                 router_logits_arg = router_logits

Please double-check that all CuteDSL backends report _supports_load_balancer() == True (separated routing) so that ConfigurableMoE does not attempt to hand router_logits to them.

Also applies to: 910-975

tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (1)

230-275: Good test coverage for the new moe_output_memset_inplace functionality.

The test logic is correct: it verifies that tokens with valid permuted indices (i.e., expanded_idx_to_permuted_idx >= 0) get zeroed out while others remain unchanged.

Static analysis correctly identifies unused unpacked variables. Consider prefixing them with underscores to indicate intentional non-use:

     (
-        tile_idx_to_group_idx,
+        _tile_idx_to_group_idx,
         tile_idx_to_mn_limit,
         expanded_idx_to_permuted_idx,
         permuted_idx_to_expanded_idx,
-        total_num_padded_tokens,
+        _total_num_padded_tokens,
         num_non_exiting_tiles,
     ) = torch.ops.trtllm.moe_sort(
cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h (1)

35-39: Header declaration aligns with implementation; consider harmonizing parameter naming

The new moeOutputMemset declaration matches the .cu implementation in type and parameter order, which is what matters for correctness. There is, however, a minor naming mismatch for the expanded_idx_to_permuted_idx parameter versus the .cu file comment/name — aligning those would make the API easier to follow.

cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp (1)

258-308: moe_output_memset_inplace validation and dispatch match kernel expectations

The new inplace memset helper:

  • Enforces consistent 2D input, 2D expanded_idx_to_permuted_idx, 1D permuted_idx_to_expanded_idx/tile_idx_to_mn_limit, and single-element num_non_exiting_tiles.
  • Checks all index tensors are int32 and that max_num_permuted_tokens matches tile_tokens_dim * num_tiles and is ≥ num_tokens * top_k.
  • Dispatches cleanly to moeOutputMemset for half and bfloat16, with a clear error for other dtypes.

This matches the moeUtils.cu kernel signature and routing outputs, and the inplace contract is correctly expressed via the Torch schema (Tensor(a!) input).

If you find the shape/dtype checks in moe_permute and moe_output_memset_inplace evolving together, consider a small shared helper to keep them in sync, but it’s not required for this PR.
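As a reader aid, here is a minimal Python sketch of the invariants listed above, written as standalone assertions rather than the actual THOP validation code; the argument names follow the review text and the helper itself is hypothetical.

import torch

def check_moe_output_memset_args(input, expanded_idx_to_permuted_idx,
                                 permuted_idx_to_expanded_idx,
                                 tile_idx_to_mn_limit,
                                 num_non_exiting_tiles, tile_tokens_dim):
    # 2D activations and 2D expanded->permuted map; 1D permuted->expanded map and tile limits.
    assert input.dim() == 2
    assert expanded_idx_to_permuted_idx.dim() == 2
    assert permuted_idx_to_expanded_idx.dim() == 1
    assert tile_idx_to_mn_limit.dim() == 1
    assert num_non_exiting_tiles.numel() == 1
    # All index tensors must be int32.
    for t in (expanded_idx_to_permuted_idx, permuted_idx_to_expanded_idx,
              tile_idx_to_mn_limit, num_non_exiting_tiles):
        assert t.dtype == torch.int32
    num_tokens, top_k = expanded_idx_to_permuted_idx.shape
    max_num_permuted_tokens = permuted_idx_to_expanded_idx.size(0)
    num_tiles = tile_idx_to_mn_limit.size(0)
    # Permuted capacity is tile_tokens_dim * num_tiles and must cover num_tokens * top_k.
    assert max_num_permuted_tokens == tile_tokens_dim * num_tiles
    assert max_num_permuted_tokens >= num_tokens * top_k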

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between be48cdf and 54383d4.

📒 Files selected for processing (15)
  • cpp/include/tensorrt_llm/common/cudaUtils.h (2 hunks)
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu (9 hunks)
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h (1 hunks)
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (3 hunks)
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp (4 hunks)
  • tensorrt_llm/_torch/autotuner.py (1 hunks)
  • tensorrt_llm/_torch/compilation/utils.py (1 hunks)
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (8 hunks)
  • tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py (2 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py (6 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (7 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py (1 hunks)
  • tensorrt_llm/commands/eval.py (1 hunks)
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{cpp,h,cu}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.{cpp,h,cu}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo)
Prefer const or constexpr variables over #define whenever possible, as the latter are not visible to the compiler
A variable that is not modified after its initialization should be declared as const
Except 0 (only used in comparison for checking signness/existence/emptiness) and nullptr, true, false, all other literals should only be used for variable initialization and should be replaced with named constants
Use Allman indentation style for braces in C++
Put the semicolon for an empty for or while loop in a new line
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements)
If and else should always be followed by brace-delimited statements, even if empty or a single statement
C++ filenames should use camel case with first letter lowercase (e.g., thisIsASubDir and thisIsAFilename.cpp)
All filenames involved in compilation of a compilation target must have case-insensitive unique filenames
All types (including class names) should use camel case with uppercase first letter (e.g., FooBarClass)
Local variables, methods and namespaces should use camel case with first letter lowercase (e.g., localFooBar)
Non-magic-number global variables that are non-static and not defined in anonymous namespace should use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos)
Non-magic-number global variables that are static or defined in an anonymous namespace should use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal)
Locally visible static variables should use camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;)
Public, private and protected class member variables should use camel case prefixed with 'm' (e.g., mNbFooValues), though the 'm' pre...

Files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
**/*.h

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.h: Use a preprocessor guard in C++ header files with the guard name format TRTLLM_ followed by the filename in all caps (e.g., TRTLLM_FOO_BAR_HELLO_H for file FooBarHello.h); do not include directory names in the symbol
Do not use underscore prefix or suffix in C++ preprocessor guard symbols; they are reserved in C++ standard for compilers or implementation

Files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • tensorrt_llm/_torch/autotuner.py
  • tensorrt_llm/_torch/compilation/utils.py
  • tensorrt_llm/commands/eval.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tensorrt_llm/_torch/autotuner.py
  • tensorrt_llm/_torch/compilation/utils.py
  • tensorrt_llm/commands/eval.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py
**/*.cu

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

CUDA code must be compiled with a CUDA compiler and includes declarations/definitions with CUDA keywords (__device__, __managed__, __constant__, __global__), device functions, and kernel launching with <<<...>>> syntax

Files:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
🧠 Learnings (25)
📓 Common learnings
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.
📚 Learning: 2025-09-23T15:13:48.819Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/multimem.h:20-30
Timestamp: 2025-09-23T15:13:48.819Z
Learning: TRT-LLM targets modern CUDA toolkits that support FP8 datatypes, so cuda_fp8.h can be included unconditionally without version guards in TRT-LLM code.

Applied to files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
📚 Learning: 2025-08-14T21:04:50.248Z
Learnt from: thorjohnsen
Repo: NVIDIA/TensorRT-LLM PR: 6910
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-14T21:04:50.248Z
Learning: In KV cache onboarding logic during prefill in cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, when calculating which blocks fall within the attention window, use getTokensPerBlock() to advance token indices rather than block->getUniqueTokens().size(), because the calculation needs to consider the post-prefill state where blocks will be filled to capacity, not their current token count.

Applied to files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-15T06:46:54.897Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:54.897Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp addToken function, newly allocated blocks are unshared by design. The beam search path in addToken (when sequence.getNumTokens() > windowSize) is currently broken/non-functional with SWA, so the block allocation doesn't follow a shared-then-unshared pattern.

Applied to files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
📚 Learning: 2025-09-23T15:01:00.070Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:15-17
Timestamp: 2025-09-23T15:01:00.070Z
Learning: In TensorRT-LLM NCCL device kernels, the <sstream> header is not needed as an explicit include in config.cu because it's provided transitively through other headers. Local compilation testing confirms this works without the explicit include.

Applied to files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
📚 Learning: 2025-08-20T06:56:02.889Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6768
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:577-579
Timestamp: 2025-08-20T06:56:02.889Z
Learning: In cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp, maxSequenceLength is now enforced as a non-optional argument in the BlockManager constructor, so concerns about std::nullopt defaulting to 0 are not applicable. When windowSize > maxSequenceLength, a warning should be added instead of handling optional parameter cases.

Applied to files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
📚 Learning: 2025-09-23T14:58:05.372Z
Learnt from: nv-lschneider
Repo: NVIDIA/TensorRT-LLM PR: 7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.

Applied to files:

  • cpp/include/tensorrt_llm/common/cudaUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-19T03:35:20.866Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4616-4626
Timestamp: 2025-08-19T03:35:20.866Z
Learning: In the MOE profiler TMA workspace preparation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu), the overlapping of TMA WS regions for NONE and FINALIZE variants is deliberate design to save memory space, as confirmed by djns99. The comment "reuse the same pointers to save space" reflects this intentional behavior.

Applied to files:

  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-21T02:39:12.009Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1475-1480
Timestamp: 2025-08-21T02:39:12.009Z
Learning: The min latency mode functionality in TensorRT-LLM MOE kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu) is deprecated and no longer being maintained/updated, as confirmed by djns99. Bug reports and optimization suggestions for the computeStridesTmaWarpSpecializedLowLatencyKernel and related min latency code paths should be deprioritized.

Applied to files:

  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-08T22:03:40.707Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:1198-1209
Timestamp: 2025-08-08T22:03:40.707Z
Learning: In the CUTLASS MoE kernels (cpp/tensorrt_llm/cutlass_extensions), when `layout_info.fusion` is set to `TmaWarpSpecializedGroupedGemmInput::EpilogueFusion::FINALIZE`, the `router_scales` parameter must be non-null by design. The fused finalize kernel epilogue does not perform nullptr checks and requires valid router scales to function correctly. This is an implicit contract that callers must satisfy when enabling the FINALIZE fusion mode.

Applied to files:

  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-21T02:41:10.565Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_gemm_kernels.h:141-145
Timestamp: 2025-08-21T02:41:10.565Z
Learning: In TensorRT-LLM MOE GEMM kernels (cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_gemm_kernels.h), the stride_act and stride_weight pointers in TmaWarpSpecializedGroupedGemmInput are intentionally declared as void* rather than typed pointers because the actual stride type is determined at runtime based on factors like the swap_ab flag and layout decisions. This runtime type determination makes compile-time type safety impossible, so void* is the correct approach.

Applied to files:

  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-11-27T09:23:18.742Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 9511
File: tests/integration/defs/examples/serve/test_serve.py:136-186
Timestamp: 2025-11-27T09:23:18.742Z
Learning: In TensorRT-LLM testing, when adding test cases based on RCCA commands, the command format should be copied exactly as it appears in the RCCA case, even if it differs from existing tests. For example, some RCCA commands for trtllm-serve may omit the "serve" subcommand while others include it.

Applied to files:

  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tensorrt_llm/commands/eval.py
📚 Learning: 2025-08-20T07:43:36.447Z
Learnt from: ChristinaZ
Repo: NVIDIA/TensorRT-LLM PR: 7068
File: cpp/tensorrt_llm/kernels/moeTopKFuncs.cuh:169-172
Timestamp: 2025-08-20T07:43:36.447Z
Learning: In TensorRT-LLM MOE kernels, when processing up to 128 experts across 32 threads, each thread handles at most 4 experts (N < 5 constraint), where N represents candidates per thread rather than total system capacity.

Applied to files:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.

Applied to files:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-11-14T11:22:03.729Z
Learnt from: nzmora-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 9163
File: tensorrt_llm/_torch/auto_deploy/custom_ops/quant.py:107-113
Timestamp: 2025-11-14T11:22:03.729Z
Learning: In TensorRT-LLM AutoDeploy custom ops, when adding hardware capability checks to select between kernel implementations (e.g., cuBLAS vs. CUDA kernel), use descriptive variable names that identify the specific GPU architectures or families being targeted (e.g., `is_blackwell_geforce_or_ada`) rather than generic names like `enable_cuda_core`. This makes it clear that the code is selecting an implementation path based on hardware capabilities, not enabling/disabling hardware features.

Applied to files:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-09-02T13:42:44.885Z
Learnt from: pcastonguay
Repo: NVIDIA/TensorRT-LLM PR: 7455
File: tensorrt_llm/_torch/pyexecutor/py_executor.py:1852-1860
Timestamp: 2025-09-02T13:42:44.885Z
Learning: In MPI communication within TensorRT-LLM pipeline parallelism, different communication types (tokens, logits, termination sync) must use disjoint tag namespaces to avoid message routing collisions when using the same source/destination patterns.

Applied to files:

  • cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu
📚 Learning: 2025-08-14T23:23:27.449Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 6915
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu:4010-4012
Timestamp: 2025-08-14T23:23:27.449Z
Learning: For MOE (Mixture of Experts) code reviews in TensorRT-LLM, avoid repeatedly suggesting finalize fusion validation checks and safety assertions. The user djns99 has indicated these suggestions are repetitive and unwanted across multiple MOE-related changes.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py
  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
  • tensorrt_llm/_torch/modules/fused_moe/create_moe.py
  • cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.

Applied to files:

  • tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py
📚 Learning: 2025-08-22T01:54:35.850Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h:999-1000
Timestamp: 2025-08-22T01:54:35.850Z
Learning: The `internal_cutlass_kernels` directory in TensorRT-LLM is a mirror of an internal NVIDIA repository and maintains its own implementation and API that may diverge from the public `cutlass_kernels` version. API inconsistencies between these two directories are intentional and by design, not bugs to be fixed.

Applied to files:

  • cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu
📚 Learning: 2025-08-08T05:10:38.906Z
Learnt from: sklevtsov-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 3294
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:0-0
Timestamp: 2025-08-08T05:10:38.906Z
Learning: The ScaledAccPerRowBiasPerColScaleScatter fusion in CUTLASS extensions (cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp) is specifically designed for per-column scaling factors only, so it uses a fixed Stride<_0,_1,int64_t> rather than conditional stride logic.

Applied to files:

  • tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py
🧬 Code graph analysis (9)
cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.h (1)
cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu (6)
  • void (60-115)
  • void (184-237)
  • void (281-335)
  • void (381-457)
  • moeOutputMemset (338-365)
  • moeOutputMemset (338-341)
tensorrt_llm/_torch/compilation/utils.py (2)
cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp (2)
  • moe_output_memset_inplace (258-308)
  • moe_output_memset_inplace (258-260)
tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (1)
  • cute_dsl_nvfp4_grouped_gemm_finalize_blackwell (1169-1217)
tensorrt_llm/commands/eval.py (3)
tests/unittest/llmapi/apps/_test_openai_misc.py (1)
  • max_batch_size (30-31)
tests/unittest/_torch/modeling/test_modeling_out_of_tree.py (1)
  • max_num_tokens (63-66)
tensorrt_llm/_torch/attention_backend/trtllm.py (2)
  • max_seq_len (654-664)
  • max_seq_len (667-671)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (4)
tensorrt_llm/_torch/utils.py (2)
  • Fp4QuantizedTensor (134-141)
  • shape (140-141)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (2)
  • quantize_input (252-310)
  • enable_alltoall (247-250)
tensorrt_llm/_torch/modules/fused_moe/interface.py (5)
  • quantize_input (522-550)
  • has_nvfp4 (687-690)
  • has_deepseek_fp8_block_scales (681-684)
  • enable_alltoall (717-720)
  • run_moe (553-585)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (3)
  • quantize_input (281-338)
  • enable_alltoall (200-203)
  • run_moe (340-605)
tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (2)
cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.h (3)
  • top_k (275-275)
  • local_expert_offset (282-282)
  • local_num_experts (283-283)
cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp (4)
  • moe_sort (112-125)
  • moe_sort (112-114)
  • moe_output_memset_inplace (258-308)
  • moe_output_memset_inplace (258-260)
tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py (3)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1)
  • CuteDslFusedMoE (153-510)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (2)
  • WideEPMoE (26-985)
  • enable_alltoall (246-249)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • enable_alltoall (717-720)
cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu (1)
cpp/include/tensorrt_llm/common/cudaUtils.h (2)
  • getMultiProcessorCount (407-469)
  • getMaxActiveBlocksPerSM (437-449)
tensorrt_llm/_torch/modules/fused_moe/create_moe.py (4)
tensorrt_llm/llmapi/llm_args.py (2)
  • quant_config (2729-2732)
  • quant_config (2735-2736)
tensorrt_llm/_torch/modules/fused_moe/configurable_moe.py (1)
  • has_nvfp4 (1051-1053)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • has_nvfp4 (687-690)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (1)
  • CuteDslFusedMoE (153-510)
cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp (1)
cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.h (1)
  • top_k (275-275)
🪛 Ruff (0.14.7)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py

213-213: Unused method argument: post_quant_comm

(ARG002)


245-247: Avoid specifying long messages outside the exception class

(TRY003)


372-372: Unpacked variable permuted_token_selected_experts_tensor is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


375-375: Unpacked variable permuted_token_final_scales_tensor is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


440-440: Unused method argument: enable_alltoall

(ARG002)


473-475: Avoid specifying long messages outside the exception class

(TRY003)

tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py

246-246: Unpacked variable tile_idx_to_group_idx is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)


250-250: Unpacked variable total_num_padded_tokens is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py

1227-1227: Unused function argument: output

(ARG001)

🔇 Additional comments (28)
tensorrt_llm/commands/eval.py (1)

139-143: Confirm duplicated max_* fields between BuildConfig and llm_args and override semantics

Forwarding max_batch_size, max_num_tokens, max_beam_width, and max_seq_len into llm_args looks aligned with the PR goal and should make these limits visible to both backends that consume **llm_args.

Two things are worth double‑checking, though:

  1. For both backends, verify that PyTorchLLM.__init__ and LLM.__init__ actually accept these four kwargs and that there are no unexpected TypeErrors in older code paths.
  2. With extra_llm_api_options, these fields now exist twice for the TensorRT backend: in build_config and as top‑level llm_args keys. If the YAML overrides only one of them (e.g., max_batch_size at top level but not inside build_config), you can end up with divergent values. Please confirm that:
    • either update_llm_args_with_extra_options updates both sources consistently, or
    • it is intentional that the runtime uses the top‑level values while build_config can lag behind, and that LLM internally treats this precedence in a well‑defined way.

If the intention is “top‑level overrides BuildConfig”, it may be worth enforcing or documenting that invariant in update_llm_args_with_extra_options or in the constructors to avoid subtle mismatches later.
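To make the duplication concrete, a small illustrative sketch with hypothetical values; the real update_llm_args_with_extra_options flow may differ:

from tensorrt_llm import BuildConfig

# The four limits now live both inside build_config and as top-level llm_args keys.
build_config = BuildConfig(max_batch_size=128, max_num_tokens=6144,
                           max_beam_width=1, max_seq_len=6144)
llm_args = {
    "build_config": build_config,
    "max_batch_size": build_config.max_batch_size,
    "max_num_tokens": build_config.max_num_tokens,
    "max_beam_width": build_config.max_beam_width,
    "max_seq_len": build_config.max_seq_len,
}
# If a YAML override touches only the top-level key, the two copies diverge:
llm_args["max_num_tokens"] = 9216  # build_config.max_num_tokens is still 6144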

tensorrt_llm/_torch/cute_dsl_kernels/blackwell/blockscaled_contiguous_grouped_gemm_swiglu_fusion.py (2)

2656-2656: LGTM! Corrects c_sf layout to align with C tensor dimensions.

This change fixes the scale factor tensor layout for C to properly reflect the output tensor's dimensions. The previous use of scale_k // 4 was incorrect since the C tensor has shape (m, interm_size, 1) and doesn't depend on the K dimension. The new scale_interm_size // 4 correctly aligns with C's intermediate size dimension.


2634-2634: Verify dimension alignment for scale factor computation.

The computation scale_interm_size = interm_size // scaling_vector_size aligns C's scale factors with its output dimensions. Ensure that interm_size (which equals n // 2) is always a multiple of scaling_vector_size, and that scale_interm_size is a multiple of 4 as required by the layout at line 2656. Check if appropriate validation or assertions exist for these dimension requirements.
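As a standalone sanity check of the arithmetic above (assuming interm_size = n // 2 and the default scaling_vector_size of 16, as stated in the review), the requirements boil down to interm_size being a multiple of 64:

def check_c_sf_dims(n: int, scaling_vector_size: int = 16) -> int:
    # SwiGLU halves the GEMM N dimension into the intermediate size.
    interm_size = n // 2
    assert interm_size % scaling_vector_size == 0, \
        "interm_size must be a multiple of the scaling vector size"
    scale_interm_size = interm_size // scaling_vector_size
    # The c_sf layout described above uses scale_interm_size // 4, so require a multiple of 4.
    assert scale_interm_size % 4 == 0, "scale_interm_size must be a multiple of 4"
    return scale_interm_size

print(check_c_sf_dims(4096))  # interm_size = 2048 -> scale_interm_size = 128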

tensorrt_llm/_torch/compilation/utils.py (1)

77-86: Inplace metadata for new MoE ops looks consistent

The new entries (argument index 1: "input" for moe_output_memset_inplace, and index 6: "output" for cute_dsl_nvfp4_grouped_gemm_finalize_blackwell) match the C++ op signatures, so FX alias analysis should treat the mutated tensors correctly.

Please run a small torch.fx trace containing these ops to confirm aliasing behaves as expected during compilation.
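For reference, a hedged sketch of what such an inplace-argument map might look like; the actual container, key format, and op-name spelling in compilation/utils.py may differ:

# Hypothetical shape of the mapping: op identifier -> {argument position: mutated arg name}.
INPLACE_TENSOR_ARGS = {
    "moe_output_memset_inplace": {1: "input"},
    "cute_dsl_nvfp4_grouped_gemm_finalize_blackwell": {6: "output"},
}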

tensorrt_llm/_torch/modules/fused_moe/fused_moe_cute_dsl.py (4)

477-510: forward_chunk path is consistent with new helpers

forward_chunk now delegates routing to routing_method.apply, calls the unified quantize_input, optionally DP‑allgathers [x, x_sf, token_selected_experts, token_final_scales], then goes through run_moe. This matches the new CuteDSL backend abstraction and keeps the API surface tight.

Please ensure allgather can handle x_sf=None for the DeepSeek FP8 path; if not, gating that branch to NVFP4‑only DP would be needed.
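In rough Python pseudocode, the consolidated path reads as follows (a sketch assembled from the description above, not the actual method bodies; attribute and helper names such as enable_attention_dp and allgather are illustrative):

def forward_chunk(self, x, router_logits):
    # 1. Routing first, via the module's routing method.
    token_selected_experts, token_final_scales = self.routing_method.apply(router_logits)
    # 2. Unified quantization for the active quant mode (NVFP4 or FP8 block scales).
    x, x_sf = self.quantize_input(x)
    # 3. Optional attention-DP allgather of activations, scales, and routing tensors.
    if self.enable_attention_dp:  # illustrative flag
        x, x_sf, token_selected_experts, token_final_scales = allgather(
            [x, x_sf, token_selected_experts, token_final_scales])
    # 4. Dispatch to the quant-mode-specific implementation.
    return self.run_moe(x, x_sf, token_selected_experts, token_final_scales)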


358-433: Tidy up unused unpacked values from moe_permute_op

In run_moe_fp8_block_scales, some values returned by torch.ops.trtllm.moe_permute_op are unused (permuted_token_selected_experts_tensor, permuted_token_final_scales_tensor), triggering static analysis warnings.

If they are intentionally unused, mark them as such to clarify intent:

-        (
-            permuted_row_to_unpermuted_row_tensor,
-            permuted_token_selected_experts_tensor,
-            permuted_data_tensor,
-            expert_first_token_offset_tensor,
-            permuted_token_final_scales_tensor,
-            unpermuted_row_to_permuted_row_tensor,
-        ) = torch.ops.trtllm.moe_permute_op(
+        (
+            permuted_row_to_unpermuted_row_tensor,
+            _permuted_token_selected_experts_tensor,
+            permuted_data_tensor,
+            expert_first_token_offset_tensor,
+            _permuted_token_final_scales_tensor,
+            unpermuted_row_to_permuted_row_tensor,
+        ) = torch.ops.trtllm.moe_permute_op(

Verify that these tensors are not needed for the DeepSeek FP8 path before applying this change.


211-252: quantize_input: unused post_quant_comm parameter and mode restriction

The implementation only supports has_nvfp4 and has_deepseek_fp8_block_scales, raising ValueError for any other quantization mode. However, the post_quant_comm parameter is not used within the method body and triggers static analysis warnings (ARG002).

To clarify intent and silence the warning, rename the parameter to _post_quant_comm:

-    def quantize_input(self,
-                       x: Union[torch.Tensor, Fp4QuantizedTensor],
-                       post_quant_comm: bool = True):
+    def quantize_input(
+        self,
+        x: Union[torch.Tensor, Fp4QuantizedTensor],
+        _post_quant_comm: bool = True,
+    ):

Confirm that CuteDslFusedMoE is instantiated only with quant_mode set to NVFP4 or DeepSeek FP8 block scales, ensuring the mode restriction in this method aligns with actual usage patterns.


253-357: Fix torch.ops moe_output_memset_inplace keyword names and propagate enable_alltoall flag

Within run_moe_nvfp4, the fused finalize setup has two issues:

  1. Wrong keyword names for moe_output_memset_inplace
    The C++/THOP signature expects:

    • input= (not output=)
    • tile_tokens_dim= (not tile_size=)

    The current call will fail at runtime with an argument mismatch error.

  2. enable_alltoall not propagated from run_moe
    The parameter is defined in run_moe_nvfp4 but run_moe calls it without passing enable_alltoall, so this path remains disabled even when the caller sets enable_alltoall=True.

    Update both run_moe_nvfp4 and run_moe_fp8_block_scales calls in run_moe to include:

    enable_alltoall=enable_alltoall,
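
A rough sketch of the second fix (the surrounding signature is hypothetical; the point is only that run_moe forwards the flag to both backends instead of dropping it):

def run_moe(self, *args, enable_alltoall: bool = False, **kwargs):
    # Forward the caller's flag so the alltoall path can actually be enabled.
    if self.has_nvfp4:
        return self.run_moe_nvfp4(*args, enable_alltoall=enable_alltoall, **kwargs)
    return self.run_moe_fp8_block_scales(*args, enable_alltoall=enable_alltoall, **kwargs)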

tensorrt_llm/_torch/modules/fused_moe/quantization.py (1)

2004-2044: Avoid implicit device 0 in CuteDSL NVFP4 interleave helpers

Both CuteDSL NVFP4 overrides do dst_*.cuda() before interleaving:

w3_w1_weight = dst_w3_w1_weight.cuda().view(float4_e2m1x2)
...
w3_w1_weight_scale = dst_w3_w1_weight_scale.cuda().view(float4_sf_dtype)

This:

  • Assumes the current default CUDA device matches dst_*’s intended device.
  • Forces a host→device copy when dst_* is CPU (e.g., shared‑weight buffers for EPLB), even though the final storage remains on CPU.

For robustness and less surprising behavior across multi‑GPU setups, consider targeting the tensor’s own device explicitly:

-        w3_w1_weight = dst_w3_w1_weight.cuda().view(float4_e2m1x2)
+        w3_w1_weight = dst_w3_w1_weight.to(dst_w3_w1_weight.device, non_blocking=True).view(float4_e2m1x2)
@@
-        w3_w1_weight_scale = dst_w3_w1_weight_scale.cuda().view(float4_sf_dtype)
+        w3_w1_weight_scale = dst_w3_w1_weight_scale.to(
+            dst_w3_w1_weight_scale.device, non_blocking=True
+        ).view(float4_sf_dtype)

or, if you only need a CUDA path, to(device=torch.device('cuda')) with an explicit device choice.

Please confirm expected device placement for local_shared_* tensors; if they are CPU‑resident by design, you may want a pure‑CPU interleave path or gate the .cuda() usage behind a device check.

cpp/include/tensorrt_llm/common/cudaUtils.h (1)

41-41: Occupancy cache key is incomplete and static unordered_map requires synchronization

Two concerns in getMaxActiveBlocksPerSM:

  1. Incomplete cache key
    The cache is keyed only by kernel, but cudaOccupancyMaxActiveBlocksPerMultiprocessor also depends on blockSize, dynamicSMemSize, and device properties. Callers that pass the same kernel with a different blockSize or dynamicSMemSize, or that run on a different device, will receive stale cached values.

  2. Unsynchronized access to static unordered_map
    The static map is mutated without locking. Concurrent calls from multiple host threads result in undefined behavior.

Consider keying the cache on (kernel, blockSize, dynamicSMemSize, deviceId) and protecting mutations with a mutex (an ordered std::map is used below because std::tuple has no standard std::hash specialization):

-#include <unordered_map>
+#include <unordered_map>
+#include <map>
+#include <tuple>
+#include <mutex>
...
 template <typename T>
 inline int getMaxActiveBlocksPerSM(T kernel, int blockSize, size_t dynamicSMemSize)
 {
-    static std::unordered_map<T, int> cache;
-    auto it = cache.find(kernel);
-    if (it != cache.end())
-    {
-        return it->second;
-    }
-    int numBlocks;
-    check_cuda_error(cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocks, kernel, blockSize, dynamicSMemSize));
-    cache[kernel] = numBlocks;
-    return numBlocks;
+    using KernelKey = std::tuple<void const*, int, size_t, int>;
+    static std::map<KernelKey, int> cache;
+    static std::mutex cacheMutex;
+
+    int deviceId{0};
+    check_cuda_error(cudaGetDevice(&deviceId));
+    auto key = KernelKey{
+        reinterpret_cast<void const*>(kernel),
+        blockSize,
+        dynamicSMemSize,
+        deviceId,
+    };
+
+    {
+        std::lock_guard<std::mutex> lock(cacheMutex);
+        auto it = cache.find(key);
+        if (it != cache.end())
+        {
+            return it->second;
+        }
+    }
+
+    int numBlocks{0};
+    check_cuda_error(
+        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocks, kernel, blockSize, dynamicSMemSize));
+
+    {
+        std::lock_guard<std::mutex> lock(cacheMutex);
+        cache.emplace(key, numBlocks);
+    }
+    return numBlocks;
 }

tensorrt_llm/_torch/modules/fused_moe/create_moe.py (4)

36-45: LGTM!

The CUTEDSL backend selection logic correctly checks for fp8_block_scales or nvfp4 quantization modes and falls back to CutlassFusedMoE with an appropriate warning when conditions aren't met. This follows the same pattern as the TRTLLM backend handling below.


60-61: LGTM!

Minor formatting improvement to the warning message.


141-142: LGTM!

Adding CuteDslFusedMoE to the load balancer supported backends is consistent with the PR objective to enable EPLB support.


230-244: LGTM!

The init_load_balancer parameter is now correctly passed through to CuteDslFusedMoE, enabling load balancer support for this backend.

tests/unittest/_torch/thop/parallel/test_cute_dsl_moe.py (1)

532-551: LGTM!

The addition of None for the output parameter aligns with the updated function signature. Passing None indicates the function should allocate the output tensor internally.

tensorrt_llm/_torch/custom_ops/cute_dsl_custom_ops.py (4)

580-618: LGTM - Output propagation through finalize fusion path.

The inputs_pre_hook_finalize_fusion correctly handles the new output parameter at index 5 and propagates it through the return tuple.


997-1007: LGTM - Constraint specs updated correctly for new input ordering.

The constraint indices correctly reflect the new input list ordering after inserting output at index 5:

  • Index 5: output → infer_shape_num_tokens
  • Index 6: tile_idx_to_group_idx → infer_shape_max_num_tiles
  • Index 7: tile_idx_to_mn_limit → infer_shape_max_num_tiles
  • Index 8: permuted_idx_to_expanded_idx → infer_shape_max_num_permuted_tokens
  • Index 10: token_final_scales → infer_shape_num_tokens

1036-1052: LGTM - New output tensor validations.

The assertions correctly validate the output tensor's dtype, dimensionality, and shape consistency with num_tokens and n.


1165-1217: LGTM - Output tensor creation and mutation semantics.

The implementation correctly:

  1. Declares mutates_args=("output",) to indicate in-place mutation
  2. Creates output with torch.zeros when None is passed (enabling accumulation)
  3. Propagates output through the inputs list to the runner
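
A generic sketch of that pattern with torch.library (the op name demo::memset_inplace is hypothetical; the actual trtllm registration and its schema live in C++):

import torch
from typing import Optional

@torch.library.custom_op("demo::memset_inplace", mutates_args=("output",))
def memset_inplace(output: torch.Tensor, rows: torch.Tensor) -> None:
    # Zero only the selected rows so a later kernel can accumulate into them.
    output.index_fill_(0, rows, 0.0)

def run_with_optional_output(x: torch.Tensor, output: Optional[torch.Tensor]) -> torch.Tensor:
    # Mirrors "create output with torch.zeros when None is passed".
    if output is None:
        output = torch.zeros_like(x)
    torch.ops.demo.memset_inplace(output, torch.arange(x.shape[0], device=x.device))
    output.add_(x)  # stand-in for the finalize-accumulate step
    return output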

cpp/tensorrt_llm/kernels/cuteDslKernels/moeUtils.cu (6)

70-71: LGTM - Replaced inline assembly with CUDA runtime APIs.

Using cudaGridDependencySynchronize() and cudaTriggerProgrammaticLaunchCompletion() is cleaner and more maintainable than inline PTX assembly. These are the proper CUDA runtime APIs for programmatic dependent launch (PDL) synchronization on SM >= 9.0.

Also applies to: 112-114


144-148: LGTM - Occupancy-aware block calculation.

Using getMaxActiveBlocksPerSM instead of a fixed heuristic provides accurate occupancy information and ensures optimal block counts across different GPU configurations.


280-335: LGTM - New moeOutputMemsetKernel for zeroing output before accumulation.

The kernel correctly:

  1. Identifies the first valid expert for each token by checking expanded_idx_to_permuted_idx
  2. Only zeros output for the first valid expert to avoid redundant writes
  3. Uses PDL synchronization pattern consistent with other kernels in this file

The early-exit optimization (lines 311-323) ensures each token's output is only zeroed once regardless of how many experts are selected.
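
A host-side reference of that rule (pure Python; the token-major expanded-index layout and the -1 sentinel for invalid slots are assumptions for illustration only):

def rows_to_zero(expanded_idx_to_permuted_idx, num_tokens, top_k, invalid=-1):
    rows = []
    for token in range(num_tokens):
        for k in range(top_k):
            if expanded_idx_to_permuted_idx[token * top_k + k] != invalid:
                rows.append(token)  # the first valid expert slot zeroes this row
                break               # later slots skip the memset entirely
    return rows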


337-377: LGTM - moeOutputMemset launcher with consistent launch configuration.

The launcher follows the same pattern as other functions in this file:

  • Occupancy-aware block calculation
  • PDL attributes for stream serialization
  • Proper alignment checks

398-400: LGTM - Consistent synchronization API usage in moeActivationKernel.

The inline assembly has been replaced with the same CUDA runtime APIs used in the other kernels, maintaining consistency across the file.

Also applies to: 454-456


522-526: LGTM - Occupancy-aware block calculation for moeActivation.

Consistent with the changes in moePermute, this now uses getMaxActiveBlocksPerSM for accurate occupancy-based grid sizing.

cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu (1)

1635-1640: Dynamic occupancy-based launches look correct and consistent across kernels

Using smCount * getMaxActiveBlocksPerSM(...) together with std::max(num_work_items, num_padding_tokens) and the existing grid-stride loops (for (idx = blockIdx.x; idx < limit; idx += gridDim.x)) covers both real tokens and padding correctly while capping the grid size by per-SM occupancy. The API usage (func/kernel pointers) is consistent with CUDA occupancy queries.

Also applies to: 1899-1907, 2304-2309

cpp/tensorrt_llm/thop/cuteDslMoeUtilsOp.cpp (2)

142-143: Additional dtype check for permuted_idx_to_expanded_idx is appropriate

Ensuring permuted_idx_to_expanded_idx is int32 before passing it to the CUDA kernel removes a silent footgun and aligns with how the indices are consumed on device.


479-480: Torch schema and CUDA registration for moe_output_memset_inplace are consistent

The TORCH_LIBRARY_FRAGMENT schema for moe_output_memset_inplace matches the C++ signature and aliasing semantics, and the CUDA backend correctly binds to torch_ext::moe_output_memset_inplace, making the op available from Python.

Also applies to: 498-498
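
A quick Python-side smoke check (this assumes importing tensorrt_llm loads the THOP extension and registers the trtllm ops; the exact argument list comes from the C++ schema and is not reproduced here):

import torch
import tensorrt_llm  # assumed to load the THOP extension and register the trtllm ops

assert hasattr(torch.ops.trtllm, "moe_output_memset_inplace")
print(torch.ops.trtllm.moe_output_memset_inplace)  # shows the registered overload packet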

@syuoni
Copy link
Collaborator Author

syuoni commented Dec 2, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #26632 [ run ] triggered by Bot. Commit: fef155a

@tensorrt-cicd
Copy link
Collaborator

PR_Github #26618 [ run ] completed with state ABORTED. Commit: 54383d4
LLM/main/L0_MergeRequest_PR #20244 (Blue Ocean) completed with status: ABORTED

Copy link
Collaborator

@djns99 djns99 left a comment

CUTLASS MOE/kernel side of things looks good to me

@syuoni
Copy link
Collaborator Author

syuoni commented Dec 3, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #26712 [ run ] triggered by Bot. Commit: 56b85d4

@tensorrt-cicd
Copy link
Collaborator

PR_Github #26632 [ run ] completed with state ABORTED. Commit: fef155a
LLM/main/L0_MergeRequest_PR #20258 (Blue Ocean) completed with status: ABORTED

syuoni added 12 commits December 5, 2025 02:18
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
@syuoni
Copy link
Collaborator Author

syuoni commented Dec 5, 2025

/bot run --disable-fail-fast

Signed-off-by: Enwei Zhu <[email protected]>
@syuoni syuoni requested a review from QiJune December 5, 2025 02:28
@tensorrt-cicd
Copy link
Collaborator

PR_Github #27052 [ run ] triggered by Bot. Commit: e7edf86

@tensorrt-cicd
Copy link
Collaborator

PR_Github #26931 [ run ] completed with state ABORTED. Commit: 574aa70
LLM/main/L0_MergeRequest_PR #20522 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Copy link
Collaborator

PR_Github #27052 [ run ] completed with state SUCCESS. Commit: e7edf86
/LLM/main/L0_MergeRequest_PR pipeline #20630 completed with status: 'FAILURE'

@syuoni
Copy link
Collaborator Author

syuoni commented Dec 6, 2025

/bot run --disable-fail-fast

@tensorrt-cicd
Copy link
Collaborator

PR_Github #27172 [ run ] triggered by Bot. Commit: e7edf86

@syuoni syuoni enabled auto-merge (squash) December 6, 2025 02:36
@tensorrt-cicd
Copy link
Collaborator

PR_Github #27172 [ run ] completed with state SUCCESS. Commit: e7edf86
/LLM/main/L0_MergeRequest_PR pipeline #20734 completed with status: 'SUCCESS'

@syuoni syuoni merged commit 7cd5a67 into NVIDIA:main Dec 6, 2025
5 checks passed
MinaHuai pushed a commit to davidmlw/TensorRT-LLM that referenced this pull request Dec 10, 2025
…VIDIA#8779)

The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning.

Signed-off-by: Yukun He <[email protected]>

[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429)

Signed-off-by: qqiao <[email protected]>

[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409)

Signed-off-by: Eran Geva <[email protected]>

[None][ci] Move more test stages to use OCI machines (NVIDIA#9395)

Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Matt Lefebvre <[email protected]>

[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377)

Signed-off-by: Anthony Chang <[email protected]>

[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093)

Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash.

This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens.

Signed-off-by: eopXD <[email protected]>

[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426)

Signed-off-by: mni <[email protected]>

[None][fix] Mitigate test timeout issues (NVIDIA#9445)

Signed-off-by: Shixiaowei02 <[email protected]>

[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427)

Signed-off-by: Fanrong Li <[email protected]>

[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440)

Signed-off-by: Tailing Yuan <[email protected]>

[None][feat] Support custom chat template for tool calling (NVIDIA#9297)

Signed-off-by: Pengyun Lin <[email protected]>

[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586)

Signed-off-by: Yue Weng <[email protected]>

[None][ci] waive a test (NVIDIA#9458)

Signed-off-by: Yan Chunwei <[email protected]>

[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439)

Signed-off-by: Fanrong Li <[email protected]>

[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411)

Signed-off-by: ixlmar <[email protected]>

[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457)

Signed-off-by: ixlmar <[email protected]>

[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308)

Signed-off-by: Robin Kobus <[email protected]>

[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430)

Signed-off-by: Suyog Gupta <[email protected]>

[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438)

Signed-off-by: Chuang Zhu <[email protected]>

[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455)

Signed-off-by: Yiqing Yan <[email protected]>

[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356)

Signed-off-by: FredricZ-2007 <[email protected]>

[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467)

Signed-off-by: junq <[email protected]>

[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224)

Signed-off-by: shuyix <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246)

Signed-off-by: Wanli Jiang <[email protected]>

[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032)

Signed-off-by: Junyi Xu <[email protected]>

[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444)

Signed-off-by: Enwei Zhu <[email protected]>

[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371)

Signed-off-by: Aurelien Chartier <[email protected]>

[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461)

Signed-off-by: Chenghao Zhang <[email protected]>
Co-authored-by: Suyog Gupta <[email protected]>

[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422)

Signed-off-by: Chenghao Zhang <[email protected]>

[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502)

Signed-off-by: Zheyu Fu <[email protected]>

[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487)

Signed-off-by: Shixiaowei02 <[email protected]>

[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498)

Signed-off-by: Aurelien Chartier <[email protected]>

[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481)

Signed-off-by: junq <[email protected]>

[None][chore] update comments in llm_args.py (NVIDIA#9472)

Signed-off-by: junq <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410)

Signed-off-by: Patrice Castonguay <[email protected]>

[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473)

Signed-off-by: Hui Gao <[email protected]>

[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294)

Signed-off-by: Jiagan Cheng <[email protected]>

[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505)

Signed-off-by: ziyixiong-nv <[email protected]>

[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488)

Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Lanyu Liao <[email protected]>

[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447)

Signed-off-by: Lizhi Zhou <[email protected]>
Co-authored-by: Shi Xiaowei <[email protected]>

[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405)

Signed-off-by: qqiao <[email protected]>
Signed-off-by: Emma Qiao <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346)

Signed-off-by: xxi <[email protected]>

[None][infra] Fix Slurm job script (NVIDIA#9508)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479)

Signed-off-by: Zhenhuan Chen <[email protected]>

[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330)

Signed-off-by: jiant <[email protected]>

[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484)

Signed-off-by: Fanrong Li <[email protected]>

[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388)

Signed-off-by: Bo Li <[email protected]>

[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518)

Signed-off-by: eopXD <[email protected]>

[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999)

Signed-off-by: Yiqing Yan <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449)

Signed-off-by: Enwei Zhu <[email protected]>

[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504)

Signed-off-by: Lucas Liebenwein <[email protected]>

[None] [chore] Update to cutlass 4.3 (NVIDIA#8637)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386)

Signed-off-by: Bo Li <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Co-authored-by: Enwei Zhu <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861)

Signed-off-by: qqiao <[email protected]>
Signed-off-by: Emma Qiao <[email protected]>

[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435)

Signed-off-by: Junyi Xu <[email protected]>

[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211)

Signed-off-by: Yukun He <[email protected]>

[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441)

Signed-off-by: yufeiwu <[email protected]>

[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533)

Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Lanyu Liao <[email protected]>

[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493)

Signed-off-by: Kaiyu Xie <[email protected]>

[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538)

Signed-off-by: Zhenhuan Chen <[email protected]>

[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539)

Signed-off-by: qqiao <[email protected]>

[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521)

Signed-off-by: Robin Kobus <[email protected]>

[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477)

Signed-off-by: ZhanruiSunCh <[email protected]>
Signed-off-by: Zhanrui Sun <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454)

Signed-off-by: ixlmar <[email protected]>

[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537)

Signed-off-by: Wangshanshan <[email protected]>

[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494)

Signed-off-by: Matthias Jouanneaux <[email protected]>
Signed-off-by: Zheyu Fu <[email protected]>
Co-authored-by: Zheyu Fu <[email protected]>

[None][feat] support for more accurate AR calculation (NVIDIA#9323)

Signed-off-by: binghanc <[email protected]>

[TRTLLM-9488][fix] llmapi references (NVIDIA#9547)

Signed-off-by: ixlmar <[email protected]>

[NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143)

Signed-off-by: greg-kwasniewski1 <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522)

Signed-off-by: yunruis <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
Signed-off-by: qgai <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Junyi Xu <[email protected]>
Signed-off-by: Simeng Liu <[email protected]>
Signed-off-by: nv-guomingz <[email protected]>
Signed-off-by: Jin Li <[email protected]>
Signed-off-by: Ivy Zhang <[email protected]>
Signed-off-by: Vincent Zhang <[email protected]>
Signed-off-by: peaceh <[email protected]>
Signed-off-by: Michal Guzek <[email protected]>
Signed-off-by: Michal Guzek <[email protected]>
Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>
Signed-off-by: leslie-fang25 <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: junq <[email protected]>
Co-authored-by: yunruis <[email protected]>
Co-authored-by: sunnyqgg <[email protected]>
Co-authored-by: brb-nv <[email protected]>
Co-authored-by: Yan Chunwei <[email protected]>
Co-authored-by: JunyiXu-nv <[email protected]>
Co-authored-by: Simeng Liu <[email protected]>
Co-authored-by: Guoming Zhang <[email protected]>
Co-authored-by: Jin Li <[email protected]>
Co-authored-by: Ivy Zhang <[email protected]>
Co-authored-by: Vincent Zhang <[email protected]>
Co-authored-by: peaceh-nv <[email protected]>
Co-authored-by: Michal Guzek <[email protected]>
Co-authored-by: Chang Liu <[email protected]>
Co-authored-by: Leslie Fang <[email protected]>
Co-authored-by: Shunkangz <[email protected]>
Co-authored-by: Shunkang <[email protected]>
Co-authored-by: QI JUN <[email protected]>

[TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][infra] - Request idle time exemption for OCI jobs (NVIDIA#9528)

Signed-off-by: Yanchao Lu <[email protected]>

[None][infra] Wiave failed tests for main branch on 11/30 (NVIDIA#9555)

Signed-off-by: qqiao <[email protected]>

[None][fix] Fix port conflict in disagg tests (NVIDIA#9474)

Signed-off-by: Junyi Xu <[email protected]>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558)

Signed-off-by: Yanchao Lu <[email protected]>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559)

Signed-off-by: Yanchao Lu <[email protected]>

[TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486)

[None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333)

Signed-off-by: yuhangh <[email protected]>

[https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530)

Signed-off-by: Enwei Zhu <[email protected]>

[TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543)

Signed-off-by: Mindy Li <[email protected]>

[None][fix] Correct virtual memory allocation alignment (NVIDIA#9491)

Signed-off-by: Yuan Tong <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466)

Signed-off-by: Enwei Zhu <[email protected]>

[https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398)

Signed-off-by: Pengbo Wang <[email protected]>

[None][chore] remove qwen3-next accuracy tests (NVIDIA#9534)

Signed-off-by: jiant <[email protected]>

[None][doc] fix mtp.py typo (NVIDIA#9307)

Signed-off-by: liugaoji <[email protected]>

[None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544)

Signed-off-by: Fanrong Li <[email protected]>

[NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497)

Signed-off-by: Neta Zmora <[email protected]>

[None][fix] Replace hash method with unique_id for cutedsl MoE runners. (NVIDIA#9569)

Signed-off-by: Yukun He <[email protected]>

[None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581)

Signed-off-by: Zhenhuan Chen <[email protected]>

[TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428)

Signed-off-by: Yuhan Li <[email protected]>

[None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077)

Signed-off-by: Martin Marciniszyn Mehringer <[email protected]>

[https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140)

[None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542)

Signed-off-by: Enwei Zhu <[email protected]>

[None][test] [None][test] Waive main branch test failures 12/1 (NVIDIA#9566)

Signed-off-by: Yanchao Lu <[email protected]>

[None][ci] Minor change for Slurm scripts (NVIDIA#9561)

Signed-off-by: Yanchao Lu <[email protected]>

[TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747)

Signed-off-by: Yiqing Yan <[email protected]>

[None][infra] Update the pytest options after MI (NVIDIA#9579)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509)

Signed-off-by: Stefan Niebler <[email protected]>

[None][chore] Defer exposing context parallel configs (NVIDIA#9552)

Signed-off-by: Balaram Buddharaju <[email protected]>

[TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104)

Signed-off-by: Venky Ganesh <[email protected]>

[None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500)

Signed-off-by: Chenghao Zhang <[email protected]>

[NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301)

Signed-off-by: Eran Geva <[email protected]>

[None][fix] Prevent YAML partial kv_cache_config from incorrectly overriding the complete kv_cache_config (NVIDIA#9262)

Signed-off-by: Yuening Li <[email protected]>

[TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605)

Signed-off-by: junq <[email protected]>

[None][feat] Unify nvfp4 gemm backend (NVIDIA#8963)

Signed-off-by: Shijie Wang <[email protected]>
Signed-off-by: Yukun He <[email protected]>
Signed-off-by: Shijie <[email protected]>
Co-authored-by: Yukun He <[email protected]>

[None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383)

Signed-off-by: Iman Tabrizian <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][chroe] Polish qwen3-next modeling code. (NVIDIA#8902)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582)

Signed-off-by: Junyi Xu <[email protected]>

[None][fix] Waive gb200 (NVIDIA#9580)

Signed-off-by: Xin He (SW-GPU) <[email protected]>

[FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261)

Signed-off-by: Wanli Jiang <[email protected]>

[https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9588)

Signed-off-by: xinhe-nv <[email protected]>

[https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576)

Signed-off-by: shuyix <[email protected]>

[None][infra] Waive failed cases for main branch (NVIDIA#9615)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545)

Signed-off-by: ixlmar <[email protected]>

[None][infra] Update allowlist 2025/12/01 (NVIDIA#9616)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620)

Signed-off-by: qqiao <[email protected]>

Lock the gpu clocks in L0 perf tests (NVIDIA#9585)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][fix] Extract GPU count from single-node stage names (NVIDIA#9599)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393)

Signed-off-by: Jin Li <[email protected]>

[TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711)

Signed-off-by: Superjomn <[email protected]>
Signed-off-by: Erin Ho <[email protected]>
Signed-off-by: Yan Chunwei <[email protected]>
Co-authored-by: Erin Ho <[email protected]>

[https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056)

Signed-off-by: thorjohnsen <[email protected]>
Signed-off-by: Thor Johnsen <[email protected]>
Co-authored-by: Iman Tabrizian <[email protected]>
Co-authored-by: Robin Kobus <[email protected]>

[TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889)

Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[NVIDIA#9150][feat] Add code for nano v3 to custom implementation in AD (NVIDIA#9465)

* Why?

We would like to show an alternative to monkey-patching in AutoDeploy.

* What?

This commit builds on the existing custom model implementation for
NemotronH and adds the bits relevant for MoE layers.

Part of NVIDIA#9150.

Signed-off-by: William Zhang <[email protected]>

[NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527)

Signed-off-by: Lucas Liebenwein <[email protected]>

[https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549)

Signed-off-by: Shixiaowei02 <[email protected]>

[NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels  (NVIDIA#9551)

Signed-off-by: Neta Zmora <[email protected]>

[https://nvbugs/5688388][fix] fix: Reducing num request in disagg test to speed up (NVIDIA#9598)

Signed-off-by: Patrice Castonguay <[email protected]>

[TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200)

Signed-off-by: Lucas Liebenwein <[email protected]>
Signed-off-by: greg-kwasniewski1 <[email protected]>
Co-authored-by: Lucas Liebenwein <[email protected]>

[NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633)

Signed-off-by: Yu Chi Li <[email protected]>

[None][chore] Waive test failing on pre-merge (NVIDIA#9638)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612)

Signed-off-by: Enwei Zhu <[email protected]>

[https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607)

Signed-off-by: Anurag Mukkara <[email protected]>

[TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894)

Signed-off-by: Yiqing Yan <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[None][test] Remove duplicate test cases (NVIDIA#9623)

Signed-off-by: yufeiwu <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572)

Signed-off-by: yuhangh <[email protected]>

[TRTLLM-9242][doc] Add examples showcasing openai compatible APIs (NVIDIA#9520)

Signed-off-by: Junyi Xu <[email protected]>

[None][chore] AutoDeploy update cuda stream manager for multi-device (NVIDIA#9575)

Signed-off-by: Suyog Gupta <[email protected]>

[TRTLLM-9391][chore] Automatically estimate required workspace. (NVIDIA#9535)

Signed-off-by: Bo Li <[email protected]>

[https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647)

Signed-off-by: Balaram Buddharaju <[email protected]>

[https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622)

Signed-off-by: FredricZ-2007 <[email protected]>

[TRTLLM-8241][feat] Aliasing to comply to LlmArgs (NVIDIA#9586)

Signed-off-by: Pengyun Lin <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9593)

Signed-off-by: Jie Li <[email protected]>
Co-authored-by: Jie Li <[email protected]>

[TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392)

Signed-off-by: Junyi Xu <[email protected]>

[None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613)

Signed-off-by: nv-guomingz <[email protected]>

[None][feat] update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510)

Signed-off-by: Perkz Zheng <[email protected]>

[None][doc] Replace the tensorrt icon with torch icon on overview.md (NVIDIA#9644)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637)

Signed-off-by: Patrice Castonguay <[email protected]>

[https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253)

Signed-off-by: Michal Guzek <[email protected]>

[None][fix] Fix wide ep MoE error (NVIDIA#9642)

Signed-off-by: Iman Tabrizian <[email protected]>

[https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483)

Signed-off-by: Gal Hubara Agam <[email protected]>

[OMNIML-2932] [feat] nvfp4 awq support (NVIDIA#8698)

Signed-off-by: weimingc <[email protected]>

[NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668)

Signed-off-by: Lucas Liebenwein <[email protected]>

[NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275)

Signed-off-by: Govind Ramnarayan <[email protected]>

[None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540)

Signed-off-by: Wanli Jiang <[email protected]>

[TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396)

Signed-off-by: Yiqing Yan <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669)

[TRTLLM-5093][infra] Write env variables to a file in the interactive debug session (NVIDIA#6792)

Signed-off-by: Yiqing Yan <[email protected]>

[None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381)

Signed-off-by: Nekofish-L <[email protected]>

[TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838)

Signed-off-by: Jin Li <[email protected]>

[None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667)

Signed-off-by: Tailing Yuan <[email protected]>

[TRTLLM-8274][feat] Check if executor is shutdown in /health entrypoint (NVIDIA#9057)

Signed-off-by: Junyi Xu <[email protected]>

[NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556)

Signed-off-by: Tal Cherckez <[email protected]>
Signed-off-by: tcherckez-nvidia <[email protected]>
Co-authored-by: Neta Zmora <[email protected]>

[None][ci] unwaive tests (NVIDIA#9651)

Signed-off-by: Yan Chunwei <[email protected]>

[None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225)

Signed-off-by: Yoray Zack <[email protected]>
Signed-off-by: zackyoray <[email protected]>

[None][test] rename wide ep and disagg metric name in perf test (NVIDIA#9704)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>

[https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617)

Signed-off-by: Jin Li <[email protected]>

[None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562)

Signed-off-by: Anthony Chang <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9662)

Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: xinhe-nv <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608)

Signed-off-by: Aurelien Chartier <[email protected]>

[None][infra] Add container notices and documentation (NVIDIA#9185)

Signed-off-by: Parker Drake <[email protected]>

[TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440)

Signed-off-by: Yiqing Yan <[email protected]>

[None][doc] Add feature docs for helix parallelism (NVIDIA#9684)

Signed-off-by: Balaram Buddharaju <[email protected]>

[TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692)

Signed-off-by: Yiqing Yan <[email protected]>

[None][doc] Added line about partial reuse (NVIDIA#7846)

Signed-off-by: thorjohnsen <[email protected]>

[TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714)

Signed-off-by: Lizhi Zhou <[email protected]>

[https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694)

Signed-off-by: Lizhi Zhou <[email protected]>

[TRTLLM-9562] [doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711)

Signed-off-by: Kaiyu Xie <[email protected]>

[NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641)

Signed-off-by: Govind Ramnarayan <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None] [tests] Unwaive EPLB tests (NVIDIA#9625)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714)

Signed-off-by: yufeiwu <[email protected]>

[TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583)

Signed-off-by: Robin Kobus <[email protected]>

[None][refactor] Improve request processing function in sampler (NVIDIA#9671)

Signed-off-by: Robin Kobus <[email protected]>

[https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676)

Signed-off-by: jthomson04 <[email protected]>

[None][infra] Update allowed list 20251204 (NVIDIA#9718)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719)

Signed-off-by: Chenghao Zhang <[email protected]>

[None][chore] Waive flakey disagg tests (NVIDIA#9749)

Signed-off-by: Mike Iovine <[email protected]>

[https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311)

Signed-off-by: Iman Tabrizian <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325)

Signed-off-by: jthomson04 <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482)

Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443)

Signed-off-by: Pengyun Lin <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359)

Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][infra] Updated Linux installation guide (NVIDIA#9485)

Signed-off-by: Yiqing Yan <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548)

Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][fix] Fix triton moe load_weight (NVIDIA#9649)

Signed-off-by: shuyix <[email protected]>

[None][fix] fix a bug: deepseek_fp8_block_scales in TRTLLMGEN-MoE use 2D x_sf instead of 1D (NVIDIA#9658)

Signed-off-by: xxi <[email protected]>

[TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592)

Signed-off-by: Enwei Zhu <[email protected]>

[TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664)

Signed-off-by: ixlmar <[email protected]>

[TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682)

Signed-off-by: Jonas Li <[email protected]>
Co-authored-by: Kaiyu Xie <[email protected]>

[None][fix] enable hmac in RPC (NVIDIA#9745)

Signed-off-by: Superjomn <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646)

Signed-off-by: Junyi Xu <[email protected]>

[None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769)

Signed-off-by: qqiao <[email protected]>

[None][fix] Several minor fixes to CI setting (NVIDIA#9765)

Signed-off-by: Yanchao Lu <[email protected]>

[OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679)

Signed-off-by: Chenjie Luo <[email protected]>

[None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314)

Signed-off-by: Ludwig Schneider <[email protected]>

[TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800)

Signed-off-by: Chenfei Zhang <[email protected]>

[None][test] add ntp tolerance in time metrics verification (NVIDIA#9741)

Signed-off-by: zhengd-nv <[email protected]>

[TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645)

[https://nvbugs/5422621][test] Add GB 200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506)

Signed-off-by: FredricZ-2007 <[email protected]>

[None][fix] Fix two tuning cache miss issues. (NVIDIA#9743)

Signed-off-by: Yukun He <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9706] [doc] Update wide EP documents (NVIDIA#9724)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>
Co-authored-by: yufeiwu-nv <[email protected]>
Co-authored-by: Larry Xu <[email protected]>

[None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773)

Signed-off-by: qqiao <[email protected]>

[None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768)

Signed-off-by: Fanrong Li <[email protected]>

[None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. (NVIDIA#9690)

Signed-off-by: Mindy Li <[email protected]>

[TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696)

Signed-off-by: nv-guomingz <[email protected]>

[None][chore] Remove closed bugs (NVIDIA#9770)

Signed-off-by: xinhe-nv <[email protected]>

[None][infra] update mooncake in docker images (NVIDIA#9584)

Signed-off-by: zhengd-nv <[email protected]>
Signed-off-by: Zheng Duan <[email protected]>

[None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686)

Signed-off-by: FredricZ-2007 <[email protected]>
Signed-off-by: Kaiyu Xie <[email protected]>
Co-authored-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511)

Signed-off-by: FredricZ-2007 <[email protected]>

[http://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775)

Signed-off-by: Lizhi Zhou <[email protected]>

[None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661)

Signed-off-by: qgai <[email protected]>

ray + updatew works

trtllm works in async env

trtllm works in sync and async env

ray + updatew works

rebase to the updated verl

server mode

still cherry pick

still cherry pick

still cherry pick

integrated http interface

hang at RyExecutor create workers ray.remote

clean code

use tensorrt_llm.rlhf_utils

Signed-off-by: Liwei Ma <[email protected]>

placement, asyncllm, and basic tests
Signed-off-by: Erin Ho <[email protected]>

connect sleep and wakeup; Add support to pass None to update_weights
Signed-off-by: Erin Ho <[email protected]>

Batching ctx for IFB scheduler

Signed-off-by: Yuan Tong <[email protected]>

accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored
Signed-off-by: Erin Ho <[email protected]>

fix e2e integration

Signed-off-by: Superjomn <[email protected]>

update asyncllm, other nits
Signed-off-by: Erin Ho <[email protected]>

fix init setup

Signed-off-by: Erin Ho <[email protected]>

Fix TRTLLMSampler logprobs perf

Signed-off-by: Yuan Tong <[email protected]>

fix and cleanup
Signed-off-by: Erin Ho <[email protected]>

fix server

Signed-off-by: Erin Ho <[email protected]>

Revert "Batching ctx for IFB scheduler"

This reverts commit b51aac0

Signed-off-by: Yuan Tong <[email protected]>

update & address comments

Signed-off-by: Erin Ho <[email protected]>
usberkeley pushed a commit to usberkeley/TensorRT-LLM that referenced this pull request Dec 11, 2025
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 11, 2025
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 13, 2025