Deepseek R1 FP8 Support on Blackwell #6486
Conversation
Commits (signed off by Barry Kang, Fanrong Li, Yuxian Qiu, and Zongfei Jing), highlights:

- Add a Triton masked index copy for the DeepGEMM MoE path and remove the slice-based variant; enable masked grouped GEMM.
- Use a local barrier to avoid the multi-node hang issue, and fix the hang in the single-node case.
- Optimize the masked index copy and index gather; remove torch.compile from preprocess_after_permute due to a compatibility issue.
/bot kill

/bot run --disable-fail-fast

PR_Github #13701 [ run ] triggered by Bot

PR_Github #13702 [ ] completed with state

PR_Github #13674 [ run ] completed with state

PR_Github #13701 [ run ] completed with state
Actionable comments posted: 0
♻️ Duplicate comments (6)
tests/unittest/_torch/modules/test_fused_moe.py (1)
481-497: Remove duplicate variable initialization in the grouped_gemm function.

The `d` and `m_indices` variables are initialized twice in the nested function.

```diff
 def grouped_gemm(a: torch.Tensor, b: torch.Tensor, a_sf: torch.Tensor,
                  b_sf: torch.Tensor,
                  offset_array: torch.Tensor) -> torch.Tensor:
-    d = torch.empty((a.shape[0], b.shape[1]),
-                    device=b.device,
-                    dtype=torch.bfloat16)
-    m_indices = torch.empty(a.shape[0], device=b.device, dtype=torch.int32)
-    for idx in range(offset_array.numel() - 1):
-        m_indices[offset_array[idx]:offset_array[idx + 1]] = idx
-
     num_groups, n, k_ = b.shape
     d = torch.empty((a.shape[0], b.shape[1]),
                     device=b.device,
                     dtype=torch.bfloat16)
     m_indices = torch.empty(a.shape[0], device=b.device, dtype=torch.int32)
     for idx in range(offset_array.numel() - 1):
         m_indices[offset_array[idx]:offset_array[idx + 1]] = idx
```
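For context on what this test helper computes: a grouped GEMM runs one GEMM per expert group over contiguous row ranges delimited by offset_array. A dequantized eager-mode sketch (hypothetical name, fp32 math for clarity, not the test's actual helper):

```python
import torch

def grouped_gemm_ref(a: torch.Tensor, b: torch.Tensor,
                     offset_array: torch.Tensor) -> torch.Tensor:
    # a: [M_total, K]; b: [G, N, K]; offset_array: [G + 1] row boundaries,
    # so rows offset_array[g]:offset_array[g + 1] belong to group g.
    d = torch.empty(a.shape[0], b.shape[1], dtype=torch.bfloat16, device=a.device)
    for g in range(b.shape[0]):
        s, e = int(offset_array[g]), int(offset_array[g + 1])
        d[s:e] = (a[s:e].float() @ b[g].float().t()).to(torch.bfloat16)
    return d
```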
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (2)

368-374: Address workaround limitations and fix line length.

The code contains a workaround that restricts functionality to top-1 routing, and one line exceeds the 120-character limit.

Apply this diff to fix the line length and document the limitation:

```diff
         if self.apply_router_weight_on_input:
-            assert self.routing_method.top_k == 1, "Current workaround only supports top-1 routing"
-            assert x.dtype != torch.float8_e4m3fn, "Current workaround for apply_router_weight_on_input does not support fp8 input"
+            assert self.routing_method.top_k == 1, "Current workaround only supports top-1 routing"
+            assert x.dtype != torch.float8_e4m3fn, (
+                "Current workaround for apply_router_weight_on_input does not support fp8 input"
+            )
             x = x * token_final_scales.to(x.dtype)
             # TODO: remove this once we have correct fusedmoe kernel ready
             token_final_scales = None
```

The TODO indicates an incomplete implementation. Would you like me to open an issue to track the proper implementation of router weight application with the fused MoE kernel?
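The workaround itself is sound for top-1 routing: since each token is dispatched to exactly one expert, the routing weight can be folded into the activations up front. A minimal sketch of the same idea (hypothetical function name, assuming non-FP8 activations):

```python
import torch

def fold_router_weight_into_input(x: torch.Tensor,
                                  token_final_scales: torch.Tensor) -> torch.Tensor:
    # x: [num_tokens, hidden]; token_final_scales: [num_tokens, 1] top-1 weights.
    # With top_k == 1 the expert output scales linearly with its input, so
    # pre-scaling the tokens is equivalent to scaling the expert outputs.
    return x * token_final_scales.to(x.dtype)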
247-290: Add comprehensive documentation for the DeepGEMM function.

The function has extensive constraints and requirements that should be documented for maintainability.

Add documentation explaining the constraints:

```diff
 @nvtx_range("[DG]")
 def deepgemm_fp8_group_blockwise_gemm(
     a: torch.Tensor,
     b: torch.Tensor,
     sfa: torch.Tensor,
     sfb: torch.Tensor,
     masked_m: torch.Tensor,
     expected_m: int,
 ) -> torch.Tensor:
+    """
+    Perform FP8 group blockwise GEMM using the DeepGEMM library.
+
+    Args:
+        a: Input activation tensor [G, M, K] in FP8 format
+        b: Weight tensor [G, N, K] in FP8 format
+        sfa: Scaling factors for tensor a
+        sfb: Scaling factors for tensor b
+        masked_m: Per-group M dimensions as int32 tensor
+        expected_m: Expected maximum M dimension
+
+    Returns:
+        Output tensor [G, M, N] in bfloat16
+
+    Constraints:
+        - Tensors must be contiguous with stride(-1) == 1
+        - All tensors must be FP8 e4m3fn format
+        - Output must be N-major for optimal performance
+    """
     d = torch.empty((a.shape[0], a.shape[1], b.shape[1]),
                     device=b.device,
                     dtype=torch.bfloat16)
```
tensorrt_llm/quantization/utils/fp8_utils.py (3)

32-52: Fix naming inconsistency between function name and return type.

The function name suggests E8M0 format but it returns E4M3FN format, which is confusing.

Either rename the function or fix the return type:

```diff
 @nvtx_range("[DG] quantization")
 @torch.compile(dynamic=True)
-def per_token_cast_to_fp8_e8m0(
+def per_token_cast_to_fp8_e4m3fn(
         x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
+    """
+    Cast tensor to FP8 E4M3FN format with per-token quantization.
+
+    Args:
+        x: Input tensor (2D or 3D), last dimension must be divisible by 128
+
+    Returns:
+        Tuple of (quantized tensor in FP8 E4M3FN, scaling factors)
+    """
```
54-80: Fix naming inconsistency in per_block_cast_to_fp8_e8m0.

Similar to per_token_cast_to_fp8_e8m0, this function name suggests E8M0 format but actually returns E4M3FN format (lines 65 and 77).

Either rename the function or change the implementation:

```diff
-def per_block_cast_to_fp8_e8m0(
+def per_block_cast_to_fp8_e4m3fn(
         x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
```
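A per-block variant follows the same pattern but shares one scale per 128x128 tile rather than per token group; again a rough eager sketch with a hypothetical name, not the PR's kernel:

```python
import torch

def per_block_cast_to_fp8_ref(x: torch.Tensor):
    # Quantize a 2-D tensor in 128x128 tiles, one float32 scale per tile.
    m, n = x.shape
    pm, pn = -(-m // 128) * 128, -(-n // 128) * 128  # round up to tile multiples
    padded = torch.zeros(pm, pn, dtype=x.dtype, device=x.device)
    padded[:m, :n] = x
    tiles = padded.view(pm // 128, 128, pn // 128, 128)
    amax = tiles.abs().float().amax(dim=(1, 3), keepdim=True).clamp(min=1e-4)
    scale = amax / 448.0  # 448.0 == torch.finfo(torch.float8_e4m3fn).max
    q = (tiles / scale).to(torch.float8_e4m3fn)
    return q.view(pm, pn)[:m, :n].contiguous(), scale.squeeze(1).squeeze(-1)
```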
470-480: Remove incorrect assertion.

The assertion at line 472 checks that the last dimension is divisible by 2, but unlike silu_and_mul_masked_post_quant_fwd, this function does not divide k by 2.

```diff
     assert input.is_contiguous()
     assert len(input.shape) == 2
-    assert input.shape[-1] % 2 == 0
     # FP8 quantization parameters
     finfo = torch.finfo(torch.float8_e4m3fn)
```
🧹 Nitpick comments (2)
tensorrt_llm/quantization/utils/fp8_utils.py (2)
304-316: Fix docstring formatting issues.

The docstring needs proper formatting according to the D205 and D415 style rules.

```diff
 def silu_and_mul_masked_post_quant_fwd(
     input: torch.Tensor,
     quant_group_size: int,
     masked_m: torch.Tensor,
     scale_ue8m0: bool = False,
 ):
     """
-    input shape [g, m, k]
-    output shape [g, m, k // 2], dtype fp8
-    output_scale [g, k // 4, m // 2 // 128], dtype int32
-    quant_group_size int
-    masked_m shape [g]
+    Perform fused SiLU activation and multiplication with masked post-quantization.
+
+    Args:
+        input: Input tensor of shape [g, m, k]
+        quant_group_size: Quantization group size (int)
+        masked_m: Mask tensor of shape [g]
+        scale_ue8m0: Whether to scale to UE8M0 format
+
+    Returns:
+        Tuple of:
+        - output: Tensor of shape [g, m, k // 2] in fp8 dtype
+        - output_scale: Tensor of shape [g, k // 4, m] in int32 dtype
     """
```
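The unfused semantics being quantized here are standard SwiGLU gating; a minimal eager sketch (hypothetical name, without the masking or FP8 step):

```python
import torch
import torch.nn.functional as F

def silu_and_mul_ref(x: torch.Tensor) -> torch.Tensor:
    # x: [..., 2*d] -> [..., d]; SiLU the first half, multiply by the second.
    gate, up = x.chunk(2, dim=-1)
    return F.silu(gate) * up
```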
457-468: Fix docstring formatting issues.

The docstring needs proper formatting according to the D205 and D415 style rules.

```diff
 def per_token_quant_and_transform(
     input: torch.Tensor,
     quant_group_size: int = 128,
     scale_ue8m0: bool = True,
 ):
     """
-    input shape [g, m, k]
-    output shape [g, m, k // 2], dtype fp8
-    output_scale [g, k // 4, m // 2 // 128], dtype int32
-    quant_group_size int
-    masked_m shape [g]
+    Perform per-token quantization and transformation to FP8.
+
+    Args:
+        input: Input tensor of shape [m, k]
+        quant_group_size: Quantization group size (default: 128)
+        scale_ue8m0: Whether to scale to UE8M0 format (default: True)
+
+    Returns:
+        Tuple of:
+        - output: Tensor of shape [m, k] in fp8 dtype
+        - output_scale: Tensor of shape [m, scale_k] in int32 dtype
+    """
```
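UE8M0 scales carry only an exponent, so rounding a float scale into that format amounts to ceiling it to the next power of two. A sketch of the idea (hypothetical name, mirroring the ceil_to_ue8m0 helper referenced from tests/unittest/_torch/helpers.py):

```python
import torch

def ceil_to_ue8m0_ref(scale: torch.Tensor) -> torch.Tensor:
    # UE8M0 is an unsigned, exponent-only format: round each positive scale
    # up to the nearest power of two so it is exactly representable.
    return torch.exp2(torch.ceil(torch.log2(scale)))
```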
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- examples/llm-api/quickstart_advanced.py (1 hunks)
- requirements.txt (1 hunks)
- tensorrt_llm/_torch/models/checkpoints/hf/weight_loader.py (2 hunks)
- tensorrt_llm/_torch/models/modeling_deepseekv3.py (6 hunks)
- tensorrt_llm/_torch/modules/attention.py (6 hunks)
- tensorrt_llm/_torch/modules/fused_moe/create_moe.py (3 hunks)
- tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (1 hunks)
- tensorrt_llm/_torch/modules/fused_moe/quantization.py (2 hunks)
- tensorrt_llm/_torch/modules/linear.py (4 hunks)
- tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
- tensorrt_llm/_utils.py (2 hunks)
- tensorrt_llm/llmapi/llm_args.py (1 hunks)
- tensorrt_llm/quantization/utils/__init__.py (1 hunks)
- tensorrt_llm/quantization/utils/fp8_utils.py (1 hunks)
- tests/unittest/_torch/helpers.py (2 hunks)
- tests/unittest/_torch/modules/test_fused_moe.py (3 hunks)
- tests/unittest/_torch/thop/test_fp8_block_scale_gemm.py (1 hunks)
- tests/unittest/test_pip_install.py (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- tensorrt_llm/_torch/models/checkpoints/hf/weight_loader.py
- tensorrt_llm/llmapi/llm_args.py
- requirements.txt
🚧 Files skipped from review as they are similar to previous changes (12)
- tensorrt_llm/_torch/pyexecutor/_util.py
- examples/llm-api/quickstart_advanced.py
- tests/unittest/test_pip_install.py
- tensorrt_llm/_utils.py
- tensorrt_llm/quantization/utils/__init__.py
- tests/unittest/_torch/thop/test_fp8_block_scale_gemm.py
- tensorrt_llm/_torch/models/modeling_deepseekv3.py
- tensorrt_llm/_torch/modules/linear.py
- tests/unittest/_torch/helpers.py
- tensorrt_llm/_torch/modules/fused_moe/quantization.py
- tensorrt_llm/_torch/modules/fused_moe/create_moe.py
- tensorrt_llm/_torch/modules/attention.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning:
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
tests/unittest/_torch/modules/test_fused_moe.py
📚 Learning:
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
Applied to files:
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
🧬 Code Graph Analysis (2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py (8)

- tensorrt_llm/_utils.py (1): nvtx_range (834-853)
- tensorrt_llm/_torch/utils.py (1): Fp4QuantizedTensor (92-99)
- tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1): CutlassFusedMoE (16-433)
- tensorrt_llm/_torch/modules/fused_moe/interface.py (2): MoEWeightLoadingMode (12-14), has_deepseek_fp8_block_scales (115-118)
- tensorrt_llm/_torch/modules/fused_moe/routing.py (1): BaseMoeRoutingMethod (26-49)
- tests/unittest/_torch/modules/test_fused_moe.py (1): swiglu_fused_moe (477-479)
- tensorrt_llm/_torch/distributed/communicator.py (1): tp_size (46-47)
- tensorrt_llm/quantization/utils/fp8_utils.py (1): silu_and_mul_masked_post_quant_fwd (303-382)

tensorrt_llm/quantization/utils/fp8_utils.py (2)

- tensorrt_llm/_utils.py (1): nvtx_range (834-853)
- tests/unittest/_torch/helpers.py (5): ceil_div (7-8), align (11-12), ceil_to_ue8m0 (15-16), per_token_cast_to_fp8_e8m0 (44-52), per_block_cast_to_fp8_e8m0 (55-68)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_deepgemm.py
370-370: Line too long (131 > 120)
(E501)
tensorrt_llm/quantization/utils/fp8_utils.py
309-314: 1 blank line required between summary line and description
(D205)
309-314: First line should end with a period, question mark, or exclamation point
Add closing punctuation
(D415)
459-464: 1 blank line required between summary line and description
(D205)
459-464: First line should end with a period, question mark, or exclamation point
Add closing punctuation
(D415)
Superjomn
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM from the llmapi perspective
/bot reuse-pipeline

PR_Github #13748 [ ] completed with state
Signed-off-by: Barry Kang <[email protected]>
Signed-off-by: Fanrong Li <[email protected]>
Signed-off-by: Yuxian Qiu <[email protected]>
Signed-off-by: Zongfei Jing <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Barry Kang <[email protected]>
Co-authored-by: Fanrong Li <[email protected]>
Co-authored-by: Yuxian Qiu <[email protected]>
Summary by CodeRabbit

- New Features
- Bug Fixes
- Tests
- Chores: deep_gemm library for enhanced FP8 operations.
- Documentation
Description
Test Coverage
GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.
- --disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.
- --skip-test (OPTIONAL): Skip all test stages, but still run build, package, and sanity-check stages. Note: does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Example: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Example: "A30, H100_PCIe". Note: does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Example: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests in addition to the L0 pre-merge pipeline.
- --post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Example: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL): Flush all logs to the Jenkins console. This significantly increases log volume and may slow down the job.
- --debug (OPTIONAL): Experimental. Enable access to the CI container for debugging. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

An example invocation combining several of these options is shown after this list.
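For instance, a comment combining a few of the documented flags above (a hypothetical example, not taken from this PR) would run only the listed stage without fail-fast:

```
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
```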
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action also kills all currently running builds associated with the pull request. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.