
Conversation

@ajrasane ajrasane commented Oct 16, 2025

Summary by CodeRabbit

  • New Features

    • Added a --max_batch_size CLI parameter to configure the maximum batch size for the model.
    • Introduced a wrapper module for compiled models to provide consistent integration with existing pipelines.
  • Refactor

    • Updated the quantized model loading process.
    • Streamlined main execution flow for model compilation and initialization.

Description

Updated the Flux example to be compatible with the latest version of AutoDeploy.

Test Coverage

TODO: Add integration test

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • [ ] Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
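
For example, typical invocations look like this (the pipeline id and stage name below are placeholders, not values from this PR):

```
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --reuse-test 12345 --stage-list "A10-PyTorch-1"
```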

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

coderabbitai bot commented Oct 16, 2025

📝 Walkthrough

Modifies the Flux model compilation pipeline to replace the graph fusion and quantization steps with load_buffers_and_params for quantized-state restoration, introduces a TransformerWrapper for a consistent module interface, adds a max_batch_size CLI parameter, and propagates it through the compiler configuration.

Changes

Quantized State Loading & Model Wrapping — examples/auto_deploy/build_and_run_flux.py
Removes the fuse_gemms and quantize imports; adds a load_buffers_and_params import from tensorrt_llm._torch.auto_deploy.transformations._graph; introduces a new TransformerWrapper class exposing a forward method and a no-op cache_context method; updates the model assignment to wrap the compiled graph with TransformerWrapper(gm, config) instead of assigning it directly.

Compiler Configuration & CLI — examples/auto_deploy/build_and_run_flux.py
Adds a --max_batch_size CLI argument with type=int and default=1; propagates max_batch_size from the CLI args through the compiler invocation via a new parameter in compiler_cls(gm, args=(), max_batch_size=args.max_batch_size, kwargs=flux_kwargs).
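
A minimal sketch of how these pieces could fit together, based on the summary above (the registry lookup key and the surrounding variables gm, args, flux_kwargs, pipe, and config are assumptions about the script's context, not verbatim code from the PR):

```python
import torch.nn as nn


class TransformerWrapper(nn.Module):
    """Sketch: expose a compiled graph through the interface the pipeline expects."""

    def __init__(self, gm, config):
        super().__init__()
        self.gm = gm
        self.config = config  # diffusers pipelines read transformer.config

    def forward(self, *args, **kwargs):
        # Delegate directly to the compiled GraphModule.
        return self.gm(*args, **kwargs)

    def cache_context(self, *_args, **_kwargs):
        # Intentional no-op; the review below notes there are currently no call sites.
        pass


# Wiring sketch: compile with the new max_batch_size plumbing, then wrap the result.
compiler_cls = CompileBackendRegistry.get(args.compile_backend)  # assumed registry key
model = compiler_cls(
    gm, args=(), max_batch_size=args.max_batch_size, kwargs=flux_kwargs
).compile()
pipe.transformer = TransformerWrapper(model, config)
```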

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
Description Check — ⚠️ Warning
Explanation: The PR description is incomplete and lacks sufficient detail. The "Description" section only states "Updated the Flux example to be compatible with the latest version of auto deploy," which is vague and does not explain what specifically changed or why. The "Test Coverage" section is marked "TODO: Add integration test," so no test coverage has been provided for these significant changes, and the PR checklist remains unchecked. Given that the raw summary shows substantial structural changes, including a new public class, modified compilation paths, and removed transformations, the description fails to adequately communicate the modifications.
Resolution: Expand the "Description" section to explain the specific changes: introduce TransformerWrapper as a wrapper around compiled models, switch from fuse_gemms/quantize to load_buffers_and_params for quantized state restoration, add max_batch_size CLI parameter support, and explain why these changes improve the autodeploy pipeline. For "Test Coverage," either name concrete tests or explicitly commit to adding integration tests before merge. Finally, review and check off the applicable items in the PR Checklist.

Docstring Coverage — ⚠️ Warning
Explanation: Docstring coverage is 16.67%, which is below the required threshold of 80.00%.
Resolution: Run @coderabbitai generate docstrings to improve docstring coverage.

Title Check — ❓ Inconclusive
Explanation: The PR title "[None][chore] Update the Flux autodeploy example" is generic and vague. It correctly names the area being changed, but it fails to convey the nature or significance of the modifications: introducing a new TransformerWrapper class, changing the quantization approach via load_buffers_and_params, adding max_batch_size CLI support, and modifying the compilation flow. The title reads like a catch-all descriptor rather than a specific summary of the primary changes.
Resolution: Use a more descriptive title that captures the key architectural changes, such as "[None][feat] Introduce TransformerWrapper and refactor Flux autodeploy compilation path" or "[None][refactor] Update Flux autodeploy with new quantization and batch size support".

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46ee7ac and 08b4363.

📒 Files selected for processing (1)
  • examples/auto_deploy/build_and_run_flux.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • examples/auto_deploy/build_and_run_flux.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • examples/auto_deploy/build_and_run_flux.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • examples/auto_deploy/build_and_run_flux.py
🧬 Code graph analysis (1)
examples/auto_deploy/build_and_run_flux.py (3)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (1)
  • load_buffers_and_params (32-68)
tensorrt_llm/_torch/auto_deploy/compile/compiler.py (3)
  • CompileBackendRegistry (12-31)
  • get (25-27)
  • compile (47-48)
tensorrt_llm/_torch/auto_deploy/compile/backends/torch_opt.py (1)
  • compile (26-28)
🪛 Ruff (0.14.0)
examples/auto_deploy/build_and_run_flux.py

26-26: Unused method argument: args (ARG002)
26-26: Unused method argument: kwargs (ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (6)
examples/auto_deploy/build_and_run_flux.py (6)

10-10: LGTM!

The import of load_buffers_and_params is appropriate for the new quantized weight loading approach.


139-144: LGTM!

The max_batch_size CLI argument is well-defined with a sensible default value of 1.
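
For reference, such an argument definition typically looks like the following (only the flag name, type, and default are taken from the review; the parser variable and help string are assumptions):

```python
parser.add_argument(
    "--max_batch_size",
    type=int,
    default=1,
    help="Maximum batch size the compiled model should support.",
)
```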


177-178: LGTM!

The TransformerWrapper instantiation correctly wraps the compiled model with the configuration, providing a consistent interface for the pipeline.


167-171: Confirm permissive weight-loading settings
The call at examples/auto_deploy/build_and_run_flux.py:169–171 uses strict_missing=False, strict_unexpected=False, and clone=False, which deviates from the usual strict_missing=True pattern and silently ignores mismatched keys while sharing tensor memory. Confirm this is intentional and won’t mask missing parameters or cause aliasing issues.
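
In sketch form, the permissive call pattern being questioned looks roughly like this (only the keyword flags are quoted from the file; the positional arguments are hypothetical):

```python
# Permissive restore: missing and unexpected keys are tolerated,
# and clone=False shares tensor storage instead of copying it.
load_buffers_and_params(
    gm,                       # traced/compiled graph module
    quantized_model,          # hypothetical source module holding the quantized state
    strict_missing=False,     # do not fail on keys absent from the source
    strict_unexpected=False,  # do not fail on extra keys in the source
    clone=False,              # alias tensors rather than deep-copying them
)
```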


16-36: Track the cache_context limitation and file a follow-up issue. The no-op implementation is safe—there are no call sites—and the unused args/kwargs are required by the interface (prefix them with underscores to silence linters if desired).
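
If silencing the Ruff findings is desired, the underscore rename suggested above would look like this (the signature shape is inferred from the ARG002 messages, not copied from the file):

```python
def cache_context(self, *_args, **_kwargs):
    # Underscore-prefixed names match Ruff's dummy-argument pattern,
    # so ARG002 no longer fires on these interface-required arguments.
    pass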


174-174: max_batch_size is supported by all compiler backends
TorchCudagraphCompiler (and thus TorchOptCompiler) explicitly handles max_batch_size, and the base CompilerBackend’s __init__ accepts extra kwargs, so passing this parameter will not cause a runtime error.
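
The tolerance described above amounts to the base class accepting and ignoring extra keywords; a purely illustrative sketch (class and parameter names beyond those quoted in the comment are assumptions):

```python
class CompilerBackend:  # illustrative only, not the actual base class
    def __init__(self, gm, args=(), kwargs=None, max_batch_size=None, **_extra):
        self.gm = gm
        self.args = args
        self.kwargs = kwargs or {}
        # Backends that size CUDA-graph capture can use this; others ignore it.
        self.max_batch_size = max_batch_size
```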



@ajrasane ajrasane marked this pull request as draft October 16, 2025 16:26
@ajrasane ajrasane self-assigned this Oct 16, 2025
@ajrasane ajrasane force-pushed the user/ajrasane/torch_diffusers branch 2 times, most recently from 813986e to 315756e on October 28, 2025 at 04:45
@ajrasane ajrasane force-pushed the user/ajrasane/torch_diffusers branch from d9329d5 to 623ae89 on November 7, 2025 at 00:37
@suyoggupta

/bot run

@ajrasane ajrasane requested a review from cjluo-nv November 7, 2025 03:30
@ajrasane ajrasane force-pushed the user/ajrasane/torch_diffusers branch from 623ae89 to 6dcb2c7 on November 7, 2025 at 03:32
@suyoggupta

/bot run

@tensorrt-cicd

PR_Github #23802 [ run ] triggered by Bot. Commit: 6dcb2c7

@ajrasane ajrasane marked this pull request as ready for review November 7, 2025 03:55
@ajrasane ajrasane requested a review from a team as a code owner November 7, 2025 03:55
@tensorrt-cicd

PR_Github #23802 [ run ] completed with state FAILURE. Commit: 6dcb2c7
/LLM/main/L0_MergeRequest_PR pipeline #17918 completed with status: 'FAILURE'

@ajrasane ajrasane force-pushed the user/ajrasane/torch_diffusers branch from 1b735d5 to 48f8a38 on November 14, 2025 at 13:57
@Fridah-nv

/bot run

@tensorrt-cicd

PR_Github #24628 [ run ] triggered by Bot. Commit: bcc687f

@tensorrt-cicd

PR_Github #24628 [ run ] completed with state SUCCESS. Commit: bcc687f
/LLM/main/L0_MergeRequest_PR pipeline #18592 completed with status: 'FAILURE'

@Fridah-nv

/bot run

@tensorrt-cicd

PR_Github #24730 [ run ] triggered by Bot. Commit: bcc687f

@tensorrt-cicd

PR_Github #24730 [ run ] completed with state FAILURE. Commit: bcc687f
/LLM/main/L0_MergeRequest_PR pipeline #18664 completed with status: 'FAILURE'

@Fridah-nv

/bot run

@tensorrt-cicd

PR_Github #24739 [ run ] triggered by Bot. Commit: d486f99

@tensorrt-cicd

PR_Github #24739 [ run ] completed with state FAILURE. Commit: d486f99

@Fridah-nv

/bot run

@tensorrt-cicd

PR_Github #24794 [ run ] triggered by Bot. Commit: 7e5f4cf

@tensorrt-cicd

PR_Github #24794 [ run ] completed with state SUCCESS. Commit: 7e5f4cf
/LLM/main/L0_MergeRequest_PR pipeline #18709 completed with status: 'FAILURE'

@Fridah-nv

/bot run

@tensorrt-cicd

PR_Github #24805 [ run ] triggered by Bot. Commit: 7e5f4cf

@tensorrt-cicd

PR_Github #24805 [ run ] completed with state SUCCESS. Commit: 7e5f4cf
/LLM/main/L0_MergeRequest_PR pipeline #18718 completed with status: 'FAILURE'

@Fridah-nv

/bot run

@tensorrt-cicd

PR_Github #24933 [ run ] triggered by Bot. Commit: 7e5f4cf

@tensorrt-cicd

PR_Github #24933 [ run ] completed with state SUCCESS. Commit: 7e5f4cf
/LLM/main/L0_MergeRequest_PR pipeline #18832 completed with status: 'SUCCESS'

@Fridah-nv Fridah-nv merged commit 8d7cda2 into NVIDIA:main Nov 18, 2025
5 checks passed
@github-project-automation github-project-automation bot moved this from In review to Done in AutoDeploy Board Nov 18, 2025
lkomali pushed a commit to lkomali/TensorRT-LLM that referenced this pull request Nov 19, 2025
Signed-off-by: ajrasane <[email protected]>
Co-authored-by: Frida Hou <[email protected]>
Signed-off-by: lkomali <[email protected]>
