
Conversation

@JunyiXu-nv (Collaborator) commented Nov 24, 2025

Summary by CodeRabbit

  • New Features

    • Added support for reasoning parsers, including DeepSeek-R1 and Qwen3, in the Responses API
    • Enhanced OpenAI-compatible protocol with expanded response event types and text format configuration
  • Improvements

    • Improved responses streaming workflow with flexible multi-path processing
    • Enhanced response history management and state tracking
    • Added optional tokenization control in chat template application


Description

  1. Add non-harmony model support for the Responses API, supporting the same set of features as the harmony model (gpt_oss).
  2. Add tests for non-harmony models.
  3. Modify code to conform to the coding style.
  4. Add more comments.

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline, ensuring that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@JunyiXu-nv requested a review from LinPoly on November 24, 2025 07:11
@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from 6088257 to ceb7ded on November 24, 2025 10:14
@JunyiXu-nv marked this pull request as ready for review on November 24, 2025 10:21
@JunyiXu-nv requested a review from a team as a code owner on November 24, 2025 10:21
@JunyiXu-nv requested a review from syuoni on November 24, 2025 10:21
@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from ceb7ded to 1378f31 on November 24, 2025 10:21
@JunyiXu-nv (Collaborator Author)

/bot run

@coderabbitai bot (Contributor) commented Nov 24, 2025

📝 Walkthrough

This PR refactors the Responses API to support configurable reasoning and tool parsers, replacing direct Harmony adapter dependency with a flexible use_harmony flag. Changes span chat template tokenization control, reasoning parser whitespace handling, protocol/response format updates, server orchestration of streaming paths, and comprehensive responses utility refactoring with Harmony/non-Harmony processing pipelines.

Changes

Cohort / File(s) | Summary
Chat Template Configuration
tensorrt_llm/inputs/utils.py
Added enable_tokenize: bool = False parameter to apply_chat_template to make tokenization behavior configurable during template application.
Reasoning Parser Updates
tensorrt_llm/llmapi/reasoning_parser.py
Modified DeepSeekR1Parser.parse to strip leading/trailing whitespace from both reasoning_content and content segments after partitioning.
Protocol and Response Formats
tensorrt_llm/serve/openai_protocol.py
Added imports for Response*Event types and related constructs; introduced _response_format_text_config_to_guided_decoding_params helper; added StreamingResponsesResponse type alias; changed ResponsesResponse.max_output_tokens to Optional[int]; refactored ResponsesRequest.to_sampling_params to remove default_max_tokens parameter and integrate guided decoding via new helper.
Server Responses Orchestration
tensorrt_llm/serve/openai_server.py
Replaced direct Harmony adapter dependency with flexible use_harmony flag; refactored create_stream_response and preprocessing/response creation paths to accept use_harmony, reasoning_parser, and tool_parser parameters; removed harmony_adapter argument from downstream utilities.
Responses Utility Implementation
tensorrt_llm/serve/responses_utils.py
Large refactor introducing Harmony-enabled multi-path processing: added new public classes (ResponsesStreamingStateTracker, ResponsesStreamingEventsHelper); renamed many public helpers to private underscored variants (get_system_message → _get_system_message, etc.); introduced Harmony/non-Harmony conditional branching; expanded conversation history store logic with richer message handling, capacity management, and LRU-like eviction; added internal pipelines for input preprocessing (_create_input_messages, _create_input_tokens), output postprocessing (_create_output_content), and streaming event sequencing; refactored process_streaming_events and create_response to support both modes.
Test Fixtures and Multi-Model Support
tests/unittest/llmapi/apps/_test_openai_responses.py
Replaced fixed model fixture with parameterized fixture supporting three model paths ("gpt_oss/gpt-oss-20b", "DeepSeek-R1-Distill-Qwen-1.5B", "Qwen3/Qwen3-0.6B"); added conditional reasoning_parser and tool_parser argument construction based on model prefix; updated tool-calling tests with reasoning extraction checks and DeepSeek-R1 early skips.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant OpenAIServer as openai_server.py
    participant ResponseUtils as responses_utils.py
    participant HarmonyAdapter
    participant TokenRenderer
    
    rect rgb(200, 240, 255)
    Note over OpenAIServer,ResponseUtils: use_harmony = true (Harmony path)
    Client->>OpenAIServer: POST /responses
    OpenAIServer->>ResponseUtils: request_preprocess<br/>(use_harmony=true)
    ResponseUtils->>HarmonyAdapter: Initialize & prepare
    OpenAIServer->>ResponseUtils: process_streaming_events<br/>(use_harmony=true, reasoning_parser, tool_parser)
    ResponseUtils->>ResponseUtils: _construct_harmony_messages()
    ResponseUtils->>HarmonyAdapter: Process & emit harmony events
    ResponseUtils->>ResponseUtils: _apply_reasoning_parser()
    ResponseUtils->>ResponseUtils: _apply_tool_parser()
    ResponseUtils->>Client: Stream response.created, response.in_progress, ..., response.completed
    end
    
    rect rgb(240, 200, 255)
    Note over OpenAIServer,ResponseUtils: use_harmony = false (Non-Harmony path)
    Client->>OpenAIServer: POST /responses
    OpenAIServer->>ResponseUtils: request_preprocess<br/>(use_harmony=false)
    ResponseUtils->>ResponseUtils: _create_input_tokens()
    OpenAIServer->>ResponseUtils: process_streaming_events<br/>(use_harmony=false, reasoning_parser, tool_parser)
    ResponseUtils->>TokenRenderer: Render messages
    ResponseUtils->>ResponseUtils: _apply_reasoning_parser()
    ResponseUtils->>ResponseUtils: _apply_tool_parser()
    ResponseUtils->>ResponseUtils: _create_output_content()
    ResponseUtils->>Client: Stream response.created, response.in_progress, delta events, response.completed
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Key areas requiring attention:

  • responses_utils.py: Extensive refactoring with substantial new logic for Harmony/non-Harmony branching, streaming state management, and conversation history handling; verify control flow correctness and state consistency across both paths
  • openai_server.py: Verify that removal of direct Harmony adapter initialization does not break edge cases; confirm parameter threading through all downstream calls
  • openai_protocol.py: Review changes to ResponsesRequest.to_sampling_params for correctness of max_tokens calculation and guided decoding parameter propagation
  • responses_utils.py public API changes: Numerous function renamings to private variants; verify all call sites updated and public API surface is intentional
  • test parameterization: Ensure model-specific reasoning_parser and tool_parser flags are correctly wired and DeepSeek-R1 test skips do not mask actual failures

Possibly related PRs

Suggested reviewers

  • LinPoly

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title clearly summarizes the main change: adding general-purpose (non-harmony model) support for the Responses API, which aligns with the substantial changes across multiple files enabling non-harmony model paths.
Description check | ✅ Passed | The description adequately explains the PR objectives, including non-harmony model support, test coverage, code style conformance, and additional comments. The PR checklist is completed and matches template requirements.
Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

@tensorrt-cicd (Collaborator)

PR_Github #25550 [ run ] triggered by Bot. Commit: 1378f31

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
tensorrt_llm/serve/openai_server.py (1)

104-111: Guard against self.model_config being None before accessing .model_type

load_pretrained_config failures are caught and self.model_config is set to None (Lines 104–110), but a few lines later self.use_harmony is computed as self.model_config.model_type == "gpt_oss" (Line 146). If config loading fails and DISABLE_HARMONY_ADAPTER is not set, this will raise an AttributeError at server startup.

Consider something like:

-        except Exception:
-            logger.debug("Failed to load AutoConfig for %s", hf_tokenizer_path)
-            self.model_config = None
+        except Exception:
+            logger.debug("Failed to load AutoConfig for %s", hf_tokenizer_path)
+            self.model_config = None
...
-        if disable_harmony:
-            self.use_harmony = False
-        else:
-            self.use_harmony = (self.model_config.model_type == "gpt_oss")
+        if disable_harmony:
+            self.use_harmony = False
+        else:
+            self.use_harmony = (
+                getattr(self.model_config, "model_type", None) == "gpt_oss"
+            )

This preserves existing behavior when the config is available, while falling back to use_harmony=False if it cannot be loaded.

Also applies to: 140-146

tensorrt_llm/serve/responses_utils.py (3)

203-252: ConversationHistoryStore: mutable defaults, missing prev_resp_id handling, and possible infinite loop in _pop_conversation

There are three related correctness issues here:

  1. Mutable default argument for resp_msgs (Lines 204–207).
    resp_msgs = [] creates a shared list across calls and conversations. Combined with self.conversations[conversation_id] = resp_msgs, different conversations can accidentally share the same underlying list (see the minimal demonstration after this list).

  2. prev_resp_id not found still indexed (Lines 239–245).
    You log a warning when prev_resp_id isn’t in response_to_conversation, but then still do
    conversation_id = self.response_to_conversation[prev_resp_id], which will raise KeyError. That contradicts the “warn and continue” intent.

  3. _pop_conversation can be a no-op in a capacity loop (Lines 341–394).
    If the conversation doesn’t contain a complete (user → assistant) (non-harmony) or (user → final) (harmony) span, start_index/end_index end up such that del conversation[start_index:end_index + 1] deletes nothing. Because store_response calls _pop_conversation in a while len(conversation) > conversation_capacity loop, this can become an infinite loop.
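To make issue 1 concrete, here is a minimal, self-contained demonstration of the shared-default pitfall (a hypothetical function, unrelated to the codebase):

def store(msgs=[]):  # one list object is created at function definition time
    msgs.append("entry")
    return msgs

a = store()
b = store()
print(a is b)  # True: both calls mutated the same default list
print(b)       # ['entry', 'entry']

The None-default pattern in the diff below avoids this by creating a fresh list per call.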

I recommend fixing these together so capacity trimming is always safe and side‑effect free between conversations. For example:

-    async def store_response(self,
-                             resp: ResponsesResponse,
-                             resp_msgs: Optional[
-                                 Union[List[Message],
-                                       List[ChatCompletionMessageParam]]] = [],
-                             prev_resp_id: Optional[str] = None) -> None:
+    async def store_response(
+        self,
+        resp: ResponsesResponse,
+        resp_msgs: Optional[
+            Union[List[Message], List[ChatCompletionMessageParam]]
+        ] = None,
+        prev_resp_id: Optional[str] = None,
+    ) -> None:
@@
-        async with self.conversations_lock:
+        # Make a private copy so we never alias the caller's list
+        resp_msgs_list: List[Union[Message, ChatCompletionMessageParam]] = (
+            list(resp_msgs) if resp_msgs is not None else []
+        )
+
+        async with self.conversations_lock:
             conversation_id: str
             if resp_id in self.response_to_conversation:
                 conversation_id = self.response_to_conversation[resp_id]
-                self.conversations[conversation_id].extend(resp_msgs)
-            elif prev_resp_id is not None:
-                if prev_resp_id not in self.response_to_conversation:
-                    logger.warning(
-                        f"Previous response id {prev_resp_id} not found in conversation store"
-                    )
-
-                conversation_id = self.response_to_conversation[prev_resp_id]
-                self.conversations[conversation_id].extend(resp_msgs)
+                self.conversations[conversation_id].extend(resp_msgs_list)
+            elif prev_resp_id is not None and prev_resp_id in self.response_to_conversation:
+                conversation_id = self.response_to_conversation[prev_resp_id]
+                self.conversations[conversation_id].extend(resp_msgs_list)
                 while len(self.conversations[conversation_id]
                           ) > self.conversation_capacity:
                     self._pop_conversation(resp_id)
             else:
-                conversation_id = _random_uuid()
-                self.conversations[conversation_id] = resp_msgs
+                if prev_resp_id is not None:
+                    logger.warning(
+                        "Previous response id %s not found in conversation store, "
+                        "starting a new conversation", prev_resp_id
+                    )
+                conversation_id = _random_uuid()
+                self.conversations[conversation_id] = resp_msgs_list
@@
-    def _pop_conversation(self, resp_id) -> None:
+    def _pop_conversation(self, resp_id) -> None:
@@
-        conversation = self.conversations[conversation_id]
-        if len(conversation) == 0:
-            return
-
-        is_harmony_conversation = isinstance(conversation[0], Message)
-
-        def get_first_conversation_range_harmony():
-            start_index = 0
-            end_index = 0
-            for i, msg in enumerate(conversation):
-                if msg.author.role == Role.USER:
-                    start_index = i
-                elif msg.channel == "final":
-                    end_index = i
-                    break
-
-            return start_index, end_index
-
-        def get_first_conversation_range():
-            start_index = 0
-            end_index = 0
-            for i, msg in enumerate(conversation):
-                if msg.get("role", "") == "user":
-                    start_index = i
-                elif msg.get("role", "") == "assistant":
-                    end_index = i
-                    break
-
-            return start_index, end_index
-
-        start_index, end_index = 0, 0
-        if is_harmony_conversation:
-            start_index, end_index = get_first_conversation_range_harmony()
-        else:
-            start_index, end_index = get_first_conversation_range()
-
-        del conversation[start_index:end_index + 1]
+        conversation = self.conversations[conversation_id]
+        if not conversation:
+            return
+
+        is_harmony_conversation = isinstance(conversation[0], Message)
+
+        def get_first_conversation_range_harmony() -> tuple[int, int]:
+            start_index: Optional[int] = None
+            end_index: Optional[int] = None
+            for i, msg in enumerate(conversation):
+                if msg.author.role == Role.USER and start_index is None:
+                    start_index = i
+                elif start_index is not None and msg.channel == "final":
+                    end_index = i
+                    break
+            if start_index is None:
+                # Fallback: drop from the beginning
+                start_index = 0
+            if end_index is None:
+                # Fallback: drop up to the last message
+                end_index = len(conversation) - 1
+            return start_index, end_index
+
+        def get_first_conversation_range() -> tuple[int, int]:
+            start_index: Optional[int] = None
+            end_index: Optional[int] = None
+            for i, msg in enumerate(conversation):
+                role = msg.get("role", "")
+                if role == "user" and start_index is None:
+                    start_index = i
+                elif start_index is not None and role == "assistant":
+                    end_index = i
+                    break
+            if start_index is None:
+                start_index = 0
+            if end_index is None:
+                end_index = len(conversation) - 1
+            return start_index, end_index
+
+        start_index, end_index = (
+            get_first_conversation_range_harmony()
+            if is_harmony_conversation else
+            get_first_conversation_range()
+        )
+
+        # Always remove at least one element to guarantee progress
+        del conversation[start_index:end_index + 1]




Also applies to: 341-394

402-495: Harmony message construction: mutable default for prev_msgs

_construct_harmony_messages uses a mutable default:

def _construct_harmony_messages(
    request: ResponsesRequest,
    prev_response: Optional[ResponsesResponse],
    prev_msgs: List[Message] = [],
) -> List[Message]:

If any caller mutates prev_msgs (e.g., appending), that mutation will leak across calls. Even if current code only reads it, this is brittle.

Refactor to use None and normalize inside:

-def _construct_harmony_messages(
-    request: ResponsesRequest,
-    prev_response: Optional[ResponsesResponse],
-    prev_msgs: List[Message] = [],
-) -> List[Message]:
+def _construct_harmony_messages(
+    request: ResponsesRequest,
+    prev_response: Optional[ResponsesResponse],
+    prev_msgs: Optional[List[Message]] = None,
+) -> List[Message]:
@@
-    messages: List[Message] = []
+    messages: List[Message] = []
+    if prev_msgs is None:
+        prev_msgs = []

The rest of the logic can remain unchanged.


1054-1112: create_response: implicit Optional annotation and overall flow

The non-streaming response assembly looks correct: you reuse generation_result when provided, otherwise await generator, then choose Harmony/non-Harmony output builders and store to ConversationHistoryStore when enabled.

One minor typing issue:

create_time: int = None,

This is an implicit Optional[int]. Prefer making that explicit:

-    create_time: int = None,
+    create_time: Optional[int] = None,

Everything else in this block LGTM.

🧹 Nitpick comments (14)
tensorrt_llm/llmapi/reasoning_parser.py (1)

50-62: Whitespace stripping only in parse – consider parity with streaming path

parse() now strips both reasoning_content and content, but parse_delta() leaves whitespace as-is when it finds </think>. This means non‑streaming and streaming parsing can diverge slightly at boundaries (leading/trailing newlines or spaces). If the goal is normalized reasoning/content across both paths, consider trimming in the final parse_delta branch as well, or explicitly documenting the intended difference.
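A tiny illustration of the boundary divergence, assuming the </think> delimiter the parser partitions on:

text = "the model reasons here\n</think>\n  final answer"
reasoning, _, content = text.partition("</think>")

print(repr(reasoning.strip()))  # 'the model reasons here'
print(repr(content.strip()))    # 'final answer'

# A streaming path that forwards deltas verbatim would instead yield
# content beginning with "\n  final answer", so the two paths disagree
# at the boundary.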

tensorrt_llm/inputs/utils.py (1)

564-577: New enable_tokenize flag is useful; just confirm tokenizer API compatibility

Wiring enable_tokenize through to tokenizer.apply_chat_template(..., tokenize=enable_tokenize) is sound and preserves previous behavior when left at the default False. Two minor follow‑ups:

  • Ensure all TokenizerBase / TransformersTokenizer implementations used here accept a tokenize keyword; otherwise this will raise TypeError at runtime for custom tokenizers.
  • When enable_tokenize=True, the underlying tokenizer may return token IDs (e.g., List[int]), while the annotation still says (str | List[str]). If you plan to rely on the tokenized form elsewhere, consider broadening the return type hint for clarity.

Also applies to: 596-604
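As a sketch of the pass-through being described (an assumed wrapper shape, not the exact TRT-LLM code), with the return type broadened as suggested:

from typing import Any, List, Union

def apply_chat_template(tokenizer: Any,
                        messages: List[dict],
                        enable_tokenize: bool = False,
                        **kwargs: Any) -> Union[str, List[int]]:
    # Hugging Face tokenizers accept a `tokenize` keyword; when True the
    # call returns token ids rather than a rendered string, which is why
    # the return annotation is broadened here.
    return tokenizer.apply_chat_template(messages,
                                         tokenize=enable_tokenize,
                                         **kwargs)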

tests/unittest/llmapi/apps/_test_openai_responses.py (2)

15-21: Parametrized model fixture and per-model server args look good

The module-scoped, parametrized model fixture and the conditional args construction (reasoning/tool parser flags based on model prefix) are a nice way to reuse the same tests across Harmony and non‑Harmony models. Just note that any future change in model naming conventions (e.g., different prefixes for Qwen/DeepSeek variants) will require updating these string checks; if that becomes painful, you might eventually want a small helper that maps model IDs to parser choices in one place.

Also applies to: 24-38
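If that helper ever becomes worthwhile, a minimal sketch could look like this (model ids taken from the fixture; the flag spellings are illustrative and not verified against the server CLI):

import pytest

# Hypothetical central mapping from model-id prefix to extra server args.
PARSER_ARGS = {
    "DeepSeek-R1": ["--reasoning_parser", "deepseek-r1"],
    "Qwen3": ["--reasoning_parser", "qwen3"],
}

def extra_args_for(model: str) -> list:
    for prefix, args in PARSER_ARGS.items():
        if model.startswith(prefix):
            return list(args)
    return []  # gpt_oss takes the Harmony path and needs no parser flags

@pytest.fixture(scope="module",
                params=[
                    "gpt_oss/gpt-oss-20b",
                    "DeepSeek-R1-Distill-Qwen-1.5B",
                    "Qwen3/Qwen3-0.6B",
                ])
def model(request):
    return request.param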


58-84: Enhanced check_tool_calling diagnostics – consider aggregating all content parts

Capturing reasoning_content and message_content for use in assertion messages is helpful. Right now you only look at output.content[0].text; if multiple text parts are ever present, the error messages will show only the first. For more complete debugging, you could join all text parts (e.g., "".join(part.text for part in output.content)), while keeping the rest of the logic unchanged.

tensorrt_llm/serve/openai_server.py (1)

905-917: Responses API wiring looks consistent; clarify assumptions around parsers and non-Harmony path

The openai_responses path now:

  • Sends use_harmony, reasoning_parser=self.llm.args.reasoning_parser, and tool_parser=self.tool_parser into responses_api_process_streaming_events and responses_api_create_response.
  • Routes tokenizer/model_config/processor into responses_api_request_preprocess only when not self.use_harmony.

This is a good split between Harmony and non‑Harmony flows. A couple of points to verify:

  • It assumes self.llm.args always has a reasoning_parser attribute; if that’s not guaranteed for all existing deployment configs, this will crash with AttributeError.
  • For the non‑Harmony path, tokenizer, model_config, and processor being None will now be surfaced inside responses_api_request_preprocess. If any models intentionally run without a HF config/processor, make sure that function either handles None gracefully or you gate openai_responses accordingly.
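On the first bullet, a defensive variant of the attribute access if that guarantee is unclear (a sketch; it assumes None disables reasoning parsing downstream):

# Avoids AttributeError on argument objects that predate the field.
reasoning_parser = getattr(self.llm.args, "reasoning_parser", None)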

If those assumptions hold across your supported models, this wiring looks solid.

Also applies to: 937-946, 962-972

tensorrt_llm/serve/openai_protocol.py (2)

15-28: Helper bridging ResponseFormatTextConfig to GuidedDecodingParams – validate field mapping against OpenAI types

The new _response_format_text_config_to_guided_decoding_params correctly reuses _response_format_to_guided_decoding_params by constructing a local ResponseFormat. Two things to double‑check against openai.types.responses:

  • Using getattr(text_format, "schema_", None) assumes the OpenAI SDK exposes the JSON schema for text.format under that attribute (likely via an alias). If the SDK ever changes this (e.g., uses json_schema instead), guided decoding will silently stop working.
  • ResponseFormat.type is constrained to "text" | "json" | "json_schema" | "json_object" | "regex" | "ebnf" | "structural_tag". Ensure that text_format.type is always one of these values; if the OpenAI SDK introduces new format types, _response_format_to_guided_decoding_params will raise a ValueError.

Given how tightly this depends on the OpenAI Python model definitions, it’s worth re‑verifying against the exact SDK version you target.

Also applies to: 226-236
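For context on the schema_ attribute, a minimal illustration of the alias pattern referred to above (a hypothetical Pydantic model, not the OpenAI SDK's actual definition):

from typing import Optional
from pydantic import BaseModel, Field

class TextFormat(BaseModel):
    type: str
    # `schema` collides with BaseModel's own namespace, hence the trailing
    # underscore plus an alias for (de)serialization.
    schema_: Optional[dict] = Field(default=None, alias="schema")

fmt = TextFormat.model_validate(
    {"type": "json_schema", "schema": {"type": "object"}})
assert getattr(fmt, "schema_", None) == {"type": "object"}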


809-839: ResponsesRequest.to_sampling_params semantics: explicit max_tokens and text-format guided decoding

This implementation:

  • Sets max_tokens only from self.max_output_tokens (or leaves it None), ignoring any default max_tokens that might be present in default_sampling_params.
  • Derives temperature/top_p from request fields when set, otherwise from default_sampling_params (falling back to _DEFAULT_SAMPLING_PARAMS).
  • Pulls stop_token_ids exclusively from default_sampling_params.
  • Builds guided_decoding from self.text.format via _response_format_text_config_to_guided_decoding_params.

That’s coherent, but it slightly changes how global defaults can influence max_tokens for Responses requests. If you intend to allow a server‑side default for max_output_tokens when the client omits it, consider optionally wiring a "max_tokens" entry from default_sampling_params into max_tokens when self.max_output_tokens is None. Otherwise, this is a clean separation of request‑level vs. default behavior.
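A condensed sketch of the precedence just described (illustrative names and dict-shaped defaults, not the actual method body):

def resolve(request, defaults, hardcoded):
    # max_tokens: request-only; server-side defaults are intentionally ignored
    max_tokens = request.max_output_tokens

    # temperature/top_p: request value, else default_sampling_params entry,
    # else the hardcoded fallback
    temperature = (request.temperature if request.temperature is not None
                   else defaults.get("temperature", hardcoded["temperature"]))
    top_p = (request.top_p if request.top_p is not None
             else defaults.get("top_p", hardcoded["top_p"]))

    # stop_token_ids: sourced exclusively from defaults
    stop_token_ids = defaults.get("stop_token_ids")
    return max_tokens, temperature, top_p, stop_token_ids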

tensorrt_llm/serve/responses_utils.py (7)

260-303: store_messages: logging nit and behavior check

Functionally this looks fine and matches the new “whole conversation messages” contract. Two minor points:

  • The debug line _responses_debug_log(f"ConversationHistoryStore storing msg:") is an f-string with no placeholders (Ruff F541). You can drop the f prefix.
  • When prev_resp_id is not found in response_to_conversation, you silently start a new conversation. That’s probably OK, but it’s worth double-checking this matches the intended behavior given store_response now logs when prev_resp_id is missing.

If you want, I can propose a small diff to clean up the logging and optionally add a warning in the “start new conversation” branch.


767-813: _create_input_tokens: unused prev_response argument and dependency assumptions

  • prev_response is in the signature but not used, which Ruff flags (ARG001). Either drop it or add a comment if you’re keeping it for symmetry/future use.
  • For the non-harmony path, this function assumes tokenizer, model_config, and processor are non-None (passed from request_preprocess). If use_harmony is False but any of those is accidentally None, apply_chat_template will fail.

If call sites in openai_server.py always provide these for non-harmony models, this is fine. Otherwise, consider asserting or validating them here.
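If explicit validation is preferred, a small guard sketch (illustrative, not the current code):

if not use_harmony:
    missing = [name for name, obj in (("tokenizer", tokenizer),
                                      ("model_config", model_config),
                                      ("processor", processor))
               if obj is None]
    if missing:
        raise ValueError(
            f"non-harmony path requires: {', '.join(missing)}")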


836-893: request_preprocess: split Harmony vs non-Harmony paths are sane; minor nit on debug f-strings

The Harmony/non-Harmony branching and reuse of ConversationHistoryStore for both _create_input_tokens_harmony and _create_input_tokens look consistent. The final _responses_debug_log(_decode_tokens(...)) is useful for debugging.

The only small nit is _responses_debug_log("======= Complete Inputs to model =======") etc. don’t need f-strings; if you’re cleaning up Ruff F541 warnings, you can drop f where present (e.g., the earlier “Prev msgs” logs).


1115-1236: Streaming state & events helper: overall design good; consider item_id management

The ResponsesStreamingStateTracker / ResponsesStreamingEventsHelper abstractions and the variety of get_*_event helpers are nicely structured and keep per‑stream state centralized.

One concern: in this file, item_id is never set on ResponsesStreamingEventsHelper before being used to construct items/events (e.g., in get_message_output_added_events / get_reasoning_output_added_events). Unless some external code mutates streaming_events_helper.item_id between calls, events will carry an empty string as item_id.

If the Responses API consumers rely on stable, non-empty item ids, you may want to set a default per output index, e.g.:

-    def output_index_increment(self):
-        self.state_tracker.current_output_index += 1
+    def output_index_increment(self):
+        self.state_tracker.current_output_index += 1
+        # Initialize a fresh item_id for each new output index if not set
+        if not self.state_tracker.current_item_id:
+            self.state_tracker.current_item_id = _random_uuid()

Or explicitly set item_id in process_streaming_events whenever you begin a new output.

Also, _get_output_added_events is implemented as a generator but annotated as returning List[StreamingResponsesResponse]; you’re using it via yield from, so the generator signature is correct—only the return annotation/docstring are a bit misleading.


1323-1408: _should_send_done_events: behavior is reasonable; note cost of re-parsing full output

The done-event decision logic (reasoning→text, text→tool_call, and end-of-generation fallbacks) is sound and aligns with the comments.

The only caveat is performance: on each streaming chunk, you call _apply_reasoning_parser(..., streaming=False) and _apply_tool_parser(..., streaming=False) over output.text (the full accumulated text). Depending on sequence lengths and parser complexity, this could be non-trivial. If this becomes a bottleneck, consider incremental tracking of last-sent content rather than re-parsing from scratch each time.

No immediate correctness issues here.
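If incremental tracking is ever adopted, a minimal sketch of the idea (a hypothetical state object, not the current code):

class IncrementalText:
    """Feed parsers only the not-yet-seen suffix of the accumulated text."""

    def __init__(self):
        self._seen = 0  # characters of output.text already handed over

    def take_suffix(self, full_text: str) -> str:
        suffix = full_text[self._seen:]
        self._seen = len(full_text)
        return suffix

Each streaming chunk would then call take_suffix(output.text) and run the streaming parser on the suffix alone, avoiding the full re-parse per chunk.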


1642-1737: process_streaming_events: orchestration looks correct; minor nits

  • The overall flow—emit response.created and response.in_progress, consume the async generator, delegate to Harmony vs non-Harmony streaming generators, then call create_response with generation_result—is solid and avoids double-awaiting the generator.
  • You always construct a harmony_adapter = get_harmony_adapter() even when use_harmony is False. It’s a minor cost but could be avoided if construction is non-trivial.

If you care to micro-optimize:

-    stream_request_id = f"responses-api-{request.request_id}"
-    harmony_adapter = get_harmony_adapter()
+    stream_request_id = f"responses-api-{request.request_id}"
+    harmony_adapter = get_harmony_adapter() if use_harmony else None
@@
-        if use_harmony:
+        if use_harmony:
             event_generator = _generate_streaming_event_harmony(
-                harmony_adapter=harmony_adapter,
+                harmony_adapter=harmony_adapter,

(Or simply move get_harmony_adapter() inside the if use_harmony: branch.)

Otherwise, this block looks good.


117-120: Minor logging/style nits (f-strings without placeholders)

Several debug logs use f-strings without any {} placeholders, e.g.:

_responses_debug_log(f"------- Parsing input -----------")
...
_responses_debug_log(f"Prev msgs:")

These are harmless but trigger Ruff F541. If you’re cleaning up lint:

-    _responses_debug_log(f"------- Parsing input -----------")
+    _responses_debug_log("------- Parsing input -----------")

Same idea for similar lines around 224, 277, 303, 864, etc.

Also applies to: 224-227, 277-281, 303-305, 864-867

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f95edb5 and 1378f31.

📒 Files selected for processing (6)
  • tensorrt_llm/inputs/utils.py (2 hunks)
  • tensorrt_llm/llmapi/reasoning_parser.py (1 hunks)
  • tensorrt_llm/serve/openai_protocol.py (6 hunks)
  • tensorrt_llm/serve/openai_server.py (3 hunks)
  • tensorrt_llm/serve/responses_utils.py (21 hunks)
  • tests/unittest/llmapi/apps/_test_openai_responses.py (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
tensorrt_llm/serve/openai_server.py (4)
tensorrt_llm/llmapi/llm.py (2)
  • tokenizer (741-745)
  • tokenizer (748-749)
tensorrt_llm/_torch/models/modeling_llava_next.py (2)
  • tokenizer (73-74)
  • processor (81-82)
tensorrt_llm/_torch/models/modeling_qwen2vl.py (2)
  • tokenizer (124-125)
  • processor (132-133)
tensorrt_llm/_torch/models/modeling_vila.py (2)
  • tokenizer (905-906)
  • processor (909-910)
tensorrt_llm/serve/openai_protocol.py (1)
tensorrt_llm/sampling_params.py (1)
  • GuidedDecodingParams (15-37)
🪛 Ruff (0.14.5)
tensorrt_llm/serve/responses_utils.py

117-117: f-string without any placeholders

Remove extraneous f prefix

(F541)


206-206: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


224-224: f-string without any placeholders

Remove extraneous f prefix

(F541)


277-277: f-string without any placeholders

Remove extraneous f prefix

(F541)


303-303: f-string without any placeholders

Remove extraneous f prefix

(F541)


462-462: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)


636-636: Avoid specifying long messages outside the exception class

(TRY003)


659-660: Avoid specifying long messages outside the exception class

(TRY003)


769-769: Unused function argument: prev_response

(ARG001)


864-864: f-string without any placeholders

Remove extraneous f prefix

(F541)


1063-1063: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)


1297-1298: Avoid specifying long messages outside the exception class

(TRY003)


1451-1452: Avoid specifying long messages outside the exception class

(TRY003)


1454-1455: Avoid specifying long messages outside the exception class

(TRY003)


1720-1720: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (5)
tests/unittest/llmapi/apps/_test_openai_responses.py (1)

145-149: Tool-call skips for DeepSeek-R1 and streaming guard are appropriate

  • Skipping tool-call tests when model.startswith("DeepSeek-R1") aligns with the current model capabilities and prevents misleading failures.
  • In the streaming tool-call test, explicitly asserting function_call is not None before accessing .arguments avoids silent AttributeErrors and produces a clearer failure if the server ever stops emitting tool calls.

These changes improve test robustness without altering behavior for supported models.

Also applies to: 217-221, 250-261

tensorrt_llm/serve/openai_protocol.py (2)

874-892: StreamingResponsesResponse union type is a good consolidation of streaming events

The StreamingResponsesResponse alias consolidates all supported streaming event types (created/in‑progress/completed, output/content part added/done, reasoning deltas/done, code interpreter and web search events). This should simplify typing and downstream handling in streaming utilities like process_streaming_events without impacting runtime behavior.
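An abbreviated form of such an alias, covering only three of the event types (import paths assumed from the OpenAI Python SDK):

from typing import Union

from openai.types.responses import (ResponseCompletedEvent,
                                    ResponseCreatedEvent,
                                    ResponseInProgressEvent)

# The real alias unions many more event types (output/content part
# added/done, reasoning deltas, tool events, ...).
StreamingResponsesResponse = Union[ResponseCreatedEvent,
                                   ResponseInProgressEvent,
                                   ResponseCompletedEvent]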


911-912: ResponsesResponse.max_output_tokens now optional and sourced from sampling params

Making ResponsesResponse.max_output_tokens optional and populating it from sampling_params.max_tokens in from_request aligns the serialized response with the actual generation configuration:

  • If the client does not set max_output_tokens, both sampling_params.max_tokens and the response field remain None.
  • If it is set, the same value flows through to sampling and back to the response.

This is a sensible representation of what was actually used during generation; just be aware that any implicit server‑side defaults for max_tokens will no longer be reflected here unless wired into SamplingParams upstream.

Also applies to: 935-960

tensorrt_llm/serve/responses_utils.py (2)

623-690: _response_output_item_to_chat_completion_message and _create_input_messages look correct

The mapping from Responses items to ChatCompletionMessageParam (including function/tool roles and function_call_output → tool) and the filtering of "reasoning" messages from history is consistent with the intended chat format. No functional issues stand out here.


895-957: Reasoning & tool parser helpers: logic OK, but streaming dict requirement later must match usage

_apply_reasoning_parser and _apply_tool_parser correctly:

  • Lazily create parsers per output_index when a dict is supplied.
  • Support both full (parse / detect_and_parse) and streaming (parse_delta / parse_streaming_increment) modes.
  • Fall back to returning the original text when no parser or tools are configured.

No issues here, but note that later _generate_streaming_event assumes that reasoning_parser_dict contains an entry for output_idx; see my separate comment there about guarding that usage when reasoning_parser_id is None.
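The lazy per-output-index creation pattern described here, in sketch form (illustrative names):

def parser_for(parser_dict, output_idx, make_parser):
    # Create once per output index, then reuse, so streaming state
    # (partial tags, buffered deltas) survives across chunks.
    if output_idx not in parser_dict:
        parser_dict[output_idx] = make_parser()
    return parser_dict[output_idx]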

@tensorrt-cicd (Collaborator)

PR_Github #25550 [ run ] completed with state SUCCESS. Commit: 1378f31
/LLM/main/L0_MergeRequest_PR pipeline #19349 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25667 [ run ] triggered by Bot. Commit: 6db1ac4

@tensorrt-cicd (Collaborator)

PR_Github #25667 [ run ] completed with state SUCCESS. Commit: 6db1ac4
/LLM/main/L0_MergeRequest_PR pipeline #19452 completed with status: 'FAILURE'

@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from 6db1ac4 to fd21c8a on November 25, 2025 09:19
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #25716 [ run ] triggered by Bot. Commit: fd21c8a

@tensorrt-cicd (Collaborator)

PR_Github #25716 [ run ] completed with state SUCCESS. Commit: fd21c8a
/LLM/main/L0_MergeRequest_PR pipeline #19498 completed with status: 'FAILURE'

@LinPoly (Collaborator) left a comment

LGTM. It seems the tool-calling test for the Responses API could be moved to tests/unittest/llmapi/apps/_test_openai_tool_call.py, but we can do that afterwards, e.g., in the PR adding streaming tool-call support.

@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from ff9fd66 to c6dd6f8 on November 28, 2025 02:20
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26092 [ run ] triggered by Bot. Commit: c6dd6f8

@tensorrt-cicd (Collaborator)

PR_Github #26092 [ run ] completed with state SUCCESS. Commit: c6dd6f8
/LLM/main/L0_MergeRequest_PR pipeline #19812 completed with status: 'FAILURE'

@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from c6dd6f8 to 73cb11a on December 1, 2025 02:17
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26351 [ run ] triggered by Bot. Commit: 73cb11a

@tensorrt-cicd (Collaborator)

PR_Github #26351 [ run ] completed with state SUCCESS. Commit: 73cb11a
/LLM/main/L0_MergeRequest_PR pipeline #20012 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26375 [ run ] triggered by Bot. Commit: 73cb11a

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26525 [ run ] triggered by Bot. Commit: f74ded0

@tensorrt-cicd (Collaborator)

PR_Github #26525 [ run ] completed with state FAILURE. Commit: f74ded0
/LLM/main/L0_MergeRequest_PR pipeline #20169 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26534 [ run ] triggered by Bot. Commit: f74ded0

@tensorrt-cicd (Collaborator)

PR_Github #26534 [ run ] completed with state FAILURE. Commit: f74ded0
/LLM/main/L0_MergeRequest_PR pipeline #20175 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26554 [ run ] triggered by Bot. Commit: f74ded0

@tensorrt-cicd (Collaborator)

PR_Github #26554 [ run ] completed with state FAILURE. Commit: f74ded0
/LLM/main/L0_MergeRequest_PR pipeline #20193 completed with status: 'FAILURE'

@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from f74ded0 to 4d2462d on December 2, 2025 08:22
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26573 [ run ] triggered by Bot. Commit: 4d2462d

@tensorrt-cicd (Collaborator)

PR_Github #26573 [ run ] completed with state FAILURE. Commit: 4d2462d
/LLM/main/L0_MergeRequest_PR pipeline #20207 completed with status: 'FAILURE'

@JunyiXu-nv force-pushed the dev-junyi-general-responses-api branch from 4d2462d to a4c6445 on December 2, 2025 12:32
@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26613 [ run ] triggered by Bot. Commit: a4c6445

@tensorrt-cicd (Collaborator)

PR_Github #26613 [ run ] completed with state SUCCESS. Commit: a4c6445
/LLM/main/L0_MergeRequest_PR pipeline #20239 completed with status: 'FAILURE'

@JunyiXu-nv (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #26681 [ run ] triggered by Bot. Commit: a4c6445

@tensorrt-cicd (Collaborator)

PR_Github #26681 [ run ] completed with state SUCCESS. Commit: a4c6445
/LLM/main/L0_MergeRequest_PR pipeline #20303 completed with status: 'SUCCESS'

@JunyiXu-nv merged commit 743486b into NVIDIA:main on Dec 3, 2025
5 checks passed
MinaHuai pushed a commit to davidmlw/TensorRT-LLM that referenced this pull request Dec 10, 2025
…VIDIA#8779)

The performance results of some kernels could be easily affected by the warm/cold L2 cache status. To achieve more precise profiling results, the L2 cache is cleared for every execution by the circular buffer method for better benchmarking during autotuning.

Signed-off-by: Yukun He <[email protected]>

[None][infra] Waive failed cases for main branch on 11/25 (NVIDIA#9429)

Signed-off-by: qqiao <[email protected]>

[NVIDIA#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (NVIDIA#9409)

Signed-off-by: Eran Geva <[email protected]>

[None][ci] Move more test stages to use OCI machines (NVIDIA#9395)

Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Matt Lefebvre <[email protected]>

[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (NVIDIA#9377)

Signed-off-by: Anthony Chang <[email protected]>

[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (NVIDIA#9093)

Before this commit, the kv cache manager does the same regardless, which causes a mis-calculation in free memory available to allocate for the KV cache manager, hence causing a crash.

This commit fixes this by letting KV cache manager initialization be aware whether it is doing the dry run or not. If it is a dry run, use the max_tokens setting that is already pre-calculated and filled into kv_cache_config.max_tokens.

Signed-off-by: eopXD <[email protected]>

[https://nvbugs/5667922][fix] Update long context evaluation config (NVIDIA#9426)

Signed-off-by: mni <[email protected]>

[None][fix] Mitigate test timeout issues (NVIDIA#9445)

Signed-off-by: Shixiaowei02 <[email protected]>

[None][chore] Fix trtllm-eval for PyTorchLLM (NVIDIA#9427)

Signed-off-by: Fanrong Li <[email protected]>

[None][feat] Add a parser to layer-wise benchmarks (NVIDIA#9440)

Signed-off-by: Tailing Yuan <[email protected]>

[None][feat] Support custom chat template for tool calling (NVIDIA#9297)

Signed-off-by: Pengyun Lin <[email protected]>

[TRTLLM-8160][feat] Add draft token tree runtime on CDL (NVIDIA#8586)

Signed-off-by: Yue Weng <[email protected]>

[None][ci] waive a test (NVIDIA#9458)

Signed-off-by: Yan Chunwei <[email protected]>

[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (NVIDIA#9439)

Signed-off-by: Fanrong Li <[email protected]>

[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (NVIDIA#9411)

Signed-off-by: ixlmar <[email protected]>

[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (NVIDIA#9457)

Signed-off-by: ixlmar <[email protected]>

[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (NVIDIA#9145)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (NVIDIA#9308)

Signed-off-by: Robin Kobus <[email protected]>

[None][chore] AutoDeploy add multi stream moe pass to default.yaml (NVIDIA#9430)

Signed-off-by: Suyog Gupta <[email protected]>

[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (NVIDIA#9438)

Signed-off-by: Chuang Zhu <[email protected]>

[None][chore] Bump version to 1.2.0rc5 (NVIDIA#9455)

Signed-off-by: Yiqing Yan <[email protected]>

[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (NVIDIA#9356)

Signed-off-by: FredricZ-2007 <[email protected]>

[None][ci] move some slow test cases of DGX-B200 to post merge (NVIDIA#9467)

Signed-off-by: junq <[email protected]>

[TRTLLM-9293][feat] Enable partial weight loading to support streaming update weights (NVIDIA#9224)

Signed-off-by: shuyix <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9264][fix] Add accuracy/unit tests/doc for phi4mm (NVIDIA#9246)

Signed-off-by: Wanli Jiang <[email protected]>

[https://nvbugs/5580099][fix] Cherry pick IMA issue fix from release/1.1 (NVIDIA#9032)

Signed-off-by: Junyi Xu <[email protected]>

[None][chore] Upgrade CuteDSL to 4.3.0 (NVIDIA#9444)

Signed-off-by: Enwei Zhu <[email protected]>

[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (NVIDIA#9376)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None][feat] Add environment variable to force spec-dec number of accepted tokens (NVIDIA#9371)

Signed-off-by: Aurelien Chartier <[email protected]>

[None][infra] Update allowed list 2025.11.25 (NVIDIA#9468)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][infra] Fail the pipeline when slurm ssh dropped (NVIDIA#9157)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][feat] AutoDeploy: Remove redundant copies in mamba layers (NVIDIA#9461)

Signed-off-by: Chenghao Zhang <[email protected]>
Co-authored-by: Suyog Gupta <[email protected]>

[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (NVIDIA#9422)

Signed-off-by: Chenghao Zhang <[email protected]>

[None][ci] Waive blackwell test on spec gate. (NVIDIA#9502)

Signed-off-by: Zheyu Fu <[email protected]>

[https://nvbugs/5608930][fix] Fix a typo (NVIDIA#9487)

Signed-off-by: Shixiaowei02 <[email protected]>

[NVIDIA#9463][feat] Add revision option to trtllm commands (NVIDIA#9498)

Signed-off-by: Aurelien Chartier <[email protected]>

[TRTLLM-9085][doc] fix math formula rendering issues (NVIDIA#9481)

Signed-off-by: junq <[email protected]>

[None][chore] update comments in llm_args.py (NVIDIA#9472)

Signed-off-by: junq <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5680310][fix] Fix ctx only timed out test (NVIDIA#9410)

Signed-off-by: Patrice Castonguay <[email protected]>

[https://nvbugs/5547414][fix] enable case after using local cache model (NVIDIA#9473)

Signed-off-by: Hui Gao <[email protected]>

[None][fix] Replace PYTORCH_CUDA_ALLOC_CONF with PYTORCH_ALLOC_CONF to fix deprecation warning (NVIDIA#9294)

Signed-off-by: Jiagan Cheng <[email protected]>

[https://nvbugs/5698581][fix] Init draft tokens for CUDA graph dummy request (NVIDIA#9505)

Signed-off-by: ziyixiong-nv <[email protected]>

[None][infra] Waive failed case in pre-merge on 11/27 (NVIDIA#9507)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-9513][docs] Qwen3 deployment guide (NVIDIA#9488)

Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Lanyu Liao <[email protected]>

[None][chore] revert batch_size=1 to prevent timeout and lower accuracy reference by 0.12% as a WAR (NVIDIA#9447)

Signed-off-by: Lizhi Zhou <[email protected]>
Co-authored-by: Shi Xiaowei <[email protected]>

[TRTLLM-9279][infra] Use flexcache for gh200 nodes since they locate in Austin (NVIDIA#9405)

Signed-off-by: qqiao <[email protected]>
Signed-off-by: Emma Qiao <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[cherry-pick][https://nvbugs/5670793][fix] Solve trtllm-serve launch_disaggregated issue (NVIDIA#9346)

Signed-off-by: xxi <[email protected]>

[None][infra] Fix Slurm job script (NVIDIA#9508)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][fix] change allreduce workspace dtype to torch.int64 to avoid overflow (NVIDIA#9479)

Signed-off-by: Zhenhuan Chen <[email protected]>

[None][feat] add qwen3-next CI test of accuracy on BF16 and NVFP4 (NVIDIA#9330)

Signed-off-by: jiant <[email protected]>

[None][fix] fix TP support for DeepSeek-V3.2 on hopper (NVIDIA#9484)

Signed-off-by: Fanrong Li <[email protected]>

[TRTLLM-9389][chore] Refactor AlltoallMethodType. (NVIDIA#9388)

Signed-off-by: Bo Li <[email protected]>

[https://nvbugs/5674665][chore] Add test coverage for https://nvbugspro.nvidia.com/bug/5674665 (NVIDIA#9518)

Signed-off-by: eopXD <[email protected]>

[TRTLLM-7288][infra] Download merged waive list in slurm script (NVIDIA#8999)

Signed-off-by: Yiqing Yan <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (NVIDIA#9449)

Signed-off-by: Enwei Zhu <[email protected]>

[NVIDIA#9150][feat] AutoDeploy Nemotron-Flash support (NVIDIA#9504)

Signed-off-by: Lucas Liebenwein <[email protected]>

[None] [chore] Update to cutlass 4.3 (NVIDIA#8637)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5637037][chore] Update waive lists. (NVIDIA#9386)

Signed-off-by: Bo Li <[email protected]>
Signed-off-by: Enwei Zhu <[email protected]>
Co-authored-by: Enwei Zhu <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-8970][infra] Fix generate report when has isolation test result (NVIDIA#8861)

Signed-off-by: qqiao <[email protected]>
Signed-off-by: Emma Qiao <[email protected]>

[https://nvbugs/5685015][fix] Update invalid max_token test (NVIDIA#9435)

Signed-off-by: Junyi Xu <[email protected]>

[None][fix] Fix on-disk cache and revise logger/statistics for AutoTuner. (NVIDIA#9211)

Signed-off-by: Yukun He <[email protected]>

[https://nvbugs/5689658][test] Fix gpu lock issue running on cluster (NVIDIA#9441)

Signed-off-by: yufeiwu <[email protected]>

[None][chore] add spec_decoding configs in perf benchmark scripts and fix typos (NVIDIA#9533)

Signed-off-by: Lanyu Liao <[email protected]>
Co-authored-by: Lanyu Liao <[email protected]>

[None][fix] Remove FP8 K/V buffer from TRTLLM sparse MLA attention kernel (NVIDIA#9529)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None] [chore] Enhancements and clean up to slurm scripts (NVIDIA#9493)

Signed-off-by: Kaiyu Xie <[email protected]>

[None][chore] Revert "[None][fix] change allreduce workspace dtype to torch.int64 t… (NVIDIA#9538)

Signed-off-by: Zhenhuan Chen <[email protected]>

[None][infra] Waive failed cases for main branch on 11/28 (NVIDIA#9539)

Signed-off-by: qqiao <[email protected]>

[None][fix] Pass checkpoint_format to create_input_processor (NVIDIA#9521)

Signed-off-by: Robin Kobus <[email protected]>

[TRTLLM-9541][infra] Use artifactory mirror for download.pytorch.org (NVIDIA#9477)

Signed-off-by: ZhanruiSunCh <[email protected]>
Signed-off-by: Zhanrui Sun <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[TRTLLM-9488][feat] add 'disable_flashinfer_sampling' config option (NVIDIA#9454)

Signed-off-by: ixlmar <[email protected]>

[None][infra] Waive failed case in pre-merge on 11/28 (NVIDIA#9537)

Signed-off-by: Wangshanshan <[email protected]>

[None][perf] Helix: improve all-to-all perf for large CP size (NVIDIA#9494)

Signed-off-by: Matthias Jouanneaux <[email protected]>
Signed-off-by: Zheyu Fu <[email protected]>
Co-authored-by: Zheyu Fu <[email protected]>

[None][feat] support for more accurate AR calculation (NVIDIA#9323)

Signed-off-by: binghanc <[email protected]>

[TRTLLM-9488][fix] llmapi references (NVIDIA#9547)

Signed-off-by: ixlmar <[email protected]>

[NVIDIA#8948][feat] Support custom sharding config (NVIDIA#9143)

Signed-off-by: greg-kwasniewski1 <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][chore] Weekly mass integration of release/1.1 -- rebase (NVIDIA#9522)

Signed-off-by: yunruis <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Wangshanshan <[email protected]>
Signed-off-by: qgai <[email protected]>
Signed-off-by: Balaram Buddharaju <[email protected]>
Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Junyi Xu <[email protected]>
Signed-off-by: Simeng Liu <[email protected]>
Signed-off-by: nv-guomingz <[email protected]>
Signed-off-by: Jin Li <[email protected]>
Signed-off-by: Ivy Zhang <[email protected]>
Signed-off-by: Vincent Zhang <[email protected]>
Signed-off-by: peaceh <[email protected]>
Signed-off-by: Michal Guzek <[email protected]>
Signed-off-by: Michal Guzek <[email protected]>
Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>
Signed-off-by: leslie-fang25 <[email protected]>
Signed-off-by: Shunkang <[email protected]>
Signed-off-by: junq <[email protected]>
Co-authored-by: yunruis <[email protected]>
Co-authored-by: sunnyqgg <[email protected]>
Co-authored-by: brb-nv <[email protected]>
Co-authored-by: Yan Chunwei <[email protected]>
Co-authored-by: JunyiXu-nv <[email protected]>
Co-authored-by: Simeng Liu <[email protected]>
Co-authored-by: Guoming Zhang <[email protected]>
Co-authored-by: Jin Li <[email protected]>
Co-authored-by: Ivy Zhang <[email protected]>
Co-authored-by: Vincent Zhang <[email protected]>
Co-authored-by: peaceh-nv <[email protected]>
Co-authored-by: Michal Guzek <[email protected]>
Co-authored-by: Chang Liu <[email protected]>
Co-authored-by: Leslie Fang <[email protected]>
Co-authored-by: Shunkangz <[email protected]>
Co-authored-by: Shunkang <[email protected]>
Co-authored-by: QI JUN <[email protected]>

[TRTLLM-5971][feat] Integrate helix parallelism (NVIDIA#9342)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][infra] - Request idle time exemption for OCI jobs (NVIDIA#9528)

Signed-off-by: Yanchao Lu <[email protected]>

[None][infra] Waive failed tests for main branch on 11/30 (NVIDIA#9555)

Signed-off-by: qqiao <[email protected]>

[None][fix] Fix port conflict in disagg tests (NVIDIA#9474)

Signed-off-by: Junyi Xu <[email protected]>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9558)

Signed-off-by: Yanchao Lu <[email protected]>

[None][ci] Split H100_PCIe-PyTorch-Post-Merge test stage (NVIDIA#9559)

Signed-off-by: Yanchao Lu <[email protected]>

[TRTLLM-8958][feat] and [TRTLLM-8960]: create ConfigurableMoE and support TRTLLMGenFusedMoE as backend (NVIDIA#9486)

[None] [feat] Optimize the algorithm part of RocketKV (NVIDIA#9333)

Signed-off-by: yuhangh <[email protected]>

[https://nvbugs/5690172][fix] Fix Qwen3-235B ATP accuracy issue with PDL (NVIDIA#9530)

Signed-off-by: Enwei Zhu <[email protected]>

[TRTLLM-6222][feat] Extend cute_dsl_nvfp4_gemm to sm103. (NVIDIA#9543)

Signed-off-by: Mindy Li <[email protected]>

[None][fix] Correct virtual memory allocation alignment (NVIDIA#9491)

Signed-off-by: Yuan Tong <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5684703][fix] Unwaive disagg guided decoding test (NVIDIA#9466)

Signed-off-by: Enwei Zhu <[email protected]>

[https://nvbugs/5503479][fix] Temporarily lower reference accuracy to stabilize CI (NVIDIA#9398)

Signed-off-by: Pengbo Wang <[email protected]>

[None][chore] remove qwen3-next accuracy tests (NVIDIA#9534)

Signed-off-by: jiant <[email protected]>

[None][doc] fix mtp.py typo (NVIDIA#9307)

Signed-off-by: liugaoji <[email protected]>

[None][feat] add chat template kwargs support to longbench-v2 (NVIDIA#9544)

Signed-off-by: Fanrong Li <[email protected]>

[NVIDIA#9496][fix] AutoDeploy: remove auto-tuner from nvfp4_gemm forward (NVIDIA#9497)

Signed-off-by: Neta Zmora <[email protected]>

[None][fix] Replace hash method with unique_id for cutedsl MoE runners. (NVIDIA#9569)

Signed-off-by: Yukun He <[email protected]>

[None][chore] refactor disaggregated scripts to use named arguments (NVIDIA#9581)

Signed-off-by: Zhenhuan Chen <[email protected]>

[TRTLLM-6222][feat] Several perf opt for cuteDSL nvf4 gemm (NVIDIA#9428)

Signed-off-by: Yuhan Li <[email protected]>

[None][chore] reduce the layers of the `devel` docker image (NVIDIA#9077)

Signed-off-by: Martin Marciniszyn Mehringer <[email protected]>

[https://nvbugs/5651854][infra] Enable perf metrics during accuracy testing (NVIDIA#9140)

[None][fix] Skip Allreduce init for Attention DP (NVIDIA#9542)

Signed-off-by: Enwei Zhu <[email protected]>

[None][test] Waive main branch test failures 12/1 (NVIDIA#9566)

Signed-off-by: Yanchao Lu <[email protected]>

[None][ci] Minor change for Slurm scripts (NVIDIA#9561)

Signed-off-by: Yanchao Lu <[email protected]>

[TRTLLM-6768][infra] Fix params for not updating github status (NVIDIA#6747)

Signed-off-by: Yiqing Yan <[email protected]>

[None][infra] Update the pytest options after MI (NVIDIA#9579)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-6756][feat] Add Beam Search to TorchSampler (NVIDIA#8509)

Signed-off-by: Stefan Niebler <[email protected]>

[None][chore] Defer exposing context parallel configs (NVIDIA#9552)

Signed-off-by: Balaram Buddharaju <[email protected]>

[TRTC-1943][feat] Env vars override support in LLM API (NVIDIA#9104)

Signed-off-by: Venky Ganesh <[email protected]>

[None][feat] AutoDeploy: Use the router gemm op for nemotron MOE (NVIDIA#9500)

Signed-off-by: Chenghao Zhang <[email protected]>

[NVIDIA#9198][feat] Refactor dist ops in AutoDeploy (NVIDIA#9301)

Signed-off-by: Eran Geva <[email protected]>

[None][fix] Prevent YAML partial kv_cache_config from incorrectly overriding the complete kv_cache_config (NVIDIA#9262)

Signed-off-by: Yuening Li <[email protected]>

[TRTLLM-9085][doc] fix math formula rendering issues in github (NVIDIA#9605)

Signed-off-by: junq <[email protected]>

[None][feat] Unify nvfp4 gemm backend (NVIDIA#8963)

Signed-off-by: Shijie Wang <[email protected]>
Signed-off-by: Yukun He <[email protected]>
Signed-off-by: Shijie <[email protected]>
Co-authored-by: Yukun He <[email protected]>

[None][feat] Add support for KVCache reuse for DSv32 (NVIDIA#9383)

Signed-off-by: Iman Tabrizian <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][chore] Polish qwen3-next modeling code. (NVIDIA#8902)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5703953][fix] Use random port for disagg tests (NVIDIA#9582)

Signed-off-by: Junyi Xu <[email protected]>

[None][fix] Waive gb200 (NVIDIA#9580)

Signed-off-by: Xin He (SW-GPU) <[email protected]>

[FMDL-1328][feat] Add support for nano-v3 and super-v3 with pytorch backend (NVIDIA#9261)

Signed-off-by: Wanli Jiang <[email protected]>

[https://nvbugs/5582091][test] increase warmup times in testing for multi-gpu cases (NVIDIA#9578)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9588)

Signed-off-by: xinhe-nv <[email protected]>

[https://nvbugs/5702793][fix] Fix uncontiguous tensor view (NVIDIA#9576)

Signed-off-by: shuyix <[email protected]>

[None][infra] Waive failed cases for main branch (NVIDIA#9615)

Signed-off-by: qqiao <[email protected]>

[TRTLLM-9488][feat] use FlashInfer.sampling by default (NVIDIA#9545)

Signed-off-by: ixlmar <[email protected]>

[None][infra] Update allowlist 2025/12/01 (NVIDIA#9616)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][infra] Remove an invalid test name in waives.txt (NVIDIA#9620)

Signed-off-by: qqiao <[email protected]>

Lock the GPU clocks in L0 perf tests (NVIDIA#9585)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-9466][test] Evaluate helix parallelism with DSV3 Lite (NVIDIA#9597)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][fix] Extract GPU count from single-node stage names (NVIDIA#9599)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[https://nvbugs/5667774][fix] Refine Piecewise Cuda Graph Condition for DP (NVIDIA#9393)

Signed-off-by: Jin Li <[email protected]>

[TRTLLM-9144][fix] enhance RPC robustness (NVIDIA#8711)

Signed-off-by: Superjomn <[email protected]>
Signed-off-by: Erin Ho <[email protected]>
Signed-off-by: Yan Chunwei <[email protected]>
Co-authored-by: Erin Ho <[email protected]>

[https://nvbugs/5627710][fix] Fix synchronization bugs in KvCacheTransferManager that can cause corrupted blocks (NVIDIA#9056)

Signed-off-by: thorjohnsen <[email protected]>
Signed-off-by: Thor Johnsen <[email protected]>
Co-authored-by: Iman Tabrizian <[email protected]>
Co-authored-by: Robin Kobus <[email protected]>

[TRTLLM-8980][test] Clean up spec dec tests in test_llm_api_pytorch (NVIDIA#8889)

Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[NVIDIA#9150][feat] Add code for nano v3 to custom implementation in AD (NVIDIA#9465)

* Why?

We would like to show an alternative to monkey-patching in AutoDeploy.

* What?

This commit builds on the existing custom model implementation for
NemotronH and adds the bits relevant for MoE layers.

Part of NVIDIA#9150.

Signed-off-by: William Zhang <[email protected]>
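
The body above contrasts monkey-patching with an explicit custom model implementation. Purely as an illustration of that contrast, and not AutoDeploy's actual API, here is a minimal sketch; the names UpstreamMoELayer, register_custom_model, and NemotronHMoE are all hypothetical:

```python
# Hypothetical sketch of the two integration styles; no AutoDeploy code.

# Style 1: monkey-patching -- mutate an upstream class from the outside.
class UpstreamMoELayer:
    """Stand-in for an upstream model layer."""

    def forward(self, x):
        return x


def patched_forward(self, x):
    # Behavior injected globally; every user of the class is affected.
    return 2 * x


UpstreamMoELayer.forward = patched_forward

# Style 2: an explicit custom implementation behind a registry.
CUSTOM_MODELS = {}


def register_custom_model(name):
    def wrap(cls):
        CUSTOM_MODELS[name] = cls
        return cls

    return wrap


@register_custom_model("nemotron_h_moe")
class NemotronHMoE:
    """Self-contained implementation; no upstream class is mutated."""

    def forward(self, x):
        return 2 * x


if __name__ == "__main__":
    print(UpstreamMoELayer().forward(3))                  # 6, via the patch
    print(CUSTOM_MODELS["nemotron_h_moe"]().forward(3))   # 6, via the registry
```

The registry variant leaves upstream classes untouched and keeps the override explicit, which is the property the commit message highlights as the alternative to monkey-patching.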

[NVIDIA#9150][feat] AutoDeploy: reviewer comments for NVIDIA#9150 (NVIDIA#9527)

Signed-off-by: Lucas Liebenwein <[email protected]>

[https://nvbugs/5651854][fix] Fix dist-serving perf by clearing CPU affinity (NVIDIA#9549)

Signed-off-by: Shixiaowei02 <[email protected]>

[NVIDIA#9550][feat] AutoDeploy: Add NVFP4 Cutlass MoE kernels  (NVIDIA#9551)

Signed-off-by: Neta Zmora <[email protected]>

[https://nvbugs/5688388][fix] Reduce num requests in disagg test to speed up (NVIDIA#9598)

Signed-off-by: Patrice Castonguay <[email protected]>

[TRTLLM-8946][feat] Improved heuristics to detect shardable regions (NVIDIA#9200)

Signed-off-by: Lucas Liebenwein <[email protected]>
Signed-off-by: greg-kwasniewski1 <[email protected]>
Co-authored-by: Lucas Liebenwein <[email protected]>

[NVIDIA#9632][feat] Support EXTRA_WHEEL_BUILD_ARGS during wheel build (NVIDIA#9633)

Signed-off-by: Yu Chi Li <[email protected]>

[None][chore] Waive test failing on pre-merge (NVIDIA#9638)

Signed-off-by: Balaram Buddharaju <[email protected]>

[None][chore] Remove traceback dump for multimodal input processor (NVIDIA#9634)

Signed-off-by: Chang Liu (Enterprise Products) <[email protected]>

[None][chore] Fix trtllm-eval and move GroupedGemmInputsHelper (NVIDIA#9612)

Signed-off-by: Enwei Zhu <[email protected]>

[https://nvbugs/5698434][fix] Use separate weight mapper for draft (NVIDIA#9607)

Signed-off-by: Anurag Mukkara <[email protected]>

[TRTLLM-7101][infra] Reuse passed tests (NVIDIA#6894)

Signed-off-by: Yiqing Yan <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[None][test] Remove duplicate test cases (NVIDIA#9623)

Signed-off-by: yufeiwu <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][feat] Add RocketKV usage doc and e2e accuracy test on LongBenchV2 (NVIDIA#9572)

Signed-off-by: yuhangh <[email protected]>

[TRTLLM-9242][doc] Add examples showcasing openai compatible APIs (NVIDIA#9520)

Signed-off-by: Junyi Xu <[email protected]>

[None][chore] AutoDeploy update cuda stream manager for multi-device (NVIDIA#9575)

Signed-off-by: Suyog Gupta <[email protected]>

[TRTLLM-9391][chore] Automatically estimate required workspace. (NVIDIA#9535)

Signed-off-by: Bo Li <[email protected]>

[https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (NVIDIA#9647)

Signed-off-by: Balaram Buddharaju <[email protected]>

[https://nvbugs/5561153][test] Fix log error for perf test (NVIDIA#9622)

Signed-off-by: FredricZ-2007 <[email protected]>

[TRTLLM-8241][feat] Aliasing to comply to LlmArgs (NVIDIA#9586)

Signed-off-by: Pengyun Lin <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9593)

Signed-off-by: Jie Li <[email protected]>
Co-authored-by: Jie Li <[email protected]>

[TRTLLM-6842][feat] Support Response API for general purpose (NVIDIA#9392)

Signed-off-by: Junyi Xu <[email protected]>

[None][test] Update Qwen3-next accuracy testing by setting the cuda … (NVIDIA#9613)

Signed-off-by: nv-guomingz <[email protected]>

[None][feat] update trtllm-gen nvfp4 kernels with better performance (NVIDIA#9510)

Signed-off-by: Perkz Zheng <[email protected]>

[None][doc] Replace the tensorrt icon with torch icon on overview.md (NVIDIA#9644)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5705197][chore] Unwaive timeout disagg tests (NVIDIA#9637)

Signed-off-by: Patrice Castonguay <[email protected]>

[https://nvbugs/5552132][fix] Enable LoRa for GPT OSS Torch (NVIDIA#8253)

Signed-off-by: Michal Guzek <[email protected]>

[None][fix] Fix wide ep MoE error (NVIDIA#9642)

Signed-off-by: Iman Tabrizian <[email protected]>

[https://nvbugs/5702795][fix] Remove the warning message for aten.log. (NVIDIA#9665)

Signed-off-by: nv-guomingz <[email protected]>

[https://nvbugs/5693853][fix] Fix error handling when querying machin… (NVIDIA#9483)

Signed-off-by: Gal Hubara Agam <[email protected]>

[OMNIML-2932] [feat] nvfp4 awq support (NVIDIA#8698)

Signed-off-by: weimingc <[email protected]>

[NVIDIA#9643][fix] AutoDeploy: fix nano sharding config (NVIDIA#9668)

Signed-off-by: Lucas Liebenwein <[email protected]>

[NVIDIA#9147][feat] AutoDeploy: Draft Target Speculative Decoding (NVIDIA#9275)

Signed-off-by: Govind Ramnarayan <[email protected]>

[None][feat] Update Qwen3CodeToolParser to align tool-calling parameters (NVIDIA#9540)

Signed-off-by: Wanli Jiang <[email protected]>

[TRTLLM-7181][infra] Generate test results when pytest timeout happens (NVIDIA#9396)

Signed-off-by: Yiqing Yan <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9522][fix] restore `trtllm-serve mm_embedding_serve` (NVIDIA#9669)

[TRTLLM-5093][infra] Write env variables to a file in the interactive debug session (NVIDIA#6792)

Signed-off-by: Yiqing Yan <[email protected]>

[None][fix] fix error when processing batches containing both text and mm data (NVIDIA#8381)

Signed-off-by: Nekofish-L <[email protected]>

[TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (NVIDIA#7838)

Signed-off-by: Jin Li <[email protected]>

[None][feat] Add weights initialization and context phase parser to layer-wise benchmarks (NVIDIA#9667)

Signed-off-by: Tailing Yuan <[email protected]>

[TRTLLM-8274][feat] Check if executor is shutdown in /health entrypoint (NVIDIA#9057)

Signed-off-by: Junyi Xu <[email protected]>

[NVIDIA#8733][feat] Add Llama4 MoE handling to AutoDeploy (NVIDIA#9556)

Signed-off-by: Tal Cherckez <[email protected]>
Signed-off-by: tcherckez-nvidia <[email protected]>
Co-authored-by: Neta Zmora <[email protected]>

[None][ci] unwaive tests (NVIDIA#9651)

Signed-off-by: Yan Chunwei <[email protected]>

[None][feat] Add NIXL-LIBFABRIC support (NVIDIA#9225)

Signed-off-by: Yoray Zack <[email protected]>
Signed-off-by: zackyoray <[email protected]>

[None][test] rename wide ep and disagg metric name in perf test (NVIDIA#9704)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>

[https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (NVIDIA#9617)

Signed-off-by: Jin Li <[email protected]>

[None][fix] Recover TRTLLM MoE Perf for DEP (NVIDIA#9562)

Signed-off-by: Anthony Chang <[email protected]>

[None][chore] Add failed cases into waives.txt (NVIDIA#9662)

Signed-off-by: Xin He (SW-GPU) <[email protected]>
Signed-off-by: xinhe-nv <[email protected]>
Signed-off-by: Yanchao Lu <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>

[None][fix] Fix TLLM_SPEC_DECODE_FORCE_NUM_ACCEPTED_TOKENS for MTP/EAGLE (NVIDIA#9608)

Signed-off-by: Aurelien Chartier <[email protected]>

[None][infra] Add container notices and documentation (NVIDIA#9185)

Signed-off-by: Parker Drake <[email protected]>

[TRTLLM-5312][infra] Add triton trigger rules (NVIDIA#6440)

Signed-off-by: Yiqing Yan <[email protected]>

[None][doc] Add feature docs for helix parallelism (NVIDIA#9684)

Signed-off-by: Balaram Buddharaju <[email protected]>

[TRTLLM-9579][infra] Set mergeWaiveList stage UNSTABLE when there is any issue (NVIDIA#9692)

Signed-off-by: Yiqing Yan <[email protected]>

[None][doc] Added line about partial reuse (NVIDIA#7846)

Signed-off-by: thorjohnsen <[email protected]>

[TRTLLM-8920][feat] decouple disagg service from fastapi (NVIDIA#8714)

Signed-off-by: Lizhi Zhou <[email protected]>

[https://nvbugs/5633340][fix] start disagg workers and servers on free ports (NVIDIA#9694)

Signed-off-by: Lizhi Zhou <[email protected]>

[TRTLLM-9562] [doc] Add Deployment Guide for Kimi K2 Thinking on TensorRT LLM - Blackwell (NVIDIA#9711)

Signed-off-by: Kaiyu Xie <[email protected]>

[NVIDIA#9602][feat] AutoDeploy: Support TRTLLM Sampler (NVIDIA#9641)

Signed-off-by: Govind Ramnarayan <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None] [tests] Unwaive EPLB tests (NVIDIA#9625)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5518713][test] Refactor core test lists by merging with llm_perf_cluster.yml (NVIDIA#9714)

Signed-off-by: yufeiwu <[email protected]>

[TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (NVIDIA#9583)

Signed-off-by: Robin Kobus <[email protected]>

[None][refactor] Improve request processing function in sampler (NVIDIA#9671)

Signed-off-by: Robin Kobus <[email protected]>

[https://nvbugs/5670672][fix] Fix flaky KV connector tests (NVIDIA#9676)

Signed-off-by: jthomson04 <[email protected]>

[None][infra] Update allowed list 20251204 (NVIDIA#9718)

Signed-off-by: Yuanjing Xue <[email protected]>

[None][feat] AutoDeploy: Perf optimization for Attention and rmsnorm (NVIDIA#9719)

Signed-off-by: Chenghao Zhang <[email protected]>

[None][chore] Waive flakey disagg tests (NVIDIA#9749)

Signed-off-by: Mike Iovine <[email protected]>

[https://nvbugs/5601682][fix] Fix cacheTransceiver hang (NVIDIA#9311)

Signed-off-by: Iman Tabrizian <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9199][docs] KV Connector Docs (NVIDIA#9325)

Signed-off-by: jthomson04 <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9160][doc] add doc to llm_runtime.py (NVIDIA#9482)

Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][doc] VDR 1.0 trtllm-serve doc enhancement (NVIDIA#9443)

Signed-off-by: Pengyun Lin <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9086][doc] Clean up TODOs in documentation (NVIDIA#9292)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9157][doc] Guided decoding doc improvement (NVIDIA#9359)

Signed-off-by: Enwei Zhu <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][infra] Updated Linux installation guide (NVIDIA#9485)

Signed-off-by: Yiqing Yan <[email protected]>
Co-authored-by: Yanchao Lu <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9075][doc] refine the slurm examples (NVIDIA#9548)

Signed-off-by: Yan Chunwei <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9093][doc] update hyper links in overview (NVIDIA#9568)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[TRTLLM-9092][doc] link to modelopt checkpoints in quick start guide (NVIDIA#9571)

Signed-off-by: junq <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>
Signed-off-by: Mike Iovine <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[None][fix] Fix triton moe load_weight (NVIDIA#9649)

Signed-off-by: shuyix <[email protected]>

[None][fix] fix a bug: deepseek_fp8_block_scales in TRTLLMGEN-MoE use 2D x_sf instead of 1D (NVIDIA#9658)

Signed-off-by: xxi <[email protected]>

[TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (NVIDIA#9592)

Signed-off-by: Enwei Zhu <[email protected]>

[TRTLLM-9522][chore] implement default `attach_multimodal_embeddings` (NVIDIA#9664)

Signed-off-by: ixlmar <[email protected]>

[TRTLLM-9660][feat] Convert cuteDSL GEMM to opt-in feature (NVIDIA#9682)

Signed-off-by: Jonas Li <[email protected]>
Co-authored-by: Kaiyu Xie <[email protected]>

[None][fix] enable hmac in RPC (NVIDIA#9745)

Signed-off-by: Superjomn <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[https://nvbugs/5703953][fix] Preserving ip:port for trtllm-serve before initializing llm (NVIDIA#9646)

Signed-off-by: Junyi Xu <[email protected]>

[None][infra] Waive failed cases for main branch on 12/07 (NVIDIA#9769)

Signed-off-by: qqiao <[email protected]>

[None][fix] Several minor fixes to CI setting (NVIDIA#9765)

Signed-off-by: Yanchao Lu <[email protected]>

[OMNIML-3036][doc] Re-branding TensorRT-Model-Optimizer as Nvidia Model-Optimizer (NVIDIA#9679)

Signed-off-by: Chenjie Luo <[email protected]>

[None][feat] Enable NCCL_SYMMETRIC as default fallback for AllReduce (NVIDIA#9314)

Signed-off-by: Ludwig Schneider <[email protected]>

[TRTLLM-9000][feat] Add multi-node Perf Tests into CI (NVIDIA#8800)

Signed-off-by: Chenfei Zhang <[email protected]>

[None][test] add ntp tolerance in time metrics verification (NVIDIA#9741)

Signed-off-by: zhengd-nv <[email protected]>

[TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (NVIDIA#9645)

[https://nvbugs/5422621][test] Add GB 200 WIDEEP test case for RCCA 5422621 (NVIDIA#9506)

Signed-off-by: FredricZ-2007 <[email protected]>

[None][fix] Fix two tuning cache miss issues. (NVIDIA#9743)

Signed-off-by: Yukun He <[email protected]>

[None][infra] Check in most recent lock file from nightly pipeline

Signed-off-by: TensorRT LLM <[email protected]>

[TRTLLM-9706] [doc] Update wide EP documents (NVIDIA#9724)

Signed-off-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5666804][test] only adding sampler config for limited models (NVIDIA#9512)

Signed-off-by: Ruodi Lu <[email protected]>
Co-authored-by: Ruodi Lu <[email protected]>
Co-authored-by: yufeiwu-nv <[email protected]>
Co-authored-by: Larry Xu <[email protected]>

[None][infra] Waive failed cases for main on 12/08 (NVIDIA#9773)

Signed-off-by: qqiao <[email protected]>

[None][chore] Move the rocketkv e2e test to post-merge (NVIDIA#9768)

Signed-off-by: Fanrong Li <[email protected]>

[None][chore] Enable tvm_ffi for cute dsl nvfp4_gemm to reduce host overhead. (NVIDIA#9690)

Signed-off-by: Mindy Li <[email protected]>

[TRTLLM-9431][perf] Enable multistream for Linear Attention in Qwen3-… (NVIDIA#9696)

Signed-off-by: nv-guomingz <[email protected]>

[None][chore] Remove closed bugs (NVIDIA#9770)

Signed-off-by: xinhe-nv <[email protected]>

[None][infra] update mooncake in docker images (NVIDIA#9584)

Signed-off-by: zhengd-nv <[email protected]>
Signed-off-by: Zheng Duan <[email protected]>

[None][test] Add Kimi k2 WIDEEP perf and accuracy cases (NVIDIA#9686)

Signed-off-by: FredricZ-2007 <[email protected]>
Signed-off-by: Kaiyu Xie <[email protected]>
Co-authored-by: Kaiyu Xie <[email protected]>

[https://nvbugs/5527655][test] Add test case for RCCA 5527655 (NVIDIA#9511)

Signed-off-by: FredricZ-2007 <[email protected]>

[https://nvbugs/5649010][fix] fix test_auto_scaling.py::test_worker_restart timeout (NVIDIA#9775)

Signed-off-by: Lizhi Zhou <[email protected]>

[None][fix] Switch AutoDeploy's default allreduce strategy to NCCL (NVIDIA#9666)

Signed-off-by: Eran Geva <[email protected]>

[TRTLLM-9506][fix] Fix AR for DeepSeek-R1 2 model path (NVIDIA#9661)

Signed-off-by: qgai <[email protected]>

ray + update_weights works

trtllm works in async env

trtllm works in sync and async env

ray + update_weights works

rebase to the updated verl

server mode

still cherry pick

still cherry pick

still cherry pick

integrated http interface

hang at RayExecutor create workers ray.remote

clean code

use tensorrt_llm.rlhf_utils

Signed-off-by: Liwei Ma <[email protected]>

placement, asyncllm, and basic tests

Signed-off-by: Erin Ho <[email protected]>

connect sleep and wakeup; Add support to pass None to update_weights

Signed-off-by: Erin Ho <[email protected]>

Batching ctx for IFB scheduler

Signed-off-by: Yuan Tong <[email protected]>

accuracy WAR for TP>1: always use AllReduceStrategy.NCCL, refactored

Signed-off-by: Erin Ho <[email protected]>

fix e2e integration

Signed-off-by: Superjomn <[email protected]>

update asyncllm, other nits

Signed-off-by: Erin Ho <[email protected]>

fix init setup

Signed-off-by: Erin Ho <[email protected]>

Fix TRTLLMSampler logprobs perf

Signed-off-by: Yuan Tong <[email protected]>

fix and cleanup

Signed-off-by: Erin Ho <[email protected]>

fix server

Signed-off-by: Erin Ho <[email protected]>

Revert "Batching ctx for IFB scheduler"

This reverts commit b51aac0

Signed-off-by: Yuan Tong <[email protected]>

update & address comments

Signed-off-by: Erin Ho <[email protected]>