
Conversation


@chenfeiz0326 chenfeiz0326 commented Oct 24, 2025

Description

This PR supports uploading pytest perf results (server-client benchmark only) and regression test cases (uploaded only in post-merge runs) to the OpenSearch database.

Now pytest will:
1. Run perf tests and collect new perf metrics.
2. Query the database for historical perf data and compute a new baseline (best perf over the last 14 days) for each perf metric.
3. Upload the new perf metrics and the new baseline.
4. Upload regression test cases to the database for easier triage.
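Step 2, the baseline computation, can be sketched like this. It is a minimal illustration assuming each history document carries a ts timestamp field; the actual nvdf.py logic and OpenSearch document schema may differ.

```python
from datetime import datetime, timedelta

def compute_baseline(history, metric_key, higher_is_better=True, window_days=14):
    """Return the best value of `metric_key` over the last `window_days` days.

    `history` is a list of dicts like {"ts": "<iso timestamp>", metric_key: <float>};
    this shape is an assumption, not the actual OpenSearch document schema.
    """
    cutoff = datetime.now() - timedelta(days=window_days)
    values = [
        doc[metric_key]
        for doc in history
        if doc.get(metric_key) is not None
        and datetime.fromisoformat(doc["ts"]) >= cutoff
    ]
    if not values:
        return None  # no recent history: a brand-new metric has no baseline yet
    return max(values) if higher_is_better else min(values)
```

Throughput-style metrics would use higher_is_better=True, latency-style metrics False.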

Run pytest:
Before running pytest, you need to set:

export OPEN_SEARCH_DB_BASE_URL="http://gpuwa.nvidia.com"

Run perf tests without uploading:
perf/test_perf.py::test_perf[perf_sanity-l0_{gpu_type}-{select_pattern}]
Run perf tests and upload the results to the database:
perf/test_perf.py::test_perf[perf_sanity_upload-l0_{gpu_type}-{select_pattern}]
select_pattern is optional.

echo "perf/test_perf.py::test_perf[perf_sanity_upload-l0_dgx_b200" > test_list.txt
pytest -v -s --test-prefix=${LLM_ROOT}/tests/integration/defs --test-list=test_list.txt -R=perf_sanity_upload-l0_dgx_b200 --output-dir=./output --perf 

Running the bot in a PR can trigger the database upload:

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

Post-merge runs can also trigger the database upload.

Perf data in OpenSearch: https://gpuwa.nvidia.com/os-dashboards/app/data-explorer/discover#?_a=(discover:(columns:!(_source),isDirty:!f,sort:!()),metadata:(indexPattern:ad061750-b628-11f0-990f-f7c929993cf6,view:discover))&_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-2w,to:now))&_q=(filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:ad061750-b628-11f0-990f-f7c929993cf6,key:b_is_post_merge,negate:!f,params:(query:!f),type:phrase),query:(match_phrase:(b_is_post_merge:!f)))),query:(language:lucene,query:''))
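The history lookup that feeds the baseline might be expressed as an OpenSearch query body like the following. The field names (s_gpu_type, ts_created) and overall shape are assumptions for illustration, not the actual mapping used by nvdf.py.

```python
import json

def build_history_query(gpu_type, days=14, size=1000):
    """Hypothetical OpenSearch request body: recent perf docs for one GPU type."""
    return {
        "size": size,
        "query": {
            "bool": {
                "filter": [
                    {"term": {"s_gpu_type": gpu_type}},  # assumed field name
                    {"range": {"ts_created": {"gte": f"now-{days}d"}}},  # assumed field name
                ]
            }
        },
        "sort": [{"ts_created": {"order": "desc"}}],
    }

# Serialized, this is what would be sent to the index's _search endpoint.
body = json.dumps(build_history_query("dgx_b200"))
```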

Summary by CodeRabbit

  • New Features

    • Added performance data collection and database integration for test results.
    • Introduced baseline performance tracking and historical data comparison capabilities.
    • Enhanced per-test metrics recording with granular result tracking.
  • Tests

    • Updated test suite with refined performance monitoring configuration.


coderabbitai bot commented Oct 24, 2025

📝 Walkthrough


Introduces NVDataFlow integration for performance testing. A new nvdf.py module provides configuration assembly, GPU metadata collection, API operations (POST/GET with retry logic), and baseline data preparation. The test_perf.py module is updated to convert test configurations to NVDataFlow format, track per-test results, and upload data to the database after execution. Supporting changes include method signature updates and test list modifications.

Changes

  • NVDataFlow Module — tests/integration/defs/perf/nvdf.py
    New module providing NVDataFlow service integration: configuration assembly via get_nvdf_config() and get_job_info(); data validation via type_check_for_nvdf() and _id(); API operations via post() and query() with retry logic; data composition via post_data(), query_data(), and prepare_baseline_data(); comparison utilities via match(), get_best_perf_result(), and get_baseline().
  • Test Performance Integration — tests/integration/defs/perf/test_perf.py
    Added imports from the nvdf module. Extended ServerConfig and ClientConfig with to_nvdf_data() methods for data conversion. Introduced a _test_results attribute and a store_test_result() method on MultiMetricPerfTest for per-test result tracking. Added an upload_test_results_to_database() method to aggregate, prepare, and post results to NVDataFlow for server-benchmark runs.
  • Performance Utilities — tests/integration/defs/perf/utils.py
    Updated the run_ex() method signature in AbstractPerfScriptTestClass to accept a metric_type parameter. Integrated a call to store_test_result() after performance computation to capture per-metric results.
  • Test Configuration — tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b200.yml, tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b300.yml
    Removed test entries: the b300 variant from the b200 list and the b200 variant from the b300 list, aligning test coverage to the target GPU hardware.
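The post()/query() retry behavior summarized above can be sketched with the standard library (the module itself uses requests; its endpoint, headers, and error handling may differ from this illustration):

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delay(attempt):
    """Exponential backoff capped at 60 seconds: 1, 2, 4, 8, ..."""
    return min(60, 2 ** attempt)

def post_with_retry(url, payload, retries=5, timeout=10):
    """POST JSON with bounded retries; a hypothetical helper, not nvdf.post()."""
    data = json.dumps(payload).encode("utf-8")
    request = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    last_error = None
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status in (200, 201, 202):
                    return response.read()
                last_error = f"HTTP {response.status}"
        except urllib.error.URLError as exc:  # covers HTTPError and network errors
            last_error = str(exc)
        time.sleep(backoff_delay(attempt))  # wait before the next attempt
    raise RuntimeError(f"POST to {url} failed after {retries} retries: {last_error}")
```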

Sequence Diagram

sequenceDiagram
    participant Test as Test Execution
    participant Utils as utils.run_ex()
    participant PerfTest as PerfTest Instance
    participant NVDF as nvdf Module
    participant DB as NVDataFlow DB

    Test->>Utils: run_ex(full_test_name, metric_type, ...)
    Utils->>Utils: Execute benchmark & compute perf_result
    Utils->>PerfTest: store_test_result(cmd_idx, metric_type, perf_result)
    PerfTest->>PerfTest: _test_results[cmd_idx][metric_type] = perf_result
    Utils->>Utils: _write_result()
    
    Test->>PerfTest: upload_test_results_to_database()
    PerfTest->>NVDF: prepare_baseline_data(new_data, model_groups, gpu_type)
    NVDF->>NVDF: match/aggregate metrics from history
    NVDF->>PerfTest: return baseline_data_dict
    
    PerfTest->>NVDF: post_data(baseline_data_dict, new_data_dict, model_groups, gpu_type)
    NVDF->>NVDF: per project: type_check_for_nvdf, _id()
    NVDF->>DB: POST validated data with retry logic
    DB->>NVDF: response
    NVDF->>PerfTest: return
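The per-test tracking in the diagram — store_test_result() writing into _test_results[cmd_idx][metric_type] — can be sketched as follows. The class and enum names here are simplified stand-ins, not the actual test_perf.py code.

```python
from collections import defaultdict
from enum import Enum

class PerfMetricType(Enum):
    """Simplified stand-in for the real metric enum in utils.py."""
    TOKEN_THROUGHPUT = "token_throughput"
    FIRST_TOKEN_LATENCY = "first_token_latency"

class PerfResultStore:
    """Per-test results keyed by command index, then metric type."""

    def __init__(self):
        self._test_results = defaultdict(dict)

    def store_test_result(self, cmd_idx, metric_type, perf_result):
        self._test_results[cmd_idx][metric_type] = perf_result

    def drop_failed_cmd(self, cmd_idx):
        # pop(..., None) instead of del: safe even if the command never stored data
        self._test_results.pop(cmd_idx, None)

store = PerfResultStore()
store.store_test_result(0, PerfMetricType.TOKEN_THROUGHPUT, 1234.5)
store.drop_failed_cmd(99)  # no-op; no KeyError
```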

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 78.57% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description Check ⚠️ Warning The pull request description is missing several required sections from the repository's description template. Specifically, the PR title should follow the format [JIRA ticket/NVBugs ID/GitHub issue/None][type] Summary, but is not included in the provided description. The "Test Coverage" section, which should clearly list relevant tests that safeguard the changes, is completely absent. Similarly, the "PR Checklist" section with required items to review before submission is not provided. While the author did provide a detailed "Description" section with usage examples and implementation details, the omission of these required sections represents a significant deviation from the template structure. Please update the pull request description to include all required sections from the template: (1) add the PR title in the format [JIRA ticket/NVBugs ID/GitHub issue/None][type] Summary at the top, (2) add a "Test Coverage" section that lists which tests safeguard these changes (such as the modifications to test_perf.py and related test configuration files), and (3) include the "PR Checklist" section with items reviewed before submission. The existing Description section can remain but should be properly positioned after the title in the template structure.
✅ Passed checks (1 passed)
Check name Status Explanation
Title Check ✅ Passed The PR title "[TRTLLM-8825][feat] Support Pytest Perf Results uploading to Database" directly aligns with the main objective of the changeset. The raw_summary and pr_objectives confirm that the primary feature being added is support for uploading pytest performance results to an OpenSearch database. The title follows the required format with a JIRA ticket ID, feature type indicator, and a concise summary of the change. The wording is clear and specific enough that teammates reviewing the git history would immediately understand the primary change without ambiguity.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/integration/defs/perf/utils.py (1)

509-520: Make metric_type Optional and avoid mutable default for outputs

run_ex is called with metric_type=None for prepare-dataset; also outputs={} as default is unsafe.

-    def run_ex(self,
-               full_test_name: str,
-               metric_type: PerfMetricType,
+    def run_ex(self,
+               full_test_name: str,
+               metric_type: Optional[PerfMetricType],
                venv: Optional[PythonVenvRunnerImpl],
@@
-               output_dir: str,
-               outputs: Dict[int, str] = {},
+               output_dir: str,
+               outputs: Optional[Dict[int, str]] = None,
@@
-        outputs = outputs.copy()
+        outputs = (outputs or {}).copy()
tests/integration/defs/perf/test_perf.py (1)

2293-2305: Avoid KeyError when removing failed cmd results

Use pop(..., None) to safely discard failed entries.

-                    del self._test_results[self._current_cmd_idx]
+                    self._test_results.pop(self._current_cmd_idx, None)

Also applies to: 2311-2312

🧹 Nitpick comments (7)
tests/integration/defs/perf/nvdf.py (5)

29-31: Avoid hard-coded NVDF endpoint; allow HTTPS and override via env

Make NVDF_BASE_URL configurable (env) and prefer HTTPS for transport security.

-NVDF_BASE_URL = "http://gpuwa.nvidia.com"
-PROJECT_ROOT = "sandbox-tmp-perf-test"
+NVDF_BASE_URL = os.getenv("NVDF_BASE_URL", "https://gpuwa.nvidia.com")
+PROJECT_ROOT = os.getenv("NVDF_PROJECT_ROOT", "sandbox-tmp-perf-test")

139-158: GPU name normalization comment vs behavior

Comment says “Replace spaces with hyphens” but code removes spaces. Align both; hyphens are easier to read.

-                # Replace spaces with hyphens
-                gpu_type = gpu_type.replace(" ", "")
+                # Replace spaces with hyphens for readability
+                gpu_type = gpu_type.replace(" ", "-")

315-339: GET with body is unusual; also add backoff jitter

OpenSearch supports GET-with-body but some proxies don’t. Consider POST. Add sleep between retries for stability.

-    while retry_time:
-        res = requests.get(url, data=json_data, headers=headers, timeout=10)
+    while retry_time:
+        res = requests.get(url, data=json_data, headers=headers, timeout=10)
         if res.status_code in [200, 201, 202]:
             return res
@@
-        retry_time -= 1
+        retry_time -= 1
+        time.sleep(min(60, 2 ** (5 - retry_time)))

Optionally switch to requests.post(url, ...) if infra allows.


507-560: Simplify baseline checks and guard non-numeric values

Small cleanups for clarity; also avoid including None in min/max.

-            # Skip baseline data
-            if data.get("b_is_baseline") and data.get("b_is_baseline") == True:
+            # Skip baseline data
+            if data.get("b_is_baseline"):
                 continue
-            if metric not in data:
+            if metric not in data or data[metric] is None:
                 continue
             values.append(data.get(metric))

562-578: Simplify truthy check

Minor readability improvement.

-    for data in history_data_list:
-        if data.get("b_is_baseline") and data.get("b_is_baseline") == True:
+    for data in history_data_list:
+        if data.get("b_is_baseline"):
             return data
tests/integration/defs/perf/utils.py (1)

607-609: Guard store_test_result and provide base-class default to prevent AttributeError

When extending AbstractPerfScriptTestClass, not all subclasses may implement store_test_result. Guard the call or add a no-op default in the base class.

-                # Store the test result
-                self.store_test_result(cmd_idx, metric_type, self._perf_result)
+                # Store the test result if supported
+                if metric_type is not None and hasattr(self, "store_test_result"):
+                    self.store_test_result(cmd_idx, metric_type, self._perf_result)

Optionally add to AbstractPerfScriptTestClass:

+    def store_test_result(self, cmd_idx: int, metric_type, perf_result: float) -> None:
+        """No-op default; subclasses may override to persist per-metric results."""
+        return
tests/integration/defs/perf/test_perf.py (1)

575-600: Use actual model name and stable typing in ServerConfig NVDF payload

  • Prefer grouping by underlying model_name rather than server config label for consistent project keys.
  • l_moe_max_num_tokens should be an int; default 0 avoids type-check failures.
     def to_nvdf_data(self) -> dict:
         """Convert ServerConfig to NVDataFlow data"""
         return {
-            "s_model_name": self.name,
+            "s_model_name": self.model_name,
+            "s_server_name": self.name,
@@
-            "l_moe_max_num_tokens": self.moe_max_num_tokens,
+            "l_moe_max_num_tokens": int(self.moe_max_num_tokens or 0),

Note: Adding s_server_name is optional but helpful for tracing.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 602b059 and b5eb07a.

📒 Files selected for processing (5)
  • tests/integration/defs/perf/nvdf.py (1 hunks)
  • tests/integration/defs/perf/test_perf.py (10 hunks)
  • tests/integration/defs/perf/utils.py (2 hunks)
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b200.yml (0 hunks)
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b300.yml (0 hunks)
💤 Files with no reviewable changes (2)
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b300.yml
  • tests/integration/test_lists/test-db/perf_sanity_l0_dgx_b200.yml
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/integration/defs/perf/utils.py
  • tests/integration/defs/perf/nvdf.py
  • tests/integration/defs/perf/test_perf.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/integration/defs/perf/utils.py
  • tests/integration/defs/perf/nvdf.py
  • tests/integration/defs/perf/test_perf.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/integration/defs/perf/utils.py
  • tests/integration/defs/perf/nvdf.py
  • tests/integration/defs/perf/test_perf.py
🧬 Code graph analysis (3)
tests/integration/defs/perf/utils.py (1)
tests/integration/defs/perf/test_perf.py (1)
  • store_test_result (2213-2220)
tests/integration/defs/perf/nvdf.py (1)
tests/integration/defs/trt_test_alternative.py (2)
  • print_error (318-324)
  • print_info (300-306)
tests/integration/defs/perf/test_perf.py (2)
tests/integration/defs/perf/nvdf.py (4)
  • _id (229-233)
  • get_nvdf_config (90-122)
  • post_data (341-361)
  • prepare_baseline_data (425-473)
tests/integration/defs/perf/utils.py (1)
  • PerfMetricType (87-117)
🪛 Ruff (0.14.1)
tests/integration/defs/perf/nvdf.py

91-91: Found useless expression. Either assign it to a variable or remove it.

(B018)


91-91: Undefined name gpu_type

(F821)


91-91: Undefined name gpu_count

(F821)


91-91: Undefined name build_id

(F821)


91-91: Undefined name build_url

(F821)


91-91: Undefined name job_name

(F821)


91-91: Undefined name job_id

(F821)


91-91: Undefined name job_url

(F821)


92-92: Found useless expression. Either assign it to a variable or remove it.

(B018)


92-92: Undefined name branch

(F821)


92-92: Undefined name commit

(F821)


92-92: Undefined name is_post_merge

(F821)


92-92: Undefined name is_pr_job

(F821)


92-92: Undefined name trigger_mr_user

(F821)


103-103: Undefined name gpu_type

(F821)


104-104: Undefined name gpu_count

(F821)


106-106: Undefined name host_node_name

(F821)


109-109: Undefined name build_id

(F821)


110-110: Undefined name build_url

(F821)


111-111: Undefined name job_name

(F821)


112-112: Undefined name job_id

(F821)


113-113: Undefined name job_url

(F821)


114-114: Undefined name branch

(F821)


115-115: Undefined name commit

(F821)


116-116: Undefined name is_post_merge

(F821)


117-117: Undefined name is_pr_job

(F821)


118-118: Undefined name trigger_mr_user

(F821)


145-145: Starting a process with a partial executable path

(S607)


160-160: Do not catch blind exception: Exception

(BLE001)


167-167: Do not use bare except

(E722)


300-300: Probable use of requests call without timeout

(S113)


359-359: Do not catch blind exception: Exception

(BLE001)


405-405: Loop control variable data overrides iterable it iterates

(B020)


420-420: Do not catch blind exception: Exception

(BLE001)


538-538: Avoid equality comparisons to True; use data.get("b_is_baseline"): for truth checks

Replace with data.get("b_is_baseline")

(E712)


551-551: Avoid equality comparisons to True; use data.get("b_is_baseline"): for truth checks

Replace with data.get("b_is_baseline")

(E712)


575-575: Avoid equality comparisons to True; use data.get("b_is_baseline"): for truth checks

Replace with data.get("b_is_baseline")

(E712)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (4)
tests/integration/defs/perf/nvdf.py (1)

287-313: HTTP POST without timeout/backoff jitter; misleading retry message

Add timeout, exponential backoff, and correct the retry count in the final log.
[raise_recommended_refactor]

-    retry_time = 5
-    while retry_time:
-        res = requests.post(url, data=json_data, headers=headers)
+    retries = 5
+    attempt = 0
+    while attempt < retries:
+        res = requests.post(url, data=json_data, headers=headers, timeout=10)
         if res.status_code in [200, 201, 202]:
@@
-        retry_time -= 1
+        attempt += 1
+        time.sleep(min(60, 2 ** attempt))
-    print_error(
-        f"Fail to post to {project} after {retry_time} retry: {url}, json: {json_data}, error: {res.text}"
-    )
+    print_error(
+        f"Fail to post to {project} after {retries} retries: {url}, json: {json_data}, error: {getattr(res, 'text', '<no response>')}"
+    )
tests/integration/defs/perf/test_perf.py (2)

680-690: ClientConfig NVDF payload OK; relies on 'd_' numeric support in type checker

No change needed after expanding type_check_for_nvdf to accept d_*.


1453-1455: Result aggregation shape looks good

Storing results by cmd_idx and PerfMetricType is consistent with later upload usage.

Also applies to: 2213-2221

tests/integration/defs/perf/utils.py (1)

2254-2264: Review comment is inconsistent with actual code state

The review comment assumes metric_type has been changed to Optional[PerfMetricType], but the function definition at utils.py:509-519 shows it remains metric_type: PerfMetricType (not Optional). The call site at test_perf.py:2255 passes None, which violates this non-Optional type contract.

Either the function parameter type annotation needs to be updated to Optional[PerfMetricType], or the call site needs to pass a valid PerfMetricType value instead of None.

Likely an incorrect or invalid review comment.

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22491 [ run ] triggered by Bot. Commit: ed8d6c5

@tensorrt-cicd
Collaborator

PR_Github #22491 [ run ] completed with state SUCCESS. Commit: ed8d6c5
/LLM/main/L0_MergeRequest_PR pipeline #16949 (Partly Tested) completed with status: 'FAILURE'

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22495 [ run ] triggered by Bot. Commit: da15bed

@tensorrt-cicd
Collaborator

PR_Github #22495 [ run ] completed with state SUCCESS. Commit: da15bed
/LLM/main/L0_MergeRequest_PR pipeline #16952 (Partly Tested) completed with status: 'SUCCESS'

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22498 [ run ] triggered by Bot. Commit: ec8b536

@tensorrt-cicd
Collaborator

PR_Github #22498 [ run ] completed with state SUCCESS. Commit: ec8b536
/LLM/main/L0_MergeRequest_PR pipeline #16955 (Partly Tested) completed with status: 'SUCCESS'

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22502 [ run ] triggered by Bot. Commit: 9276d86

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22503 [ run ] triggered by Bot. Commit: faca55a

@tensorrt-cicd
Collaborator

PR_Github #22502 [ run ] completed with state ABORTED. Commit: 9276d86
LLM/main/L0_MergeRequest_PR #16959 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #22503 [ run ] completed with state SUCCESS. Commit: faca55a
/LLM/main/L0_MergeRequest_PR pipeline #16960 (Partly Tested) completed with status: 'SUCCESS'

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22524 [ run ] triggered by Bot. Commit: 6564b82

@tensorrt-cicd
Collaborator

PR_Github #22524 [ run ] completed with state SUCCESS. Commit: 6564b82
/LLM/main/L0_MergeRequest_PR pipeline #16979 (Partly Tested) completed with status: 'SUCCESS'

@chenfeiz0326 chenfeiz0326 force-pushed the chenfeiz/support_posting_to_opensearch branch from bb1e8b5 to 8fccc26 on October 27, 2025 04:59
@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1,DGX_B300-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #22610 [ run ] triggered by Bot. Commit: 0b13828

@tensorrt-cicd
Collaborator

PR_Github #22610 [ run ] completed with state SUCCESS. Commit: 0b13828
/LLM/main/L0_MergeRequest_PR pipeline #17043 (Partly Tested) completed with status: 'SUCCESS'

@chenfeiz0326 chenfeiz0326 force-pushed the chenfeiz/support_posting_to_opensearch branch from 0b13828 to 2df0f73 on October 27, 2025 10:37
@tensorrt-cicd
Collaborator

PR_Github #23200 [ run ] completed with state SUCCESS. Commit: f57c68a
/LLM/main/L0_MergeRequest_PR pipeline #17487 (Partly Tested) completed with status: 'FAILURE'

@chenfeiz0326
Collaborator Author

/bot run --stage-list "DGX_B200-4_GPUs-PyTorch-Perf-Sanity-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #23233 [ run ] triggered by Bot. Commit: f57c68a

@tensorrt-cicd
Collaborator

PR_Github #23233 [ run ] completed with state SUCCESS. Commit: f57c68a
/LLM/main/L0_MergeRequest_PR pipeline #17512 (Partly Tested) completed with status: 'SUCCESS'

@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #23242 [ run ] triggered by Bot. Commit: f57c68a

@tensorrt-cicd
Collaborator

PR_Github #23242 [ run ] completed with state SUCCESS. Commit: f57c68a
/LLM/main/L0_MergeRequest_PR pipeline #17518 completed with status: 'FAILURE'

@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #23245 [ run ] triggered by Bot. Commit: f57c68a

@tensorrt-cicd
Collaborator

PR_Github #23245 [ run ] completed with state SUCCESS. Commit: f57c68a
/LLM/main/L0_MergeRequest_PR pipeline #17519 completed with status: 'FAILURE'

@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #23247 [ run ] triggered by Bot. Commit: f57c68a

@tensorrt-cicd
Collaborator

PR_Github #23247 [ run ] completed with state SUCCESS. Commit: f57c68a
/LLM/main/L0_MergeRequest_PR pipeline #17521 completed with status: 'FAILURE'

@chenfeiz0326 chenfeiz0326 force-pushed the chenfeiz/support_posting_to_opensearch branch from f57c68a to dc934d5 on November 3, 2025 01:37
@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #23307 [ run ] triggered by Bot. Commit: dc934d5

@tensorrt-cicd
Collaborator

PR_Github #23307 [ run ] completed with state FAILURE. Commit: dc934d5
/LLM/main/L0_MergeRequest_PR pipeline #17560 completed with status: 'FAILURE'

Signed-off-by: Chenfei Zhang <[email protected]>
Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 chenfeiz0326 force-pushed the chenfeiz/support_posting_to_opensearch branch from dc934d5 to 5571793 on November 3, 2025 04:54
@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #23334 [ run ] triggered by Bot. Commit: 5571793

Signed-off-by: Chenfei Zhang <[email protected]>
@chenfeiz0326 chenfeiz0326 requested a review from a team as a code owner on November 3, 2025 05:02
@chenfeiz0326
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #23337 [ run ] triggered by Bot. Commit: 7c4f1bf

@tensorrt-cicd
Collaborator

PR_Github #23334 [ run ] completed with state ABORTED. Commit: 5571793
LLM/main/L0_MergeRequest_PR #17583 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #23337 [ run ] completed with state SUCCESS. Commit: 7c4f1bf
/LLM/main/L0_MergeRequest_PR pipeline #17586 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@litaotju litaotju merged commit cc4ab8d into NVIDIA:main Nov 3, 2025
5 checks passed
fredricz-20070104 pushed a commit to fredricz-20070104/TensorRT-LLM that referenced this pull request Nov 5, 2025