
Conversation

@zerollzeng
Collaborator

@zerollzeng zerollzeng commented Nov 17, 2025

Summary by CodeRabbit

  • Refactor
    • Simplified benchmark execution workflow by removing configuration and server health-check polling loops.
    • Reorganized configuration structure, consolidating dataset and sequence parameters under a unified benchmark section.
    • Streamlined server readiness validation with explicit error handling for missing configuration values.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@coderabbitai
Contributor

coderabbitai bot commented Nov 17, 2025

📝 Walkthrough

Walkthrough

Configuration structure reorganization across SLURM benchmark scripts moves dataset_file, input_length, and output_length to the benchmark section. Server synchronization logic is simplified by removing wait loops and config validation delays, replaced with direct extraction and validation. Python submission scripts updated to reference new configuration paths.

Changes

Configuration Restructuring
Files: examples/disaggregated/slurm/benchmark/config.yaml, examples/wide_ep/slurm_scripts/config.yaml
Moved dataset_file, input_length, and output_length from scattered sections (environment, sequence) into a consolidated benchmark block. Removed the sequence configuration section.

Server Initialization & Readiness
Files: examples/disaggregated/slurm/benchmark/start_server.sh, examples/disaggregated/slurm/benchmark/disaggr_torch.slurm, examples/disaggregated/slurm/benchmark/accuracy_eval.sh
Replaced blocking config and health-check loops with direct hostname/port extraction and immediate validation. Added explicit server readiness synchronization via a wait_server.sh invocation in the SLURM script. Removed redundant waiting logic. Minor capitalization fix in the startup message.

Benchmark Execution Scripts
Files: examples/disaggregated/slurm/benchmark/run_benchmark.sh, examples/disaggregated/slurm/benchmark/run_benchmark_nv_sa.sh
Removed wait-for-file, config polling, and health-check loops. Eliminated auxiliary functions (wait_for_file, wait_for_server, do_get_logs, extract_server_info). Simplified to direct extraction from server_config.yaml with early validation. Added benchmark directory cleanup in one script.

Python Job Submission
Files: examples/disaggregated/slurm/benchmark/submit.py
Updated all configuration access paths from the sequence section to the benchmark section for input_length, output_length, and dataset_file. Revised log directory formation and sbatch command arguments to use the new benchmark-derived values.
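The readiness synchronization the refactor moves into wait_server.sh can be sketched as a bounded poll on the generated config file. This is a hypothetical illustration of the pattern, not the real script; the function name, file path, and timeout values are assumptions.

```shell
# Hypothetical sketch of a wait_server.sh-style readiness check: block until
# the server config file exists and is non-empty, with a bounded number of
# attempts so a failed server start surfaces as an error instead of a hang.
wait_for_config() {
  local config_file="$1" max_attempts="${2:-60}" delay="${3:-5}"
  local attempt=0
  while [[ ! -s "$config_file" ]]; do
    if (( attempt >= max_attempts )); then
      echo "Error: timed out waiting for ${config_file}" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
  echo "ready: ${config_file}"
}
```

Because this step runs once in disaggr_torch.slurm before the benchmark is launched, the benchmark scripts themselves can drop their own wait loops and read the config directly.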

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant SLURM as SLURM Job
    participant Server as Server (start_server.sh)
    participant Waiter as Wait Server (disaggr_torch.slurm)
    participant Bench as Benchmark (run_benchmark.sh)
    
    SLURM->>Server: Start server in background
    par Initialization
        SLURM->>Waiter: Invoke wait_server.sh via srun
        Waiter->>Waiter: Poll server_config.yaml
        Waiter->>Waiter: Validate hostname & port extraction
        alt Validation Success
            Waiter-->>SLURM: Return success
        else Validation Fails
            Waiter-->>SLURM: Exit with error
        end
    and Execution
        SLURM->>Bench: Extract hostname/port from server_config.yaml
        Bench->>Bench: Validate extraction (exit if empty)
        Bench->>Bench: Construct base_url
    end
    
    SLURM->>Bench: Execute benchmark with validated config
    Bench-->>SLURM: Benchmark complete

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Configuration path updates: Verify all references to input_length, output_length, and dataset_file have been consistently migrated from sequence/environment to benchmark across both YAML files and Python submission script.
  • Removed synchronization logic: Confirm that replacing wait loops with direct extraction plus validation does not introduce timing race conditions or miss server initialization failures.
  • New server readiness mechanism: Review the wait_server.sh invocation pattern in disaggr_torch.slurm and ensure proper error handling and cleanup on failure.
  • Function removals: Validate that all removed functions (do_get_logs, wait_for_file, wait_for_server, etc.) are not called elsewhere or depended upon by other scripts.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

Description check — ⚠️ Warning
The PR description contains only the template with no actual content filled in; the Description and Test Coverage sections are empty, and the PR Checklist is not meaningfully completed. Resolution: fill in the Description section explaining what was refactored and why, provide Test Coverage details about tests that validate the changes, and complete the PR Checklist appropriately.

Docstring Coverage — ⚠️ Warning
Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

Title check — ✅ Passed
The title correctly follows the repository template format with the [None][chore] prefix and clearly identifies the main change as a benchmark refactor.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
examples/disaggregated/slurm/benchmark/run_benchmark_nv_sa.sh (1)

54-57: Consider potential race condition with shared directory.

The explicit removal of ${BENCH_SERVING_DIR} (/tmp/bench_serving) before cloning could cause issues if multiple benchmark jobs run concurrently on the same node, as they would share the same /tmp directory.

Consider using a unique directory per job to avoid conflicts:

-readonly BENCH_SERVING_DIR="/tmp/bench_serving"
+readonly BENCH_SERVING_DIR="/tmp/bench_serving_${SLURM_JOB_ID:-$$}"
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 24f5cd7 and 3692614.

📒 Files selected for processing (8)
  • examples/disaggregated/slurm/benchmark/accuracy_eval.sh (1 hunks)
  • examples/disaggregated/slurm/benchmark/config.yaml (1 hunks)
  • examples/disaggregated/slurm/benchmark/disaggr_torch.slurm (2 hunks)
  • examples/disaggregated/slurm/benchmark/run_benchmark.sh (1 hunks)
  • examples/disaggregated/slurm/benchmark/run_benchmark_nv_sa.sh (1 hunks)
  • examples/disaggregated/slurm/benchmark/start_server.sh (1 hunks)
  • examples/disaggregated/slurm/benchmark/submit.py (3 hunks)
  • examples/wide_ep/slurm_scripts/config.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
📚 Learning: 2025-08-22T19:08:10.822Z
Learnt from: yuanjingx87
Repo: NVIDIA/TensorRT-LLM PR: 7176
File: jenkins/L0_Test.groovy:361-389
Timestamp: 2025-08-22T19:08:10.822Z
Learning: In Slurm job monitoring scripts, when jobs have built-in timeouts configured (via --time parameter or partition/system timeouts), an additional timeout mechanism in the monitoring loop is typically unnecessary. When a Slurm job times out, it gets terminated and removed from the active queue, causing `squeue -j $jobId` to return non-zero and break monitoring loops naturally. The job's final status can then be checked via `sacct` to determine if it failed due to timeout.

Applied to files:

  • examples/disaggregated/slurm/benchmark/run_benchmark_nv_sa.sh
📚 Learning: 2025-08-18T08:42:02.640Z
Learnt from: samuellees
Repo: NVIDIA/TensorRT-LLM PR: 6974
File: tensorrt_llm/serve/scripts/benchmark_dataset.py:558-566
Timestamp: 2025-08-18T08:42:02.640Z
Learning: In TensorRT-LLM's RandomDataset (tensorrt_llm/serve/scripts/benchmark_dataset.py), when using --random-token-ids option, sequence length accuracy is prioritized over semantic correctness for benchmarking purposes. The encode/decode operations should use skip_special_tokens=True and add_special_tokens=False to ensure exact target token lengths.

Applied to files:

  • examples/disaggregated/slurm/benchmark/config.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (9)
examples/disaggregated/slurm/benchmark/start_server.sh (1)

15-15: LGTM!

The capitalization change improves message consistency.

examples/wide_ep/slurm_scripts/config.yaml (1)

24-26: LGTM! Configuration consolidation improves clarity.

Moving input_length, output_length, and dataset_file into the benchmark section logically groups related configuration parameters.
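The consolidation can be pictured with a before/after sketch. The key names come from the review summary; the surrounding section layout and the length values are illustrative, not copied from the real config.yaml.

```yaml
# Before: benchmark inputs scattered across sections (illustrative layout)
environment:
  dataset_file: "<dataset_file>"
sequence:
  input_length: 1024
  output_length: 1024

# After: all benchmark inputs under one section
benchmark:
  dataset_file: "<dataset_file>"
  input_length: 1024
  output_length: 1024
```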

examples/disaggregated/slurm/benchmark/accuracy_eval.sh (1)

14-23: LGTM! Validation improves robustness.

The added validation check ensures the script fails fast with a clear error message if hostname or port extraction fails, which is better than proceeding with empty values.
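The fail-fast extraction pattern praised here can be sketched as a small shell function. This is a hedged illustration assuming the server config is simple `key: value` YAML; the real scripts may parse it differently.

```shell
# Hypothetical sketch of direct hostname/port extraction with early validation,
# in place of a polling loop. Assumes server_config.yaml holds flat
# "hostname: ..." and "port: ..." lines.
extract_base_url() {
  local config_file="$1"
  local hostname port
  hostname=$(awk '$1 == "hostname:" {print $2}' "$config_file")
  port=$(awk '$1 == "port:" {print $2}' "$config_file")

  # Fail fast with a clear error instead of proceeding with empty values.
  if [[ -z "$hostname" || -z "$port" ]]; then
    echo "Error: failed to extract hostname/port from ${config_file}" >&2
    return 1
  fi
  echo "http://${hostname}:${port}"
}
```

The key design point is that validation happens at the single place the values enter the script, so every downstream use of the base URL can assume it is well-formed.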

examples/disaggregated/slurm/benchmark/config.yaml (1)

18-20: LGTM! Configuration consolidation aligns with submit.py.

Moving benchmark-related parameters into the benchmark section improves configuration organization and aligns with the updated access patterns in submit.py.

examples/disaggregated/slurm/benchmark/submit.py (3)

96-97: LGTM! Configuration access updated consistently.

The migration from config['sequence'] to config['benchmark'] for input_length and output_length aligns with the configuration restructuring in config.yaml.
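The access-path change amounts to reading all three values from one section. A minimal sketch, assuming the config has already been parsed into a dict (the helper name and the sample values are hypothetical, not from submit.py):

```python
def benchmark_params(config: dict) -> tuple:
    """Extract input/output lengths and dataset file from the consolidated
    'benchmark' section (previously split across 'sequence' and an
    environment section). Hypothetical helper; real submit.py may differ."""
    bench = config["benchmark"]
    return bench["input_length"], bench["output_length"], bench["dataset_file"]


# Illustrative config mirroring the restructured config.yaml
config = {
    "benchmark": {
        "input_length": 1024,
        "output_length": 1024,
        "dataset_file": "dataset.json",
    }
}
print(benchmark_params(config))  # -> (1024, 1024, 'dataset.json')
```

Funneling all three reads through the same section means a missing key fails loudly with a KeyError naming 'benchmark', rather than silently picking up a stale value from an old section.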


156-157: LGTM! Command arguments updated to use benchmark configuration.

The sbatch command arguments correctly reference config['benchmark'] for input and output lengths, consistent with the configuration restructuring.


167-167: LGTM! Dataset file now sourced from benchmark configuration.

Moving dataset_file from env_config to config['benchmark'] completes the consolidation of benchmark-related parameters.

examples/disaggregated/slurm/benchmark/run_benchmark.sh (1)

31-37: LGTM! Simplified with proper validation.

Removing the wait loop and adding validation for hostname/port extraction simplifies the script while maintaining robustness. The assumption that the server is already healthy is valid given the explicit synchronization in disaggr_torch.slurm.

examples/disaggregated/slurm/benchmark/run_benchmark_nv_sa.sh (1)

47-51: LGTM! Simplified with proper validation.

The direct extraction of hostname and port with validation is cleaner than the removed wait loops. The validation ensures the script fails fast if the config is malformed.

Signed-off-by: Zero Zeng <[email protected]>
@chuangz0 chuangz0 requested a review from qiaoxj07 November 18, 2025 04:04
@kaiyux kaiyux enabled auto-merge (squash) November 18, 2025 06:47
@kaiyux
Member

kaiyux commented Nov 18, 2025

/bot skip --comment "slurm scripts are currently not protected in pipeline"

@tensorrt-cicd
Collaborator

PR_Github #24858 [ skip ] triggered by Bot. Commit: 9431a06

@tensorrt-cicd
Collaborator

PR_Github #24858 [ skip ] completed with state SUCCESS. Commit: 9431a06
Skipping testing for commit 9431a06

@kaiyux kaiyux merged commit 43896af into NVIDIA:main Nov 18, 2025
5 checks passed
Quoted configuration lines under review:

trtllm_wheel_path: "" # Path to pre-built TensorRT-LLM wheel. If provided, install from this wheel instead
dataset_file: "<dataset_file>"
work_dir: "<full_path_to_work_dir>"
worker_env_var: "TLLM_LOG_LEVEL=INFO TRTLLM_SERVER_DISABLE_GC=1 TRTLLM_WORKER_DISABLE_GC=1" # Environment variables for workers
Collaborator


Does this mean we only disable GC for the workers, but leave GC enabled for the disagg proxy server?
@qiaoxj07

Collaborator


I think we should also add "TRTLLM_SERVER_DISABLE_GC=1 TRTLLM_WORKER_DISABLE_GC=1" to server_env_var, for better performance.
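The suggestion above would look roughly like this in the config. The server_env_var key name is an assumption by analogy with the existing worker_env_var key, not confirmed against the actual config schema:

```yaml
# Hypothetical: apply the same GC-disabling flags to the disagg proxy server,
# not just the workers (server_env_var key name assumed by analogy)
server_env_var: "TLLM_LOG_LEVEL=INFO TRTLLM_SERVER_DISABLE_GC=1 TRTLLM_WORKER_DISABLE_GC=1"
worker_env_var: "TLLM_LOG_LEVEL=INFO TRTLLM_SERVER_DISABLE_GC=1 TRTLLM_WORKER_DISABLE_GC=1"
```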

