
Separate pytest-benchmark into dedicated benchmark dependency group #2937

Merged
koxudaxi merged 1 commit into main from feature/separate-benchmark-dependencies on Jan 6, 2026

Conversation

@koxudaxi (Owner) commented Jan 6, 2026

Summary by CodeRabbit

  • Chores
    • Reorganized benchmark testing dependencies into a dedicated dependency group
    • Updated CI/CD workflow to support benchmark test execution
    • Enabled benchmark tests in the project's test configuration


@coderabbitai Bot commented Jan 6, 2026

📝 Walkthrough

The pull request reorganizes benchmark-related dependencies into a dedicated dependency group, separating them from test dependencies. The CI/CD workflow, project manifest, and test configuration are updated to recognize and utilize this new benchmark dependency group.

Changes

  • CI/CD Configuration (.github/workflows/codspeed.yaml): Updated the uv sync command to include the --group benchmark flag, enabling installation of benchmark dependencies during the setup phase.
  • Dependency Organization (pyproject.toml): Created a new benchmark dependency group containing pytest-benchmark and pytest-codspeed>=2.2, moved from the test group. Added a benchmark test marker configuration.
  • Test Configuration (tox.ini): Expanded [testenv:perf] dependency_groups to include both the test and benchmark groups (previously only test). Removed the --benchmark-disable flag from the main test run.
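The pyproject.toml side of the change could be sketched as follows. The group members and the two marker strings come from this PR's summary and review diff; the surrounding structure, the [dependency-groups] table layout, and the placeholder test entry are assumptions for illustration:

```toml
# Hypothetical sketch of the relevant pyproject.toml sections after this PR.
# Group members and marker strings are from the PR; everything else is assumed.
[dependency-groups]
test = [
  # pytest-benchmark and pytest-codspeed no longer live here
  "pytest",  # illustrative placeholder, not from the PR
]
benchmark = [
  "pytest-benchmark",
  "pytest-codspeed>=2.2",
]

[tool.pytest.ini_options]
markers = [
  "perf: marks tests as performance tests (excluded from CI benchmarks)",
  "benchmark: marks tests as benchmark tests",
]
```

With this split, a plain `uv sync --group test` no longer pulls in the benchmark tooling; it is installed only when the benchmark group is requested explicitly.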

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 Benchmarks hopped away to their own little space,
No longer tangled in test-time embrace,
With a new group all set, they can run with great speed,
While the workflow keeps pace with each benchmark's need!

Pre-merge checks

✅ Passed checks (3 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title accurately describes the main change: moving pytest-benchmark and pytest-codspeed into a dedicated benchmark dependency group across configuration files.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@codspeed-hq Bot commented Jan 6, 2026

CodSpeed Performance Report

Merging #2937 will improve performance by 22.28%

Comparing feature/separate-benchmark-dependencies (b726d8d) with main (da32bb7)

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

Summary

⚡ 11 improvements
⏩ 98 skipped [1]

Benchmarks breakdown

Mode      Benchmark                                BASE      HEAD      Efficiency
WallTime  test_perf_multiple_files_input           3.8 s     3.1 s     +22.28%
WallTime  test_perf_all_options_enabled            6.7 s     5.7 s     +17.54%
WallTime  test_perf_aws_style_openapi_pydantic_v2  2 s       1.7 s     +18.96%
WallTime  test_perf_openapi_large                  2.9 s     2.5 s     +18.19%
WallTime  test_perf_stripe_style_pydantic_v2       2 s       1.7 s     +17.49%
WallTime  test_perf_kubernetes_style_pydantic_v2   2.6 s     2.2 s     +17.19%
WallTime  test_perf_complex_refs                   2 s       1.7 s     +18.26%
WallTime  test_perf_graphql_style_pydantic_v2      807 ms    696.9 ms  +15.79%
WallTime  test_perf_duplicate_names                987.9 ms  841.8 ms  +17.36%
WallTime  test_perf_deep_nested                    6 s       5 s       +18.23%
WallTime  test_perf_large_models_pydantic_v2       3.6 s     3.1 s     +18.01%

Footnotes

  1. 98 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them in CodSpeed to remove them from the performance reports.
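The Efficiency column is consistent with comparing base and head wall times directly. A small sketch of that calculation follows; the exact formula CodSpeed uses internally is an assumption here, and the rounded table values naturally give a slightly different percentage than the reported +22.28%, which is computed from unrounded timings:

```python
# Sketch: reproduce an "Efficiency" percentage from base/head wall times.
# Assumes efficiency = (base / head - 1) * 100; CodSpeed's internal formula
# and its unrounded timings are not shown in this report.

def efficiency_pct(base_s: float, head_s: float) -> float:
    """Percent speedup of head relative to base wall time."""
    return (base_s / head_s - 1) * 100

# Rounded headline values for test_perf_multiple_files_input: 3.8 s -> 3.1 s.
# Yields ~22.6%, close to the reported +22.28% from unrounded timings.
print(round(efficiency_pct(3.8, 3.1), 1))  # -> 22.6
```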

@codecov Bot commented Jan 6, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (da32bb7) to head (b726d8d).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main     #2937   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           92        92           
  Lines        16969     16969           
  Branches      1976      1976           
=========================================
  Hits         16969     16969           
Flag       Coverage Δ
unittests  100.00% <ø> (ø)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
pyproject.toml (1)

235-238: Consider clarifying the marker distinction.

The marker descriptions distinguish between "perf" (excluded from CI benchmarks) and "benchmark" (for benchmark tests), which is good. However, the "perf" description could be slightly clearer: consider rephrasing it to "marks tests as performance tests (not tracked as benchmarks in CI)" to emphasize that those tests run but aren't tracked for regression.

🔎 Suggested clarification
 markers = [
-  "perf: marks tests as performance tests (excluded from CI benchmarks)",
+  "perf: marks tests as performance tests (not tracked as benchmarks in CI)",
   "benchmark: marks tests as benchmark tests",
 ]
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between da32bb7 and b726d8d.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock and included by none
📒 Files selected for processing (3)
  • .github/workflows/codspeed.yaml
  • pyproject.toml
  • tox.ini
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: Analyze (python)
  • GitHub Check: 3.11 on Windows
  • GitHub Check: 3.10 on macOS
  • GitHub Check: 3.11 on macOS
  • GitHub Check: 3.13 on Windows
  • GitHub Check: 3.12 on Windows
  • GitHub Check: 3.10 on Windows
  • GitHub Check: 3.14 on macOS
  • GitHub Check: 3.14 on Windows
  • GitHub Check: benchmarks
🔇 Additional comments (3)
.github/workflows/codspeed.yaml (1)

29-29: LGTM! Correctly installs the new benchmark dependency group.

The addition of --group benchmark properly aligns with the new dependency group structure in pyproject.toml, ensuring benchmark-specific packages are available for the CodSpeed workflow.
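The workflow change might look roughly like this. Only the `--group benchmark` flag on the `uv sync` command is taken from the PR; the job name, step names, action versions, and any other sync flags are illustrative assumptions:

```yaml
# Hypothetical excerpt of .github/workflows/codspeed.yaml after this PR.
# Only `--group benchmark` is from the PR; the rest of the layout is assumed.
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Install dependencies
        run: uv sync --group benchmark  # other flags in the real workflow omitted
```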

tox.ini (1)

171-173: LGTM! Correctly includes both test and benchmark groups for performance testing.

The perf environment appropriately includes both the test and benchmark dependency groups, ensuring all necessary dependencies are available. The multi-line format also improves readability.
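The tox.ini change described above could be sketched as follows. Only the two-entry dependency_groups setting is taken from the PR summary; the commands line and everything else in the environment are assumptions:

```ini
# Hypothetical sketch of the [testenv:perf] environment after this PR.
# Only dependency_groups is from the PR summary; commands is illustrative.
[testenv:perf]
dependency_groups =
    test
    benchmark
commands =
    pytest -m perf {posargs}
```

Note that the PR also removes the --benchmark-disable flag from the main test run, which is what makes the separate benchmark group necessary for the perf environment.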

pyproject.toml (1)

107-110: Benchmark dependency group is properly isolated with compatible version constraint.

The benchmark group cleanly separates benchmark tooling from test dependencies. Verification confirms that no unintended benchmark imports exist in non-perf tests or source code, and the >=2.2 version constraint for pytest-codspeed is compatible with current releases.

@koxudaxi koxudaxi merged commit 4b29263 into main Jan 6, 2026
38 checks passed
@koxudaxi koxudaxi deleted the feature/separate-benchmark-dependencies branch January 6, 2026 07:28
@github-actions Bot (Contributor) commented Jan 6, 2026

Breaking Change Analysis

Result: No breaking changes detected

Reasoning: This PR reorganizes internal development dependencies by moving pytest-benchmark and pytest-codspeed from the 'test' dependency group to a new dedicated 'benchmark' dependency group. These changes only affect development and CI infrastructure. End users of datamodel-code-generator (either as a CLI tool or Python library) are not impacted since they don't install dev dependencies. The code generation output, CLI options, Python API, default behaviors, error handling, and Python version support remain unchanged.


This analysis was performed by Claude Code Action

@github-actions Bot (Contributor)
🎉 Released in 0.53.0

This PR is now available in the latest release. See the release notes for details.
