
test: make integration helpers explicit about output model #3028

Merged
koxudaxi merged 1 commit into main from split/pr2b-prep-explicit-output on Mar 6, 2026

Conversation

@koxudaxi
Owner

@koxudaxi koxudaxi commented Mar 6, 2026

Summary

  • inject an explicit --output-model-type pydantic.BaseModel into helper-driven E2E tests when the test does not specify one
  • keep one main E2E and one input_model E2E on the real CLI default via default_output_model_type=None
  • make later default-switch / v1-removal PRs reviewable by cutting implicit default churn out of most golden tests

Why

The current v1-removal work is functionally green, but the PR is too large to review because many helper-based tests inherit the CLI default output model implicitly.

This prep PR makes those tests explicit without changing product behavior. After this merges, the follow-up PR that flips the default to pydantic_v2.BaseModel will only need to touch the small set of tests that intentionally validate the real default.

Validation

  • tox run -e 3.13-parallel -- tests/test_input_model.py tests/main/test_main_general.py::test_space_and_special_characters_dict tests/main/test_main_yaml.py
  • .tox/3.13-parallel/bin/pytest -n auto --color=yes -m "not perf" --cov .tox/3.13-parallel/lib/python3.14/site-packages/datamodel_code_generator --cov tests --cov-config=pyproject.toml --cov-fail-under=0 --no-cov-on-fail --cov-report term-missing:skip-covered --cov-report xml:coverage.prep.xml --junitxml junit.prep.xml tests/main tests/test_input_model.py
  • .tox/3.13-parallel/bin/diff-cover --compare-branch origin/main coverage.prep.xml

Summary by CodeRabbit

  • Tests
    • Enhanced test infrastructure to support configurable default output model types, improving test coverage and flexibility for CLI argument handling scenarios.

@github-actions
Contributor

github-actions Bot commented Mar 6, 2026

@codspeed-hq

codspeed-hq Bot commented Mar 6, 2026

Merging this PR will improve performance by 18.8%

⚠️ Unknown Walltime execution environment detected

Using the Walltime instrument on standard Hosted Runners will lead to inconsistent data.

For the most accurate results, we recommend using CodSpeed Macro Runners: bare-metal machines fine-tuned for performance measurement consistency.

⚡ 11 improved benchmarks
⏩ 98 skipped benchmarks¹

Performance Changes

Mode Benchmark BASE HEAD Efficiency
WallTime test_perf_large_models_pydantic_v2 3.7 s 3.1 s +17.82%
WallTime test_perf_aws_style_openapi_pydantic_v2 2 s 1.7 s +14.45%
WallTime test_perf_complex_refs 2 s 1.7 s +18.8%
WallTime test_perf_kubernetes_style_pydantic_v2 2.7 s 2.3 s +17.93%
WallTime test_perf_all_options_enabled 6.8 s 6 s +13.59%
WallTime test_perf_multiple_files_input 3.8 s 3.2 s +18.34%
WallTime test_perf_stripe_style_pydantic_v2 2 s 1.7 s +15.78%
WallTime test_perf_deep_nested 5.9 s 5 s +17.5%
WallTime test_perf_duplicate_names 1,001.7 ms 857 ms +16.88%
WallTime test_perf_graphql_style_pydantic_v2 829.8 ms 709.6 ms +16.94%
WallTime test_perf_openapi_large 2.9 s 2.5 s +18.71%

Comparing split/pr2b-prep-explicit-output (18b607b) with main (b868fa8)

Open in CodSpeed

Footnotes

  1. 98 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, click here and archive them to remove them from the performance reports.

@codecov

codecov Bot commented Mar 6, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 100.00%. Comparing base (b868fa8) to head (18b607b).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main     #3028   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           95        95           
  Lines        18165     18174    +9     
  Branches      2097      2099    +2     
=========================================
+ Hits         18165     18174    +9     
Flag Coverage Δ
unittests 100.00% <100.00%> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@koxudaxi koxudaxi marked this pull request as ready for review March 6, 2026 13:05
@coderabbitai

coderabbitai Bot commented Mar 6, 2026

📝 Walkthrough


The changes introduce a default_output_model_type parameter across test helper functions in the main conftest and input model tests. When extra CLI arguments omit --output-model-type, the helpers automatically inject a default value (PydanticBaseModel) unless explicitly overridden to None by individual tests.

Changes

  • Test infrastructure helpers — tests/main/conftest.py:
    Added _get_argument_value and _normalize_extra_args utility functions to parse CLI arguments and inject a default --output-model-type when missing. Updated _extend_args, _run_main, _run_main_url, and run_main_and_assert to accept and propagate a default_output_model_type parameter (defaults to PydanticBaseModel.value).
  • Test invocations — tests/main/test_main_general.py, tests/test_input_model.py:
    Updated test invocations to use the new default_output_model_type parameter. test_space_and_special_characters_dict passes None to disable auto-injection. run_input_model_and_assert now accepts and applies the same default normalization logic for extra arguments.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 With whiskers twitching, tests now gleam,
Default models flow like a gentle stream,
CLI arguments need not repeat,
Our test helpers make the pattern neat!
One value shared across the test suite dream. ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title clearly and concisely describes the main change: making integration test helpers explicit about the output model type by injecting a default value.
  • Docstring Coverage — ✅ Passed: docstring coverage is 100.00%, which meets the required threshold of 80.00%.



@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
tests/main/conftest.py (1)

596-624: Consider updating run_main_url_and_assert to support default_output_model_type.

Unlike run_main_and_assert, this function doesn't accept or propagate default_output_model_type. It calls _run_main_url which now supports the parameter, but passes nothing. This means URL-based tests will not get the default injection.

If this is intentional, a brief comment would clarify. Otherwise, consider adding the parameter for consistency.

♻️ Proposed fix
 def run_main_url_and_assert(
     *,
     url: str,
     output_path: Path,
     input_file_type: InputFileTypeLiteral | None,
     assert_func: AssertFileContent,
     expected_file: str | Path,
     extra_args: Sequence[str] | None = None,
+    default_output_model_type: str | None = DataModelType.PydanticBaseModel.value,
     transform: Callable[[str], str] | None = None,
     force_exec_validation: bool = False,
 ) -> None:
     ...
     __tracebackhide__ = True
-    return_code = _run_main_url(url, output_path, input_file_type, extra_args=extra_args)
+    return_code = _run_main_url(
+        url, output_path, input_file_type,
+        extra_args=extra_args,
+        default_output_model_type=default_output_model_type,
+    )
     ...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/main/conftest.py` around lines 596 - 624, run_main_url_and_assert
currently doesn't accept or forward default_output_model_type to _run_main_url,
so URL-based tests won't receive the default injection; update the
run_main_url_and_assert signature to accept default_output_model_type:
DefaultOutputModelType | None = None (or the same type used elsewhere), pass
that value through to the _run_main_url call, and update the docstring to
document the new parameter (or add a clarifying comment if omission was
intentional). Make sure to use the same parameter name and type as
run_main_and_assert to keep behavior consistent across both helpers.
tests/test_input_model.py (1)

881-901: Consider adding default_output_model_type to run_multiple_input_models_and_assert.

Unlike run_input_model_and_assert, this helper function does not include the default_output_model_type parameter and injection logic. If this is intentional (e.g., multiple input models always require explicit output type), it would be helpful to add a brief comment explaining why. Otherwise, consider adding the parameter for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_input_model.py` around lines 881 - 901, The helper
run_multiple_input_models_and_assert is missing the default_output_model_type
parameter and the logic to inject it into CLI args like
run_input_model_and_assert does; add an optional parameter
default_output_model_type: str | None = None to
run_multiple_input_models_and_assert and, when provided, append the
corresponding flag/value (e.g., "--default-output-model-type",
default_output_model_type) into the args list before calling main; if omitting
this is intentional, instead add a brief comment above
run_multiple_input_models_and_assert explaining why multiple input models
require explicit output type so future readers aren't confused.

ℹ️ Review info
⚙️ Run configuration

Configuration file: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 75c8c416-9069-49d5-b527-69fe22d99212

📥 Commits

Reviewing files that changed from the base of the PR and between b868fa8 and 18b607b.

📒 Files selected for processing (3)
  • tests/main/conftest.py
  • tests/main/test_main_general.py
  • tests/test_input_model.py

@koxudaxi koxudaxi merged commit 3bedaaa into main Mar 6, 2026
37 checks passed
@koxudaxi koxudaxi deleted the split/pr2b-prep-explicit-output branch March 6, 2026 15:11
@github-actions
Contributor

github-actions Bot commented Mar 6, 2026

Breaking Change Analysis

Result: No breaking changes detected

Reasoning: This PR only modifies test infrastructure files (tests/main/conftest.py, tests/main/test_main_general.py, tests/test_input_model.py). It adds a default_output_model_type parameter to test helper functions to make tests explicit about the --output-model-type CLI argument. No source code, CLI behavior, code generation logic, templates, or Python API are changed. The PR description explicitly states "This prep PR makes those tests explicit without changing product behavior." This is preparatory work for a future PR that will change defaults, but this PR itself has no user-facing impact.


This analysis was performed by Claude Code Action

@github-actions
Contributor

🎉 Released in 0.55.0

This PR is now available in the latest release. See the release notes for details.
