test: make integration helpers explicit about output model#3028
Conversation
Merging this PR will improve performance by 18.8%

| | Mode | Benchmark | BASE | HEAD | Efficiency |
|---|---|---|---|---|---|
| ⚡ | WallTime | test_perf_large_models_pydantic_v2 | 3.7 s | 3.1 s | +17.82% |
| ⚡ | WallTime | test_perf_aws_style_openapi_pydantic_v2 | 2 s | 1.7 s | +14.45% |
| ⚡ | WallTime | test_perf_complex_refs | 2 s | 1.7 s | +18.8% |
| ⚡ | WallTime | test_perf_kubernetes_style_pydantic_v2 | 2.7 s | 2.3 s | +17.93% |
| ⚡ | WallTime | test_perf_all_options_enabled | 6.8 s | 6 s | +13.59% |
| ⚡ | WallTime | test_perf_multiple_files_input | 3.8 s | 3.2 s | +18.34% |
| ⚡ | WallTime | test_perf_stripe_style_pydantic_v2 | 2 s | 1.7 s | +15.78% |
| ⚡ | WallTime | test_perf_deep_nested | 5.9 s | 5 s | +17.5% |
| ⚡ | WallTime | test_perf_duplicate_names | 1,001.7 ms | 857 ms | +16.88% |
| ⚡ | WallTime | test_perf_graphql_style_pydantic_v2 | 829.8 ms | 709.6 ms | +16.94% |
| ⚡ | WallTime | test_perf_openapi_large | 2.9 s | 2.5 s | +18.71% |
Comparing split/pr2b-prep-explicit-output (18b607b) with main (b868fa8)
Footnotes

1. 98 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them to remove them from the performance reports. ↩
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff            @@
##              main    #3028    +/-  ##
=========================================
  Coverage   100.00%  100.00%
=========================================
  Files           95       95
  Lines        18165    18174     +9
  Branches      2097     2099     +2
=========================================
+ Hits         18165    18174     +9
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
📝 Walkthrough

The changes introduce a …

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (2)
tests/main/conftest.py (1)
**596-624: Consider updating `run_main_url_and_assert` to support `default_output_model_type`.**

Unlike `run_main_and_assert`, this function doesn't accept or propagate `default_output_model_type`. It calls `_run_main_url`, which now supports the parameter, but passes nothing. This means URL-based tests will not get the default injection. If this is intentional, a brief comment would clarify. Otherwise, consider adding the parameter for consistency.
♻️ Proposed fix

```diff
 def run_main_url_and_assert(
     *,
     url: str,
     output_path: Path,
     input_file_type: InputFileTypeLiteral | None,
     assert_func: AssertFileContent,
     expected_file: str | Path,
     extra_args: Sequence[str] | None = None,
+    default_output_model_type: str | None = DataModelType.PydanticBaseModel.value,
     transform: Callable[[str], str] | None = None,
     force_exec_validation: bool = False,
 ) -> None:
     ...
     __tracebackhide__ = True
-    return_code = _run_main_url(url, output_path, input_file_type, extra_args=extra_args)
+    return_code = _run_main_url(
+        url, output_path, input_file_type,
+        extra_args=extra_args,
+        default_output_model_type=default_output_model_type,
+    )
     ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/main/conftest.py` around lines 596-624, run_main_url_and_assert currently doesn't accept or forward default_output_model_type to _run_main_url, so URL-based tests won't receive the default injection; update the run_main_url_and_assert signature to accept default_output_model_type: DefaultOutputModelType | None = None (or the same type used elsewhere), pass that value through to the _run_main_url call, and update the docstring to document the new parameter (or add a clarifying comment if omission was intentional). Make sure to use the same parameter name and type as run_main_and_assert to keep behavior consistent across both helpers.

tests/test_input_model.py (1)
**881-901: Consider adding `default_output_model_type` to `run_multiple_input_models_and_assert`.**

Unlike `run_input_model_and_assert`, this helper function does not include the `default_output_model_type` parameter and injection logic. If this is intentional (e.g., multiple input models always require explicit output type), it would be helpful to add a brief comment explaining why. Otherwise, consider adding the parameter for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_input_model.py` around lines 881 - 901, The helper run_multiple_input_models_and_assert is missing the default_output_model_type parameter and the logic to inject it into CLI args like run_input_model_and_assert does; add an optional parameter default_output_model_type: str | None = None to run_multiple_input_models_and_assert and, when provided, append the corresponding flag/value (e.g., "--default-output-model-type", default_output_model_type) into the args list before calling main; if omitting this is intentional, instead add a brief comment above run_multiple_input_models_and_assert explaining why multiple input models require explicit output type so future readers aren't confused.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 75c8c416-9069-49d5-b527-69fe22d99212
📒 Files selected for processing (3)
- tests/main/conftest.py
- tests/main/test_main_general.py
- tests/test_input_model.py
Breaking Change Analysis

Result: No breaking changes detected

Reasoning: This PR only modifies test infrastructure files (tests/main/conftest.py, tests/main/test_main_general.py, tests/test_input_model.py). It adds a …

This analysis was performed by Claude Code Action
🎉 Released in 0.55.0. This PR is now available in the latest release. See the release notes for details.
Summary

- Inject an explicit `--output-model-type pydantic.BaseModel` into helper-driven E2E tests when the test does not specify one
- Keep one `main` E2E test and one `input_model` E2E test on the real CLI default via `default_output_model_type=None`

Why

The current v1-removal work is functionally green, but the PR is too large to review because many helper-based tests inherit the CLI default output model implicitly. This prep PR makes those tests explicit without changing product behavior. After this merges, the follow-up PR that flips the default to `pydantic_v2.BaseModel` will only need to touch the small set of tests that intentionally validate the real default.

Validation

- `tox run -e 3.13-parallel -- tests/test_input_model.py tests/main/test_main_general.py::test_space_and_special_characters_dict tests/main/test_main_yaml.py`
- `.tox/3.13-parallel/bin/pytest -n auto --color=yes -m "not perf" --cov .tox/3.13-parallel/lib/python3.14/site-packages/datamodel_code_generator --cov tests --cov-config=pyproject.toml --cov-fail-under=0 --no-cov-on-fail --cov-report term-missing:skip-covered --cov-report xml:coverage.prep.xml --junitxml junit.prep.xml tests/main tests/test_input_model.py`
- `.tox/3.13-parallel/bin/diff-cover --compare-branch origin/main coverage.prep.xml`
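The rationale behind making the flag explicit can be illustrated with a toy parser (a simplified sketch; the real CLI has many more options): once every test passes the flag explicitly, flipping the parser's default only affects invocations that omit it.

```python
import argparse


def build_parser(default: str) -> argparse.ArgumentParser:
    """Toy stand-in for the CLI: one flag with a configurable default."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--output-model-type", default=default)
    return parser


# Tests that pass the flag explicitly are unaffected by a default change:
explicit = build_parser("pydantic.BaseModel").parse_args(
    ["--output-model-type", "pydantic.BaseModel"]
)
flipped = build_parser("pydantic_v2.BaseModel").parse_args(
    ["--output-model-type", "pydantic.BaseModel"]
)
assert explicit.output_model_type == flipped.output_model_type

# Only tests that omit the flag observe the new default:
implicit = build_parser("pydantic_v2.BaseModel").parse_args([])
assert implicit.output_model_type == "pydantic_v2.BaseModel"
```

This is why, after this prep PR, the follow-up default flip should only touch the handful of tests running with `default_output_model_type=None`.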