Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info: configuration from `.coderabbit.yaml`, review profile CHILL, plan Pro. Files ignored due to path filters: 1. Files selected for processing: 2.
📝 Walkthrough

Documentation lists in README.md and docs/index.md were updated to reflect the current downstream projects using datamodel-code-generator. Two projects were removed and four new projects were added to both files, with links pointing to relevant repository sections.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 3 passed.
Generated by GitHub Actions
Merging this PR will degrade performance by 18.79%
| | Mode | Benchmark | BASE | HEAD | Efficiency |
|---|---|---|---|---|---|
| ❌ | WallTime | test_perf_kubernetes_style_pydantic_v2 | 2.2 s | 2.7 s | -16% |
| ❌ | WallTime | test_perf_openapi_large | 2.5 s | 3 s | -16.32% |
| ❌ | WallTime | test_perf_aws_style_openapi_pydantic_v2 | 1.6 s | 1.9 s | -15.55% |
| ❌ | WallTime | test_perf_complex_refs | 1.8 s | 2.2 s | -18.79% |
| ❌ | WallTime | test_perf_stripe_style_pydantic_v2 | 1.7 s | 2 s | -16.06% |
| ❌ | WallTime | test_perf_large_models_pydantic_v2 | 3.1 s | 3.7 s | -17.46% |
| ❌ | WallTime | test_perf_deep_nested | 5.2 s | 6.1 s | -15.15% |
| ❌ | WallTime | test_perf_duplicate_names | 900.4 ms | 1,065.8 ms | -15.52% |
| ❌ | WallTime | test_perf_graphql_style_pydantic_v2 | 705.7 ms | 831.2 ms | -15.1% |
| ❌ | WallTime | test_perf_multiple_files_input | 3.1 s | 3.7 s | -15.95% |
| ❌ | WallTime | test_perf_all_options_enabled | 5.6 s | 6.5 s | -14.1% |
Comparing codex/update-project-usage-list (45e46d9) with main (7d41fef)
Footnotes

1. 98 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them to remove them from the performance reports.
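The Efficiency column is consistent with the relative change of BASE against HEAD. A minimal sketch, assuming the report computes (BASE - HEAD) / HEAD and rounds to two decimals (the exact formula and rounding are assumptions, not documented in the report itself):

```python
def efficiency_pct(base: float, head: float) -> float:
    """Relative change of BASE vs HEAD, as a percentage (assumed formula)."""
    return (base - head) / head * 100

# Reproduces the reported figures for the millisecond-scale rows:
print(round(efficiency_pct(900.4, 1065.8), 2))  # test_perf_duplicate_names: -15.52
print(round(efficiency_pct(705.7, 831.2), 2))   # test_perf_graphql_style_pydantic_v2: -15.1
```

The second-scale rows match only approximately, since their BASE/HEAD values are shown rounded to one decimal.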
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files — Coverage Diff:

| | main | #3072 | +/- |
|---|---|---|---|
| Coverage | 100.00% | 100.00% | |
| Files | 87 | 87 | |
| Lines | 18237 | 18237 | |
| Branches | 2087 | 2087 | |
| Hits | 18237 | 18237 | |

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
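The Coverage row follows directly from Hits and Lines; a quick check of the arithmetic using the values reported above:

```python
files, lines, branches, hits = 87, 18237, 2087, 18237

# Coverage is hits as a share of coverable lines
coverage_pct = hits / lines * 100
print(f"{coverage_pct:.2f}%")  # 100.00%
```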
Breaking Change Analysis

Result: Breaking changes detected

Reasoning: Despite the PR title suggesting only documentation changes, this PR contains substantial code changes: (1) Generated code output changes significantly - default values for model-referencing fields switch from …

Content for Release Notes:

- Code Generation Changes
- Custom Template Update Required
- API/CLI Changes
- Error Handling Changes

This analysis was performed by Claude Code Action
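The reasoning above is cut off mid-sentence, but to make "default values for model-referencing fields" concrete: a code generator can emit either a per-instance factory default or a single shared default instance, and switching between the two changes runtime behavior, which is why it counts as breaking. A hypothetical stdlib sketch of the distinction, with invented names (the code actually generated by this project targets Pydantic models, not plain dataclasses):

```python
from dataclasses import dataclass, field

# Invented example types: Outer has a field whose default references Inner.
@dataclass
class Inner:
    value: int = 0

@dataclass
class Outer:
    # Factory default: every Outer() gets its own fresh Inner instance.
    # The alternative shape, `inner: Inner = Inner()`, would make all
    # instances share one mutable default object.
    inner: Inner = field(default_factory=Inner)

a, b = Outer(), Outer()
print(a.inner is b.inner)  # False: each Outer owns a separate Inner
```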
🎉 Released in 0.56.0. This PR is now available in the latest release; see the release notes for details.
Summary
Refresh the "Projects that use datamodel-code-generator" examples in both `README.md` and `docs/index.md`.

What Changed
- Replaced `awslabs/aws-lambda-powertools-python` with `openai/codex`
- Replaced `Arize-ai/phoenix` with `apache/airflow`
- Added `browser-use/browser-use`
- Added `tensorzero/tensorzero`
The updated list favors well-known projects with clear, verifiable usage links in the repository itself. This keeps the section useful as social proof while still linking to concrete examples of how the tool is used.
Impact
Users browsing the README or docs will see more recognizable examples of real-world adoption, with direct links to the referenced usage locations.
Validation
`git diff --check`
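`git diff --check` flags whitespace errors (such as trailing whitespace) in the pending diff and exits non-zero when it finds any, which makes it a cheap validation step for documentation-only edits. A self-contained demonstration in a throwaway repository (the file name and contents are invented for the example):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'clean line\n' > README.md
git add README.md                       # stage a clean version
printf 'trailing space \n' > README.md  # introduce a whitespace error
git diff --check || echo "whitespace problems found"
```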