feat(integrations): smart config merging for Codex and improve Chines… (#249)
Conversation
jundot
left a comment
Thanks for the codex config merging work and the README sync. Few things:
1. PR_DESCRIPTION.md should not be committed to the repo. Please remove it.
2. TOML merge bug: the code uses `not in_omlx_section` to identify top-level keys, but this also matches keys inside other provider sections. A `model` key under `[model_providers.custom]` would get overwritten. You need to track whether you're inside *any* section, not just the omlx one.
3. Reasoning-model detection uses raw substring matching (`"o1"`, `"r1"`, etc.), so names like `"modelo1x"` would false-positive. A word-boundary pattern would be safer.
4. Missing tests for the new merge logic. OpenCode and OpenClaw both have `test_configure_preserves_existing`-style tests. The new Codex merge should have similar coverage.
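Point 2 can be illustrated with a minimal sketch. The helper and constant names below are hypothetical, not the actual identifiers in `omlx/integrations/codex.py`; the point is only that the parser must flip a flag on *every* section header, not just the omlx one:

```python
OMLX_SECTION = "[model_providers.omlx]"  # illustrative section name

def split_sections(lines):
    """Partition TOML config lines into (top_level, omlx, other) buckets,
    tracking whether we are inside *any* section, not just the omlx one."""
    top_level, omlx, other = [], [], []
    in_section = False  # inside any [section] at all
    in_omlx = False     # inside the section we own and may rewrite
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("["):
            in_section = True
            in_omlx = stripped == OMLX_SECTION
        if not in_section:
            top_level.append(line)  # true top-level key, safe to merge
        elif in_omlx:
            omlx.append(line)       # our section: may be replaced
        else:
            other.append(line)      # foreign sections stay untouched
    return top_level, omlx, other
```

With this split, a `model` key under `[model_providers.custom]` lands in `other` and is passed through verbatim, instead of being mistaken for a top-level key and overwritten.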
Thanks for the thorough review, jundot! I've pushed an update that addresses all your points.
Let me know if anything else is needed!
Force-pushed from f6faf2f to c2beead
Thanks for the clean update, everything from the first review is addressed properly. I'll merge this now and do a small follow-up commit for a couple of minor things I noticed (dead template cleanup, reasoning-flag edge case). Nothing major, just housekeeping on top of your work. Appreciate the contribution!
- remove unused `_CONFIG_TEMPLATE`
- move `import re` to module level
- clear stale `model_reasoning_effort` when switching to a non-reasoning model
- add tests for reasoning model detection and flag cleanup
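The word-boundary style of check that the follow-up tests cover might look like the sketch below. The marker list mirrors the review (`thinking`, `o1`, `o3`, `r1`), but the exact pattern in the merged code may differ:

```python
import re

# Illustrative reasoning-model check: markers must sit at a word-ish
# boundary (start/end of string or next to a non-alphanumeric char),
# so "modelo1x" no longer false-positives on the "o1" substring.
_REASONING_RE = re.compile(
    r"(?:^|[^a-z0-9])(thinking|o1|o3|r1)(?:[^a-z0-9]|$)"
)

def is_reasoning_model(name: str) -> bool:
    """Return True if the model name contains a reasoning marker
    as a delimited token rather than a bare substring."""
    return bool(_REASONING_RE.search(name.lower()))
```

A plain `\b` would also work for `thinking`, but explicit character classes make the intent clearer for markers like `o1` that mix letters and digits.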
Thank you for the merge. Appreciate your work. |
PR Description: feat(integrations): smart config merging for Codex and improve Chinese README
Overview
This PR introduces two main improvements to enhance the developer experience, especially for users utilizing advanced reasoning models and regional mirrors.
1. Refactor Codex Integration (`omlx/integrations/codex.py`)
- Smart merging of `~/.codex/config.toml`: it now preserves existing user settings (like `trust_level` or other model providers) and only updates or inserts the `[model_providers.omlx]` section.
- Reasoning model detection: when the model name contains `thinking`, `o1`, `o3`, or `r1`, it automatically configures `model_reasoning_effort = "high"` in the generated config. This is a crucial feature for the latest models (like GPT-5.4 or Qwen reasoning variants).
2. Documentation Improvements (`README.zh.md`)
- Added the `--hf-endpoint` CLI example, which was present in the English version but missing in the Chinese translation. This is vital for users in regions with restricted access to HuggingFace.
Testing Conducted
- Verified that existing settings in `~/.codex/config.toml` are correctly preserved when oMLX is configured.
- Verified that `model_reasoning_effort = "high"` is correctly injected for models with `thinking` or `o1`/`o3`/`r1` in their names.
- Reviewed `README.zh.md` to ensure correct formatting and accuracy.
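A regression test in the `test_configure_preserves_existing` style requested in the review could be sketched as follows. The merge function here is a toy stand-in, not the real oMLX API; it only demonstrates the property under test, namely that settings outside `[model_providers.omlx]` survive a reconfigure:

```python
def merge_codex_config(existing: str, omlx_section: str) -> str:
    """Toy merge: keep every line outside [model_providers.omlx],
    then append the fresh omlx section (hypothetical helper)."""
    kept, skipping = [], False
    for line in existing.splitlines():
        if line.strip().startswith("["):
            # Skip only our own section; foreign sections are kept.
            skipping = line.strip() == "[model_providers.omlx]"
        if not skipping:
            kept.append(line)
    return "\n".join(kept).rstrip() + "\n\n" + omlx_section + "\n"

def test_configure_preserves_existing():
    existing = (
        'trust_level = "full"\n'
        "\n"
        "[model_providers.custom]\n"
        'model = "local"\n'
    )
    merged = merge_codex_config(
        existing, '[model_providers.omlx]\nbase_url = "http://localhost"'
    )
    # User settings and foreign sections must survive the merge.
    assert 'trust_level = "full"' in merged
    assert "[model_providers.custom]" in merged
    assert 'model = "local"' in merged
    # Our section is present exactly once, freshly written.
    assert merged.count("[model_providers.omlx]") == 1
```

The same shape works for the reasoning-flag checks: feed a config with a stale `model_reasoning_effort`, reconfigure with a non-reasoning model, and assert the flag is gone.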