wsjwong
changed the title
Models: add OpenAI Codex gpt-5.4 forward compat
Models/OpenAI Codex: add gpt-5.4 forward-compat and xhigh support
Mar 5, 2026
This PR adds forward-compatibility support for openai-codex/gpt-5.4 across model discovery, resolution, and xhigh thinking, mirroring existing patterns for earlier Codex versions. The new catalog injection and XHIGH_MODEL_REFS entry are correct.
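The xhigh gating mentioned above can be sketched roughly like this. Only the name XHIGH_MODEL_REFS comes from the PR; its shape as a set and the supportsXhighThinking helper are assumptions for illustration:

```typescript
// Hedged sketch: XHIGH_MODEL_REFS exists in the PR, but its exact shape is an
// assumption here; it is modeled as a set of model refs allowed to use xhigh.
const XHIGH_MODEL_REFS = new Set<string>([
  "openai-codex/gpt-5.3-codex",
  "openai-codex/gpt-5.4", // the new forward-compat entry
]);

// Gate xhigh thinking on membership in the allowlist.
function supportsXhighThinking(modelRef: string): boolean {
  return XHIGH_MODEL_REFS.has(modelRef);
}
```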
One item warrants attention:
Lost test regression coverage for gpt-5.3-codex in the list command: list.list-command.forward-compat.test.ts replaced the existing gpt-5.3-codex forward-compat scenario with gpt-5.4 rather than extending it. Since the gpt-5.3-codex forward-compat behavior in the list command is unchanged, the previous test case should be preserved alongside the new one to maintain proper regression coverage.
Confidence Score: 4/5
Safe to merge; feature implementation is solid with only a minor test coverage regression.
The core forward-compat logic for gpt-5.4 is correct and well-implemented. The refactored resolveOpenAICodexForwardCompatModel properly handles both gpt-5.3-codex and gpt-5.4 with appropriate template fallbacks and provider eligibility checks. The only issue is a test coverage regression: the dedicated test for gpt-5.3-codex forward-compat in the list command was replaced rather than extended, which reduces visibility into regressions for that unchanged path.
src/commands/models/list.list-command.forward-compat.test.ts (add back test coverage for gpt-5.3-codex scenario)
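The resolver pattern described above can be sketched as follows. Only the name resolveOpenAICodexForwardCompatModel comes from the PR; the types, the template table, and the eligibility check are illustrative assumptions:

```typescript
// Hedged sketch of forward-compat resolution: if a requested model is not in
// the catalog, synthesize an entry from a known older template, subject to a
// provider eligibility check. Shapes here are assumptions, not the PR's code.
interface CatalogEntry { id: string; provider: string }

// Newer model -> cataloged template it can fall back to. gpt-5.3-codex has its
// own template in the real code; only the new gpt-5.4 mapping is shown here.
const FORWARD_COMPAT_TEMPLATES: Record<string, string> = {
  "openai-codex/gpt-5.4": "openai-codex/gpt-5.3-codex",
};

function resolveOpenAICodexForwardCompatModel(
  requested: string,
  catalog: Map<string, CatalogEntry>,
): CatalogEntry | undefined {
  const direct = catalog.get(requested);
  if (direct) return direct; // already cataloged, no fallback needed
  const templateId = FORWARD_COMPAT_TEMPLATES[requested];
  if (!templateId) return undefined; // unknown model, no forward-compat path
  const template = catalog.get(templateId);
  if (!template || template.provider !== "openai-codex") return undefined; // provider not eligible
  return { ...template, id: requested }; // synthesize an entry from the template
}
```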
Comments Outside Diff (1)
src/commands/models/list.list-command.forward-compat.test.ts, lines 79-95
The previous test verified that a configured openai-codex/gpt-5.3-codex is not marked as missing when resolveForwardCompatModel can build a fallback. That entire scenario was replaced by the parallel gpt-5.4 scenario rather than being extended.
Since gpt-5.3-codex forward-compat behavior in the list command is unchanged, consider preserving the original test case alongside the new one to maintain regression coverage.
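The suggested coverage shape can be sketched as below; the Scenario type and the canBuildFallback stand-in are illustrative, not the project's actual test utilities. The point is to parameterize over both models so the gpt-5.3-codex case is kept alongside the new gpt-5.4 case instead of being replaced by it:

```typescript
// Hedged sketch: keep both forward-compat scenarios in the list-command test.
interface Scenario { configured: string; expectMissing: boolean }

const scenarios: Scenario[] = [
  { configured: "openai-codex/gpt-5.3-codex", expectMissing: false }, // preserved case
  { configured: "openai-codex/gpt-5.4", expectMissing: false },       // new case
];

// Stand-in for "resolveForwardCompatModel can build a fallback".
function canBuildFallback(modelRef: string): boolean {
  return modelRef.startsWith("openai-codex/gpt-5.");
}

// Each scenario asserts the configured model is not marked missing.
for (const { configured, expectMissing } of scenarios) {
  const markedMissing = !canBuildFallback(configured);
  if (markedMissing !== expectMissing) {
    throw new Error(`${configured}: expected missing=${expectMissing}, got ${markedMissing}`);
  }
}
```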
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
Closing in favor of #36590, which already covers the same openai-codex/gpt-5.4 / xhigh support plus the broader openai/gpt-5.4 work.
I also independently validated on a live local OpenClaw install that openai-codex/gpt-5.4 works end-to-end and accepts --thinking xhigh with OpenAI Codex OAuth.
Summary
- openai-codex/gpt-5.4 shows up in model discovery/listing when gpt-5.3-codex is available
- Forward-compat resolution for openai-codex/gpt-5.4
- xhigh thinking for openai-codex/gpt-5.4
- gpt-5.4 in modern-model filtering and regression coverage

Why
OpenAI Codex OAuth is already accepting gpt-5.4, but current OpenClaw builds still lag behind the upstream registry/capability tables:

- models list does not surface openai-codex/gpt-5.4
- openai-codex/gpt-5.4 can be treated as missing without a forward-compat fallback
- /thinking xhigh rejects openai-codex/gpt-5.4 even though the model works end-to-end

This makes the new Codex model harder to discover and use until the underlying registries catch up.
Validation
Targeted tests
corepack pnpm exec vitest run \
  src/agents/model-catalog.test.ts \
  src/agents/model-compat.test.ts \
  src/agents/pi-embedded-runner/model.forward-compat.test.ts \
  src/agents/pi-embedded-runner/model.test.ts \
  src/auto-reply/thinking.test.ts \
  src/commands/models/list.list-command.forward-compat.test.ts

Manual runtime verification
Validated locally against an installed OpenClaw + OpenAI Codex OAuth setup:
- openclaw models list --all --provider openai-codex --plain surfaces openai-codex/gpt-5.4
- openclaw agent --local --agent main --message "Reply with exactly: GPT54_OK" succeeds on openai-codex/gpt-5.4
- openclaw agent --local --agent main --message "Reply with exactly: XHIGH_54_OK" --thinking xhigh succeeds on openai-codex/gpt-5.4