Legacy manual openai-codex provider override breaks Codex OAuth after #37558 / #38026; doctor should detect and fix #40066
Description
Summary
Users who manually configured `models.providers.openai-codex` before the recent Codex OAuth fixes can remain broken even after upgrading to a version that includes:
- fix(auth): remove bogus codex oauth responses probe #37558
- fix(openai): Enable gpt-5.4 support via Chat Completions fallback on scope error #38026
In our case, Codex OAuth continued failing until we manually removed the legacy `openai-codex` provider override from `openclaw.json`.
Once that override was removed, `openai-codex/gpt-5.4` worked immediately.
This appears to be a stale manual-config shadowing problem, not a fresh OAuth bug.
Why this matters
A lot of users likely set Codex up manually weeks ago, during the period when people were experimenting with:
- ChatGPT/Codex OAuth
- a custom `models.providers.openai-codex`
- an explicit `baseUrl`
- an explicit `api`
- manually defined `gpt-5.3-codex` / related model entries
Those manual overrides can survive upgrades and silently block the newer built-in Codex OAuth provider behavior.
So the upstream fixes may be present, but affected users still stay broken.
Environment
- macOS
- local gateway
- `openai-codex:default` auth profile with `mode: "oauth"`
- primary model set to `openai-codex/gpt-5.4`
- upgraded from an older manual-Codex-config era setup
Symptoms
Before removing the override:
- Codex appeared configured correctly
- requests failed with 401 / scope-related behavior
- model often fell through to fallback providers
- behavior looked like “Codex OAuth still broken”
After removing the override:
- same auth profile
- same account
- same target model
- `openai-codex/gpt-5.4` worked immediately
Root cause
We had a legacy manual config block under `models.providers.openai-codex`.
That override was still forcing the old explicit provider shape, instead of letting the built-in Codex OAuth provider synthesize the correct behavior after the recent fixes.
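The stale block looked roughly like the following. This is an illustrative reconstruction from the details in this report, not a copy of our actual config; the angle-bracket values are placeholders:

```json
{
  "models": {
    "providers": {
      "openai-codex": {
        "baseUrl": "<explicit base URL>",
        "api": "<explicit api value>",
        "models": ["gpt-5.3-codex"]
      }
    }
  }
}
```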
Repro sketch
- On an older OpenClaw version, manually configure `models.providers.openai-codex`
- Add explicit base URL / API / model entries for Codex
- Upgrade OpenClaw to a version containing fix(auth): remove bogus codex oauth responses probe #37558 and fix(openai): Enable gpt-5.4 support via Chat Completions fallback on scope error #38026
- Re-auth Codex OAuth
- Try `openai-codex/gpt-5.4`
- Observe failure / fallback behavior
- Remove the manual `models.providers.openai-codex` override
- Retry
- Observe success
Expected behavior
OpenClaw should detect that a legacy manual openai-codex provider override may be shadowing the built-in OAuth provider and do one of:
- Warn loudly
  - "Legacy manual `models.providers.openai-codex` override detected. This may break built-in Codex OAuth behavior after recent fixes."
- Offer an automatic migration
  - remove or rewrite stale `openai-codex` provider config to the supported modern form
- Teach `openclaw doctor` to catch this
  - detect manual `models.providers.openai-codex`
  - check whether an OAuth profile exists for `openai-codex`
  - warn if the manual override is likely shadowing the built-in provider
  - suggest or optionally apply a fix
Strong suggestion: update `openclaw doctor`
This feels like exactly the kind of thing doctor should catch.
Suggested doctor check:
- if `auth.profiles` contains an OAuth profile for `openai-codex`
- and config also contains a manual `models.providers.openai-codex`
- then warn that the manual override may be stale and may block the built-in Codex OAuth provider
Suggested output:
Detected manual `models.providers.openai-codex` override alongside a Codex OAuth auth profile.
This may be a legacy configuration that shadows the built-in Codex OAuth provider.
Recent Codex OAuth fixes may not apply until this override is removed or migrated.
Even better: `doctor --fix` could offer to back up the config and remove the stale override automatically.
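A minimal sketch of that `--fix` behavior, assuming a JSON config file and the `models.providers` shape described above (both assumptions from this report, not OpenClaw's real layout):

```python
import json
import shutil
from pathlib import Path


def remove_codex_override(config_path: Path) -> bool:
    """Back up the config, then drop models.providers.openai-codex.

    Returns True if an override was found and removed.
    """
    config = json.loads(config_path.read_text())
    providers = config.get("models", {}).get("providers", {})
    if "openai-codex" not in providers:
        return False

    # Keep a timestamped-or-suffixed backup so the change is reversible.
    shutil.copy2(config_path, config_path.with_suffix(".json.bak"))
    del providers["openai-codex"]
    config_path.write_text(json.dumps(config, indent=2))
    return True
```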
Related fixes / context
This issue looks like a downstream / legacy-config follow-up to:
- fix(auth): remove bogus codex oauth responses probe #37558
- fix(openai): Enable gpt-5.4 support via Chat Completions fallback on scope error #38026
Those fixes appear to solve the runtime/provider logic, but users with older manual Codex config can still be stuck because their config shadows the repaired built-in path.
Workaround
Remove the manual `models.providers.openai-codex` override from config and restart OpenClaw.
In our case, that was the key step that made `openai-codex/gpt-5.4` start working.