
fix: use codex-responses transport for openai-codex gpt-5.4 fallback#38736

Merged
steipete merged 1 commit into openclaw:main from 0xsline:fix/issue-38706
Mar 8, 2026
Merged

fix: use codex-responses transport for openai-codex gpt-5.4 fallback#38736
steipete merged 1 commit intoopenclaw:mainfrom
0xsline:fix/issue-38706

Conversation

@0xsline
Contributor

0xsline commented Mar 7, 2026

Summary

  • normalize openai-codex resolved models so GPT-5.4 does not use the OpenAI /v1/responses transport when OAuth is Codex-based
  • coerce openai-codex api/baseUrl from openai-responses + api.openai.com/v1 to openai-codex-responses + chatgpt.com/backend-api
  • add regression test for openai-codex provider overrides configured with api.openai.com/v1
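
The coercion described in the first two bullets can be sketched roughly as follows. This is a minimal illustration only: the function name, constants, and model shape here are assumptions, not the repository's actual implementation.

```typescript
// Illustrative sketch only: names and shapes are assumed, not the real code.
const OPENAI_API_BASE_URL = "https://api.openai.com/v1";
const OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api";

interface ResolvedModel {
  provider: string;
  api: string;
  baseUrl?: string;
}

// For openai-codex providers that resolved to the plain OpenAI responses
// transport, switch to the Codex transport and the Codex backend URL.
function coerceCodexTransport(model: ResolvedModel): ResolvedModel {
  if (model.provider !== "openai-codex") return model;
  if (model.api !== "openai-responses") return model;
  const baseUrl =
    !model.baseUrl || model.baseUrl === OPENAI_API_BASE_URL
      ? OPENAI_CODEX_BASE_URL
      : model.baseUrl; // intentional custom proxy URLs are left untouched
  return { ...model, api: "openai-codex-responses", baseUrl };
}
```

Note that only the known OpenAI API base URL is rewritten; a user-supplied proxy URL passes through unchanged, matching the "coerce only misconfigured values" intent of the fix.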

Testing

  • pnpm vitest run src/agents/pi-embedded-runner/model.test.ts
  • pnpm vitest run src/agents/pi-embedded-runner/model.forward-compat.test.ts src/commands/models/list.list-command.forward-compat.test.ts

Fixes #38706

openclaw-barnacle bot added the "agents" (Agent runtime and tooling) and "size: S" labels Mar 7, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 7, 2026

Greptile Summary

This PR fixes a transport misconfiguration bug: openai-codex provider instances resolved through the forward-compat fallback path would have their api overridden to openai-responses and their baseUrl set to api.openai.com/v1 when a user's provider config contained those values. As a result, GPT-5.4 Codex requests were routed through the wrong /v1/responses endpoint instead of the Codex-specific chatgpt.com/backend-api.

Key changes:

  • Adds isOpenAIApiBaseUrl and isOpenAICodexBaseUrl helpers to detect "known-codex" and "known-openai-api" base URLs.
  • Adds normalizeOpenAICodexTransport which, for openai-codex providers only, coerces openai-responses → openai-codex-responses and resets api.openai.com/v1 → chatgpt.com/backend-api, while leaving intentional custom proxy URLs untouched.
  • Wraps the four model-resolution return sites in normalizeResolvedModel, which chains normalizeOpenAICodexTransport before the existing normalizeModelCompat.
  • Adds a regression test covering exactly the misconfigured api.openai.com/v1 + openai-responses scenario.
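
The chaining and early-return identity check described above might look something like this. The function bodies are guesses reconstructed from the review summary (normalizeModelCompat is stubbed out here), not the repository's code:

```typescript
// Guessed sketch of the wrapper described in the review summary.
interface ResolvedModel {
  provider: string;
  api: string;
  baseUrl?: string;
}

const isOpenAIApiBaseUrl = (url: string) =>
  url.startsWith("https://api.openai.com");

// Strict no-op for every provider except openai-codex.
function normalizeOpenAICodexTransport(model: ResolvedModel): ResolvedModel {
  if (model.provider !== "openai-codex") return model;
  const useCodexTransport = model.api === "openai-responses";
  const nextApi = useCodexTransport ? "openai-codex-responses" : model.api;
  const nextBaseUrl =
    useCodexTransport && (!model.baseUrl || isOpenAIApiBaseUrl(model.baseUrl))
      ? "https://chatgpt.com/backend-api"
      : model.baseUrl;
  // Early-return identity check: no new allocation when nothing changes.
  if (nextApi === model.api && nextBaseUrl === model.baseUrl) return model;
  return { ...model, api: nextApi, baseUrl: nextBaseUrl };
}

// Stand-in for the pre-existing compatibility pass.
function normalizeModelCompat(model: ResolvedModel): ResolvedModel {
  return model;
}

// The wrapper applied at each of the four model-resolution return sites.
function normalizeResolvedModel(model: ResolvedModel): ResolvedModel {
  return normalizeModelCompat(normalizeOpenAICodexTransport(model));
}
```

The identity check is why routing the hardcoded OpenRouter path through the wrapper would be harmless: for any non-openai-codex provider the transport pass returns its input unchanged.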

Confidence Score: 4/5

  • Safe to merge — narrowly-scoped fix with targeted test coverage and no changes to other provider flows.
  • The normalization logic is correct: useCodexTransport and nextBaseUrl are computed consistently, the early-return identity check prevents unnecessary object allocation, and the fix is only applied to the openai-codex provider. All four resolution sites are updated. The one minor observation is that the OpenRouter hardcoded path (line 204) still calls normalizeModelCompat directly instead of the new normalizeResolvedModel wrapper — but since normalizeOpenAICodexTransport is a strict no-op for any provider other than openai-codex, this has no runtime impact. Score is 4 rather than 5 only because there is no explicit test for the symmetric scenario where baseUrl is already chatgpt.com/backend-api but api is incorrectly openai-responses.
  • No files require special attention.

Last reviewed commit: 8082894


chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 80828948dd


Comment on lines +60 to +62
const nextBaseUrl =
!params.model.baseUrl || isOpenAIApiBaseUrl(params.model.baseUrl)
? OPENAI_CODEX_BASE_URL


P2: Restrict Codex base URL rewrite to Codex-compatible APIs

nextBaseUrl is rewritten to https://chatgpt.com/backend-api whenever baseUrl is empty or points to api.openai.com, even if the model API is not converted to openai-codex-responses. This creates mismatched transport/base URL pairs (for example, models.providers.openai-codex.api: "openai-completions", which is allowed by the model API schema) and can route non-Codex requests to a Codex-only endpoint. The rewrite should be gated on APIs that actually use the Codex transport, not just on the base URL shape.
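
The gating the comment asks for might look something like the following sketch. All names here are hypothetical; this illustrates the suggestion, not the repository's code:

```typescript
// Hypothetical sketch of the suggested gating.
const OPENAI_CODEX_BASE_URL = "https://chatgpt.com/backend-api";
const isOpenAIApiBaseUrl = (url: string) =>
  url.startsWith("https://api.openai.com");

// Only rewrite the base URL when the resolved API really uses the Codex
// transport; e.g. "openai-completions" keeps whatever base URL it had.
function nextBaseUrlFor(api: string, baseUrl?: string): string | undefined {
  if (api !== "openai-codex-responses") return baseUrl;
  return !baseUrl || isOpenAIApiBaseUrl(baseUrl)
    ? OPENAI_CODEX_BASE_URL
    : baseUrl;
}
```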


Contributor Author


Addressed in commit 428c855 — gated Codex baseUrl rewrite to Codex-compatible transport only, and added regression coverage for openai-completions + api.openai.com/v1 (no rewrite).

@hiramzamora111-cmd

fix

steipete merged commit 0248570 into openclaw:main Mar 8, 2026
9 of 10 checks passed
@steipete
Contributor

steipete commented Mar 8, 2026

Landed via temp rebase onto main.

  • Gate: pnpm exec vitest run src/agents/pi-embedded-runner/model.test.ts src/commands/models/list.list-command.forward-compat.test.ts; pnpm build; pnpm check
  • Land commit: 63f82e0
  • Merge commit: $(gh pr view 38736 --repo openclaw/openclaw --json mergeCommit --jq '.mergeCommit.oid')

Thanks @0xsline!

jackal092927 pushed a commit to jackal092927/openclaw that referenced this pull request Mar 9, 2026
… paths

After openclaw#38736, openai-codex/gpt-5.4 still timed out in some paths
because model discovery, media tools, and image understanding used
the stale openai-responses transport instead of openai-codex-responses.

Hoist normalizeResolvedProviderModel to a shared module and apply it
at model discovery (Proxy wrapper on ModelRegistry), media tool
resolution, and image understanding model lookup.

Fixes openclaw#41282

Labels

agents (Agent runtime and tooling), size: S

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: GPT-5.4 via openai-codex OAuth uses wrong API (responses vs codex-responses)

4 participants