
[Bug]: Regression/incomplete fix: openai-codex still times out on GPT-5.4 after #38736 due to remaining non-codex transport path #41282

@pascalkienast

Description


Bug type

Regression (worked before, now fails)

Summary

After #38736, openai-codex/gpt-5.4 still times out in some runtime paths. The issue appears to be that OpenClaw still uses a non-Codex transport/base-URL combination for part of the openai-codex flow.

Steps to reproduce

  1. Install OpenClaw 2026.3.8 (also observed on 2026.3.7).
  2. Authenticate openai-codex via ChatGPT / Codex OAuth.
  3. Set openai-codex/gpt-5.4 as the primary model.
  4. Run:
openclaw models status --probe --probe-provider openai-codex --probe-timeout 30000 --probe-max-tokens 32 --json
  5. Run:
openclaw agent --agent main --message 'Reply exactly with OK.' --json --timeout 120
  6. Compare behavior before and after forcing the Codex-specific base URL and SSE transport (see the local workaround below).
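For a repeatable A/B comparison, the two openclaw invocations above can be assembled programmatically. This is only a convenience sketch: the flags are copied from the steps above, while the helper names (`probe_cmd`, `agent_cmd`, `run`) are mine, not part of OpenClaw.

```python
import json
import subprocess

def probe_cmd(provider="openai-codex", timeout_ms=30000, max_tokens=32):
    """Assemble the `openclaw models status` probe command from step 4."""
    return [
        "openclaw", "models", "status",
        "--probe", "--probe-provider", provider,
        "--probe-timeout", str(timeout_ms),
        "--probe-max-tokens", str(max_tokens),
        "--json",
    ]

def agent_cmd(message="Reply exactly with OK.", timeout_s=120):
    """Assemble the `openclaw agent` smoke-test command from step 5."""
    return [
        "openclaw", "agent", "--agent", "main",
        "--message", message,
        "--json", "--timeout", str(timeout_s),
    ]

def run(cmd):
    """Run a command and parse its JSON output (raises on non-zero exit)."""
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)
```

Running `run(probe_cmd())` before and after applying the workaround config makes the timeout-vs-`status: ok` difference directly comparable.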

Expected behavior

  • The openai-codex provider probe returns status: ok, and openai-codex/gpt-5.4 agent runs complete normally, without needing to override the base URL or transport.

Actual behavior

  • openai-codex provider probes still time out in some cases.
  • Embedded GPT-5.4 runs still stall or time out.
  • The same setup recovers immediately when forcing the Codex-specific base URL and SSE transport.
  • This suggests at least one remaining runtime path is still using a non-Codex transport/base-URL combination.

OpenClaw version

2026.3.8 (also observed on 2026.3.7)

Operating system

Ubuntu Linux

Install method

npm global install

Logs, screenshots, and evidence

Related existing issue/fix:
- #38706
- #38736

Official Codex upstream references:
- openai/codex uses https://chatgpt.com/backend-api/codex as the base URL for ChatGPT-authenticated Codex:
  https://github.com/openai/codex/blob/main/codex-rs/core/src/model_provider_info.rs
- official Codex source also references:
  https://chatgpt.com/backend-api/codex/responses
  https://github.com/openai/codex/blob/main/codex-rs/core/src/error.rs
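To make the base-URL difference concrete: the upstream references above show the Responses endpoint as the base URL plus /responses. The helper below is purely illustrative (it is not OpenClaw's actual resolver), but it shows how a base URL missing the /codex segment produces a different final endpoint.

```python
def responses_endpoint(base_url: str) -> str:
    # Illustrative assumption: the Responses API path is appended
    # directly to the provider base URL.
    return base_url.rstrip("/") + "/responses"

# Codex-specific base URL (matches the upstream codex-rs references):
codex = responses_endpoint("https://chatgpt.com/backend-api/codex")
# -> https://chatgpt.com/backend-api/codex/responses

# Base URL without the /codex segment (the variant observed to time out):
generic = responses_endpoint("https://chatgpt.com/backend-api")
# -> https://chatgpt.com/backend-api/responses
```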

Local workaround that restores normal behavior:

{
  "models": {
    "providers": {
      "openai-codex": {
        "baseUrl": "https://chatgpt.com/backend-api/codex"
      }
    }
  },
  "agents": {
    "defaults": {
      "models": {
        "openai-codex/gpt-5.4": {
          "params": {
            "transport": "sse"
          }
        },
        "openai-codex/gpt-5.3-codex": {
          "params": {
            "transport": "sse"
          }
        }
      }
    }
  }
}
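Before restarting, the workaround config can be sanity-checked to confirm it parses and carries the intended overrides. This snippet only inlines the exact JSON from above and asserts on its fields; it assumes nothing about how OpenClaw consumes the file.

```python
import json

# The workaround config, verbatim from above.
workaround = json.loads("""
{
  "models": {
    "providers": {
      "openai-codex": {"baseUrl": "https://chatgpt.com/backend-api/codex"}
    }
  },
  "agents": {
    "defaults": {
      "models": {
        "openai-codex/gpt-5.4": {"params": {"transport": "sse"}},
        "openai-codex/gpt-5.3-codex": {"params": {"transport": "sse"}}
      }
    }
  }
}
""")

# The base URL must include the Codex-specific /codex segment.
assert workaround["models"]["providers"]["openai-codex"]["baseUrl"].endswith("/codex")

# Both model overrides must force the SSE transport.
for model, cfg in workaround["agents"]["defaults"]["models"].items():
    assert cfg["params"]["transport"] == "sse", model
```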


Observed A/B result:
- without /codex in base URL -> reproducible timeout
- with /codex in base URL -> provider probe returns status: ok and GPT-5.4 runs recover

Impact and severity

Affected users: users of openai-codex/gpt-5.4 via ChatGPT / Codex OAuth
Severity: High
Frequency: Reproducible on my setup
Consequence: GPT-5.4 appears configured correctly but runtime behavior remains unstable, causing timeouts, stalls, and misleading fallback/debugging behavior

Additional information

  • This does not appear to be only a session-size/compaction issue; I reproduced provider-level failure with direct probes.
  • This also does not appear to be only a pure 2026.3.8 regression, since I saw similar behavior on 2026.3.7.
  • My best reading is that at least one remaining runtime path still resolves to a non-Codex transport/base-URL combination that #38736 did not cover.

Metadata
Labels

bug (Something isn't working), regression (Behavior that previously worked and now fails)
