Summary
On OpenClaw 2026.3.7, `openai-codex/gpt-5.4` can resolve to a model object with `api: undefined` when the user has a custom `models.providers.openai-codex` block in `~/.openclaw/openclaw.json` that defines the model but omits `api`.

In my case this caused the gateway to crash-loop on startup when cron replayed missed isolated jobs, with:

```
Unhandled promise rejection: Error: No API provider registered for api: undefined
```
Environment
- OpenClaw: 2026.3.7
- Install type: non-Docker user install
- Gateway launched via user systemd service
- Provider: `openai-codex`
- Model: `gpt-5.4`
Minimal config shape that triggers it
This custom provider block was present in `~/.openclaw/openclaw.json`:
```json
{
  "models": {
    "providers": {
      "openai-codex": {
        "baseUrl": "https://chatgpt.com/backend-api",
        "models": [
          { "id": "gpt-5.3-codex", "name": "gpt-5.3-codex" },
          { "id": "gpt-5.4", "name": "gpt-5.4" }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "openai-codex/gpt-5.4"
      }
    }
  }
}
```
Note that `api` is omitted from the custom provider block.
Repro
- Configure a custom `models.providers.openai-codex` block like the one above, without `api`.
- Set `agents.defaults.model.primary` to `openai-codex/gpt-5.4`.
- Run an isolated cron `agentTurn` job, or restart the gateway so missed cron jobs replay.
- The gateway eventually crashes with:

```
No API provider registered for api: undefined
```
Expected
One of these should happen:
- the built-in `openai-codex/gpt-5.4` metadata should still win or be merged, preserving `api: openai-codex-responses`, or
- the config should be rejected at validation time with a clear error, because the custom provider/model definition is incomplete
Actual
The custom `models.providers.openai-codex` definition appears to shadow the built-in `gpt-5.4` model metadata. That leaves `gpt-5.4` with no resolved API, which later crashes in the streaming path with `api: undefined`.
Likely root cause
From local inspection of the built bundle in 2026.3.7:

- `gpt-5.4` is known to the built-in catalog / forward-compat path for `openai-codex`
- but `buildInlineProviderModels()` creates inline models with `api: model.api ?? entry.api`
- if the custom provider block omits both, the inline model has no `api`
- and that custom provider/model path seems to take precedence over the built-in forward-compat model for `openai-codex/gpt-5.4`

So this looks like a precedence/merge issue between custom provider config and built-in model metadata.
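The fallback above can be sketched as follows. This is a simplified stand-in, not OpenClaw's actual internals — the types and the shape of `buildInlineProviderModels()` are assumptions based on the behavior observed in the bundle:

```typescript
// Hypothetical sketch of how an inline provider model ends up with
// api: undefined when both the model entry and the provider block omit it.
type InlineModel = { id: string; api?: string };
type ProviderEntry = { api?: string; models: { id: string; api?: string }[] };

function buildInlineProviderModels(entry: ProviderEntry): InlineModel[] {
  // Mirrors the fallback seen in the 2026.3.7 bundle: model-level api,
  // else provider-level api, else undefined.
  return entry.models.map((model) => ({
    id: model.id,
    api: model.api ?? entry.api,
  }));
}

// Same shape as the config above: no "api" on the provider or the model.
const custom: ProviderEntry = { models: [{ id: "gpt-5.4" }] };

const models = buildInlineProviderModels(custom);
console.log(models[0].api); // undefined -> later "No API provider registered for api: undefined"
```

If this inline model then wins over the built-in catalog entry, nothing downstream can recover the missing `api`.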
Local workaround
Adding this fixed it immediately for me:
"models": {
"providers": {
"openai-codex": {
"api": "openai-codex-responses",
"baseUrl": "https://chatgpt.com/backend-api",
"models": [
{ "id": "gpt-5.4", "name": "gpt-5.4" }
]
}
}
}
After adding `api: "openai-codex-responses"`, the gateway came up cleanly on `openai-codex/gpt-5.4` and stopped crashing.
Suggestion
It would help if OpenClaw either:
- merged built-in model metadata into custom provider models when the provider/model pair matches a known built-in model, or
- validated `models.providers.*` entries so that required transport fields like `api` cannot be omitted silently for providers that need them
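Either option could look roughly like the sketch below. The catalog shape and function names here are hypothetical illustrations of the two suggestions, not OpenClaw's real API:

```typescript
// Hypothetical built-in catalog entry for a known provider/model pair.
type ModelMeta = { id: string; api?: string };

const builtinCatalog: Record<string, Record<string, ModelMeta>> = {
  "openai-codex": {
    "gpt-5.4": { id: "gpt-5.4", api: "openai-codex-responses" },
  },
};

// Option 1: merge built-in metadata into a matching custom model, letting the
// user's fields win but filling gaps (like a missing api) from the catalog.
function mergeWithBuiltin(provider: string, model: ModelMeta): ModelMeta {
  const builtin = builtinCatalog[provider]?.[model.id];
  if (!builtin) return model;
  return { ...builtin, ...model, api: model.api ?? builtin.api };
}

// Option 2: reject incomplete definitions at config-load time instead of
// crashing later in the streaming path.
function validateModel(provider: string, model: ModelMeta, providerApi?: string): void {
  if (!(model.api ?? providerApi)) {
    throw new Error(
      `models.providers.${provider}: model "${model.id}" has no "api" ` +
        `(set it on the model or the provider block)`,
    );
  }
}
```

Either path would have turned this crash-loop into a working model resolution or an immediate, actionable config error.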