Summary
`openclaw models list` reports a significantly smaller context window for `openai-codex/gpt-5.2` (shown as 266k, i.e. `contextWindow: 272000` in the JSON output) than expected (400k). This affects model selection and planning, and appears to cap the usable context for the Codex OAuth provider.
What I expected
- `openai-codex/gpt-5.2` should reflect a 400k context window (as documented for GPT-5.2).
- If the Codex OAuth backend truly has a smaller limit, the CLI should clearly distinguish that this is a provider/backend-specific cap (and ideally link to where the value comes from), rather than silently presenting it as the model's context window.
What I see instead
- `openclaw models list` shows:
  - `openai-codex/gpt-5.2` → 266k
  - `openrouter/x-ai/grok-4.1-fast` → 1953k
  - `kimi-coding/k2p5` → 256k
- `openclaw models list --json` returns (excerpt):
  - `openai-codex/gpt-5.2` → `contextWindow: 272000`
  - `openrouter/x-ai/grok-4.1-fast` → `contextWindow: 2000000`
  - `kimi-coding/k2p5` → `contextWindow: 262144`
- Config override is applied and visible: `openclaw config get models` shows `contextWindow: 400000` for `openai-codex/gpt-5.2`.

Yet `models list` continues to report 266k/272000 for `openai-codex/gpt-5.2`.
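A condensed transcript of the mismatch (values as observed above; the surrounding output formatting is paraphrased, not verbatim):

```sh
$ openclaw config get models
# ...shows contextWindow: 400000 for openai-codex/gpt-5.2 (the configured override)

$ openclaw models list --json
# ...reports "contextWindow": 272000 for openai-codex/gpt-5.2

$ openclaw models list
# ...renders openai-codex/gpt-5.2 as 266k
```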
Steps to reproduce
- On a fresh install, authenticate the Codex provider: `openclaw models auth login --provider openai-codex`
- Set the default model to `openai-codex/gpt-5.2` (or add it as a fallback).
- Run the three inspection commands (consolidated in the block below): `openclaw models list`, `openclaw models list --json`, `openclaw config get models`.
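The three inspection commands in one copy-paste block (the auth step above is interactive and only needs to run once):

```sh
openclaw models list
openclaw models list --json
openclaw config get models
```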
Environment
- OpenClaw: 2026.2.1
- OS: Ubuntu Server (headless)
- Models involved: `openai-codex/gpt-5.2`, `openrouter/x-ai/grok-4.1-fast`, `kimi-coding/k2p5`
Notes / hypothesis
It looks like the model registry is applying a provider-specific context window cap for the Codex OAuth backend (272000), which overrides the configured `contextWindow` and is what `models list` presents. If this is intentional (Codex OAuth has a smaller max context), it would be helpful to:
- document the Codex OAuth context limit explicitly, and
- surface the source (provider cap vs. configured cap) in `models list` / `models status`.
If it is not intentional, the configured `contextWindow` should be respected (or at least `models list` should reflect the configured value when `models.mode=replace`).
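For triage, a quick way to isolate exactly which number the CLI reports for the affected model. The `jq` path below assumes the `--json` output is a flat array of objects with `id` and `contextWindow` fields; that shape is a guess from the excerpt above, so adjust it to the real schema:

```sh
# Hypothetical filter; the top-level structure of `models list --json` is assumed.
openclaw models list --json \
  | jq '[.[] | select(.id == "openai-codex/gpt-5.2") | {id, contextWindow}]'
```

Comparing that value against `openclaw config get models` makes it obvious whether the provider cap or the configured override wins.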