
models list shows smaller context window for openai-codex/gpt-5.2 (~266k/272k) than expected 400k #8506

@antoniomonteirojr

Description


Summary

openclaw models list reports a significantly smaller context window for openai-codex/gpt-5.2 (266k in the human-readable listing, 272000 in --json) than the expected 400k. This skews model selection and planning, and appears to cap the usable context for the Codex OAuth provider.

What I expected

  • openai-codex/gpt-5.2 should reflect a 400k context window (as documented for GPT-5.2).
  • If the Codex OAuth backend truly has a smaller limit, the CLI should clearly distinguish that this is a provider/backend-specific cap (and ideally link to where the value comes from), rather than silently presenting it as the model’s context window.

What I see instead

  • openclaw models list shows:

    • openai-codex/gpt-5.2 → 266k
    • openrouter/x-ai/grok-4.1-fast → 1953k
    • kimi-coding/k2p5 → 256k
  • openclaw models list --json returns (excerpt):

    • openai-codex/gpt-5.2 → contextWindow: 272000
    • openrouter/x-ai/grok-4.1-fast → contextWindow: 2000000
    • kimi-coding/k2p5 → contextWindow: 262144
  • Config override is applied and visible via:

    • openclaw config get models shows contextWindow: 400000 for openai-codex/gpt-5.2 (rough shape sketched below)

Yet models list continues to report ~266k/272k for openai-codex/gpt-5.2.
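
For reference, here is roughly what the override looks like, written as a TypeScript literal purely for illustration — the exact nesting under models is my assumption, not copied from a real OpenClaw config:

```typescript
// Rough shape of the applied override. The `providers` nesting is
// hypothetical — only the contextWindow: 400000 value and the
// models.mode setting are taken from this report.
const models = {
  mode: "replace",
  providers: {
    "openai-codex": {
      models: {
        "gpt-5.2": { contextWindow: 400000 },
      },
    },
  },
};
```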

Steps to reproduce

  1. On a fresh install, authenticate the Codex provider:
    • openclaw models auth login --provider openai-codex
  2. Set default model to openai-codex/gpt-5.2 (or add as fallback).
  3. Run the following and compare the reported values (a scripted comparison is sketched after these steps):
    • openclaw models list
    • openclaw models list --json
    • openclaw config get models
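
To make step 3 repeatable, here is a small verification sketch. It assumes openclaw models list --json prints a JSON array of entries carrying id and contextWindow fields — if the real output is shaped differently (e.g. an object keyed by model id), the parsing needs adjusting:

```typescript
// verify-context.ts — compare the context windows reported by `models list --json`.
// Assumption: the output is a JSON array of { id, contextWindow } entries;
// adapt the parsing if the real shape differs.
import { execFileSync } from "node:child_process";

const WATCHED = [
  "openai-codex/gpt-5.2",
  "openrouter/x-ai/grok-4.1-fast",
  "kimi-coding/k2p5",
];

const raw = execFileSync("openclaw", ["models", "list", "--json"], {
  encoding: "utf8",
});
const entries: Array<{ id: string; contextWindow?: number }> = JSON.parse(raw);

for (const id of WATCHED) {
  const entry = entries.find((e) => e.id === id);
  console.log(`${id} → ${entry?.contextWindow ?? "not listed"}`);
}
```

Re-running this after each config change makes it obvious whether the override ever reaches the listing.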

Environment

  • OpenClaw: 2026.2.1
  • OS: Ubuntu Server (headless)
  • Models involved:
    • openai-codex/gpt-5.2
    • openrouter/x-ai/grok-4.1-fast
    • kimi-coding/k2p5

Notes / hypothesis

It looks like the model registry is applying a provider-specific context window cap for the Codex OAuth backend (272000), which overrides the configured contextWindow and is presented in models list. If this is intentional (Codex OAuth has a smaller max context), it would be helpful to:

  • document the Codex OAuth context limit explicitly, and
  • surface the source (provider cap vs configured cap) in models list / models status.

If it is not intentional, then the configured contextWindow should be respected (or at least models list should reflect the configured value when models.mode=replace).
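
To make the hypothesis concrete, the resolution presumably looks something like this sketch — every identifier here is hypothetical, not taken from the OpenClaw source:

```typescript
// Hypothetical sketch of the suspected context-window resolution.
// None of these names come from the OpenClaw codebase.
function resolveContextWindow(
  providerCap: number | undefined, // backend limit, e.g. 272000 for Codex OAuth
  configured: number | undefined,  // user override, e.g. 400000
  mode: "merge" | "replace",       // the models.mode setting
): number | undefined {
  if (providerCap !== undefined && configured !== undefined) {
    if (mode === "replace") {
      // Proposed: an explicit replace-mode override wins outright.
      return configured;
    }
    // Suspected current behavior: the provider cap silently clamps
    // the configured value, so 400000 becomes 272000.
    return Math.min(providerCap, configured);
  }
  return configured ?? providerCap;
}
```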


Labels

bug, stale
