openrouter/auto fails with "Unknown model: openrouter/auto" — model resolution mismatch #5395
Description
Summary
When using `openrouter/auto` as the primary agent model (e.g. in `agents.defaults.model.primary`), the embedded agent fails before replying with:

```
Error: Unknown model: openrouter/auto
Embedded agent failed before reply: Unknown model: openrouter/auto
```
The Control UI Chat then appears stuck (agent shows "A ..." indefinitely).
Cause
- OpenRouter supports `openrouter/auto` as a valid model (Auto Router): https://openrouter.ai/docs/guides/routing/auto-model-selection
- Config uses `primary: "openrouter/auto"`, which is parsed as `provider = "openrouter"`, `modelId = "auto"` (see `parseModelRef` in `model-selection.js`).
- The embedded runner calls `modelRegistry.find(provider, modelId)` → `find("openrouter", "auto")`.
- In pi-ai's `models.generated.js`, the OpenRouter Auto model is stored with `id: "openrouter/auto"` (the full string), not `id: "auto"`.
- The registry's `find(provider, modelId)` checks `m.provider === provider && m.id === modelId`, so `find("openrouter", "auto")` does not match `m.id === "openrouter/auto"`.
So the lookup fails and the user gets "Unknown model" even though openrouter/auto is valid for the API.
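The mismatch can be reproduced in isolation. The sketch below models the registry entry and helpers on the description above; the data and function bodies are assumptions for illustration, not the actual pi-ai source:

```javascript
// Hypothetical registry data, modeled on pi-ai's models.generated.js:
// the Auto Router entry carries its full "openrouter/auto" id.
const models = [
  { provider: "openrouter", id: "openrouter/auto" },
  { provider: "openrouter", id: "openai/gpt-4o-mini" },
];

// Simplified parseModelRef: split the ref on the first slash.
function parseModelRef(ref) {
  const slash = ref.indexOf("/");
  return { provider: ref.slice(0, slash), modelId: ref.slice(slash + 1) };
}

// Registry lookup as described in the issue.
function find(provider, modelId) {
  return models.find((m) => m.provider === provider && m.id === modelId);
}

const { provider, modelId } = parseModelRef("openrouter/auto");
console.log(provider, modelId);       // "openrouter" "auto"
console.log(find(provider, modelId)); // undefined — no entry has id "auto"
console.log(find(provider, "openrouter/auto")); // matches the stored entry
```

So the parse step and the registry disagree about where the `openrouter/` prefix lives, and the lookup comes back empty.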
Environment
- OpenClaw: `2026.1.29` (global npm)
- Config: `agents.defaults.model.primary: "openrouter/auto"`, auth profile `openrouter:default` with API key set
- Observed in: Control UI → Chat; gateway logs in `~/.openclaw/logs/gateway.err.log`
Suggested fix
In `dist/agents/pi-embedded-runner/model.js` `resolveModel()`, when `modelRegistry.find(provider, modelId)` returns no model and `provider === "openrouter"` and `modelId === "auto"`, try `modelRegistry.find(provider, "openrouter/auto")` before falling through to the inline/custom/fallback logic, since pi-ai stores that model under the full id.
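A minimal sketch of that fallback, with a stub registry standing in for the real one (the registry shape and the surrounding `resolveModel` logic are assumptions based on this issue, not the actual source):

```javascript
// Stub registry with the lookup semantics described in the issue.
const modelRegistry = {
  models: [{ provider: "openrouter", id: "openrouter/auto" }],
  find(provider, modelId) {
    return this.models.find((m) => m.provider === provider && m.id === modelId);
  },
};

// Hypothetical resolveModel with the suggested special case.
function resolveModel(provider, modelId) {
  let model = modelRegistry.find(provider, modelId);
  if (!model && provider === "openrouter" && modelId === "auto") {
    // pi-ai stores the Auto Router under its full id, so retry with that.
    model = modelRegistry.find(provider, "openrouter/auto");
  }
  // (The real code would fall through to inline/custom/fallback handling here.)
  return model ?? null;
}

resolveModel("openrouter", "auto"); // now resolves to the Auto Router entry
```

The special case is deliberately narrow (only `openrouter` + `auto`) so it cannot shadow any other provider's model ids.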
(Alternatively, the pi-coding-agent model registry could normalize `openrouter` + `auto` to `openrouter/auto` when loading built-in models, or the model picker could stop hiding `openrouter/auto` and the resolver could accept both `auto` and `openrouter/auto` for that provider.)
Workaround
Use a concrete OpenRouter model id in config instead of `openrouter/auto`, e.g. `openai/gpt-4o-mini` or `anthropic/claude-3-5-haiku`, which are in the registry and resolve correctly.
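For example, a config fragment along these lines (assuming a JSON-style OpenClaw config; the key path is from this issue, and the exact `openrouter/`-prefixed form of the value is an assumption based on how `parseModelRef` splits on the first slash):

```jsonc
{
  "agents": {
    "defaults": {
      "model": {
        // A concrete model id resolves correctly; only the "auto"
        // id is affected by the registry lookup mismatch.
        "primary": "openrouter/openai/gpt-4o-mini"
      }
    }
  }
}
```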