fix(cli): canonicalize infer model run refs #73717

ai-hpc wants to merge 1 commit into openclaw:main
Conversation
Greptile Summary

Confidence Score: 5/5 — This PR is safe to merge; the canonicalization is narrowly scoped, well-guarded, and fully covered by the new tests. The change is minimal and targeted: it only fires when both a provider and a model are present in the ref and at least one has non-lowercase characters, avoiding unnecessary catalog loads for the common case. All four dispatch variants (local canonical, local no-match, gateway canonical, gateway no-match) are now tested. No pre-existing patterns are altered, and custom mixed-case refs without a catalog match are explicitly preserved. No files require special attention.

Reviews (2). Last reviewed commit: "fix(cli): canonicalize infer model refs"
Codex review: needs changes before merge.

Summary
- Reproducibility: yes, by source inspection and linked report. The related issue gives concrete
- Real behavior proof
- Next step before merge
- Security review findings
Review details

Best possible solution: Land this PR or an equivalent catalog-backed fix after preserving trailing auth-profile suffixes, adding focused regression coverage, and recording the user-facing CLI fix in the changelog.

Do we have a high-confidence way to reproduce the issue? Yes, by source inspection and linked report. The related issue gives concrete

Is this the best way to solve the issue? No, not as-is. Catalog-backed canonicalization before dispatch is the right narrow solution, but it should split and reattach supported

Full review comments:
Overall correctness: patch is incorrect

Acceptance criteria:
What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against 0b88d6286c4b.
Human verification update

Verified scenarios:
Edge cases checked:
What I did not verify:
Re-review progress:
Hi @steipete, could you take a look at this PR when you have a moment? Thanks!
Summary
`infer model run --model` forwarded case-mismatched catalog model ids such as `Anthropic/CLAUDE-OPUS-4-7` to the model runtime, which could surface as a misleading empty-output provider error. The change is scoped to `infer model run` commands.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
Root Cause (if applicable)
`infer model run` parsed and forwarded explicit model overrides without canonicalizing case-only matches against the known model catalog.

Regression Test Plan (if applicable)
`src/cli/capability-cli.test.ts`: `Anthropic/CLAUDE-OPUS-4-7` is canonicalized to `anthropic/claude-opus-4-7` before local and gateway model-run dispatch.

User-visible / Behavior Changes
`openclaw infer model run --model <provider/model>` now accepts case-only mismatches for catalog models when the match is unique, dispatching the canonical model id instead of the raw casing.

Diagram (if applicable)
Security Impact (required)
All five Yes/No checklist questions: No. Explain risk + mitigation: N/A.

Repro + Verification
Environment
`infer model run`

Steps
`openclaw infer model run --model Anthropic/CLAUDE-OPUS-4-7 --prompt "hello" --json`.

Expected
Actual
Evidence
Human Verification (required)
Verified scenarios: local dispatch canonicalization, gateway dispatch canonicalization, custom mixed-case ref preservation.
Edge cases checked: no catalog match leaves the explicit model override unchanged.
What you did not verify: live Anthropic provider call.
`pnpm test src/cli/capability-cli.test.ts -- --reporter=verbose` passed: 51 tests. `pnpm build` passed after refreshing the local install with `pnpm install`. Live CLI smoke with the existing local OpenRouter auth profile passed:
`pnpm --silent openclaw infer model run --model OpenRouter/OPENROUTER/AUTO --prompt 'Reply with OK only.' --json` returned `ok=true`, `provider=openrouter`, `model=openrouter/auto`, output text `OK`. An additional live smoke with `OpenRouter/ANTHROPIC/CLAUDE-3-HAIKU` exited 1 because the provider returned no text, but the error reported the canonicalized `provider "openrouter" model "anthropic/claude-3-haiku"`, not the uppercase input.

Review Conversations
Compatibility / Migration
Yes/No checklist: Yes, No, No.

Risks and Mitigations
Real behavior proof
Behavior or issue addressed:
`openclaw infer model run --model <ref>` previously forwarded the user-supplied model ref to the simple-completion transport without case canonicalization. A user typing `DeepSeek/DeepSeek-V4-Flash` (matching the docs / catalog display name) hit a "model not found" / wrong-routing path because catalog ids are stored lowercased. After this patch, mixed-case refs are looked up against the model catalog and rewritten to the canonical `provider/id` before transport setup, so case-insensitive references just work.

Real environment tested: macOS Darwin 25.2.0 (arm64), Node 24.15.0 (Homebrew `node@24`), local OpenClaw source checkout at `/Users/a1111/openclaw/` running directly via `node --import tsx` against the patched `src/cli/capability-cli.ts` and the production `findModelInCatalog` / `loadModelCatalog` helpers. Real configured providers (DeepSeek native + OpenAI catalog entries) loaded from `~/.openclaw/openclaw.json` — no mocked catalog.

Exact steps or command run after this patch:
Checked out `fix/infer-model-case-canonicalization` (HEAD `2d491c11f6`, rebased onto upstream `b8f9137d31`). Called `findModelInCatalog` with several mixed-case `provider/id` inputs — the same lookup path that the patched `canonicalizeModelRunRef` calls.

Evidence after fix:
Live terminal capture from the patched source running on my Mac, exercising the canonical lookup with mixed-case inputs (the exact code path that `runModelRun` now uses before transport setup):

The patched `canonicalizeModelRunRef` in `src/cli/capability-cli.ts` runs only when the input has uppercase characters and a parsable `provider/id` shape, then reuses the catalog lookup above and substitutes the canonical pair before invoking transport setup. Already-canonical lowercase inputs short-circuit and are returned untouched.

Observed result after fix:
Mixed-case `--model` inputs (e.g., `DeepSeek/DeepSeek-V4-Flash`, `OpenAI/GPT-4o-mini`) resolve to the lowercased catalog id (`deepseek/deepseek-v4-flash`, `openai/gpt-4o-mini`) before the local simple-completion transport or the gateway lane is invoked. Already-canonical lowercase refs are returned unchanged, so this patch is a no-op for the dominant call shape and only changes behavior for previously-failing mixed-case input.

What was not tested:
The uppercase-detection guard `model !== model.toLowerCase()` was exercised only against ASCII catalog ids and was not tested against non-ASCII names.
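That guard can be checked in isolation. JavaScript's `String.prototype.toLowerCase` is Unicode-aware, so non-ASCII refs can also trip it; a small illustration (not project code):

```typescript
// The uppercase-detection guard in isolation: any character whose lowercase
// form differs from itself marks the ref as a canonicalization candidate.
const hasUpper = (s: string): boolean => s !== s.toLowerCase();

console.log(hasUpper("deepseek/deepseek-v4-flash")); // false: already canonical
console.log(hasUpper("DeepSeek/DeepSeek-V4-Flash")); // true: ASCII mixed case
// toLowerCase() is Unicode-aware: Turkish dotted capital İ lowercases to a
// different string, so a non-ASCII ref also trips the guard, which is
// exactly the untested path called out above.
console.log(hasUpper("provider/İstanbul-model")); // true
```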