feat(cli): allow switching toolcall model at runtime (#1182) #1192
`/model set <provider>` only persisted the reasoning model, leaving the toolcall model stuck at its default and giving users no documented way to change it from the interactive shell. Add a `--toolcall-model` flag to `/model set` and a `/model toolcall set <model>` subcommand that writes `<PROVIDER>_TOOLCALL_MODEL` to .env via the existing env_sync path. Provider metadata gains a `toolcall_model_env` field; CLI providers (codex/claude-code/ollama) set it to None and reject toolcall overrides. The NL action schema also accepts an optional `toolcall_model` and a new `switch_toolcall_model` shape so the assistant can honor "switch the toolcall model also to X".
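For illustration, the provider-metadata piece might look roughly like the sketch below. Only `toolcall_model_env` and the env-var names quoted in this thread come from the PR; the dataclass shape and the other fields are assumptions.

```python
# Hedged sketch of the provider metadata described above, not the real
# app/cli/wizard/config.py. Fields other than toolcall_model_env are assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ProviderOption:
    name: str
    api_key_env: Optional[str]         # e.g. "ANTHROPIC_API_KEY"; None for CLI-backed providers
    toolcall_model_env: Optional[str]  # e.g. "ANTHROPIC_TOOLCALL_MODEL"; None => overrides rejected

ANTHROPIC = ProviderOption("anthropic", "ANTHROPIC_API_KEY", "ANTHROPIC_TOOLCALL_MODEL")
CODEX = ProviderOption("codex", None, None)  # authenticates through the vendor CLI
```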
Greptile Summary: This PR wires up runtime toolcall-model switching in the interactive shell, fixing the write/read asymmetry around the toolcall model env var. Confidence Score: 5/5. Safe to merge; all findings are P2 style/UX nits with no impact on correctness in current callers. No P0 or P1 issues found. The P2 comments cover extra-arg handling inconsistency and minor edge-case handling in app/cli/interactive_shell/commands.py.
…oud#1182) Greptile flagged two P2s on PR Tracer-Cloud#1192:

- `default_toolcall_model` was added to `ProviderOption` and populated for all five API-backed providers, but neither `switch_llm_provider` nor `switch_toolcall_model` ever read it. The toolcall env var either comes from a prior user override or from `LLMSettings.from_env()`'s static default constants, so the field was dead code. Remove it (and the now-unused TOOLCALL_MODEL imports).
- `--toolcall` was silently accepted as a shorthand alias for `--toolcall-model` in `_parse_model_set_args` but was missing from the usage line, the SLASH_COMMANDS help text, and the action label, which would surprise users who copied it from source. Drop the alias.
@davincios @rrajan94 @VaibhavUpreti kindly review
Reviewer (Tracer-Cloud#1192) asked for a more specific error than the generic usage line when `/model set anthropic --toolcall-model` is typed without a value. Today the parser returns None and the dispatcher prints the same usage line for every malformed input, which leaves the user guessing what went wrong. Switch `_parse_model_set_args` to raise `ValueError` with a targeted message ("missing value for --toolcall-model", "unknown flag: <flag>", "unexpected extra argument: <token>", "missing provider name"). The dispatcher prints the message in red on the first line and the usage template on a dim follow-up line, so the user sees both *what* broke and *what shape* the command should take.
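A minimal sketch of that parse-and-report flow, assuming the error strings above; the real `_parse_model_set_args` and its dispatcher live in `app/cli/interactive_shell/commands.py` and may differ in shape (plain `print` stands in for the red/dim console output):

```python
USAGE = "usage: /model set <provider> [model] [--toolcall-model <model>]"

def _parse_model_set_args(args):
    """Return (provider, model, toolcall_model) or raise ValueError with a targeted message."""
    provider = model = toolcall_model = None
    i = 0
    while i < len(args):
        token = args[i]
        if token == "--toolcall-model":
            if i + 1 >= len(args):
                raise ValueError("missing value for --toolcall-model")
            toolcall_model, i = args[i + 1], i + 2
        elif token.startswith("--"):
            raise ValueError(f"unknown flag: {token}")
        elif provider is None:
            provider, i = token, i + 1
        elif model is None:
            model, i = token, i + 1
        else:
            raise ValueError(f"unexpected extra argument: {token}")
    if provider is None:
        raise ValueError("missing provider name")
    return provider, model, toolcall_model

def handle_model_set(args):
    try:
        return _parse_model_set_args(args)
    except ValueError as exc:
        print(f"error: {exc}")  # shown in red by the shell
        print(USAGE)            # usage template, dimmed underneath
        return None

handle_model_set(["anthropic", "--toolcall-model"])  # -> error: missing value for --toolcall-model
```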
@Devesh36 done
yashksaini-coder left a comment:
Did a local test: even though we don't have an Anthropic key, the function still executes, reports that it switched the LLM provider, and updates the provider config, which should not be the case.
The expected behaviour is that it throws an error saying the env vars are not set up or provided, and stops short of updating the LLM provider config in settings.
…racer-Cloud#1182) Reviewer (Tracer-Cloud#1192) caught that /model set anthropic happily updates .env and prints "switched LLM provider" even when ANTHROPIC_API_KEY is not set, leaving the user in a half-broken state where the very next /model show prints "LLM settings unavailable". Add a precheck in switch_llm_provider that calls has_llm_api_key on the target provider's api_key_env BEFORE writing to .env or os.environ, and return False with a specific error + a copy-pastable export hint when the credential is missing. Skip the check for providers whose credential is not a secret (ollama / OLLAMA_HOST has a working default) and for CLI-backed providers (codex, claude-code) that authenticate through the vendor CLI and have no api_key_env. Existing /model set tests now have to opt in by setting a fake ANTHROPIC_API_KEY since the helper used to be credential-blind.
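A rough sketch of that precheck, assuming a `has_llm_api_key` helper over the provider's `api_key_env` as described in the commit message; the names and the exact skip list mirror the description, not the real module:

```python
import os
from typing import Optional

def has_llm_api_key(env_var: str) -> bool:
    """True when the credential env var is set and non-empty."""
    return bool(os.environ.get(env_var, "").strip())

def precheck_provider_credentials(provider_name: str, api_key_env: Optional[str]) -> bool:
    # CLI-backed providers (codex, claude-code) declare no api_key_env and skip the
    # check; ollama is skipped too because OLLAMA_HOST has a working default.
    if api_key_env is None or api_key_env == "OLLAMA_HOST":
        return True
    if not has_llm_api_key(api_key_env):
        print(f"{api_key_env} is not set; not switching to {provider_name}.")
        print(f"hint: export {api_key_env}=<your-key>")  # copy-pastable export hint
        return False
    return True

# switch_llm_provider would run this check BEFORE writing anything to .env or os.environ.
```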
…utput (Tracer-Cloud#1182) Reviewer (Tracer-Cloud#1192) ran `/model set anthropic claude-haiku-4-5-20251001` and asked "how is the reasoning model changing then?" because the success line read `switched LLM provider: anthropic (claude-haiku-4-5-20251001)` with no indication of which slot the parenthetical model went into. The `/model show` table below it then showed reasoning and toolcall both equal to that model (toolcall was already haiku, reasoning had just changed), which made it look like one command had silently overwritten both. It hadn't — the parser treats the second positional as the reasoning model and the toolcall row was unchanged — but the message made the right behavior look wrong. Always label both slots on a switch:

    switched LLM provider: anthropic
      reasoning model: claude-haiku-4-5-20251001 (ANTHROPIC_REASONING_MODEL)
      [toolcall model: ... (ANTHROPIC_TOOLCALL_MODEL)]   # only when --toolcall-model was passed
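Illustrative only: a tiny sketch of composing that labelled message. The env-var names are the ones quoted above; the function and argument names are assumptions.

```python
from typing import Optional

def describe_switch(provider: str, reasoning_model: str, reasoning_env: str,
                    toolcall_model: Optional[str] = None,
                    toolcall_env: Optional[str] = None) -> str:
    lines = [f"switched LLM provider: {provider}",
             f"  reasoning model: {reasoning_model} ({reasoning_env})"]
    if toolcall_model is not None:  # only when --toolcall-model was passed
        lines.append(f"  toolcall model: {toolcall_model} ({toolcall_env})")
    return "\n".join(lines)

print(describe_switch("anthropic", "claude-haiku-4-5-20251001", "ANTHROPIC_REASONING_MODEL"))
```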
@Devesh36 the behavior here is actually correct, but I see why it read like a bug. The second positional argument in `/model set <provider> [model]` sets the reasoning model, not the toolcall model:

    before:  reasoning = claude-sonnet-4-6
    you ran: /model set anthropic claude-haiku-4-5-20251001
    after:   reasoning = claude-haiku-4-5-20251001 (just changed)

So both rows show the same value because you happened to pick a reasoning model that matches the existing toolcall default. The toolcall slot itself was not rewritten. That said, you were right to flag it: the success message `switched LLM provider: anthropic (<model>)` never said which slot the model went into, so it now labels both slots explicitly and the source of each row in the connection table is unambiguous from the lines above it. For reference, the three forms now all behave distinctly:

    /model set anthropic <model>                            # changes reasoning only
    /model set anthropic <model> --toolcall-model <model>   # changes reasoning and toolcall
    /model toolcall set <model>                             # changes toolcall only
@yashksaini-coder fixed.
@greptileai review
@Davidson3556 this is a great add-on 🔥


Fixes #1182
Describe the changes you have made in this PR -
The interactive shell now exposes a runtime-supported way to change the toolcall model, matching the "Expected behavior" in the issue.
- `/model set <provider> [model] --toolcall-model <model>` — switch provider, reasoning model, and toolcall model in one command.
- `/model toolcall set <model>` — change only the toolcall model on the currently active provider, leaving the reasoning model untouched.
- The `toolcall model` row in the connection table now reflects the value the user just set (it was previously stuck at the default).
- CLI providers (`codex`, `claude-code`, `ollama`) reject the override with a clear yellow message instead of silently lying.
- The NL action schema accepts an optional `toolcall_model` on `switch_llm_provider`, plus a new `switch_toolcall_model` action — so requests like "switch the toolcall model also to claude-opus-4-7" actually work instead of returning the "OpenSRE doesn't currently expose a separate action..." refusal shown in the bug report (example payloads are sketched below).
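For illustration, the two action shapes the assistant can now emit might look roughly like this; the exact field names in `_ACTION_RULE` are assumptions based on the description, not copied from `cli_agent.py`:

```python
# Hypothetical action-plan payloads under the extended schema.
switch_both = {
    "action": "switch_llm_provider",
    "provider": "anthropic",
    "model": "claude-opus-4-7",           # reasoning model
    "toolcall_model": "claude-opus-4-7",  # optional, new in this PR
}
switch_toolcall_only = {
    "action": "switch_toolcall_model",
    "model": "claude-opus-4-7",
}
```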
Files touched:

- `app/cli/wizard/config.py` — added `toolcall_model_env` and `default_toolcall_model` to `ProviderOption`; populated for anthropic/openai/openrouter/gemini/nvidia.
- `app/cli/interactive_shell/commands.py` — extended `switch_llm_provider`, added `switch_toolcall_model`, added a small flag parser for `/model set`, added a `/model toolcall` subcommand, and updated the `/model` help text.
- `app/cli/interactive_shell/cli_agent.py` — updated `_ACTION_RULE` and `_execute_action_plan` so the LLM-driven assistant can request a toolcall switch.
- `tests/cli/interactive_shell/test_commands.py` — added 5 tests covering the flag, the subcommand, the unsupported-provider rejection, and the missing-arg usage paths.

Demo/Screenshot for feature changes and bug fixes -
Code Understanding and AI Usage
Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?
If you used AI assistance:
Explain your implementation approach:
The bug was a write/read asymmetry:
`LLMSettings.from_env()` already reads `<PROVIDER>_TOOLCALL_MODEL`, and the connection table already renders that value, but `switch_llm_provider` only ever wrote `LLM_PROVIDER` and the reasoning model env var — so the toolcall row was effectively read-only from the user's perspective.

Two alternatives I considered and rejected:
- Writing the reasoning model into the toolcall env var on every `/model set`. Rejected because it silently destroys the user's explicit toolcall override and makes the two values uncontrollable independently — the connection table would still display two rows but they'd always be equal.
- A new `/toolcall` top-level slash command. Rejected because the discoverability story is worse: users already type `/model` when thinking about models, and the help text for one command is easier to maintain than two.
Chosen approach: keep `/model` as the single entry point. Add a flag (`--toolcall-model`) for the combined case and a subcommand (`/model toolcall set`) for the toolcall-only case, both of which funnel through the same env-sync path that already handles the reasoning model. Provider metadata gains a `toolcall_model_env` field so each provider declares which env var holds its toolcall model; CLI providers (codex/claude-code/ollama) set it to `None` and the helper returns a polite rejection instead of writing nothing or pretending it worked.
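A rough sketch of that single funnel, assuming the `sync_env_values` helper and the provider fields named above (the surrounding shape and `reasoning_model_env` are assumptions, not the real `switch_llm_provider`):

```python
def build_env_updates(provider, model=None, toolcall_model=None):
    """Collect every key for one sync_env_values() write; None means nothing to write."""
    updates = {"LLM_PROVIDER": provider.name}
    if model:
        updates[provider.reasoning_model_env] = model            # assumed field name
    if toolcall_model:
        if provider.toolcall_model_env is None:                   # codex / claude-code / ollama
            print(f"{provider.name} does not expose a separate toolcall model")
            return None
        updates[provider.toolcall_model_env] = toolcall_model
    return updates  # handed to sync_env_values() so all keys land in .env together
```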
Key functions:

- `switch_llm_provider(provider, console, model=None, *, toolcall_model=None)` — extended with a keyword-only `toolcall_model`. When provided, it adds `provider.toolcall_model_env=<value>` to the dict that `sync_env_values` writes, so the reasoning + toolcall + provider all land in `.env` atomically.
- `switch_toolcall_model(model, console, *, provider_name=None)` — new helper. Resolves the active provider from `LLM_PROVIDER` (defaulting to anthropic, mirroring how the rest of the shell defaults), looks up `toolcall_model_env`, and writes only that one key. This is what makes `/model toolcall set` non-destructive to the reasoning model (a minimal sketch follows this list).
- `_parse_model_set_args(args)` — small flag parser. Returns `None` for malformed input (unknown flag, missing flag value, two positional models) so `/model set` can print a single usage line instead of guessing.
- `_cmd_model` — adds a `toolcall` branch and routes `set` through the new parser.
- `_ACTION_RULE` / `_execute_action_plan` in `cli_agent.py` — taught the assistant about the new action shapes, including a label that shows both reasoning and toolcall in the action preview ("switch LLM provider to anthropic (claude-opus-4-7) + toolcall claude-opus-4-7").
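A minimal sketch of the toolcall-only path described above, with a stand-in metadata table and env-sync helper (the real code resolves a `ProviderOption` and also persists to `.env`):

```python
import os

TOOLCALL_MODEL_ENV = {
    "anthropic": "ANTHROPIC_TOOLCALL_MODEL",
    "codex": None, "claude-code": None, "ollama": None,  # CLI providers: no separate toolcall model
}

def sync_env_values(updates):
    os.environ.update(updates)  # stand-in; the real helper also writes .env

def switch_toolcall_model(model, *, provider_name=None):
    name = provider_name or os.environ.get("LLM_PROVIDER", "anthropic")  # mirrors the shell default
    env_var = TOOLCALL_MODEL_ENV.get(name)
    if env_var is None:
        print(f"{name} does not expose a separate toolcall model")
        return False
    sync_env_values({env_var: model})  # writes only this one key; reasoning model untouched
    return True
```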
Edge cases tested:

- Unknown flag (`--made-up-flag`) → prints usage, no env write.
- Unsupported provider (`codex`) → prints "does not expose a separate toolcall model", no env write.
- `/model toolcall set` with no model → prints usage.
- `/model toolcall set <model>` does not touch `LLM_PROVIDER` or the reasoning model — verified by asserting absence of those keys in the resulting `.env`.
- `/model set anthropic claude-opus-4-7 --toolcall-model claude-opus-4-7` writes all three keys.

Checklist before requesting a review
Note: Please check Allow edits from maintainers if you would like us to assist in the PR.