fix(core): prevent agent loop from stopping after tool calls with OpenAI-compatible providers #14973
…nAI-compatible providers

Some OpenAI-compatible providers (Gemini, LiteLLM) return finish_reason "stop" instead of "tool_calls" when the response contains tool calls. This caused the agent loop to exit prematurely after executing a tool, instead of continuing to process the tool results.

The fix adds a check for tool parts in the last assistant message. If tool calls were made, the loop continues regardless of the provider's reported finish_reason.

Fixes anomalyco#14972
Related: anomalyco#14063

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Thanks for updating your PR! It now meets our contributing guidelines. 👍
Hey everyone, thanks for all the thumbs up! 👍 I've been keeping this branch updated with the latest. On our end, we need this fix since our team uses a LiteLLM proxy to route requests to Gemini, OpenAI, and other providers. Without it, the agent stops after every tool call, which makes it essentially unusable for agentic workflows. Great to see this is also helping folks running local LLMs — glad it's not just us! Hoping a maintainer gets a chance to review this at some point. The fix is minimal (7 lines) and all CI checks pass.
@Hona Would appreciate a look when you have a chance. This seems to affect a broad set of users of OpenAI-compatible providers, especially companies routing through proxy layers. Thanks!
@rekram1-node The tests failed on your comment-only commit; looks like unrelated flaky tests.
Hi everyone! Just updated the branch with the latest from dev — all CI checks are passing. 🟢 |
Dev removed "unknown" from the finish reason check array. Our fix (!hasToolCalls) is preserved as it's the proper solution for OpenAI-compatible providers returning wrong finish reasons.
Branch updated — merged latest
…o#14973 (agent loop fix)

PR anomalyco#14743 — fix(cache): improve Anthropic prompt cache hit rate
- Split system prompt into stable (global) + dynamic (project) blocks
- Remove cwd from bash tool schema (was busting cache per-repo)
- Freeze date under OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION flag
- Add optional 1h TTL on first system block (OPENCODE_EXPERIMENTAL_CACHE_1H_TTL)
- Add OPENCODE_CACHE_AUDIT logging for per-call cache accounting
- Track global vs project skill scope for stable cache prefix
- Add splitSystemPrompt provider option to opt out

PR anomalyco#14973 — fix(core): prevent agent loop stopping after tool calls
- Check lastAssistantMsg.parts for tool type before exiting loop
- Fixes OpenAI-compatible providers (Gemini, LiteLLM) returning finish_reason 'stop' instead of 'tool_calls' when tools were called

ci: add FORCE_JAVASCRIPT_ACTIONS_TO_NODE24 to upstream-sync workflow
build: relax bun version check to minor-level for local builds
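The stable/dynamic split from the cache PR can be sketched roughly as below. The block shape mirrors Anthropic-style `cache_control` fields; the function name, prompt contents, and the long-TTL toggle are placeholders for illustration, not the PR's actual code.

```typescript
// Illustrative sketch of splitting the system prompt into a stable
// (cacheable) block and a dynamic (per-project) block. Shapes follow
// Anthropic's cache_control convention; names here are hypothetical.
type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral"; ttl?: "1h" };
};

function buildSystem(
  globalPrompt: string, // stable across repos and sessions
  projectPrompt: string, // varies per project; kept out of the cached prefix
  useLongTtl: boolean,
): SystemBlock[] {
  return [
    {
      type: "text",
      text: globalPrompt,
      cache_control: useLongTtl
        ? { type: "ephemeral", ttl: "1h" }
        : { type: "ephemeral" },
    },
    // No cache_control on the dynamic block, so project-specific
    // content cannot bust the stable prefix.
    { type: "text", text: projectPrompt },
  ];
}
```

Keeping the `cwd` (and anything else repo-specific) out of the first block is what lets the cached prefix stay identical across repositories.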
… sync workflow
- llm.ts: restore systemSplit to StreamInput, apply split system construction
against upstream's new StreamRequest pattern (abort moved out of StreamInput)
- prompt.ts: update system assembly to use skills.{global,project} +
instructions.{global,project} split API; pass systemSplit; add hasToolCalls
check (PR anomalyco#14973) against upstream's effectified loop structure
- upstream-sync.yml: replace issues-based conflict notification with job
summary (issues disabled on fork); drop issues: write permission
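The `hasToolCalls` check mentioned in the bullets above can be sketched like this. The types are hypothetical stand-ins, not opencode's actual message types.

```typescript
// Hypothetical shapes for illustration; not opencode's actual types.
type Part = { type: string };
type AssistantMessage = { role: "assistant"; parts: Part[] };

// Continue the loop when the last assistant message contains tool
// parts, even if the provider reported finish_reason "stop".
function shouldContinueLoop(
  finishReason: string,
  last: AssistantMessage | undefined,
): boolean {
  const hasToolCalls = last?.parts.some((p) => p.type === "tool") ?? false;
  if (hasToolCalls) return true; // provider may have mislabeled the finish reason
  return finishReason === "tool_calls"; // the standard OpenAI signal
}
```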
…compatible-tool-calls

# Conflicts:
#	packages/opencode/src/session/prompt.ts
Yeah, I think this makes sense; some providers don't map their stop reasons correctly.
…nAI-compatible providers (anomalyco#14973)

Co-authored-by: Aiden Cline <[email protected]>
Co-authored-by: Aiden Cline <[email protected]>
Issue for this PR
Closes #14972
Related: #14063
Type of change
What does this PR do?
Some OpenAI-compatible providers (Gemini, LiteLLM) return `finish_reason: "stop"` instead of `"tool_calls"` when their response contains tool calls. This differs from the OpenAI standard.

The agent loop exit condition in `prompt.ts` (line ~318) only checks the finish reason to decide whether to continue. When it sees `"stop"`, it breaks the loop — even though tools were just executed and need their results processed by the model.

The fix adds one extra check: before exiting the loop, look at the last assistant message parts for any `type === "tool"` entries. If tool parts exist, the model did call tools, so the loop must continue regardless of what `finish_reason` the provider reported.

How did you verify your code works?

`bun run dev` with Gemini 3 Flash configured as an OpenAI-compatible provider. Before the fix, the agent stopped after every tool call. After the fix, it continues processing tool results normally.

Screenshots / recordings
N/A — not a UI change.
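For context, the wire-level mismatch this PR works around can be illustrated with a trimmed OpenAI-style chat completion choice. The payload below is a fabricated example of the shape involved, not a captured response.

```typescript
// Illustrative only: an OpenAI-style chat completion choice as some
// OpenAI-compatible proxies have been observed to return it. A
// spec-compliant server would set finish_reason to "tool_calls" here.
const choice = {
  finish_reason: "stop",
  message: {
    role: "assistant",
    content: null as string | null,
    tool_calls: [
      {
        id: "call_1",
        type: "function",
        function: { name: "read_file", arguments: "{}" },
      },
    ],
  },
};

// Trusting finish_reason alone would end the agent loop here;
// inspecting the message body catches the tool call anyway.
const madeToolCalls = (choice.message.tool_calls?.length ?? 0) > 0;
```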
Checklist