
fix(core): prevent agent loop from stopping after tool calls with OpenAI-compatible providers#14973

Merged
rekram1-node merged 15 commits into anomalyco:dev from valenvivaldi:fix/agent-loop-openai-compatible-tool-calls
Apr 2, 2026

Conversation

@valenvivaldi
Contributor

valenvivaldi commented Feb 25, 2026

Issue for this PR

Closes #14972
Related: #14063

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Some OpenAI-compatible providers (Gemini, LiteLLM) return finish_reason: "stop" instead of "tool_calls" when their response contains tool calls. This differs from the OpenAI standard.

The agent loop exit condition in prompt.ts (line ~318) only checks the finish reason to decide whether to continue. When it sees "stop", it breaks the loop — even though tools were just executed and need their results processed by the model.

The fix adds one extra check: before exiting the loop, look at the last assistant message parts for any type === "tool" entries. If tool parts exist, the model did call tools, so the loop must continue regardless of what finish_reason the provider reported.
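A minimal sketch of that exit check (the type shapes and function name here are illustrative assumptions, not opencode's exact API):

```typescript
// Hypothetical shapes; opencode's actual message/part types differ.
type Part = { type: string };
type AssistantMessage = { finishReason: string; parts: Part[] };

// Exit the agent loop only when the provider reports "stop" AND the last
// assistant message contains no tool parts. This tolerates providers that
// return finish_reason "stop" even though tools were just called.
function shouldExitLoop(msg: AssistantMessage): boolean {
  const hasToolCalls = msg.parts.some((p) => p.type === "tool");
  return msg.finishReason === "stop" && !hasToolCalls;
}
```

The point is that the message parts are ground truth for whether tools ran, while `finish_reason` is only the provider's (sometimes wrong) self-report.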

How did you verify your code works?

  • Typecheck passes (16/16 packages)
  • Tested manually running bun run dev with Gemini 3 Flash configured as an OpenAI-compatible provider. Before the fix, the agent stopped after every tool call. After the fix, it continues processing tool results normally.

Screenshots / recordings

N/A — not a UI change.

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

fix(core): prevent agent loop from stopping after tool calls with OpenAI-compatible providers

Some OpenAI-compatible providers (Gemini, LiteLLM) return finish_reason
"stop" instead of "tool_calls" when the response contains tool calls.
This caused the agent loop to exit prematurely after executing a tool,
instead of continuing to process the tool results.

The fix adds a check for tool parts in the last assistant message. If
tool calls were made, the loop continues regardless of the provider's
reported finish_reason.

Fixes anomalyco#14972
Related: anomalyco#14063

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@github-actions github-actions bot added needs:compliance This means the issue will auto-close after 2 hours. and removed needs:compliance This means the issue will auto-close after 2 hours. labels Feb 25, 2026
@github-actions
Contributor

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@valenvivaldi
Contributor Author

Hey everyone, thanks for all the thumbs up! 👍

I've been keeping this branch updated with the latest dev — the bug is still present upstream as of today.

On our end, we need this fix since our team uses a LiteLLM proxy to route requests to Gemini, OpenAI, and other providers. Without it, the agent stops after every tool call, which makes it essentially unusable for agentic workflows.

Great to see this is also helping folks running local LLMs — glad it's not just us!

Hoping a maintainer gets a chance to review this at some point. The fix is minimal (7 lines) and all CI checks pass.

@martin-liu

@Hona Would appreciate a look when you have a chance.

This seems to affect a broad set of users of OpenAI-compatible providers, especially many companies using proxy layers. Thanks!

@martin-liu

@rekram1-node The tests failed on your comment-only commit; they look like unrelated flaky tests.

@valenvivaldi
Contributor Author

Hi everyone! Just updated the branch with the latest from dev — all CI checks are passing. 🟢

Dev removed "unknown" from the finish reason check array. Our fix (`!hasToolCalls`) is preserved, as it's the proper solution for OpenAI-compatible providers returning wrong finish reasons.
@valenvivaldi
Contributor Author

Branch updated — merged latest dev

Conflict resolved in packages/opencode/src/session/prompt.ts

What changed in dev: The "unknown" finish reason was removed from the exit condition array. It was originally included as a workaround for providers that mapped bad stop reasons to "unknown", but it was commented out and removed as it's no longer considered necessary (see the comment in dev: "in v6 unknown became other but other existed in v5 too and was distinctly different").

Why our fix is still necessary: Removing "unknown" doesn't address the core issue this PR solves. The problem is that some OpenAI-compatible providers return finish_reason: "stop" instead of "tool_calls" even when tools were actually called, causing the agent loop to exit prematurely. Our fix adds a !hasToolCalls check that inspects the actual message parts for tool calls, regardless of what the provider reports as the finish reason. This is a more robust solution than relying on finish reason values.
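To make the failure mode concrete, here is a toy loop (entirely hypothetical, not opencode's actual loop structure) simulating a provider that mislabels a tool-calling turn as "stop":

```typescript
type Turn = { finishReason: string; parts: { type: string }[] };

// Simulated provider responses: turn 1 calls a tool but reports "stop";
// turn 2 is the final plain-text answer.
const turns: Turn[] = [
  { finishReason: "stop", parts: [{ type: "tool" }] },
  { finishReason: "stop", parts: [{ type: "text" }] },
];

let processed = 0;
for (const turn of turns) {
  processed++;
  const hasToolCalls = turn.parts.some((p) => p.type === "tool");
  // Without the hasToolCalls guard, the loop would break after turn 1
  // and the tool result would never reach the model.
  if (turn.finishReason === "stop" && !hasToolCalls) break;
}
console.log(processed); // both turns are handled
```

With the guard, `processed` reaches 2; dropping `&& !hasToolCalls` would stop the loop on the first turn, which is exactly the premature exit this PR fixes.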

henry701 added a commit to henry701/opencode that referenced this pull request Mar 31, 2026
…o#14973 (agent loop fix)

PR anomalyco#14743 — fix(cache): improve Anthropic prompt cache hit rate
- Split system prompt into stable (global) + dynamic (project) blocks
- Remove cwd from bash tool schema (was busting cache per-repo)
- Freeze date under OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION flag
- Add optional 1h TTL on first system block (OPENCODE_EXPERIMENTAL_CACHE_1H_TTL)
- Add OPENCODE_CACHE_AUDIT logging for per-call cache accounting
- Track global vs project skill scope for stable cache prefix
- Add splitSystemPrompt provider option to opt out

PR anomalyco#14973 — fix(core): prevent agent loop stopping after tool calls
- Check lastAssistantMsg.parts for tool type before exiting loop
- Fixes OpenAI-compatible providers (Gemini, LiteLLM) returning
  finish_reason 'stop' instead of 'tool_calls' when tools were called

ci: add FORCE_JAVASCRIPT_ACTIONS_TO_NODE24 to upstream-sync workflow
build: relax bun version check to minor-level for local builds
henry701 added a commit to henry701/opencode that referenced this pull request Mar 31, 2026
… sync workflow

- llm.ts: restore systemSplit to StreamInput, apply split system construction
  against upstream's new StreamRequest pattern (abort moved out of StreamInput)
- prompt.ts: update system assembly to use skills.{global,project} +
  instructions.{global,project} split API; pass systemSplit; add hasToolCalls
  check (PR anomalyco#14973) against upstream's effectified loop structure
- upstream-sync.yml: replace issues-based conflict notification with job
  summary (issues disabled on fork); drop issues: write permission
@rekram1-node
Collaborator

Yeah, I think this makes sense; some providers don't map their stop reasons correctly.

@rekram1-node rekram1-node merged commit 733a3bd into anomalyco:dev Apr 2, 2026
13 of 16 checks passed
jeromelau pushed a commit to jeromelau/opencode that referenced this pull request Apr 2, 2026
fix(core): prevent agent loop from stopping after tool calls with OpenAI-compatible providers (anomalyco#14973)

Co-authored-by: Aiden Cline <[email protected]>


Development

Successfully merging this pull request may close these issues.

Agent stops after tool execution with OpenAI-compatible providers (Gemini, LiteLLM)
