
Conversation

@zerob13 (Collaborator) commented Nov 10, 2025

fix #1088

Summary by CodeRabbit

  • New Features

    • Enhanced native tool-calling with robust multi-part response handling and clearer tool-call lifecycle events.
    • Added GPT-4.1 support with extended token control in request previews.
  • Improvements

    • More reliable streaming behavior for function/tool calls, including better handling of split or merged tool outputs and unclosed blocks.
    • Improved fallback and normalization for tool-call identifiers to reduce mismatches.

@coderabbitai bot (Contributor) commented Nov 10, 2025

Walkthrough

Added comprehensive native and mock tool-call scaffolding to OpenAICompatibleProvider: tracking structures, ID management, multi-part content splitting, mock response injection, streaming/function_call edge-case handling, and a parseFunctionCalls signature update with fallbackIdPrefix; also expanded model-version gating to include gpt-4.1.

Changes

Cohort / File(s): OpenAICompatibleProvider core — src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
Summary: Introduced native tool-call tracking (pendingToolCallOrder, pendingNativeToolCallIds, pendingNativeToolCallSet), serialization helpers, enqueue/consume/snapshot helpers, mock response push logic, and tool_call_id normalization across native/mock flows. Added multi-part tool content splitting (splitMergedToolContent with trySplitJsonArray, trySplitByDelimiter, trySplitByHeaderRepeats) and finalization / tool_call_end emission. Enhanced streaming handling to split/route multi-part tool responses and handle unclosed think/function_call blocks at stream end. Extended model-version gating to include gpt-4.1 in max-token logic. Updated parseFunctionCalls signature to accept fallbackIdPrefix.
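
For orientation, a hedged sketch of the splitting cascade named above — the strategy names come from the summary, but the bodies here are simplified assumptions, not the file's actual code:

// Sketch: try progressively looser strategies to split one merged tool
// response into the expected number of parts. (The real code adds a third
// strategy, trySplitByHeaderRepeats, omitted here for brevity.)
function trySplitJsonArray(content: string, expectedParts: number): string[] | null {
  try {
    const parsed: unknown = JSON.parse(content)
    if (Array.isArray(parsed) && parsed.length === expectedParts) {
      return parsed.map((item) => (typeof item === 'string' ? item : JSON.stringify(item)))
    }
  } catch {
    // Not a JSON array; fall through to the next strategy
  }
  return null
}

function trySplitByDelimiter(content: string, expectedParts: number): string[] | null {
  // Assumed delimiter: blank lines between concatenated tool outputs
  const parts = content.split(/\n{2,}/).map((p) => p.trim()).filter(Boolean)
  return parts.length === expectedParts ? parts : null
}

function splitMergedToolContent(content: string, expectedParts: number): string[] | null {
  return trySplitJsonArray(content, expectedParts) ?? trySplitByDelimiter(content, expectedParts)
}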

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Provider as OpenAICompatibleProvider
    participant Tracker as ToolCallTracker
    participant Parser as FunctionCallParser

    User->>Provider: formatMessages(messages)
    activate Provider

    Provider->>Parser: parseFunctionCalls(response, fallbackIdPrefix)
    Parser-->>Provider: function_call entries (with IDs)

    alt Native tool-call path
        Provider->>Tracker: enqueue native tool_call_ids
        Tracker-->>Provider: pendingNativeToolCallIds snapshot
    else Mock tool-call path
        Provider->>Tracker: map to mock tool responses
    end

    Provider->>Provider: splitMergedToolContent (if merged)
    Provider->>Tracker: pushMockToolResponse / consumeNativeToolCallId
    Provider->>Provider: emit tool_call_end (when finalizable)
    Provider-->>User: return formatted messages/stream events
    deactivate Provider

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Areas needing extra attention:
    • lifecycle and concurrency of pendingNativeToolCallIds / pendingToolCallOrder.
    • Correctness and edge cases of splitMergedToolContent strategies.
    • Streaming edge-case handling for unclosed function_call/think blocks.
    • Impact of parseFunctionCalls signature change across usages.

Poem

🐰
I hop through messages, IDs in a row,
Splitting merged bits so each tool can grow.
Enqueue, snapshot, then finally send—
A rabbit's small magic to stitch every end. 🥕

Pre-merge checks

❌ Failed checks (1 inconclusive)
  • Title check — ❓ Inconclusive. The PR title 'fix: the message format for the role tool' is vague and does not clearly convey the substantial changes made to tool-call handling, including native tool-calling scaffolding, multi-part tool response splitting, and tool_call_id normalization. Resolution: consider a more specific title, such as 'fix: tool_call_id handling and multi-part tool response routing in OpenAI-compatible provider', to better communicate the scope of changes.
✅ Passed checks (4 passed)
  • Description check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Linked Issues check — ✅ Passed. The PR addresses the tool_call_id error from issue #1088 by implementing comprehensive tool-call handling fixes including native tool-calling scaffolding, tool_call_id normalization, and multi-part response routing.
  • Out of Scope Changes check — ✅ Passed. All changes are scoped to the openAICompatibleProvider and directly address tool_call_id error resolution. Model version gating updates for gpt-4.1 and getRequestPreview enhancements are supportive changes to the primary fix.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage; check skipped.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)

1523-1527: Base class signature mismatch requires correction.

The parseFunctionCalls method is overridden in derived classes with an additional fallbackIdPrefix parameter, but the base class in baseProvider.ts:425 retains the original single-parameter signature. This violates the Liskov Substitution Principle and creates type safety issues.

Update required: Add the fallbackIdPrefix parameter to the base class signature in src/main/presenter/llmProviderPresenter/baseProvider.ts:425 to match the derived implementations:

protected parseFunctionCalls(
  response: string,
  fallbackIdPrefix: string = 'tool-call'
): { id: string; type: string; function: { name: string; arguments: string } }[] {

All call sites correctly pass appropriate prefixes or use defaults, but the type hierarchy must be consistent.
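
A minimal sketch of the aligned hierarchy (class names abbreviated and bodies elided; only the signature relationship is the point):

type ParsedFunctionCall = { id: string; type: string; function: { name: string; arguments: string } }

// Base class: signature now includes fallbackIdPrefix with a default
abstract class BaseProviderSketch {
  protected parseFunctionCalls(response: string, fallbackIdPrefix: string = 'tool-call'): ParsedFunctionCall[] {
    void response
    void fallbackIdPrefix
    return []
  }
}

// Derived override: identical parameter list, so substitution stays type-safe
class OpenAICompatibleProviderSketch extends BaseProviderSketch {
  protected override parseFunctionCalls(response: string, fallbackIdPrefix: string = 'tool-call'): ParsedFunctionCall[] {
    return super.parseFunctionCalls(response, fallbackIdPrefix)
  }
}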

🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (4)

332-361: Add error handling for the function_call_record structure.

The pushMockToolResponse function constructs a function_call_record object, but if the arguments parsing fails (caught at line 343), it falls back to an empty object {}. However, the consuming code might expect the arguments to maintain some structure or type information from the original failed parse.

Consider logging a warning when the parse fails to aid debugging:

       argsObj =
         typeof pendingCall.arguments === 'string'
           ? JSON.parse(pendingCall.arguments)
           : pendingCall.arguments
     } catch {
+      console.warn('[formatMessages] Failed to parse tool call arguments:', pendingCall.arguments)
       argsObj = {}
     }

405-412: Consider more robust ID generation for tool calls.

Lines 406 and 431 use tool-${Date.now()}-${Math.random()} to generate fallback IDs. While Math.random() collisions are unlikely in practice, using a more robust approach like a counter or UUID would eliminate any theoretical collision risk, especially in high-throughput scenarios.

-            const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
+            const toolCallId = toolCall.id || `tool-${Date.now()}-${this.toolCallIdCounter++}`

(Here toolCallIdCounter would be a class-level field so it persists across calls; declaring it inline immediately before use would reset it every time and defeat the purpose.)

Alternatively, consider using a UUID library if available in the codebase.
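
A minimal sketch of the UUID route using Node's built-in crypto module (no extra dependency needed; available in Electron's main process — the function name is an assumption for illustration):

import { randomUUID } from 'node:crypto'

// Collision-free fallback: keep the provider-supplied ID when present,
// otherwise mint a UUID-backed one
function ensureToolCallId(providedId?: string): string {
  return providedId ?? `tool-${randomUUID()}`
}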


588-647: Simplify the header repeat detection logic.

The trySplitByHeaderRepeats method has complex nested logic with two fallback approaches (lines 608-625 and 627-644) that are very similar. This increases cognitive complexity and maintenance burden.

Consider refactoring to a single, clearer approach:

private trySplitByHeaderRepeats(content: string, expectedParts: number): string[] | null {
  const headerRegex = /(?:^|\n)([-*]?\s*[A-Za-z][A-Za-z0-9\s,'"-]{0,80}?:)/g
  const matches = [...content.matchAll(headerRegex)]
  
  if (matches.length !== expectedParts) {
    return null
  }
  
  const segments: string[] = []
  for (let i = 0; i < matches.length; i++) {
    const match = matches[i]
    const start = (match.index ?? 0) + (match[0].startsWith('\n') ? 1 : 0)
    const end = i + 1 < matches.length ? (matches[i + 1].index ?? content.length) : content.length
    const segment = content.slice(start, end).trim()
    
    if (!segment) return null
    segments.push(segment)
  }
  
  return segments.length === expectedParts ? segments : null
}

This eliminates the normalization logic and focuses on the simpler case where header count exactly matches expected parts.


1376-1401: Consider documenting incomplete tool call behavior.

The code handles unclosed <function_call> tags by appending '-incomplete' to the tool call ID and intentionally not emitting a tool_call_end event (line 1395). While this is a reasonable approach to signal incomplete calls, it's unclear if downstream consumers are aware of this convention.

Document this behavior:

  1. In the method JSDoc explaining when incomplete tool calls can occur
  2. In any interface definition for tool call events
  3. Consider adding a property to the tool call event to explicitly mark it as incomplete rather than relying on ID suffix parsing

Based on learnings about tool call event requirements (tool_call_start → tool_call_chunk* → tool_call_end).
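
A hedged sketch of option 3, assuming a locally defined event shape (the project's real event interface may differ):

// Hypothetical event type: flag incompleteness explicitly instead of
// encoding it in an '-incomplete' ID suffix that consumers must parse
interface ToolCallEndEvent {
  type: 'tool_call_end'
  tool_call_id: string
  incomplete?: boolean // true when the stream ended before the block closed
}

// An unclosed <function_call> at stream end would then be reported as:
const event: ToolCallEndEvent = {
  type: 'tool_call_end',
  tool_call_id: 'tool-call-0',
  incomplete: true
}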

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 792837c and 7f8c02f.

📒 Files selected for processing (1)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (8 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main-process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Use PascalCase for TypeScript types and classes

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}

📄 CodeRabbit inference engine (AGENTS.md)

Prettier style: single quotes, no semicolons, print width 100; run pnpm run format

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Tool calls must follow tool_call_start → tool_call_chunk* → tool_call_end; tool_call_id is required and stable
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-08-28T05:55:31.482Z
Learnt from: zerob13
Repo: ThinkInAIXYZ/deepchat PR: 804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.482Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (3)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (3)

481-521: Good fallback handling for orphaned tool responses.

The code properly handles the case where a tool response arrives without a matching pending tool call (lines 507-520). The fallback creates a mock function_call_record with name: 'unknown', which allows the conversation to continue gracefully rather than failing.

This defensive approach aligns with the goal of fixing tool_call_id errors.
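
A hedged sketch of that fallback path (field names are inferred from the summary, not copied from the file):

// Sketch: a tool response arrived with no matching pending call; build a
// minimal stand-in record so formatting can proceed instead of throwing
function buildOrphanToolMessage(toolCallId: string, content: string) {
  return {
    functionCallRecord: { id: toolCallId, name: 'unknown', arguments: '{}' },
    message: { role: 'tool' as const, content, tool_call_id: toolCallId }
  }
}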


452-467: Add safety check for undefined toolCallId.

At line 456, toolCallId is assigned from pendingIds[index], but there's no verification that pendingIds[index] exists before using it. If the array is shorter than expected or contains undefined values, this could lead to issues downstream.

Add a guard:

             splitParts.forEach((part, index) => {
               const toolCallId = pendingIds[index]
+              if (!toolCallId) {
+                console.warn('[formatMessages] Missing toolCallId for split part at index', index)
+                return
+              }
               consumeNativeToolCallId(toolCallId)
               result.push({
                 role: 'tool',
                 content: part,
                 tool_call_id: toolCallId
               } as ChatCompletionMessageParam)
             })

Likely an incorrect or invalid review comment.


673-679: No action required — the code is correct as written.

GPT-4.1 is a real model family released April 14, 2025 (including variants: gpt-4.1, gpt-4.1-mini, gpt-4.1-nano) that supports the max_completion_tokens parameter in the chat/completions API. The use of .includes('gpt-4.1') correctly matches all model variants in this family, and is semantically appropriate since these models can have hyphenated suffixes. This approach is intentional and consistent with how gpt-5 is handled elsewhere in the same conditionals. The original review comment misidentified this as an inconsistency.

Likely an incorrect or invalid review comment.
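
For reference, a minimal sketch of the family-matching gate described above (the function name and exact model list are assumptions, not the file's code):

// Sketch: newer OpenAI model families take max_completion_tokens instead of
// max_tokens; substring matching covers hyphenated variants such as
// gpt-4.1-mini and gpt-4.1-nano
function tokenLimitParams(modelId: string, limit: number): Record<string, number> {
  const usesCompletionTokens = modelId.includes('gpt-4.1') || modelId.includes('gpt-5')
  return usesCompletionTokens ? { max_completion_tokens: limit } : { max_tokens: limit }
}

// e.g. tokenLimitParams('gpt-4.1-mini', 4096) → { max_completion_tokens: 4096 }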

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f8c02f and 28cd73c.

📒 Files selected for processing (1)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (8 hunks)
🧰 Additional context used: same path-based instructions (11) and learnings as in the previous review, applied to src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts.

zerob13 merged commit 2295435 into dev on Nov 10, 2025 — 2 checks passed
zerob13 deleted the bugfix/tool-call-muti-resp branch on November 23, 2025


Development

Successfully merging this pull request may close these issues.

[BUG] tool_call_id error after upgrading to 0.4.4 or 0.4.5; not seen in versions before 0.4.4

2 participants