
Conversation

@zerob13 (Collaborator) commented Dec 1, 2025

finished #1149

Summary by CodeRabbit

  • New Features
    • Support for models' reasoning content: reasoning is preserved and attached to assistant replies when applicable.
  • Improvements
    • Better propagation of reasoning through message flows, including when tool calls occur.
    • Improved variant selection with a fallback to ensure usable assistant content is chosen.
  • Bug Fixes
    • More graceful handling of concurrent streaming limits — emits an error event instead of crashing.


@coderabbitai bot (Contributor) commented Dec 1, 2025

Walkthrough

Accumulates streaming reasoning_content in AgentLoopHandler, conditionally attaches it to final assistant messages for models that require reasoning, preserves reasoning_content in OpenAI-compatible message formatting, and adds variant-selection helpers and minor refactors in thread prompt construction and stream generation.

Changes

Cohort / File(s) | Summary

• Agent loop — reasoning accumulation & flow
  src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  Adds a private requiresReasoningField(modelId: string) helper, initializes and accumulates currentReasoning from provider 'reasoning' chunks, conditionally attaches reasoning_content to the final assistant message when the model requires it and tool calls exist, and changes concurrent-stream error emission (see the sketch after this table).

• Provider — reasoning propagation & function-call parsing
  src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  Preserves and propagates reasoning_content from assistant messages into formatted ChatCompletion payloads (native and mock paths), carries reasoning_content into base messages, and updates the parseFunctionCalls(response: string, fallbackIdPrefix: string = 'tool-call') signature to accept an optional fallbackIdPrefix.

• Stream generation — variant selection & fallback
  src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
  Adds hasUsableAssistantContent() and applyVariantToAssistant() helpers, replaces ad-hoc variant selection with them, and adds a fallback pass that picks a usable variant when none was explicitly chosen.

• Prompt builder — minor refactor of assistant message construction
  src/main/presenter/threadPresenter/utils/promptBuilder.ts
  Refactors addContextMessages to build assistant message objects in named constants before pushing (no public API changes), preparing for reasoning injection/propagation in prompt construction.
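
To make the agent-loop changes concrete, here is a minimal sketch of the accumulation and conditional attachment described above. It is hypothetical: the repository's LLMCoreStreamEvent carries more fields, and the real handler also tracks tool calls and usage.

```ts
type LLMCoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; reasoning_content: string }

interface AssistantMessage {
  role: 'assistant'
  content: string
  reasoning_content?: string
}

// Buffer text and reasoning separately while draining one coreStream round
async function collectAssistantMessage(
  stream: AsyncIterable<LLMCoreStreamEvent>,
  requiresReasoning: boolean
): Promise<AssistantMessage> {
  let currentContent = ''
  let currentReasoning = ''
  for await (const event of stream) {
    if (event.type === 'text') currentContent += event.content
    else if (event.type === 'reasoning') currentReasoning += event.reasoning_content
  }
  const message: AssistantMessage = { role: 'assistant', content: currentContent }
  // Attach reasoning only for models that expect the field back on later turns
  if (requiresReasoning && currentReasoning) message.reasoning_content = currentReasoning
  return message
}
```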

Sequence Diagram(s)

sequenceDiagram
    participant Agent as AgentLoopHandler
    participant Provider as OpenAICompatibleProvider
    participant Prompt as PromptBuilder
    participant Stream as StreamGenerationHandler

    Note over Agent,Provider: Streaming reasoning accumulation & propagation

    Provider->>Agent: stream chunks (including 'reasoning' chunks)
    Agent->>Agent: accumulate currentReasoning
    Agent->>Agent: check requiresReasoningField(modelId)
    Agent->>Provider: final assistant message payload (attach reasoning_content if applicable)

    Provider->>Prompt: formatted messages (reasoning_content preserved)
    Prompt->>Prompt: optional injectReasoningForToolCalls logic
    Prompt->>Stream: context/messages (with reasoning_content if injected)

    Stream->>Stream: applyVariantToAssistant() / hasUsableAssistantContent()
    Stream->>Stream: assemble final message (may include reasoning_content)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Focus review on: agentLoopHandler.ts reasoning accumulation and final-message assembly conditions.
  • Verify openAICompatibleProvider.ts branches correctly preserve reasoning_content across native/mock paths and that parseFunctionCalls default behavior remains compatible.
  • Confirm streamGenerationHandler.ts variant fallback covers edge cases (empty/malformed variants).
  • Quick pass on promptBuilder.ts refactor to ensure no behavioral changes introduced.

Poem

🐰
I nibble logic, stitch each thread,
Reason hops from chunk to head,
Variants twirl, a fallback song,
Tool calls hum and move along,
The rabbit grins — reasoning belongs.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

• Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

• Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
• Title Check (✅ Passed): The title 'feat: add deepseek v3.2 thinking' accurately summarizes the main objective of the pull request, which is to add support for DeepSeek v3.2 reasoning/thinking capabilities.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (1)

409-415: Type assertion bypasses type safety for reasoning_content.

The use of (assistantMessage as any).reasoning_content works but bypasses TypeScript's type checking. Since reasoning_content is being introduced across the codebase, consider extending the ChatMessage type to include this optional field.

If the ChatMessage type in @shared/presenter doesn't already include reasoning_content, consider adding it:

interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content?: string | ChatMessageContent[]
  tool_calls?: ...
  tool_call_id?: string
  reasoning_content?: string  // Add this optional field
}

This would allow removing the type assertion and provide better compile-time safety.
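
With the field declared on the type, the call site can drop the assertion, e.g.:

```ts
// Assuming the extended ChatMessage above; no `as any` cast needed
assistantMessage.reasoning_content = currentReasoning
```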

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb2935a and 2c49ffa.

📒 Files selected for processing (4)
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (4 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (4 hunks)
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts (2 hunks)
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts (11 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the coreStream method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (src/main/presenter/llmProviderPresenter/providers/*.ts), the coreStream(messages, modelId, temperature, maxTokens) method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using convertToProviderTools and including them in the API request; for Providers without native function call support, prepare messages using prepareFunctionCallPrompt before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and yield standardized LLMCoreStreamEvent objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Reasoning events: `reasoning` is optional; if provided, ensure it contains the complete chain
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:39.200Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Reasoning events: `reasoning` is optional; if provided, ensure it contains the complete chain

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `stop` events by checking `stop_reason`: if `'tool_use'`, add the buffered assistant message and prepare for the next loop iteration; otherwise, add the final assistant message and exit the loop

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
  • src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • ChatMessage (1573-1573)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (9)
src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts (2)

364-409: LGTM! Well-structured variant handling helpers.

The new helper functions cleanly encapsulate variant selection logic:

  • hasUsableAssistantContent correctly identifies usable blocks, including the new reasoning_content type
  • applyVariantToAssistant provides robust fallback logic by iterating backwards to find usable variants (see the sketch below)
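
A minimal sketch of what such helpers might look like; the block and variant shapes here are assumptions based on this review, not the repository's actual types:

```ts
interface MessageBlock {
  type: 'content' | 'reasoning_content' | 'tool_call' | 'error'
  content?: string
}

// A variant is usable if it carries any non-empty text or reasoning
function hasUsableAssistantContent(blocks: MessageBlock[]): boolean {
  return blocks.some(
    (b) =>
      (b.type === 'content' || b.type === 'reasoning_content') &&
      typeof b.content === 'string' &&
      b.content.trim().length > 0
  )
}

// Iterate backwards so the most recent usable variant wins
function applyVariantToAssistant(variants: MessageBlock[][]): MessageBlock[] | undefined {
  for (let i = variants.length - 1; i >= 0; i--) {
    if (hasUsableAssistantContent(variants[i])) return variants[i]
  }
  return undefined
}
```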

456-462: Good fallback mechanism for empty assistant content.

The fallback pass ensures that if an assistant message has no usable content and no variant was explicitly selected, the system automatically selects a usable variant. This prevents empty messages from propagating through the conversation context.

src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (1)

132-133: LGTM!

Proper parallel initialization of currentContent and currentReasoning at the start of each agent loop iteration.

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)

403-430: Consistent reasoning_content propagation in formatMessages.

The implementation correctly preserves reasoning_content through both native function-call and mock function-call paths:

  • Extracts reasoning_content once per assistant message with tool_calls
  • Uses conditional spread to avoid adding undefined properties (see the sketch below)
  • Applied consistently to both paths
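
The conditional-spread pattern referred to here looks roughly like this (message shape assumed):

```ts
interface AssistantPayload {
  role: 'assistant'
  content: string | null
  tool_calls?: unknown[]
  reasoning_content?: string
}

function formatAssistant(msg: AssistantPayload): AssistantPayload {
  return {
    role: 'assistant',
    content: msg.content,
    ...(msg.tool_calls ? { tool_calls: msg.tool_calls } : {}),
    // Spread only when present so the payload never carries reasoning_content: undefined
    ...(msg.reasoning_content ? { reasoning_content: msg.reasoning_content } : {})
  }
}
```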

530-536: LGTM!

Correctly handles reasoning_content for assistant messages without tool_calls, ensuring consistent propagation across all assistant message types.

src/main/presenter/threadPresenter/utils/promptBuilder.ts (4)

45-46: LGTM! Clean parameter addition with safe default.

The new injectReasoningForToolCalls optional parameter is properly typed and defaults to false, maintaining backward compatibility with existing callers.

Also applies to: 75-77


195-200: Intentional: reasoning injection disabled for tool-call continuation flows.

The hardcoded false for injectReasoningForToolCalls in buildContinueToolCallContext is appropriate since this flow handles permission-granted tool execution, not initial reasoning context.


515-523: LGTM! Correct reasoning_content accumulation.

The logic properly:

  1. Gates collection on injectReasoningForToolCalls flag
  2. Handles multiple reasoning blocks by concatenating with newlines
  3. Uses the ternary pattern to avoid empty-string initialization issues (sketched below)
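
In isolation, the ternary accumulation pattern is (names assumed):

```ts
// Concatenate reasoning blocks with newlines; the ternary avoids seeding
// with '' and producing a stray leading newline
function joinReasoningBlocks(blocks: string[]): string | undefined {
  let reasoning: string | undefined
  for (const block of blocks) {
    reasoning = reasoning ? `${reasoning}\n${block}` : block
  }
  return reasoning
}
```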

531-558: LGTM! Clean optional property attachment pattern.

The implementation correctly:

  1. Uses a TypeScript intersection type for type safety (see the sketch below)
  2. Conditionally attaches reasoning_content only when present
  3. Handles the edge case where only reasoning_content exists (no text content, no tool_calls) on line 549
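
The intersection-type attachment pattern might look like this (ChatMessage simplified to its relevant fields):

```ts
interface BaseChatMessage {
  role: 'assistant'
  content: string
}

function withReasoning(
  message: BaseChatMessage,
  reasoning?: string
): BaseChatMessage & { reasoning_content?: string } {
  const result: BaseChatMessage & { reasoning_content?: string } = { ...message }
  // Attach only when reasoning is present; the base shape is unchanged otherwise
  if (reasoning) result.reasoning_content = reasoning
  return result
}
```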

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c49ffa and a03eae4.

📒 Files selected for processing (2)
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (4 hunks)
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/presenter/threadPresenter/utils/promptBuilder.ts
🧰 Additional context used
📓 Path-based instructions (14)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `stop` events by checking `stop_reason`: if `'tool_use'`, add the buffered assistant message and prepare for the next loop iteration; otherwise, add the final assistant message and exit the loop

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:39.200Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Reasoning events: `reasoning` is optional; if provided, ensure it contains the complete chain

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (3)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (3)

27-30: LGTM! Past review concern addressed.

The use of .includes() correctly handles both kimi-k2-thinking and moonshot/kimi-k2-thinking model ID formats, addressing the previous review comment.
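
For illustration, the matching can be as simple as the following; the actual model-ID list in requiresReasoningField may be longer (the PR targets DeepSeek v3.2 thinking as well):

```ts
// includes() matches both bare and provider-prefixed IDs,
// e.g. 'kimi-k2-thinking' and 'moonshot/kimi-k2-thinking'
function requiresReasoningField(modelId: string): boolean {
  return modelId.includes('kimi-k2-thinking')
}
```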


133-133: LGTM!

The initialization of currentReasoning follows the same pattern as currentContent and correctly prepares for accumulating reasoning content during the stream.


202-202: LGTM!

The accumulation of reasoning_content correctly mirrors the pattern used for currentContent, ensuring that reasoning data is properly aggregated during streaming.

@zerob13 merged commit 623f453 into dev on Dec 1, 2025 (2 checks passed)
@zerob13 deleted the feat/add-deepseek-v3.2-thinking branch on December 13, 2025 at 06:59
