feat: add deepseek v3.2 thinking #1151
Conversation
Walkthrough: Accumulates streaming reasoning content in the agent loop and propagates `reasoning_content` through provider message formatting, prompt building, and stream generation.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as AgentLoopHandler
    participant Provider as OpenAICompatibleProvider
    participant Prompt as PromptBuilder
    participant Stream as StreamGenerationHandler
    Note over Agent,Provider: Streaming reasoning accumulation & propagation
    Provider->>Agent: stream chunks (including 'reasoning' chunks)
    Agent->>Agent: accumulate currentReasoning
    Agent->>Agent: check requiresReasoningField(modelId)
    Agent->>Provider: final assistant message payload (attach reasoning_content if applicable)
    Provider->>Prompt: formatted messages (reasoning_content preserved)
    Prompt->>Prompt: optional injectReasoningForToolCalls logic
    Prompt->>Stream: context/messages (with reasoning_content if injected)
    Stream->>Stream: applyVariantToAssistant() / hasUsableAssistantContent()
    Stream->>Stream: assemble final message (may include reasoning_content)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ Failed checks (1 warning) · ✅ Passed checks (2 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (1)
409-415: Type assertion bypasses type safety for `reasoning_content`.

The use of `(assistantMessage as any).reasoning_content` works but bypasses TypeScript's type checking. Since `reasoning_content` is being introduced across the codebase, consider extending the `ChatMessage` type to include this optional field.

If the `ChatMessage` type in `@shared/presenter` doesn't already include `reasoning_content`, consider adding it:

```ts
interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content?: string | ChatMessageContent[]
  tool_calls?: ...
  tool_call_id?: string
  reasoning_content?: string // Add this optional field
}
```

This would allow removing the type assertion and provide better compile-time safety.
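With the field declared, the call site can drop the cast. A minimal sketch (the names mirror this PR's diffs, but the function body is an assumption, not the actual code):

```ts
// Hypothetical call-site sketch: once reasoning_content exists on ChatMessage,
// the assignment type-checks without `as any`
declare function requiresReasoningField(modelId: string): boolean

function attachReasoning(
  message: { reasoning_content?: string },
  modelId: string,
  reasoning: string
): void {
  if (requiresReasoningField(modelId) && reasoning) {
    message.reasoning_content = reasoning // previously `(message as any).reasoning_content`
  }
}
```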
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (4 hunks), src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (4 hunks), src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts (2 hunks), src/main/presenter/threadPresenter/utils/promptBuilder.ts (11 hunks)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from `src/main/eventbus.ts` for main-to-renderer communication, broadcasting events via `mainWindow.webContents.send()`
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
src/main/**/*.ts: Electron main process code belongs in `src/main/` with presenters in `presenter/` (Window/Tab/Thread/Mcp/Config/LLMProvider) and `eventbus.ts` for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/presenter/llmProviderPresenter/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the `src` directory
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in `src/main`
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Reasoning events: `reasoning` is optional; if provided, ensure it contains the complete chain
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:39.200Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Reasoning events: `reasoning` is optional; if provided, ensure it contains the complete chain
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `stop` events by checking `stop_reason`: if `'tool_use'`, add the buffered assistant message and prepare for the next loop iteration; otherwise, add the final assistant message and exit the loop
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts, src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
ChatMessage (1573-1573)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (9)
src/main/presenter/threadPresenter/handlers/streamGenerationHandler.ts (2)
364-409: LGTM! Well-structured variant handling helpers. The new helper functions cleanly encapsulate variant selection logic:
- `hasUsableAssistantContent` correctly identifies usable blocks, including the new `reasoning_content` type
- `applyVariantToAssistant` provides robust fallback logic by iterating backwards to find usable variants
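For orientation, a minimal sketch of what such helpers could look like; the `AssistantBlock` shape and the fallback order are assumptions, not the actual implementation:

```ts
// Hedged sketch of the variant-selection helpers described above
interface AssistantBlock {
  type: 'content' | 'reasoning_content' | 'tool_call' | 'error'
  content?: string
}

// A message is usable if any block carries non-empty text or reasoning content
function hasUsableAssistantContent(blocks: AssistantBlock[]): boolean {
  return blocks.some(
    (b) => (b.type === 'content' || b.type === 'reasoning_content') && !!b.content?.trim()
  )
}

// Fall back to the newest usable variant when the main blocks are empty
function applyVariantToAssistant(
  main: AssistantBlock[],
  variants: AssistantBlock[][]
): AssistantBlock[] {
  if (hasUsableAssistantContent(main)) return main
  for (let i = variants.length - 1; i >= 0; i--) {
    if (hasUsableAssistantContent(variants[i])) return variants[i]
  }
  return main
}
```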
456-462: Good fallback mechanism for empty assistant content. The fallback pass ensures that if an assistant message has no usable content and no variant was explicitly selected, the system automatically selects a usable variant. This prevents empty messages from propagating through the conversation context.
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (1)
132-133: LGTM! Proper parallel initialization of `currentContent` and `currentReasoning` at the start of each agent loop iteration.

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)
403-430: Consistent `reasoning_content` propagation in `formatMessages`. The implementation correctly preserves `reasoning_content` through both native function-call and mock function-call paths:
- Extracts `reasoning_content` once per assistant message with `tool_calls`
- Uses conditional spread to avoid adding undefined properties
- Applied consistently to both paths
530-536: LGTM! Correctly handles `reasoning_content` for assistant messages without `tool_calls`, ensuring consistent propagation across all assistant message types.
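A hedged sketch of the conditional-spread pattern both comments describe, with simplified message shapes (not the provider's actual code):

```ts
// Sketch only: internal message shape is assumed for illustration
interface InternalAssistantMessage {
  content?: string
  tool_calls?: { id: string; function: { name: string; arguments: string } }[]
  reasoning_content?: string
}

function toOpenAIMessage(msg: InternalAssistantMessage) {
  return {
    role: 'assistant' as const,
    content: msg.content ?? null,
    // Conditional spread: each key is added only when defined, so the API
    // payload never contains `reasoning_content: undefined`
    ...(msg.tool_calls ? { tool_calls: msg.tool_calls } : {}),
    ...(msg.reasoning_content ? { reasoning_content: msg.reasoning_content } : {})
  }
}
```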
src/main/presenter/threadPresenter/utils/promptBuilder.ts (4)
45-46: LGTM! Clean parameter addition with safe default. The new `injectReasoningForToolCalls` optional parameter is properly typed and defaults to `false`, maintaining backward compatibility with existing callers.
Also applies to: 75-77
195-200: Intentional: reasoning injection disabled for tool-call continuation flows. The hardcoded `false` for `injectReasoningForToolCalls` in `buildContinueToolCallContext` is appropriate since this flow handles permission-granted tool execution, not initial reasoning context.
515-523: LGTM! Correct `reasoning_content` accumulation. The logic properly:
- Gates collection on the `injectReasoningForToolCalls` flag
- Handles multiple reasoning blocks by concatenating with newlines
- Uses the ternary pattern to avoid empty string initialization issues
531-558: LGTM! Clean optional property attachment pattern. The implementation correctly:
- Uses a TypeScript intersection type for type safety
- Conditionally attaches `reasoning_content` only when present
- Handles the edge case where only `reasoning_content` exists (no text content, no tool_calls) on line 549
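A sketch of the intersection-type pattern; the local `ChatMessage` stand-in and function name are hypothetical:

```ts
// Minimal local stand-in for the shared ChatMessage type
interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content?: string
  tool_calls?: unknown[]
}

// Intersection type keeps the extra field type-safe without widening ChatMessage
type AssistantWithReasoning = ChatMessage & { reasoning_content?: string }

function assembleAssistant(text: string, reasoning: string): AssistantWithReasoning {
  const message: AssistantWithReasoning = { role: 'assistant', content: text }
  if (reasoning) {
    // Attach only when present; the message is still emitted even when `text`
    // is empty, covering the reasoning-only edge case
    message.reasoning_content = reasoning
  }
  return message
}
```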
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (4 hunks), src/main/presenter/threadPresenter/utils/promptBuilder.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/main/presenter/threadPresenter/utils/promptBuilder.ts
🧰 Additional context used
📓 Path-based instructions (14)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from `src/main/eventbus.ts` for main-to-renderer communication, broadcasting events via `mainWindow.webContents.send()`
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
src/main/**/*.ts: Electron main process code belongs in `src/main/` with presenters in `presenter/` (Window/Tab/Thread/Mcp/Config/LLMProvider) and `eventbus.ts` for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/presenter/llmProviderPresenter/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the `src` directory
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in `src/main`
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `stop` events by checking `stop_reason`: if `'tool_use'`, add the buffered assistant message and prepare for the next loop iteration; otherwise, add the final assistant message and exit the loop
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:39.200Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Reasoning events: `reasoning` is optional; if provided, ensure it contains the complete chain
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (3)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (3)
27-30: LGTM! Past review concern addressed. The use of `.includes()` correctly handles both `kimi-k2-thinking` and `moonshot/kimi-k2-thinking` model ID formats, addressing the previous review comment.
133-133: LGTM! The initialization of `currentReasoning` follows the same pattern as `currentContent` and correctly prepares for accumulating reasoning content during the stream.
202-202: LGTM! The accumulation of `reasoning_content` correctly mirrors the pattern used for `currentContent`, ensuring that reasoning data is properly aggregated during streaming.
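Taken together, a minimal sketch of the pattern these comments describe; the model list beyond `kimi-k2-thinking` and the event shape are assumptions:

```ts
// Illustrative only: the PR matches at least 'kimi-k2-thinking' per the comment above
const REASONING_FIELD_MODELS = ['kimi-k2-thinking']

function requiresReasoningField(modelId: string): boolean {
  // Substring match covers vendor-prefixed IDs like 'moonshot/kimi-k2-thinking'
  return REASONING_FIELD_MODELS.some((id) => modelId.includes(id))
}

interface StreamEvent {
  type: 'text' | 'reasoning'
  content?: string
  reasoning_content?: string
}

// Inside the agent loop, reasoning chunks accumulate alongside text
let currentContent = ''
let currentReasoning = ''

function onStreamEvent(event: StreamEvent): void {
  if (event.type === 'text' && event.content) currentContent += event.content
  if (event.type === 'reasoning' && event.reasoning_content) {
    currentReasoning += event.reasoning_content
  }
}
```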
finished #1149