Conversation


@zerob13 zerob13 commented Nov 13, 2025

Summary by CodeRabbit

  • New Features
    • Expose provider listing and current-provider selection, plus model & embedding operations (list, add/remove custom models, get embeddings/dimensions).
    • Ollama operations: list, inspect, pull and show running models with progress updates.
  • Refactor
    • Reorganized provider and stream handling for more reliable concurrency, rate-limit enforcement, and tool-call streaming.
    • Improved ModelScope MCP server synchronization with better error handling.
  • Style
    • Ollama model refresh now skips disabled providers to avoid unnecessary calls.


coderabbitai bot commented Nov 13, 2025

Walkthrough

Major refactor of LLMProviderPresenter: decomposes provider logic into modular managers (ProviderInstanceManager, RateLimitManager, ModelManager, OllamaManager, EmbeddingManager, AgentLoopHandler, ToolCallProcessor, ModelScopeSyncManager) and delegates provider lifecycle, streaming, rate-limits, embeddings, model ops, tool-call processing, and MCP sync to those managers.
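The decomposition described above can be sketched as follows. This is a minimal illustration of the delegation pattern, not the PR's actual code: the manager class names come from the walkthrough, but the method bodies, the `IConfigPresenter` shape, and the placeholder internals are assumptions.

```typescript
interface IConfigPresenter {
  getProviders(): { id: string; enabled: boolean }[]
}

class ProviderInstanceManager {
  readonly instances = new Map<string, object>()
  constructor(private config: IConfigPresenter) {}
  init(): void {
    for (const p of this.config.getProviders()) {
      this.instances.set(p.id, {}) // placeholder for a real provider instance
    }
  }
}

class RateLimitManager {
  initialized = false
  initializeProviderRateLimitConfigs(): void {
    this.initialized = true // placeholder: read per-provider rateLimit config
  }
}

class LLMProviderPresenter {
  readonly instances: ProviderInstanceManager
  readonly rateLimits = new RateLimitManager()

  constructor(config: IConfigPresenter) {
    this.instances = new ProviderInstanceManager(config)
  }

  // The presenter only wires and forwards; the logic lives in the managers.
  init(): void {
    this.instances.init()
    this.rateLimits.initializeProviderRateLimitConfigs()
  }
}
```

The payoff of this shape is that each manager can be unit-tested against a stub config, while the presenter keeps its existing public API and simply forwards calls.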

Changes

Cohort / File(s) Change Summary
Manager Infrastructure
src/main/presenter/llmProviderPresenter/managers/*
Adds 8 manager classes: ProviderInstanceManager, RateLimitManager, ModelManager, OllamaManager, EmbeddingManager, AgentLoopHandler, ToolCallProcessor, ModelScopeSyncManager implementing provider lifecycle, rate-limiting, model/embedding ops, Ollama flows, stream/agent loop orchestration, tool-call execution, and ModelScope MCP sync.
Core Presenter Refactor
src/main/presenter/llmProviderPresenter/index.ts
Rewires presenter to initialize and delegate to managers; removes in-class provider/storage/rate-limit logic; exposes getProviders() and getCurrentProvider() and forwards existing public APIs to managers.
Provider Implementations
src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
Expands ModelScope MCP types and API: new ModelScopeMcpServerResponse/ModelScopeMcpServer, syncMcpServers signature updated to accept ModelScopeMcpSyncOptions, and convertMcpServerToConfig now returns MCPServerConfig with richer mapping logic.
Types
src/main/presenter/llmProviderPresenter/types.ts
Adds types supporting managers: RateLimitConfig, QueueItem, ProviderRateLimitState, StreamState, ProviderConfig.
Renderer Settings
src/renderer/src/stores/settings.ts
Limits Ollama model refreshes to enabled providers and clears Ollama data for disabled providers on provider changes and pull completion.
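The renderer-side behavior described in the last row can be sketched like this. The function and parameter names are illustrative assumptions, not the store's actual code; the point is the enabled/disabled split.

```typescript
interface OllamaProvider {
  id: string
  enabled: boolean
}

// Refresh Ollama model lists only for enabled providers, and clear
// cached data for disabled ones so stale models do not linger.
function refreshOllamaModels(
  providers: OllamaProvider[],
  cache: Map<string, string[]>,
  fetchModels: (id: string) => string[]
): void {
  for (const provider of providers) {
    if (provider.enabled) {
      cache.set(provider.id, fetchModels(provider.id))
    } else {
      cache.delete(provider.id) // skip the network call entirely for disabled providers
    }
  }
}
```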

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Presenter as LLMProviderPresenter
    participant PIM as ProviderInstanceManager
    participant RLM as RateLimitManager
    participant ALH as AgentLoopHandler
    participant Provider as BaseLLMProvider
    participant Config as IConfigPresenter

    User->>Presenter: init()
    Presenter->>PIM: init()
    PIM->>Config: getProviders()
    PIM->>Provider: create instances
    Presenter->>RLM: initializeProviderRateLimitConfigs()

    User->>Presenter: startStreamCompletion(...)
    Presenter->>RLM: executeWithRateLimit(providerId)
    alt allowed immediately
        RLM->>ALH: allow execution
    else queued
        RLM->>ALH: execute when dequeued
    end
    ALH->>Provider: coreStream(...)
    Provider-->>ALH: stream events (text, tool_call, usage, images, errors)
    ALH->>Presenter: yield LLMAgentEvent(s)
    ALH->>Config: append assistant content / persist updates
    ALH-->>User: end event + usage
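The `executeWithRateLimit` branch in the diagram (run immediately when under the QPS budget, otherwise queue) could look like the following simplified sketch. The field names and timer strategy are assumptions for illustration; the PR's actual queue processing is more involved.

```typescript
type QueueItem = { resolve: () => void; reject: (e: Error) => void }

class RateLimitManager {
  private lastRequestAt = 0
  private queue: QueueItem[] = []

  constructor(private qpsLimit: number) {}

  async executeWithRateLimit<T>(task: () => Promise<T>): Promise<T> {
    const minIntervalMs = 1000 / this.qpsLimit
    const now = Date.now()
    if (now - this.lastRequestAt >= minIntervalMs) {
      this.lastRequestAt = now
      return task() // allowed immediately
    }
    // Queued: wait until dequeued by the timer, then run.
    // (Simplified FIFO drain; one timer per queued item.)
    await new Promise<void>((resolve, reject) => {
      this.queue.push({ resolve, reject })
      setTimeout(() => this.queue.shift()?.resolve(), minIntervalMs - (now - this.lastRequestAt))
    })
    this.lastRequestAt = Date.now()
    return task()
  }
}
```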

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45–60 minutes

Areas requiring extra attention:

  • agentLoopHandler.ts — async generator, tool-call lifecycle, concurrency and error handling
  • providerInstanceManager.ts — instance lifecycle, updates, and resource cleanup
  • rateLimitManager.ts — queue processing, timing and correctness of QPS/backoff
  • modelScopeSyncManager.ts & modelscopeProvider.ts — MCP response handling, conversion to config
  • cross-manager wiring in index.ts — ensure public API parity and initialization order

Suggested labels

codex

Poem

🐰 I hopped through code to split the load,
Managers now tend each busy road.
Streams and tools dance in tidy rows,
Models hum where the data flows;
A rabbit cheers — modular growth bestowed.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The pull request title accurately describes the main change: refactoring the LLMProviderPresenter into modular manager components and reorganizing the project structure, which is the primary objective evident throughout the changeset.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch refactor/code-file-size



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts (1)

325-331: Honor the “disabled by default” contract for synced servers

This block sets disable: false while the accompanying comment says “Default to disabled for safety.” As written, every synced ModelScope server becomes active immediately, which is the opposite of the stated intent and can unexpectedly enable unreviewed remote endpoints. Flip the flag so freshly synced servers stay disabled until the user opts in.

-      disable: false, // Default to disabled for safety
+      disable: true, // Default to disabled for safety
🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts (4)

15-28: Consider error handling in initialization loop.

If setProviderRateLimitConfig throws an error for one provider, the entire initialization fails. Consider wrapping the per-provider initialization in try-catch to ensure partial initialization succeeds and only logs errors for problematic providers.

Apply this diff to add per-provider error handling:

 initializeProviderRateLimitConfigs(): void {
   const providers = this.configPresenter.getProviders()
   for (const provider of providers) {
-    if (provider.rateLimit) {
-      this.setProviderRateLimitConfig(provider.id, {
-        enabled: provider.rateLimit.enabled,
-        qpsLimit: provider.rateLimit.qpsLimit
-      })
-    }
+    try {
+      if (provider.rateLimit) {
+        this.setProviderRateLimitConfig(provider.id, {
+          enabled: provider.rateLimit.enabled,
+          qpsLimit: provider.rateLimit.qpsLimit
+        })
+      }
+    } catch (error) {
+      console.error(`[RateLimitManager] Failed to initialize rate limit config for ${provider.id}:`, error)
+    }
   }
   console.log(
     `[RateLimitManager] Initialized rate limit configs for ${providers.length} providers`
   )
 }

30-58: Consolidate duplicate getProviderById calls and clarify missing provider behavior.

Lines 42 and 46 both call getProviderById. If the provider doesn't exist, the update silently skips persistence without logging or throwing an error. Consider:

  1. Consolidating the two calls into one
  2. Logging a warning when the provider is not found
  3. Clarifying whether this silent failure is intentional

Apply this diff to consolidate calls and add logging:

 updateProviderRateLimit(providerId: string, enabled: boolean, qpsLimit: number): void {
   let finalConfig = { enabled, qpsLimit }
   if (
     finalConfig.qpsLimit !== undefined &&
     (finalConfig.qpsLimit <= 0 || !isFinite(finalConfig.qpsLimit))
   ) {
     if (finalConfig.enabled === true) {
       console.warn(
         `[RateLimitManager] Invalid qpsLimit (${finalConfig.qpsLimit}) for provider ${providerId}, disabling rate limit`
       )
       finalConfig.enabled = false
     }
     const provider = this.configPresenter.getProviderById(providerId)
     finalConfig.qpsLimit = provider?.rateLimit?.qpsLimit ?? 0.1
   }
   this.setProviderRateLimitConfig(providerId, finalConfig)
-  const provider = this.configPresenter.getProviderById(providerId)
+  
+  const provider = this.configPresenter.getProviderById(providerId)
   if (provider) {
     const updatedProvider: LLM_PROVIDER = {
       ...provider,
       rateLimit: {
         enabled: finalConfig.enabled,
         qpsLimit: finalConfig.qpsLimit
       }
     }
     this.configPresenter.setProviderById(providerId, updatedProvider)
     console.log(`[RateLimitManager] Updated persistent config for ${providerId}`)
+  } else {
+    console.warn(`[RateLimitManager] Provider ${providerId} not found, skipping persistence`)
   }
 }

240-254: Preserve error details when rejecting queued requests.

When processRateLimitQueue encounters an error (line 240), all queued items are rejected with a generic message 'Rate limit processing failed' (line 248). The original error is logged but not propagated to callers. Consider including error details in the rejection to help callers diagnose issues.

Apply this diff to include error context:

   } catch (error) {
     console.error(
       `[RateLimitManager] Error processing rate limit queue for ${providerId}:`,
       error
     )
+    const errorMessage = error instanceof Error ? error.message : String(error)
     while (state.queue.length > 0) {
       const queueItem = state.queue.shift()
       if (queueItem) {
-        queueItem.reject(new Error('Rate limit processing failed'))
+        queueItem.reject(new Error(`Rate limit processing failed: ${errorMessage}`))
       }
     }
   } finally {

270-277: getCurrentQps returns a binary value, not actual queries per second.

The method returns either 1 or 0 based on whether a request occurred within the rate limit interval. This is a binary indicator, not a true QPS measurement. The name getCurrentQps is misleading. Consider:

  1. Renaming to isWithinRateLimit() or hasRecentRequest() for clarity
  2. If true QPS is needed for monitoring/events, implementing a sliding window calculation

Based on learnings, rate_limit events should include meaningful currentQps values for monitoring.
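The sliding-window measurement suggested in point 2 could look like this minimal sketch. The class and method names are illustrative, not the PR's code; it assumes timestamps are recorded per request and the window defaults to one second.

```typescript
// Sliding-window QPS: count requests inside the window instead of
// returning a binary 0/1 indicator.
class QpsTracker {
  private timestamps: number[] = []

  constructor(private windowMs = 1000) {}

  recordRequest(now = Date.now()): void {
    this.timestamps.push(now)
  }

  // Requests per second over the window, computed from recent timestamps.
  getCurrentQps(now = Date.now()): number {
    this.timestamps = this.timestamps.filter((t) => now - t <= this.windowMs)
    return this.timestamps.length / (this.windowMs / 1000)
  }
}
```

A tracker like this would let `rate_limit` events carry a meaningful `currentQps` value for monitoring rather than a flag.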

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ce66ab3 and 4097de2.

📒 Files selected for processing (12)
  • src/main/presenter/llmProviderPresenter/index.ts (9 hunks)
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts (4 hunks)
  • src/main/presenter/llmProviderPresenter/types.ts (1 hunks)
  • src/renderer/src/stores/settings.ts (4 hunks)
🧰 Additional context used
📓 Path-based instructions (21)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/pinia-best-practices.mdc)

src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}: Use modules to organize related state and actions
Implement proper state persistence for maintaining data across sessions
Use getters for computed state properties
Utilize actions for side effects and asynchronous operations
Keep the store focused on global state, not component-specific data

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place renderer-process code in src/renderer

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Use Pinia for frontend state management (do not introduce alternative state libraries)

Files:

  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Use PascalCase for TypeScript types and classes

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/renderer/src/**

📄 CodeRabbit inference engine (AGENTS.md)

Place Vue 3 app source under src/renderer/src (components, stores, views, i18n, lib)

Files:

  • src/renderer/src/stores/settings.ts
src/renderer/src/**/*.{vue,ts}

📄 CodeRabbit inference engine (AGENTS.md)

All user-facing strings must use vue-i18n ($t/keys) rather than hardcoded literals

Files:

  • src/renderer/src/stores/settings.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}

📄 CodeRabbit inference engine (AGENTS.md)

Prettier style: single quotes, no semicolons, print width 100; run pnpm run format

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/renderer/src/stores/settings.ts
  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place main-process code in src/main

Files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)

Files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
src/main/presenter/llmProviderPresenter/index.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
🧠 Learnings (27)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-10-14T08:02:59.495Z
Learning: Applies to src/main/presenter/LLMProvider/**/*.ts : Implement the two-layer LLM provider (Agent Loop + Provider) under src/main/presenter/LLMProvider
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/mcpPresenter/inMemoryServers/*.ts : Implement new MCP tools under src/main/presenter/mcpPresenter/inMemoryServers/

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/mcpPresenter/index.ts : Register new MCP tools in src/main/presenter/mcpPresenter/index.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-10-14T08:02:59.495Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-10-14T08:02:59.495Z
Learning: Applies to src/main/presenter/LLMProvider/**/*.ts : Implement the two-layer LLM provider (Agent Loop + Provider) under src/main/presenter/LLMProvider

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-10-14T08:02:59.495Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-10-14T08:02:59.495Z
Learning: Applies to src/main/presenter/**/*.ts : Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
📚 Learning: 2025-09-04T11:03:30.184Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider implementation must expose a coreStream method following the standardized event interface

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield image data events in the standardized format when applicable.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should send standardized `STREAM_EVENTS` (`RESPONSE`, `END`, `ERROR`) to the frontend via `eventBus`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should aggregate and yield usage events as part of the standardized stream.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/streamEvents.ts : Standardized stream events should conform to the `LLMCoreStreamEvent` interface, ideally defined in a shared file such as `src/main/presenter/llmProviderPresenter/streamEvents.ts`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/types.ts
📚 Learning: 2025-09-04T11:03:30.184Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Send a rate_limit event when limits are approached with providerId, qpsLimit, currentQps, queueLength, estimatedWaitTime?; do not block the event channel

Applied to files:

  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield stop events with appropriate `stop_reason` in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/types.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts
🧬 Code graph analysis (10)
src/renderer/src/stores/settings.ts (1)
scripts/fetch-provider-db.mjs (1)
  • providers (29-29)
src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts (2)
src/shared/types/presenters/legacy.presenters.d.ts (2)
  • IConfigPresenter (381-551)
  • MCPServerConfig (1128-1142)
src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts (2)
  • ModelscopeProvider (48-333)
  • ModelScopeMcpServer (25-46)
src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (2)
  • ChatMessage (1389-1389)
  • ModelConfig (132-150)
src/main/presenter/llmProviderPresenter/managers/modelManager.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • IConfigPresenter (381-551)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (5)
src/shared/types/presenters/legacy.presenters.d.ts (2)
  • IConfigPresenter (381-551)
  • ChatMessage (1389-1389)
src/main/presenter/llmProviderPresenter/types.ts (1)
  • StreamState (22-28)
src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts (1)
  • RateLimitManager (6-288)
src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1)
  • ToolCallProcessor (31-370)
src/main/presenter/index.ts (1)
  • presenter (223-223)
src/main/presenter/llmProviderPresenter/providers/modelscopeProvider.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • MCPServerConfig (1128-1142)
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (4)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • IConfigPresenter (381-551)
src/main/presenter/llmProviderPresenter/types.ts (1)
  • StreamState (22-28)
src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts (1)
  • RateLimitManager (6-288)
src/shared/provider-operations.ts (2)
  • ProviderBatchUpdate (62-67)
  • ProviderChange (23-34)
src/main/presenter/llmProviderPresenter/index.ts (11)
src/shared/types/presenters/llmprovider.presenter.d.ts (6)
  • ILlmProviderPresenter (129-215)
  • LLM_PROVIDER (44-59)
  • MODEL_META (27-42)
  • OllamaModel (93-113)
  • ModelScopeMcpSyncOptions (115-118)
  • ModelScopeMcpSyncResult (120-127)
src/main/presenter/llmProviderPresenter/types.ts (2)
  • StreamState (22-28)
  • ProviderConfig (30-32)
src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts (1)
  • RateLimitManager (6-288)
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (1)
  • ProviderInstanceManager (50-381)
src/main/presenter/llmProviderPresenter/managers/modelManager.ts (1)
  • ModelManager (10-80)
src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts (1)
  • OllamaManager (12-77)
src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts (1)
  • EmbeddingManager (8-39)
src/main/presenter/llmProviderPresenter/managers/agentLoopHandler.ts (1)
  • AgentLoopHandler (16-453)
src/main/presenter/llmProviderPresenter/managers/modelScopeSyncManager.ts (1)
  • ModelScopeSyncManager (15-152)
src/main/eventbus.ts (1)
  • eventBus (151-151)
src/main/events.ts (1)
  • CONFIG_EVENTS (12-41)
src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts (1)
src/main/eventbus.ts (1)
  • eventBus (151-151)
src/main/presenter/llmProviderPresenter/managers/rateLimitManager.ts (3)
src/main/presenter/llmProviderPresenter/types.ts (3)
  • ProviderRateLimitState (15-20)
  • RateLimitConfig (3-6)
  • QueueItem (8-13)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • IConfigPresenter (381-551)
src/main/eventbus.ts (1)
  • eventBus (151-151)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (9)
src/renderer/src/stores/settings.ts (1)

378-383: Nice guard on local refreshes.

Limiting the initial Ollama refresh to enabled providers keeps the renderer from hitting local services unnecessarily and keeps the event handlers’ semantics consistent. Looks good.
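The guard described above can be sketched as a simple filter. This is a minimal illustration; the `Provider` shape and the `'ollama'` apiType value are assumptions, not the real settings-store types.

```typescript
// Minimal sketch of the enabled-provider guard (shapes are assumptions)
interface Provider {
  id: string
  enable: boolean
  apiType: string
}

function providersToRefresh(providers: Provider[]): Provider[] {
  // Only enabled Ollama providers should trigger a local model refresh
  return providers.filter((p) => p.enable && p.apiType === 'ollama')
}

const sample: Provider[] = [
  { id: 'ollama-local', enable: true, apiType: 'ollama' },
  { id: 'ollama-off', enable: false, apiType: 'ollama' },
  { id: 'openai', enable: true, apiType: 'openai' }
]
console.log(providersToRefresh(sample).map((p) => p.id)) // ['ollama-local']
```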

src/main/presenter/llmProviderPresenter/types.ts (1)

22-32: Type additions align with the manager split.

The new StreamState and rate-limit shapes are a clean way to share state across the managers. No issues spotted.

src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1)

124-147: Permission-handling flow looks solid.

Halting the loop and yielding a dedicated permission-required event keeps the agent loop from racing ahead before the UI responds. This integrates nicely with the new manager design.
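The halt-and-yield pattern can be sketched as an async generator that stops after emitting a permission event. The event names below mirror the standardized tool-call events, but the exact shapes and the `permission_required` payload are assumptions for illustration.

```typescript
// Hypothetical sketch of halting the loop until the UI grants permission
type ToolCallEvent =
  | { type: 'tool_call_start'; name: string }
  | { type: 'permission_required'; name: string }
  | { type: 'tool_call_end'; name: string; result: string }

async function* processToolCall(
  name: string,
  hasPermission: (tool: string) => boolean
): AsyncGenerator<ToolCallEvent> {
  yield { type: 'tool_call_start', name }
  if (!hasPermission(name)) {
    // Halt here: the agent loop must not race ahead before the UI responds
    yield { type: 'permission_required', name }
    return
  }
  yield { type: 'tool_call_end', name, result: 'ok' }
}

async function collect(gen: AsyncGenerator<ToolCallEvent>): Promise<ToolCallEvent[]> {
  const events: ToolCallEvent[] = []
  for await (const e of gen) events.push(e)
  return events
}
```

Because the generator returns after the permission event, the caller can persist state and resume a fresh round once the user responds.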

src/main/presenter/llmProviderPresenter/managers/ollamaManager.ts (1)

25-76: Good defensive wrapper around Ollama provider access.

Catching provider lookups and short-circuiting with empty arrays keeps the renderer-facing API resilient while still surfacing real errors for info calls. Nicely done.
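The defensive-wrapper idea reduces to catching the provider lookup and returning an empty result. This sketch uses simplified method names and return types; OllamaManager's real API may differ.

```typescript
// Sketch of the defensive wrapper (method names and types are assumptions)
interface OllamaLikeProvider {
  listModels(): string[]
}

class OllamaManagerSketch {
  constructor(private getProviderInstance: (id: string) => OllamaLikeProvider) {}

  listOllamaModels(providerId: string): string[] {
    try {
      return this.getProviderInstance(providerId).listModels()
    } catch (error) {
      // Listing is best-effort: a missing or disabled provider yields an
      // empty list instead of breaking the renderer-facing API
      console.error('Ollama model listing failed:', error)
      return []
    }
  }
}
```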

src/main/presenter/llmProviderPresenter/index.ts (5)

44-91: Clean manager-based architecture with proper dependency injection.

The refactoring successfully extracts responsibilities into dedicated managers while maintaining clean dependencies:

  • RateLimitManager has minimal dependencies (only configPresenter)
  • ProviderInstanceManager receives shared state (activeStreams, rateLimitManager) via options
  • Other managers receive focused dependencies (getProviderInstance, configPresenter)
  • Initialization order is correct (managers created before init() calls)
  • Event handling properly delegates to managers

This aligns well with the learnings about the two-layer architecture (Agent Loop + Provider).

Based on learnings
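The initialization order above can be sketched as follows. The constructor signatures here are simplified assumptions, not the real manager APIs; the point is only the ordering (rate limiter first, then managers receiving shared state via options).

```typescript
// Minimal sketch of the dependency wiring (signatures are assumptions)
type StreamState = { conversationId: string }

class RateLimitManagerSketch {
  constructor(readonly configPresenter: object) {}
}

class ProviderInstanceManagerSketch {
  constructor(
    readonly options: {
      activeStreams: Map<string, StreamState>
      rateLimitManager: RateLimitManagerSketch
    }
  ) {}
}

class PresenterSketch {
  private activeStreams = new Map<string, StreamState>()
  readonly rateLimitManager: RateLimitManagerSketch
  readonly providerManager: ProviderInstanceManagerSketch

  constructor(configPresenter: object) {
    // Rate limiting depends only on config, so it is created first
    this.rateLimitManager = new RateLimitManagerSketch(configPresenter)
    // The instance manager then receives shared state via its options
    this.providerManager = new ProviderInstanceManagerSketch({
      activeStreams: this.activeStreams,
      rateLimitManager: this.rateLimitManager
    })
  }
}
```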


93-106: LGTM: New public API methods provide safe provider access.

The new getProviders() and getCurrentProvider() methods properly delegate to the manager layer and handle errors gracefully. getCurrentProvider() returns null instead of throwing when the provider is not found, which is appropriate for a getter method.
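The null-returning getter contract can be sketched as below. The provider shape is a trimmed-down assumption of the real LLM_PROVIDER type.

```typescript
// Sketch of the null-returning getter contract (provider shape is an assumption)
interface ProviderInfo {
  id: string
  name: string
}

class ProviderRegistrySketch {
  private providers = new Map<string, ProviderInfo>()
  private currentId: string | null = null

  getProviders(): ProviderInfo[] {
    return [...this.providers.values()]
  }

  getCurrentProvider(): ProviderInfo | null {
    // A getter should return null rather than throw when nothing matches
    if (this.currentId === null) return null
    return this.providers.get(this.currentId) ?? null
  }

  setCurrentProvider(p: ProviderInfo): void {
    this.providers.set(p.id, p)
    this.currentId = p.id
  }
}
```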


108-175: LGTM: Clean delegation to specialized managers.

Methods properly delegate to their respective managers while maintaining interface contracts. The defensive approach of stopping streams before provider changes (lines 113-114, 127) prevents resource leaks and state inconsistencies.


204-234: LGTM: Agent loop properly delegated to specialized handler.

The startStreamCompletion method now delegates to AgentLoopHandler, which encapsulates the multi-round conversation loop, tool execution, and event streaming. This maintains the presenter's role as the entry point while improving modularity and testability.

Based on learnings


367-445: LGTM: Feature-specific methods properly routed to specialized managers.

Custom model operations (ModelManager), Ollama operations (OllamaManager), embedding operations (EmbeddingManager), and ModelScope MCP sync (ModelScopeSyncManager) are all cleanly delegated. The consistent delegation pattern throughout the file improves maintainability and testability.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts (2)

11-27: Error propagation improved; consider structured logging.

The error handling now correctly preserves the original error context, addressing the previous review concern. However, the logging could be enhanced with a structured format that includes timestamps, error codes, and context, per the coding guidelines.

Consider this enhancement for better observability:

     } catch (error) {
-      console.error(`Embedding failed for providerId: ${providerId}, modelId: ${modelId}:`, error)
+      logger.error('EMBEDDING_REQUEST_FAILED', {
+        providerId,
+        modelId,
+        textCount: texts.length,
+        error: error instanceof Error ? error.message : String(error),
+        stack: error instanceof Error ? error.stack : undefined
+      })

29-46: Verify the different error handling pattern.

getDimensions returns an error object with a fallback value, while getEmbeddings throws. This asymmetry might confuse callers. Additionally, returning dimensions: 0 on error could be misinterpreted by callers who don't check errorMsg.

Please confirm whether:

  1. The different error handling approaches are intentional (based on how these methods are consumed)
  2. Callers reliably check the errorMsg field before using dimensions: 0

If dimensions are critical metadata, consider throwing instead:

async getDimensions(
  providerId: string,
  modelId: string
): Promise<LLM_EMBEDDING_ATTRS> {
  try {
    const provider = this.options.getProviderInstance(providerId)
    return await provider.getDimensions(modelId)
  } catch (error) {
    console.error(`Failed to get embedding dimensions for model ${modelId}:`, error)
    throw error instanceof Error 
      ? error 
      : new Error(`Failed to get dimensions: ${String(error)}`)
  }
}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4097de2 and 4e7c3ec.

📒 Files selected for processing (2)
  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main/presenter/llmProviderPresenter/managers/modelManager.ts
🧰 Additional context used
📓 Path-based instructions (10)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully on failure
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place main-process code in src/main

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Use PascalCase for TypeScript types and classes

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}

📄 CodeRabbit inference engine (AGENTS.md)

Prettier style: single quotes, no semicolons, print width 100; run pnpm run format

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-10-14T08:02:59.495Z
Learning: Applies to src/main/presenter/LLMProvider/**/*.ts : Implement the two-layer LLM provider (Agent Loop + Provider) under src/main/presenter/LLMProvider
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry
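The `coreStream` contract these learnings describe (one streaming API request per round, standardized events yielded from an async generator, no multi-round tool-loop logic inside the provider) can be sketched roughly as follows; the event names and shapes here are assumptions for illustration, not DeepChat's actual types:

```typescript
// Assumed standardized event union; the real definitions live in the provider layer
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'tool_call'; name: string; args: string }
  | { type: 'usage'; promptTokens: number; completionTokens: number }
  | { type: 'error'; message: string }
  | { type: 'stop'; reason: 'complete' | 'tool_use' }

// One streaming request per conversation round; the Agent loop in index.ts
// decides whether another round is needed, not this generator
async function* coreStream(chunks: string[]): AsyncGenerator<StreamEvent> {
  try {
    for (const chunk of chunks) {
      // Provider-specific chunk parsing would happen here
      yield { type: 'text', content: chunk }
    }
    yield { type: 'usage', promptTokens: 10, completionTokens: chunks.length }
    yield { type: 'stop', reason: 'complete' }
  } catch (err) {
    // Errors are surfaced as standardized events, not thrown across the loop
    yield { type: 'error', message: err instanceof Error ? err.message : String(err) }
  }
}
```

The main Agent loop can then consume any provider uniformly with `for await (const event of coreStream(...))`, buffering text and dispatching tool calls without knowing provider-specific wire formats.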
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield image data events in the standardized format when applicable.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield error events in the standardized format when errors occur.

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
📚 Learning: 2025-07-21T01:45:54.229Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:45:54.229Z
Learning: Applies to src/{main,renderer}/**/*.ts : Implement proper error handling and logging for debugging

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (1)
src/main/presenter/llmProviderPresenter/managers/embeddingManager.ts (1)

1-6: LGTM! Clean interface design.

The imports and interface definition are well-structured. The dependency injection pattern via EmbeddingManagerOptions provides good testability and decoupling.
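As a rough illustration of the dependency-injection pattern the review praises, an options object like the following (the names are hypothetical, not the actual `embeddingManager.ts` definitions) lets the presenter supply real providers in production while tests swap in fakes:

```typescript
// Assumed minimal provider surface for embedding operations
interface EmbeddingProvider {
  getEmbeddings(texts: string[]): Promise<number[][]>
}

// Dependencies are injected via an options object rather than imported directly
interface EmbeddingManagerOptions {
  getProvider: (providerId: string) => EmbeddingProvider | undefined
}

class EmbeddingManager {
  constructor(private readonly options: EmbeddingManagerOptions) {}

  // Derive the embedding dimension by embedding a single sample text
  async getDimensions(providerId: string, sample = 'ping'): Promise<number> {
    const provider = this.options.getProvider(providerId)
    if (!provider) throw new Error(`Unknown provider: ${providerId}`)
    const [vector] = await provider.getEmbeddings([sample])
    return vector.length
  }
}
```

Because the manager only sees the `EmbeddingManagerOptions` surface, a unit test can pass a stub `getProvider` and exercise the logic without touching any real provider instance.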

@zerob13 zerob13 merged commit 44f023b into dev Nov 13, 2025
2 checks passed
@zerob13 zerob13 deleted the refactor/code-file-size branch November 23, 2025 13:52
@coderabbitai coderabbitai bot mentioned this pull request Dec 1, 2025