
Conversation


@zerob13 zerob13 commented Dec 17, 2025

Summary

Testing

  • pnpm run build
  • pnpm run format
  • pnpm run lint

Codex Task

Summary by CodeRabbit

Release Notes

  • Dependencies

    • Updated @modelcontextprotocol/sdk to version 1.25.1
  • Improvements

    • Enhanced image generation model compatibility with flexible identification support
    • Strengthened tool-call parsing with improved error handling and recovery



coderabbitai bot commented Dec 17, 2025

Walkthrough

The Model Context Protocol SDK dependency was bumped from 1.22.0 to 1.25.1. Image-generation model detection across the OpenAI providers was refactored from static list-based checks to a predicate-based approach that supports both exact matches and prefix matching. Function-call parsing was enhanced with fallback mechanisms.

Changes

Cohort / File(s) | Summary

Dependency Update
  package.json: Updated @modelcontextprotocol/sdk from ^1.22.0 to ^1.25.1

Image-Generation Model Detection Refactor
  src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts: Replaced rigid list-based model checks with a predicate-based approach: introduced the isOpenAIImageGenerationModel() helper and the OPENAI_IMAGE_GENERATION_MODEL_PREFIXES constant, supporting both exact ID matches and prefix-based matching. Enhanced parseFunctionCalls() with robust JSON parsing via fallback mechanisms and handling for multiple argument structures.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify prefix-based model detection handles all expected image-generation model identifiers correctly
  • Review parseFunctionCalls() edge cases, particularly JSON parsing fallback scenarios and argument stringification behavior (a sketch of the fallback pattern follows this list)
  • Validate SDK version bump compatibility and verify no breaking changes impact existing functionality
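The JSON-parsing fallback mentioned in the checklist above is the kind of pattern sketched below. This is a hypothetical illustration under assumed names, not the PR's actual parseFunctionCalls() implementation.

// Hypothetical sketch of an argument-parsing fallback; function and field
// names are assumptions, not the repository's actual code
function parseToolCallArguments(raw: string): Record<string, unknown> {
  try {
    // Fast path: arguments arrive as well-formed JSON
    return JSON.parse(raw)
  } catch (error) {
    // Fallback: some models wrap JSON in fences or stray prose, so try to
    // recover the outermost object literal before giving up
    const match = raw.match(/\{[\s\S]*\}/)
    if (match) {
      try {
        return JSON.parse(match[0])
      } catch {
        // fall through to the last-resort branch below
      }
    }
    // Last resort: keep the raw text so the tool call is not silently dropped
    console.warn('Failed to parse tool-call arguments as JSON', error)
    return { raw }
  }
}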

Poem

🐰 A hop, skip, and bump—the SDK's grown tall,
Model detection now stretches with prefixes and all,
JSON parsing's more nimble, with helpers so spry,
Image generators found with a predicate's eye,
Flexible matching hops onward—hooray! 🚀

Pre-merge checks

✅ Passed checks (3 passed)
Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.
Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The title accurately describes the main change (MCP SDK version bump to 1.25.1), but significantly understates the scope: the PR also includes substantial refactoring of image-generation model detection and tool-call parsing logic across two provider files.


@zerob13 zerob13 changed the title from "Bump MCP SDK to 1.25.1" to "chore: Bump MCP SDK to 1.25.1" on Dec 17, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)

42-46: Duplicate: Extract duplicated image generation detection logic.

This is the same code duplication issue flagged in openAICompatibleProvider.ts (lines 50-54). The image generation model detection logic should be extracted to a shared location to ensure consistency and maintainability.


📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e2ede5b and 6c62fb6.

📒 Files selected for processing (3)
  • package.json (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (16)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the coreStream method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (src/main/presenter/llmProviderPresenter/providers/*.ts), the coreStream(messages, modelId, temperature, maxTokens) method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using convertToProviderTools and including them in the API request; for Providers without native function call support, prepare messages using prepareFunctionCallPrompt before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and yield standardized LLMCoreStreamEvent objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType (a minimal sketch of this shape follows the file list below)

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
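As a reference, a minimal TypeScript sketch of the event shape this rule describes might look like the following. The field names come from the rule text; the usage token-count field names are assumptions, and the repository's actual declaration may differ.

// Sketch of the standardized stream event described by the rule above
type LLMCoreStreamEvent = {
  type:
    | 'text'
    | 'reasoning'
    | 'tool_call_start'
    | 'tool_call_chunk'
    | 'tool_call_end'
    | 'error'
    | 'usage'
    | 'stop'
    | 'image_data'
  content?: string // text events
  reasoning_content?: string // reasoning events
  tool_call_id?: string
  tool_call_name?: string
  tool_call_arguments_chunk?: string // streamed argument fragments
  tool_call_arguments_complete?: string // complete argument payload
  error_message?: string
  usage?: { prompt_tokens?: number; completion_tokens?: number; total_tokens?: number } // field names assumed
  stop_reason?: 'tool_use' | 'max_tokens' | 'stop_sequence' | 'error' | 'complete'
  image_data?: { data: string; mimeType: string } // data is Base64-encoded
}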
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
package.json

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

package.json: Node.js >= 22 required
pnpm >= 9 required

Files:

  • package.json
🧠 Learnings (13)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-11-25T05:26:15.929Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-11-25T05:26:15.929Z
Learning: Applies to package.json : Node.js >= 22 required

Applied to files:

  • package.json
📚 Learning: 2025-11-25T05:28:20.513Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.513Z
Learning: Require Node ≥ 20.19 and pnpm ≥ 10.11 (pnpm only, not npm) as the project toolchain

Applied to files:

  • package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (4)
package.json (1)

71-71: The package version 1.25.1 exists and has no reported security vulnerabilities.

Version 1.25.1 of @modelcontextprotocol/sdk is the latest version and was published 12 hours ago. This is a minor version bump from 1.22.0 and maintains backward compatibility. No known security advisories were found for this version.

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)

1525-1525: LGTM! Predicate-based model detection improves flexibility.

The change from OPENAI_IMAGE_GENERATION_MODELS.includes(modelId) to isOpenAIImageGenerationModel(modelId) correctly applies the new predicate-based detection, enabling support for both exact model ID matches and prefix-based model families like dall-e-* and gpt-image-*.



1572-1576: LGTM! Backward-compatible signature improvement.

The addition of the optional fallbackIdPrefix parameter with a default value maintains backward compatibility while adding flexibility for generating context-specific tool call IDs. The implementation correctly uses this parameter (e.g., line 1706) to create unique identifiers when needed.
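For illustration, a minimal sketch of this pattern under assumed names follows; the repository's actual parseFunctionCalls signature and id format are not reproduced here.

// Hypothetical helper showing how an optional prefix keeps a signature
// backward compatible; the name, default value, and id format are assumptions
function makeToolCallId(index: number, fallbackIdPrefix = 'tool_call'): string {
  // Existing callers omit the second argument and keep the old default;
  // new call sites pass a context-specific prefix for unique ids
  return `${fallbackIdPrefix}-${Date.now()}-${index}`
}

// Old call sites keep working unchanged
const legacyId = makeToolCallId(0)
// New call sites can scope ids to their parsing context
const scopedId = makeToolCallId(0, 'openai-responses')
console.log(legacyId, scopedId)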


src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)

304-304: LGTM! Consistent predicate-based model detection.

The change to isOpenAIImageGenerationModel(modelId) correctly applies the new predicate-based detection, consistent with the implementation in openAICompatibleProvider.ts. This enables flexible support for both exact matches and prefix-based model families.


Comment on lines +50 to +54
const OPENAI_IMAGE_GENERATION_MODELS = ['gpt-4o-all', 'gpt-4o-image']
const OPENAI_IMAGE_GENERATION_MODEL_PREFIXES = ['dall-e-', 'gpt-image-']
const isOpenAIImageGenerationModel = (modelId: string): boolean =>
  OPENAI_IMAGE_GENERATION_MODELS.includes(modelId) ||
  OPENAI_IMAGE_GENERATION_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix))

🛠️ Refactor suggestion | 🟠 Major

Extract duplicated image generation detection logic to shared location.

The constants OPENAI_IMAGE_GENERATION_MODELS, OPENAI_IMAGE_GENERATION_MODEL_PREFIXES, and the predicate isOpenAIImageGenerationModel are duplicated in both openAICompatibleProvider.ts (lines 50-54) and openAIResponsesProvider.ts (lines 42-46). This violates the DRY principle and creates a maintenance burden.

Consider extracting this logic to:

  • A shared utility module (e.g., src/main/presenter/llmProviderPresenter/utils/modelDetection.ts), or
  • The BaseLLMProvider class as a protected static method

This ensures consistency and reduces the risk of divergence between the two implementations.
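One possible shape for the suggested extraction, reusing the constants and predicate shown in the diff above and assuming the utils path proposed in this comment:

// src/main/presenter/llmProviderPresenter/utils/modelDetection.ts (suggested location)
export const OPENAI_IMAGE_GENERATION_MODELS = ['gpt-4o-all', 'gpt-4o-image']
export const OPENAI_IMAGE_GENERATION_MODEL_PREFIXES = ['dall-e-', 'gpt-image-']

// Single shared predicate: exact ID match or known prefix match
export const isOpenAIImageGenerationModel = (modelId: string): boolean =>
  OPENAI_IMAGE_GENERATION_MODELS.includes(modelId) ||
  OPENAI_IMAGE_GENERATION_MODEL_PREFIXES.some((prefix) => modelId.startsWith(prefix))

Both providers would then drop their local copies and import the shared predicate; the relative import path below is an assumption that depends on the final module location:

// in openAICompatibleProvider.ts and openAIResponsesProvider.ts
import { isOpenAIImageGenerationModel } from '../utils/modelDetection'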


🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 50-54, the image-generation detection constants and the
isOpenAIImageGenerationModel predicate are duplicated elsewhere; extract
OPENAI_IMAGE_GENERATION_MODELS, OPENAI_IMAGE_GENERATION_MODEL_PREFIXES, and
isOpenAIImageGenerationModel into a single shared location (preferably
src/main/presenter/llmProviderPresenter/utils/modelDetection.ts or as a
protected static on BaseLLMProvider), export the predicate from that new
module/class, then replace the local definitions in this file (and the duplicate
in openAIResponsesProvider.ts) with imports calling the shared predicate, and
run tests/build to ensure no breakage.

@zerob13 zerob13 merged commit afd4e84 into dev Dec 17, 2025
2 checks passed
@zerob13 zerob13 deleted the codex/add-support-for-dall-e-and-gpt-image-ids branch December 18, 2025 07:16


Development

Successfully merging this pull request may close these issues.

[Feature] mcp: support 2025-11-25 version protocol
