Conversation

@shuv1337 commented Jan 10, 2026

Summary

  • Merge upstream sst/opencode v1.1.11 into shuvcode fork
  • Adds Codex auth support (anomalyco/opencode#7537): GPT-5.2-Codex authentication
  • Fix instance dispose issue
  • Documentation updates for providers and URL-based instructions

Conflicts Resolved

  • bun.lock: accepted upstream
  • packages/opencode/src/plugin/index.ts: merged imports (kept fork bundleLocalPlugin + added CodexAuthPlugin)
  • sdks/vscode/package.json: kept shuvcode branding, updated version to 1.1.11

Fork Features Preserved

  • shuvcode branding in VS Code extension
  • Custom Anthropic auth plugin (opencode-anthropic-auth-shuv)
  • Local plugin bundling for file:// plugins

Greptile Overview

Greptile Summary

This PR merges upstream OpenCode v1.1.11 into the shuvcode fork, adding GPT-5.2 Codex OAuth authentication support while preserving fork-specific customizations.

What Changed

New Codex Authentication (Main Feature)

  • Adds complete OAuth 2.0 + PKCE flow for ChatGPT Pro/Plus users to access GPT-5.2 Codex models
  • New 476-line CodexAuthPlugin handles authorization server, token exchange, and automatic refresh
  • Integrates with existing auth system through plugin architecture
  • Includes 318-line Codex-specific system prompt with agent guidelines
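The OAuth 2.0 + PKCE flow above hinges on a verifier/challenge pair. A minimal sketch of the PKCE step (the helper name is illustrative, not the plugin's actual API):

```typescript
import { createHash, randomBytes } from "node:crypto"

// Hypothetical helper: generate a PKCE code_verifier and its S256 challenge.
// The real plugin's function names and shapes may differ.
function generatePkce(): { verifier: string; challenge: string } {
  // RFC 7636 requires a 43-128 char verifier; 32 random bytes
  // base64url-encoded yields exactly 43 characters.
  const verifier = randomBytes(32).toString("base64url")
  // The challenge is the base64url-encoded SHA-256 of the verifier.
  const challenge = createHash("sha256").update(verifier).digest("base64url")
  return { verifier, challenge }
}
```

The verifier stays local; only the challenge goes to the authorization URL, and the verifier is sent later during the token exchange.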

Integration Points

  • llm.ts: Detects Codex via provider.id === "openai" && auth.type === "oauth", sends system prompts as user messages, adds custom headers
  • plugin/index.ts: Loads CodexAuthPlugin as internal plugin, filters out deprecated opencode-openai-codex-auth plugin
  • UI updates: Auth command and TUI dialogs now show "ChatGPT Plus/Pro or API key" hints for OpenAI

Other Fixes

  • thread.ts: Event source now resubscribes on instance disposal (stability improvement)
  • transform.ts: Excludes gpt-5.2-codex from variant filtering
  • Version bump to 1.1.11 across packages
  • Dependencies updated via bun.lock

Fork Preservation

  • Maintains shuvcode branding in VS Code extension
  • Keeps custom Anthropic auth plugin
  • Preserves local plugin bundling logic

Critical Issues Found

Race Conditions in OAuth Server (packages/opencode/src/plugin/codex.ts)

  1. Global oauthServer and pendingOAuth variables support only ONE concurrent authorization flow
  2. If two users/sessions auth simultaneously, the second overwrites the first's callback handlers (line 293)
  3. State validation (line 237) fails with concurrent flows due to single-value check
  4. Server cleanup (line 273) doesn't reject pending promises or clear state

Error Handling Gaps

  • Token exchange (line 68) and refresh (line 86) functions don't handle non-JSON error responses
  • Will throw unhandled exceptions if auth server returns HTML error pages
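A defensive variant of the token-response handling, read as a sketch rather than the plugin's actual code (`parseTokenResponse` is a hypothetical name):

```typescript
// Hypothetical sketch: parse a token endpoint response without assuming JSON.
// Reading the body once as text means error pages (HTML, plain text) surface
// as a descriptive Error instead of an unhandled JSON.parse exception.
async function parseTokenResponse(response: Response): Promise<unknown> {
  const body = await response.text()
  if (!response.ok) {
    // Include a truncated body excerpt to aid debugging.
    throw new Error(`token request failed: ${response.status} ${body.slice(0, 200)}`)
  }
  try {
    return JSON.parse(body)
  } catch {
    throw new Error(`token endpoint returned non-JSON body (status ${response.status})`)
  }
}
```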

Performance Concerns

  • Token refresh (line 383) blocks API requests synchronously during network calls
  • No mutex to prevent multiple concurrent refresh attempts
  • Could cause timeouts on slow network connections
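A common remedy for the missing mutex is single-flight refresh: concurrent callers share one in-flight promise instead of each issuing their own network call. A sketch under assumed names:

```typescript
// Hypothetical sketch of single-flight token refresh: all concurrent callers
// await the same promise, so only one refresh request is issued at a time.
let refreshInFlight: Promise<string> | undefined

async function getFreshToken(refresh: () => Promise<string>): Promise<string> {
  if (!refreshInFlight) {
    refreshInFlight = refresh().finally(() => {
      // Clear the slot so the next expiry triggers a fresh refresh.
      refreshInFlight = undefined
    })
  }
  return refreshInFlight
}
```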

Comparison with Existing Patterns

The codebase already has a correct OAuth implementation in packages/opencode/src/mcp/oauth-callback.ts which:

  • Uses Map<string, PendingAuth> for concurrent flow tracking (line 55)
  • Validates state via pendingAuths.has(state) (line 115)
  • Properly cleans up all pending auths on shutdown (lines 190-194)

The Codex implementation should follow this proven pattern instead of using global singleton state.
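In outline, the Map-keyed pattern the review points to looks like this (a sketch with hypothetical names, not the MCP module's exact code):

```typescript
// Sketch of state-keyed OAuth callback tracking, modeled on the Map-based
// pattern described above. Names are illustrative.
interface PendingAuth {
  resolve: (code: string) => void
  reject: (err: Error) => void
}

const pendingAuths = new Map<string, PendingAuth>()

// Each flow registers under its own `state`, so concurrent flows never
// overwrite one another's handlers.
function waitForCallback(state: string): Promise<string> {
  return new Promise((resolve, reject) => {
    pendingAuths.set(state, { resolve, reject })
  })
}

// The callback handler looks up the exact flow by state; unknown or replayed
// states simply fail instead of matching a single global value.
function handleCallback(state: string, code: string): boolean {
  const pending = pendingAuths.get(state)
  if (!pending) return false
  pendingAuths.delete(state)
  pending.resolve(code)
  return true
}
```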

Recommendations

Before Merge:

  1. Refactor CodexAuthPlugin to use Map-based state tracking like MCP implementation
  2. Add try-catch around JSON parsing in token exchange/refresh functions
  3. Test concurrent OAuth flows (multiple browser sessions, multiple users)

Post-Merge:

  4. Consider proactive token refresh before expiration
  5. Add mutex/lock for token refresh to prevent redundant requests
  6. Document why system messages are sent as user role for Codex
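For recommendation 4, proactive refresh usually means treating a token as expired slightly before it actually is. A minimal sketch, assuming expiry is stored as a millisecond timestamp:

```typescript
// Hypothetical sketch: refresh once the token is within `skewMs` of expiry,
// so requests never race a token that dies mid-flight.
function needsRefresh(expiresAtMs: number, nowMs: number = Date.now(), skewMs = 60_000): boolean {
  return nowMs >= expiresAtMs - skewMs
}
```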

Confidence Score: 2/5

  • This PR contains critical concurrency bugs in OAuth that will cause auth failures in production
  • The Codex OAuth implementation has fundamental concurrency issues using global singleton state that will break with multiple simultaneous auth flows. Combined with missing error handling in token operations, this creates reliability risks. While the overall integration architecture is sound and other changes are low-risk, the OAuth bugs are severe enough to warrant fixing before merge.
  • packages/opencode/src/plugin/codex.ts requires immediate attention for OAuth state management refactoring before this PR can be safely merged to production

Important Files Changed

File Analysis

  • packages/opencode/src/plugin/codex.ts (1/5): New 476-line Codex OAuth plugin with critical concurrency bugs in global state management
  • packages/opencode/src/plugin/index.ts (4/5): Adds CodexAuthPlugin to INTERNAL_PLUGINS array and filters out old codex plugin
  • packages/opencode/src/session/llm.ts (3/5): Integrates Codex detection and special handling (OAuth auth check, system-as-user messages, custom headers)
  • packages/opencode/src/session/prompt/codex_header.txt (5/5): New 318-line system prompt for Codex models with agent guidelines and instructions
  • packages/opencode/src/cli/cmd/tui/thread.ts (4/5): Fixes event source to resubscribe on instance disposal (important for stability)

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as CLI/Auth
    participant Plugin as CodexAuthPlugin
    participant OAuthSrv as OAuth Server
    participant Browser
    participant OpenAI as OpenAI
    participant Codex as Codex API
    participant Store as Auth Store
    
    User->>CLI: auth login
    CLI->>Plugin: OAuth method selected
    Plugin->>Plugin: Generate PKCE and state
    Plugin->>OAuthSrv: Start server on port 1455
    
    Note over OAuthSrv: ISSUE: Global state<br/>single concurrent flow only
    
    OAuthSrv-->>Plugin: Ready with redirect URI
    Plugin->>Browser: Open auth URL
    Browser->>OpenAI: Authorization request
    
    Plugin->>Plugin: Wait for callback
    Note over Plugin: ISSUE: Global pendingOAuth<br/>overwritten by concurrent flows
    
    User->>Browser: Approve authorization
    OpenAI->>Browser: Redirect with auth code
    Browser->>OAuthSrv: Callback with code and state
    
    OAuthSrv->>OAuthSrv: Validate state
    Note over OAuthSrv: ISSUE: State validation<br/>broken for concurrent flows
    
    OAuthSrv->>OpenAI: Exchange code for tokens
    Note over OpenAI: ISSUE: No JSON error<br/>handling on response
    
    OpenAI-->>OAuthSrv: Return tokens
    OAuthSrv->>Plugin: Resolve promise with tokens
    Plugin->>Store: Save OAuth credentials
    Plugin->>OAuthSrv: Stop server
    Note over OAuthSrv: ISSUE: Incomplete cleanup<br/>pendingOAuth remains
    
    Plugin-->>CLI: Success
    
    rect rgb(200, 220, 250)
    Note over User,Codex: API Request Flow
    User->>Codex: API request
    Codex->>Plugin: Custom fetch wrapper
    Plugin->>Store: Get credentials
    Store-->>Plugin: OAuth creds with expiry
    
    alt Token expired
        Plugin->>OpenAI: Refresh token request
        Note over Plugin: ISSUE: Blocking sync refresh<br/>no concurrency control
        OpenAI-->>Plugin: New tokens
        Plugin->>Store: Update
    end
    
    Plugin->>Plugin: Build request with Bearer token
    Plugin->>Codex: Forward to Codex endpoint
    Codex-->>User: Response
    end

rekram1-node and others added 12 commits January 9, 2026 17:47
Upstream changes:
- feat: codex auth support (anomalyco#7537) - GPT-5.2-Codex authentication
- fix: instance dispose issue
- docs: url based instructions and provider docs

Resolved conflicts:
- bun.lock: accepted upstream
- packages/opencode/src/plugin/index.ts: merged imports (kept fork bundleLocalPlugin + added CodexAuthPlugin)
- sdks/vscode/package.json: kept shuvcode branding, updated version to 1.1.11
Resolved version conflicts - using v1.1.11 versions from upstream merge
@shuv1337 merged commit 0135013 into integration Jan 10, 2026
@greptile-apps bot left a comment

11 files reviewed, 11 comments

Comment on packages/opencode/src/plugin/codex.ts, lines 199-200
let oauthServer: ReturnType<typeof Bun.serve> | undefined
let pendingOAuth: PendingOAuth | undefined

Global mutable state in OAuth server creates critical race conditions and state corruption in concurrent scenarios.

The global oauthServer and pendingOAuth variables are shared across all plugin invocations. When multiple users or sessions attempt OAuth simultaneously:

  1. Race condition: If two auth flows start concurrently, the second call to startOAuthServer() will find oauthServer truthy and return early (line 203-204), but both will try to set pendingOAuth (line 293), causing the first auth flow to lose its callback handlers.

  2. State overwrite: The waitForOAuthCallback function (line 281) overwrites the global pendingOAuth without checking if another auth is pending, silently breaking any in-flight authorization.

  3. Inconsistent cleanup: stopOAuthServer() (line 273) sets oauthServer = undefined but doesn't clean up pendingOAuth, leaving dangling state.

Solution: Follow the pattern used in the MCP OAuth implementation (packages/opencode/src/mcp/oauth-callback.ts lines 53-55) which correctly uses Map<string, PendingAuth> to track multiple concurrent auth flows keyed by state parameter, ensuring proper isolation and cleanup.


Comment on packages/opencode/src/plugin/codex.ts, lines 273-279
function stopOAuthServer() {
  if (oauthServer) {
    oauthServer.stop()
    oauthServer = undefined
    log.info("codex oauth server stopped")
  }
}

Server cleanup is incomplete - pendingOAuth state is not cleared when server stops.

When stopOAuthServer() is called, it properly stops the Bun server and logs, but:

  1. Doesn't reject any pending OAuth promises, leaving them hanging indefinitely
  2. Doesn't clear the pendingOAuth variable, leaking state

Compare with MCP implementation (packages/opencode/src/mcp/oauth-callback.ts lines 183-195) which properly cleans up all pending auths with rejections and clears the Map.
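A self-contained sketch of the cleanup this comment asks for, rejecting anything still pending before the server handle is dropped (names hypothetical):

```typescript
// Sketch: shutdown that rejects all pending OAuth promises and clears state,
// mirroring the cleanup pattern the review describes.
type Pending = { reject: (err: Error) => void }
const pending = new Map<string, Pending>()

function shutdownOAuth(stopServer: () => void): void {
  for (const [, p] of pending) {
    // Waiters get a real error instead of hanging forever.
    p.reject(new Error("oauth server stopped before callback"))
  }
  pending.clear()
  stopServer()
}
```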


Comment on packages/opencode/src/plugin/codex.ts, lines 202-205
async function startOAuthServer(): Promise<{ port: number; redirectUri: string }> {
  if (oauthServer) {
    return { port: OAUTH_PORT, redirectUri: `http://localhost:${OAUTH_PORT}/auth/callback` }
  }

Early return when server exists prevents proper error detection for port conflicts and doesn't account for concurrent initialization.

If oauthServer is already defined (line 203), the function returns early without verifying:

  1. The server is actually listening (could have failed/stopped)
  2. The port is available
  3. Whether another auth flow is racing to initialize

The MCP implementation (packages/opencode/src/mcp/oauth-callback.ts lines 59-66) handles this properly by checking if the port is actually in use before deciding whether to start a new server.


Comment on packages/opencode/src/plugin/codex.ts, lines 68-84
async function exchangeCodeForTokens(code: string, redirectUri: string, pkce: PkceCodes): Promise<TokenResponse> {
  const response = await fetch(`${ISSUER}/oauth/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: redirectUri,
      client_id: CLIENT_ID,
      code_verifier: pkce.verifier,
    }).toString(),
  })
  if (!response.ok) {
    throw new Error(`Token exchange failed: ${response.status}`)
  }
  return response.json()
}

Missing error handling for token exchange failure - the fetch response might not contain valid JSON.

Line 80 checks if response is ok and throws an error, but line 83 unconditionally calls response.json() without try-catch. If the error response is not valid JSON (e.g., HTML error page, network error), this will throw an unhandled exception.

Add try-catch around the json parsing or validate the response content-type before parsing.


Comment on packages/opencode/src/plugin/codex.ts, lines 86-100
async function refreshAccessToken(refreshToken: string): Promise<TokenResponse> {
  const response = await fetch(`${ISSUER}/oauth/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: CLIENT_ID,
    }).toString(),
  })
  if (!response.ok) {
    throw new Error(`Token refresh failed: ${response.status}`)
  }
  return response.json()
}

Missing error handling for refresh token failure - same JSON parsing issue as exchangeCodeForTokens.

Line 96 checks if response is ok and throws, but line 99 unconditionally calls response.json() without error handling. Non-JSON error responses will cause unhandled exceptions during token refresh, which is a critical flow that happens automatically.


Comment on packages/opencode/src/plugin/codex.ts, lines 315-322

// Filter models to only allowed Codex models for OAuth
const allowedModels = new Set(["gpt-5.1-codex-max", "gpt-5.1-codex-mini", "gpt-5.2", "gpt-5.2-codex"])
for (const modelId of Object.keys(provider.models)) {
  if (!allowedModels.has(modelId)) {
    delete provider.models[modelId]
  }
}

Model filtering logic deletes from provider.models during iteration, which is safe but the allowlist approach may exclude valid future models.

The code filters models by deleting entries not in the allowedModels Set. While this works correctly, consider:

  1. Future models: New Codex models added upstream won't be automatically available until this hardcoded list is updated
  2. Maintenance burden: Every new Codex model requires a code change here

Consider using a denylist approach or inferring model availability from the OAuth provider's response instead of maintaining a hardcoded allowlist.
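One way to read the suggestion: filter by a naming pattern rather than an exact-name allowlist, so new Codex models pass through without a code change. A hedged sketch (the predicate is an assumption about model naming, not the project's policy):

```typescript
// Hypothetical sketch: keep models whose ids look Codex-eligible instead of
// maintaining an exact allowlist. The predicate below is an assumption about
// naming conventions, not a guarantee of what OpenAI will ship.
function filterCodexModels(models: Record<string, unknown>): void {
  const isCodexLike = (id: string) => id.includes("codex") || id.startsWith("gpt-5")
  for (const id of Object.keys(models)) {
    if (!isCodexLike(id)) delete models[id]
  }
}
```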



Comment on packages/opencode/src/plugin/codex.ts, lines 324-353
if (!provider.models["gpt-5.2-codex"]) {
  const model = {
    id: "gpt-5.2-codex",
    providerID: "openai",
    api: {
      id: "gpt-5.2-codex",
      url: "https://chatgpt.com/backend-api/codex",
      npm: "@ai-sdk/openai",
    },
    name: "GPT-5.2 Codex",
    capabilities: {
      temperature: false,
      reasoning: true,
      attachment: true,
      toolcall: true,
      input: { text: true, audio: false, image: true, video: false, pdf: false },
      output: { text: true, audio: false, image: false, video: false, pdf: false },
      interleaved: false,
    },
    cost: { input: 0, output: 0, cache: { read: 0, write: 0 } },
    limit: { context: 400000, output: 128000 },
    status: "active" as const,
    options: {},
    headers: {},
    release_date: "2025-12-18",
    variants: {} as Record<string, Record<string, any>>,
  }
  model.variants = ProviderTransform.variants(model)
  provider.models["gpt-5.2-codex"] = model
}

Hardcoded model configuration duplicates provider model definitions and could drift out of sync.

Lines 324-352 manually construct a model configuration for "gpt-5.2-codex" if it doesn't exist. This duplicates model metadata that likely exists elsewhere in the codebase (similar models are defined in provider configs).

Issues:

  1. Maintenance: Model capabilities, limits, and metadata need manual updates here
  2. Consistency: No guarantee this matches other GPT-5.2 model definitions
  3. Release date: Line 348 has hardcoded date "2025-12-18" which may be incorrect

Consider fetching model configuration from a centralized source or inheriting from base GPT-5.2 model definitions.


Comment on packages/opencode/src/session/llm.ts, lines 87-90
const provider = await Provider.getProvider(input.model.providerID)
const auth = await Auth.get(input.model.providerID)
const isCodex = provider.id === "openai" && auth?.type === "oauth"


OAuth detection logic couples authentication type with provider ID, which may be fragile.

Line 89 checks const isCodex = provider.id === "openai" && auth?.type === "oauth" to determine Codex usage. This assumes:

  1. Only OpenAI provider uses OAuth for Codex
  2. All OAuth auth with OpenAI is Codex (what if OpenAI adds OAuth for regular API?)

Consider checking for Codex-specific model IDs (like model.id.includes("codex")) or adding an explicit Codex flag to the auth info rather than inferring from provider+auth-type combination.



Comment on packages/opencode/src/session/llm.ts, lines 102-106
if (isCodex) {
  options.instructions = SystemPrompt.instructions()
  options.store = false
}


Codex-specific options are set without validation that the model supports them.

Lines 103-105 set options.instructions and options.store = false whenever isCodex is true, but there's no verification that:

  1. The model actually supports the instructions field
  2. The store option is applicable to this model variant

If these options are passed to a non-Codex OpenAI model (e.g., due to wrong auth detection), it could cause API errors.


Comment on packages/opencode/src/session/llm.ts, lines 191-205
  messages: [
-   ...system.map(
-     (x): ModelMessage => ({
-       role: "system",
-       content: x,
-     }),
-   ),
+   ...(isCodex
+     ? [
+         {
+           role: "user",
+           content: system.join("\n\n"),
+         } as ModelMessage,
+       ]
+     : system.map(
+         (x): ModelMessage => ({
+           role: "system",
+           content: x,
+         }),
+       )),
    ...input.messages,

System messages sent as user messages for Codex changes the conversation structure and may affect model behavior.

Lines 192-198 send system prompts as a single user message for Codex (joining with \n\n), while non-Codex uses proper system role messages (lines 199-204). This architectural difference means:

  1. Context window usage: Combining into one user message vs multiple system messages may be cached differently
  2. Model interpretation: System role has semantic meaning; user role does not
  3. Debugging difficulty: Harder to trace which part of the combined message comes from which system prompt component

Verify this is intentional Codex API behavior and document why this conversion is necessary.

