sync: merge upstream v1.1.11 #289
Conversation
Upstream changes:

- feat: codex auth support (anomalyco#7537) - GPT-5.2-Codex authentication
- fix: instance dispose issue
- docs: url based instructions and provider docs

Resolved conflicts:

- bun.lock: accepted upstream
- packages/opencode/src/plugin/index.ts: merged imports (kept fork bundleLocalPlugin + added CodexAuthPlugin)
- sdks/vscode/package.json: kept shuvcode branding, updated version to 1.1.11
Resolved version conflicts - using v1.1.11 versions from upstream merge
11 files reviewed, 11 comments
```ts
let oauthServer: ReturnType<typeof Bun.serve> | undefined
let pendingOAuth: PendingOAuth | undefined
```
Global mutable state in OAuth server creates critical race conditions and state corruption in concurrent scenarios.

The global `oauthServer` and `pendingOAuth` variables are shared across all plugin invocations. When multiple users or sessions attempt OAuth simultaneously:

1. **Race condition**: If two auth flows start concurrently, the second call to `startOAuthServer()` will find `oauthServer` truthy and return early (line 203-204), but both will try to set `pendingOAuth` (line 293), causing the first auth flow to lose its callback handlers.
2. **State overwrite**: The `waitForOAuthCallback` function (line 281) overwrites the global `pendingOAuth` without checking if another auth is pending, silently breaking any in-flight authorization.
3. **Inconsistent cleanup**: `stopOAuthServer()` (line 273) sets `oauthServer = undefined` but doesn't clean up `pendingOAuth`, leaving dangling state.

**Solution**: Follow the pattern used in the MCP OAuth implementation (`packages/opencode/src/mcp/oauth-callback.ts` lines 53-55) which correctly uses `Map<string, PendingAuth>` to track multiple concurrent auth flows keyed by state parameter, ensuring proper isolation and cleanup.
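The state-keyed tracking this comment recommends can be sketched roughly as follows. This is an illustrative outline, not the actual `mcp/oauth-callback.ts` code; the function names mirror the ones discussed above only loosely.

```typescript
// Sketch: track each auth flow under its own `state` parameter so
// concurrent flows do not overwrite one another's callback handlers.
type PendingOAuth = {
  resolve: (code: string) => void
  reject: (err: Error) => void
}

const pendingOAuth = new Map<string, PendingOAuth>()

// Each flow registers its own entry keyed by state.
function waitForOAuthCallback(state: string): Promise<string> {
  return new Promise((resolve, reject) => {
    pendingOAuth.set(state, { resolve, reject })
  })
}

// The callback handler looks up the flow by state, consumes it, and
// resolves only that flow's promise.
function handleCallback(state: string, code: string): boolean {
  const pending = pendingOAuth.get(state)
  if (!pending) return false // unknown or already-consumed state
  pendingOAuth.delete(state)
  pending.resolve(code)
  return true
}
```

Because entries are keyed and deleted on use, a second flow with a different `state` cannot clobber the first, and a replayed callback for an already-consumed state is rejected.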
```ts
function stopOAuthServer() {
  if (oauthServer) {
    oauthServer.stop()
    oauthServer = undefined
    log.info("codex oauth server stopped")
  }
}
```
Server cleanup is incomplete - `pendingOAuth` state is not cleared when server stops.

When `stopOAuthServer()` is called, it properly stops the Bun server and logs, but:

1. Doesn't reject any pending OAuth promises, leaving them hanging indefinitely
2. Doesn't clear the `pendingOAuth` variable, leaking state

Compare with MCP implementation (`packages/opencode/src/mcp/oauth-callback.ts` lines 183-195) which properly cleans up all pending auths with rejections and clears the Map.
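A minimal sketch of the teardown being asked for, assuming the Map-based tracking above; the names are illustrative, not the real implementation:

```typescript
// Sketch: on shutdown, reject every in-flight flow so its awaiting
// promise settles, then clear the map so no state dangles.
type Pending = { reject: (err: Error) => void }
const pendingOAuth = new Map<string, Pending>()

function stopOAuthServerCleanup() {
  for (const [state, pending] of pendingOAuth) {
    pending.reject(new Error(`OAuth server stopped before callback for state ${state}`))
  }
  pendingOAuth.clear()
  // In the real plugin, oauthServer.stop() and oauthServer = undefined
  // would happen here as well.
}
```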
```ts
async function startOAuthServer(): Promise<{ port: number; redirectUri: string }> {
  if (oauthServer) {
    return { port: OAUTH_PORT, redirectUri: `http://localhost:${OAUTH_PORT}/auth/callback` }
  }
  // …
```
Early return when server exists prevents proper error detection for port conflicts and doesn't account for concurrent initialization.

If `oauthServer` is already defined (line 203), the function returns early without verifying:

1. The server is actually listening (could have failed/stopped)
2. The port is available
3. Whether another auth flow is racing to initialize

The MCP implementation (`packages/opencode/src/mcp/oauth-callback.ts` lines 59-66) handles this properly by checking if the port is actually in use before deciding whether to start a new server.
```ts
async function exchangeCodeForTokens(code: string, redirectUri: string, pkce: PkceCodes): Promise<TokenResponse> {
  const response = await fetch(`${ISSUER}/oauth/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      redirect_uri: redirectUri,
      client_id: CLIENT_ID,
      code_verifier: pkce.verifier,
    }).toString(),
  })
  if (!response.ok) {
    throw new Error(`Token exchange failed: ${response.status}`)
  }
  return response.json()
}
```
Missing error handling for token exchange failure - the fetch response might not contain valid JSON.

Line 80 checks if response is ok and throws an error, but line 83 unconditionally calls `response.json()` without try-catch. If the error response is not valid JSON (e.g., HTML error page, network error), this will throw an unhandled exception.

Add try-catch around the json parsing or validate the response content-type before parsing.
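One way the defensive parsing could look, as a hedged sketch — `parseTokenResponse` is a hypothetical helper, shown over a raw body string so the failure modes are explicit:

```typescript
// Sketch: parse a token endpoint response defensively so a non-JSON body
// (HTML error page, proxy interstitial) surfaces as a clear error instead
// of an unhandled SyntaxError.
function parseTokenResponse(status: number, bodyText: string): any {
  if (status < 200 || status >= 300) {
    // Include a slice of the body to aid debugging without dumping pages of HTML.
    throw new Error(`Token exchange failed: ${status}: ${bodyText.slice(0, 200)}`)
  }
  try {
    return JSON.parse(bodyText)
  } catch {
    throw new Error(`Token exchange returned non-JSON body (status ${status})`)
  }
}
```

In the real function this would be used as `parseTokenResponse(response.status, await response.text())` in place of the bare `response.json()`.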
```ts
async function refreshAccessToken(refreshToken: string): Promise<TokenResponse> {
  const response = await fetch(`${ISSUER}/oauth/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: CLIENT_ID,
    }).toString(),
  })
  if (!response.ok) {
    throw new Error(`Token refresh failed: ${response.status}`)
  }
  return response.json()
}
```
Missing error handling for refresh token failure - same JSON parsing issue as `exchangeCodeForTokens`.

Line 96 checks if response is ok and throws, but line 99 unconditionally calls `response.json()` without error handling. Non-JSON error responses will cause unhandled exceptions during token refresh, which is a critical flow that happens automatically.
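Since refresh happens automatically and can be triggered by several requests at once, a single-flight guard (the mutex the review's recommendations mention) pairs naturally with the error handling. A minimal sketch, with `doRefresh` standing in for the real fetch call:

```typescript
// Sketch: share one in-flight refresh among concurrent callers instead of
// issuing redundant token requests. `refreshOnce` is a hypothetical name.
let refreshInFlight: Promise<string> | undefined

async function refreshOnce(doRefresh: () => Promise<string>): Promise<string> {
  if (!refreshInFlight) {
    // Clear the slot once the refresh settles, success or failure,
    // so a later expiry can trigger a fresh attempt.
    refreshInFlight = doRefresh().finally(() => {
      refreshInFlight = undefined
    })
  }
  return refreshInFlight
}
```

Callers that race simply await the same promise, so only one network request is made per expiry.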
```ts
// Filter models to only allowed Codex models for OAuth
const allowedModels = new Set(["gpt-5.1-codex-max", "gpt-5.1-codex-mini", "gpt-5.2", "gpt-5.2-codex"])
for (const modelId of Object.keys(provider.models)) {
  if (!allowedModels.has(modelId)) {
    delete provider.models[modelId]
  }
}
```
Model filtering logic deletes from provider.models during iteration, which is safe but the allowlist approach may exclude valid future models.

The code filters models by deleting entries not in the `allowedModels` Set. While this works correctly, consider:

1. **Future models**: New Codex models added upstream won't be automatically available until this hardcoded list is updated
2. **Maintenance burden**: Every new Codex model requires a code change here

Consider using a denylist approach or inferring model availability from the OAuth provider's response instead of maintaining a hardcoded allowlist.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
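The denylist alternative could be sketched like this — a purely illustrative helper (`filterModels` and the denied IDs are not from the codebase), shown only to contrast the maintenance profile with the allowlist above:

```typescript
// Sketch: keep everything except models known NOT to work over OAuth,
// so new Codex models added upstream stay available without a code change.
function filterModels(
  models: Record<string, unknown>,
  denied: Set<string>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(models).filter(([id]) => !denied.has(id)),
  )
}
```

The trade-off is inverted: an allowlist fails closed (new models blocked until listed), a denylist fails open (new models exposed until blocked), so the right choice depends on whether an unsupported model reaching the Codex endpoint is a hard failure or a soft one.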
```ts
if (!provider.models["gpt-5.2-codex"]) {
  const model = {
    id: "gpt-5.2-codex",
    providerID: "openai",
    api: {
      id: "gpt-5.2-codex",
      url: "https://chatgpt.com/backend-api/codex",
      npm: "@ai-sdk/openai",
    },
    name: "GPT-5.2 Codex",
    capabilities: {
      temperature: false,
      reasoning: true,
      attachment: true,
      toolcall: true,
      input: { text: true, audio: false, image: true, video: false, pdf: false },
      output: { text: true, audio: false, image: false, video: false, pdf: false },
      interleaved: false,
    },
    cost: { input: 0, output: 0, cache: { read: 0, write: 0 } },
    limit: { context: 400000, output: 128000 },
    status: "active" as const,
    options: {},
    headers: {},
    release_date: "2025-12-18",
    variants: {} as Record<string, Record<string, any>>,
  }
  model.variants = ProviderTransform.variants(model)
  provider.models["gpt-5.2-codex"] = model
}
```
Hardcoded model configuration duplicates provider model definitions and could drift out of sync.

Lines 324-352 manually construct a model configuration for "gpt-5.2-codex" if it doesn't exist. This duplicates model metadata that likely exists elsewhere in the codebase (similar models are defined in provider configs).

Issues:

1. **Maintenance**: Model capabilities, limits, and metadata need manual updates here
2. **Consistency**: No guarantee this matches other GPT-5.2 model definitions
3. **Release date**: Line 348 has hardcoded date "2025-12-18" which may be incorrect

Consider fetching model configuration from a centralized source or inheriting from base GPT-5.2 model definitions.
```ts
const provider = await Provider.getProvider(input.model.providerID)
const auth = await Auth.get(input.model.providerID)
const isCodex = provider.id === "openai" && auth?.type === "oauth"
```
OAuth detection logic couples authentication type with provider ID, which may be fragile.

Line 89 checks `const isCodex = provider.id === "openai" && auth?.type === "oauth"` to determine Codex usage. This assumes:

1. Only OpenAI provider uses OAuth for Codex
2. All OAuth auth with OpenAI is Codex (what if OpenAI adds OAuth for regular API?)

Consider checking for Codex-specific model IDs (like `model.id.includes("codex")`) or adding an explicit Codex flag to the auth info rather than inferring from provider+auth-type combination.

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
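The reviewer's suggestion — adding a model-level signal on top of the provider/auth check — might be sketched as a small predicate. `isCodexModel` is a hypothetical name, not an existing function in `llm.ts`:

```typescript
// Sketch: require all three signals rather than inferring Codex purely
// from provider + auth type, so a future OpenAI OAuth flow for the
// regular API would not be misrouted to the Codex path.
function isCodexModel(
  providerID: string,
  authType: string | undefined,
  modelID: string,
): boolean {
  return providerID === "openai" && authType === "oauth" && modelID.includes("codex")
}
```

An explicit Codex flag stored alongside the auth record would be stronger still, since the model-ID substring check is itself a heuristic.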
```ts
if (isCodex) {
  options.instructions = SystemPrompt.instructions()
  options.store = false
}
```
Codex-specific options are set without validation that the model supports them.

Lines 103-105 set `options.instructions` and `options.store = false` whenever `isCodex` is true, but there's no verification that:

1. The model actually supports the `instructions` field
2. The `store` option is applicable to this model variant

If these options are passed to a non-Codex OpenAI model (e.g., due to wrong auth detection), it could cause API errors.
```ts
messages: [
  ...system.map(
    (x): ModelMessage => ({
      role: "system",
      content: x,
    }),
  ),
  ...(isCodex
    ? [
        {
          role: "user",
          content: system.join("\n\n"),
        } as ModelMessage,
      ]
    : system.map(
        (x): ModelMessage => ({
          role: "system",
          content: x,
        }),
      )),
  ...input.messages,
```
System messages sent as user messages for Codex changes the conversation structure and may affect model behavior.

Lines 192-198 send system prompts as a single user message for Codex (joining with `\n\n`), while non-Codex uses proper system role messages (lines 199-204). This architectural difference means:

1. **Context window usage**: Combining into one user message vs multiple system messages may be cached differently
2. **Model interpretation**: System role has semantic meaning; user role does not
3. **Debugging difficulty**: Harder to trace which part of the combined message comes from which system prompt component

Verify this is intentional Codex API behavior and document why this conversion is necessary.
Summary

Conflicts Resolved

- `bun.lock`: accepted upstream
- `packages/opencode/src/plugin/index.ts`: merged imports (kept fork bundleLocalPlugin + added CodexAuthPlugin)
- `sdks/vscode/package.json`: kept shuvcode branding, updated version to 1.1.11

Fork Features Preserved

- `opencode-anthropic-auth-shuv`

Greptile Overview
Greptile Summary

This PR merges upstream OpenCode v1.1.11 into the shuvcode fork, adding GPT-5.2 Codex OAuth authentication support while preserving fork-specific customizations.

What Changed

New Codex Authentication (Main Feature)

- `CodexAuthPlugin` handles authorization server, token exchange, and automatic refresh

Integration Points

- `llm.ts`: detects Codex via `provider.id === "openai" && auth.type === "oauth"`, sends system prompts as user messages, adds custom headers
- `plugin/index.ts`: loads CodexAuthPlugin as internal plugin, filters out deprecated `opencode-openai-codex-auth` plugin

Other Fixes

- `thread.ts`: event source now resubscribes on instance disposal (stability improvement)
- `transform.ts`: excludes gpt-5.2-codex from variant filtering

Fork Preservation

Critical Issues Found

Race Conditions in OAuth Server (`packages/opencode/src/plugin/codex.ts`)

- Global `oauthServer` and `pendingOAuth` variables support only ONE concurrent authorization flow

Error Handling Gaps

Performance Concerns
Comparison with Existing Patterns

The codebase already has a correct OAuth implementation in `packages/opencode/src/mcp/oauth-callback.ts` which:

- uses `Map<string, PendingAuth>` for concurrent flow tracking (line 55)
- validates state via `pendingAuths.has(state)` (line 115)

The Codex implementation should follow this proven pattern instead of using global singleton state.
Recommendations

Before Merge:

- `CodexAuthPlugin`: use Map-based state tracking like the MCP implementation

Post-Merge:

4. Consider proactive token refresh before expiration
5. Add mutex/lock for token refresh to prevent redundant requests
6. Document why system messages are sent as user role for Codex
Confidence Score: 2/5
Important Files Changed
File Analysis
Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant CLI as CLI/Auth
    participant Plugin as CodexAuthPlugin
    participant OAuthSrv as OAuth Server
    participant Browser
    participant OpenAI as OpenAI
    participant Codex as Codex API
    participant Store as Auth Store
    User->>CLI: auth login
    CLI->>Plugin: OAuth method selected
    Plugin->>Plugin: Generate PKCE and state
    Plugin->>OAuthSrv: Start server on port 1455
    Note over OAuthSrv: ISSUE: Global state<br/>single concurrent flow only
    OAuthSrv-->>Plugin: Ready with redirect URI
    Plugin->>Browser: Open auth URL
    Browser->>OpenAI: Authorization request
    Plugin->>Plugin: Wait for callback
    Note over Plugin: ISSUE: Global pendingOAuth<br/>overwritten by concurrent flows
    User->>Browser: Approve authorization
    OpenAI->>Browser: Redirect with auth code
    Browser->>OAuthSrv: Callback with code and state
    OAuthSrv->>OAuthSrv: Validate state
    Note over OAuthSrv: ISSUE: State validation<br/>broken for concurrent flows
    OAuthSrv->>OpenAI: Exchange code for tokens
    Note over OpenAI: ISSUE: No JSON error<br/>handling on response
    OpenAI-->>OAuthSrv: Return tokens
    OAuthSrv->>Plugin: Resolve promise with tokens
    Plugin->>Store: Save OAuth credentials
    Plugin->>OAuthSrv: Stop server
    Note over OAuthSrv: ISSUE: Incomplete cleanup<br/>pendingOAuth remains
    Plugin-->>CLI: Success
    rect rgb(200, 220, 250)
        Note over User,Codex: API Request Flow
        User->>Codex: API request
        Codex->>Plugin: Custom fetch wrapper
        Plugin->>Store: Get credentials
        Store-->>Plugin: OAuth creds with expiry
        alt Token expired
            Plugin->>OpenAI: Refresh token request
            Note over Plugin: ISSUE: Blocking sync refresh<br/>no concurrency control
            OpenAI-->>Plugin: New tokens
            Plugin->>Store: Update
        end
        Plugin->>Plugin: Build request with Bearer token
        Plugin->>Codex: Forward to Codex endpoint
        Codex-->>User: Response
    end
```