
fix(openai): use Responses API for gpt-5.4 #7982

Merged

michaelneale merged 1 commit into block:main from michaelneale:fix/openai-gpt-5-4-responses on Mar 18, 2026

Conversation

@michaelneale
Collaborator

Summary

  • route gpt-5.4* models through OpenAI /v1/responses by default
  • keep existing base-path override behavior intact
  • add tests covering gpt-5.4 and date-suffixed variants

Why

OpenAI rejects function tools + reasoning_effort for gpt-5.4 on /v1/chat/completions and requires /v1/responses.

Changes

  • OpenAiProvider::is_responses_model now treats gpt-5.4* as Responses models
  • Added unit tests:
    • gpt_5_4_uses_responses_when_base_path_is_default
    • gpt_5_4_with_date_uses_responses
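
A minimal standalone sketch of the matcher logic after this change. In the codebase, is_responses_model is a method on OpenAiProvider; the free-function form here is illustrative only, mirroring the prefix check quoted in the review comments below.

```rust
// Standalone sketch of the is_responses_model check after this PR.
// In the real code this is a method on OpenAiProvider; the free
// function here is for illustration only.
fn is_responses_model(model_name: &str) -> bool {
    let normalized_model = model_name.to_ascii_lowercase();
    (normalized_model.starts_with("gpt-5") && normalized_model.contains("codex"))
        || normalized_model.starts_with("gpt-5.2-pro")
        || normalized_model.starts_with("gpt-5.4")
}

fn main() {
    // Mirrors the two unit tests added in this PR: bare gpt-5.4 and
    // a date-suffixed variant both route through /v1/responses.
    println!("{}", is_responses_model("gpt-5.4"));            // true
    println!("{}", is_responses_model("gpt-5.4-2026-01-01")); // true
}
```

Because the check is a prefix match on the lowercased name, date-suffixed variants are covered without listing them individually.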

Validation

  • cargo fmt
  • cargo test -p goose openai::tests ⚠️ blocked in this environment due to a toolchain/dependency mismatch
  • cargo clippy --all-targets -- -D warnings ⚠️ same blocker

Notes

I only patched gpt-5.4* as requested. There may be other future GPT-5 chat variants that also require /v1/responses, but this PR intentionally keeps scope narrow.


@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d2920e9437


let normalized_model = model_name.to_ascii_lowercase();
(normalized_model.starts_with("gpt-5") && normalized_model.contains("codex"))
    || normalized_model.starts_with("gpt-5.2-pro")
    || normalized_model.starts_with("gpt-5.4")


P1: Restrict gpt-5.4 Responses routing to actual OpenAI endpoints

This broadens the host-agnostic model check, so any OpenAI-compatible backend configured through OPENAI_HOST or a declarative engine: "openai" provider will now be switched from /v1/chat/completions to /v1/responses whenever the model name starts with gpt-5.4. That breaks a supported configuration in this repo: the docs explicitly advertise non-OpenAI hosts on the OpenAI provider (documentation/docs/getting-started/providers.md), and register_declarative_provider sends every OpenAI-compatible custom provider through OpenAiProvider::from_custom_config (crates/goose/src/config/declarative_providers.rs:417-423). Any proxy or self-hosted endpoint that exposes a gpt-5.4* alias but only implements chat completions will now fail with 404/unsupported-endpoint errors.

Useful? React with 👍 / 👎.
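
One way to act on this P1 suggestion would be to gate the Responses routing on the configured host. This is an illustrative sketch only, not the provider's actual code: the function name, host-string check, and call shape are all assumptions.

```rust
// Illustrative sketch of the P1 suggestion, not the merged change:
// only route to /v1/responses when the configured host is the real
// OpenAI endpoint, so OPENAI_HOST overrides and declarative
// OpenAI-compatible providers keep using /v1/chat/completions.
// All names here are assumptions for the sketch.
fn should_use_responses(host: &str, model_name: &str) -> bool {
    let is_openai_host = host
        .trim_start_matches("https://")
        .starts_with("api.openai.com");
    is_openai_host && model_name.to_ascii_lowercase().starts_with("gpt-5.4")
}

fn main() {
    // Official endpoint: routed to /v1/responses.
    println!("{}", should_use_responses("https://api.openai.com", "gpt-5.4"));
    // Self-hosted proxy exposing a gpt-5.4 alias: stays on chat completions.
    println!("{}", should_use_responses("https://llm-proxy.internal", "gpt-5.4"));
}
```

The design trade-off is that a host check keeps proxies working but would also skip Responses routing for legitimate OpenAI-compatible gateways that do implement /v1/responses.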

let normalized_model = model_name.to_ascii_lowercase();
(normalized_model.starts_with("gpt-5") && normalized_model.contains("codex"))
    || normalized_model.starts_with("gpt-5.2-pro")
    || normalized_model.starts_with("gpt-5.4")


P2: Handle provider-prefixed gpt-5.4 model IDs too

This matcher only catches bare IDs like gpt-5.4, so the fix still misses the slash-qualified model names that our OpenAI-compatible integrations already use. For example, bundled configs already accept namespaced OpenAI models such as openai/gpt-oss-120b (crates/goose/src/providers/declarative/groq.json), and the bundled canonical registry contains requesty/openai/gpt-5.4 and openrouter/openai/gpt-5.4 entries (crates/goose/src/providers/canonical/data/canonical_models.json). Requests sent with those provider-prefixed gpt-5.4 IDs will still go through chat completions, so the original tool-calling/reasoning_effort failure remains unresolved for those gateways.

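
A hypothetical helper for this P2 suggestion: normalize slash-qualified IDs down to their final segment before running the prefix match, so gateway names like openrouter/openai/gpt-5.4 hit the same gpt-5.4 check. This helper is not part of the merged change.

```rust
// Hypothetical helper for the P2 suggestion, not part of the PR:
// strip provider prefixes such as "openrouter/openai/" so that
// slash-qualified model IDs match the same gpt-5.4 prefix check.
fn base_model_id(model_name: &str) -> &str {
    // rsplit('/') yields at least one segment, so next() is always Some.
    model_name.rsplit('/').next().unwrap_or(model_name)
}

fn main() {
    println!("{}", base_model_id("openrouter/openai/gpt-5.4")); // gpt-5.4
    println!("{}", base_model_id("gpt-5.4"));                   // gpt-5.4
}
```

Feeding the normalized ID into the existing matcher would cover the requesty/ and openrouter/ entries mentioned above without enumerating each gateway prefix.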

@michaelneale michaelneale enabled auto-merge March 18, 2026 06:58
@michaelneale michaelneale requested a review from DOsinga March 18, 2026 06:58
@michaelneale michaelneale added this pull request to the merge queue Mar 18, 2026
Merged via the queue into block:main with commit 4e7a572 Mar 18, 2026
20 checks passed
@michaelneale michaelneale deleted the fix/openai-gpt-5-4-responses branch March 18, 2026 23:38
michaelneale added a commit that referenced this pull request Mar 19, 2026
* origin/main:
  fix(openai): use Responses API for gpt-5.4 (#7982)
  Remove lead/worker provider (#7989)
  chore(release): release version 1.28.0 (#7991)
  Fix empty tool results from resource content (e.g. auto visualiser) (#7866)
  Separate SSE streaming from POST work submission (#7834)
  fix: include token usage in Databricks streaming responses (#7959)
  Optimize tool summarization (#7938)
lifeizhou-ap added a commit that referenced this pull request Mar 20, 2026
* main: (22 commits)
  feat: add gemini-acp provider, update docs on subscription models + improvements to codex (#8000)
  fix(openai): use Responses API for gpt-5.4 (#7982)
  Remove lead/worker provider (#7989)
  chore(release): release version 1.28.0 (#7991)
  Fix empty tool results from resource content (e.g. auto visualiser) (#7866)
  Separate SSE streaming from POST work submission (#7834)
  fix: include token usage in Databricks streaming responses (#7959)
  Optimize tool summarization (#7938)
  fix: overwrite the deprecated googledrive extension config (#7974)
  refactor: remove unnecessary Arc<Mutex> from tool execution pipeline (#7979)
  Revert message flush & test (#7966)
  docs: add Remote Access section with Telegram Gateway documentation (#7955)
  fix: update webmcp blog post metadata image URL (#7967)
  fix: clean up OAuth token cache on provider deletion (#7908)
  fix: hard-coded tool call id in code mode callback (#7939)
  Fix SSE parsers to accept optional space after data: prefix (#7929)
  docs: add GOOSE_INPUT_LIMIT to config-files.md (#7961)
  Add WebMCP for Beginners blog post (#7957)
  Fix download manager (#7933)
  Improve the formatting of tool calls, show thinking, treat Reasoning and Thinking as the same thing (sorry Kant) (#7626)
  ...
elijahsgh pushed a commit to elijahsgh/goose that referenced this pull request Mar 21, 2026
