Conversation

@junmediatek (Contributor) commented Dec 15, 2025

Fixes Azure reasoning models (GPT-5, o1) by using max_completion_tokens instead of max_tokens.
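For context, a minimal sketch of the body rewrite this change performs (the standalone function rewriteMaxTokens is illustrative, not the PR's actual code):

// Illustrative sketch: Azure reasoning models reject max_tokens and expect
// max_completion_tokens, so the JSON request body is rewritten before sending.
function rewriteMaxTokens(rawBody: string): string {
  try {
    const body = JSON.parse(rawBody)
    if (body.max_tokens !== undefined) {
      body.max_completion_tokens = body.max_tokens
      delete body.max_tokens
      return JSON.stringify(body)
    }
  } catch {
    // Not a JSON body: leave it untouched
  }
  return rawBody
}

// { "model": "o1", "max_tokens": 1024 } -> { "model": "o1", "max_completion_tokens": 1024 }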

@junmediatek (Contributor, Author)

@rekram1-node Could you help review this PR for #5421?

@rekram1-node (Collaborator)

/review


return fetchFn(input, {
...opts,
// @ts-ignore see here: https://github.com/oven-sh/bun/issues/16682
Contributor:

Suggestion: The nesting here could be reduced by using early returns/guards. This is a stylistic suggestion; the current code is functionally correct.

Suggested change

// @ts-ignore see here: https://github.com/oven-sh/bun/issues/16682
// Reasoning models require max_completion_tokens instead of max_tokens
// when using openai-compatible provider
// Check if model has reasoning capability OR has reasoningEffort option
const isReasoningModel =
  model.capabilities.reasoning || (model.options && "reasoningEffort" in model.options)
const shouldTransformMaxTokens =
  model.api.npm === "@ai-sdk/azure" &&
  isReasoningModel &&
  opts.body &&
  typeof opts.body === "string"
if (shouldTransformMaxTokens) {
  try {
    const body = JSON.parse(opts.body)
    if (body.max_tokens !== undefined) {
      body.max_completion_tokens = body.max_tokens
      delete body.max_tokens
      opts.body = JSON.stringify(body)
    }
  } catch {
    // Ignore JSON parse errors
  }
}

This flattens one level of nesting by combining the conditions into a single check. Also note: using catch without a parameter (instead of catch (e)) is cleaner when the error value is unused.
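For example, with the optional catch binding (standard since ES2019; raw here is just a placeholder string):

try {
  JSON.parse(raw) // raw may not be valid JSON
} catch {
  // no binding: the error value is intentionally unused
}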

// Reasoning models require max_completion_tokens instead of max_tokens
// when using openai-compatible provider
// Check if model has reasoning capability OR has reasoningEffort option
const isReasoningModel =
Collaborator:

only condition here should be:
model.capabilities.reasoning
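With that change, the check from the suggestion above would collapse to something like this (a sketch reusing the same names):

const shouldTransformMaxTokens =
  model.api.npm === "@ai-sdk/azure" &&
  model.capabilities.reasoning &&
  opts.body &&
  typeof opts.body === "string"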

Contributor (Author):

I have updated it; please review commit 7b5a00e.

@junmediatek (Contributor, Author)

@rekram1-node Is anything wrong with the new patch, 7b5a00e?

@rekram1-node (Collaborator)

@junmediatek Feel free to resolve any bot comments that you addressed or want to ignore.

@rekram1-node (Collaborator) commented Dec 16, 2025

Also, I'm not entirely sure this is the perfect fix, and I need to test something with it. So in the meantime (without requiring code changes from us), you should be able to add this:

import { Plugin } from "@opencode-ai/plugin"

export const AzurePatch: Plugin = async (ctx) => {
  return {
    auth: {
      provider: "gaisf-azure",
      loader: async (getAuth, provider) => {
        return {
          // Wrap fetch so every outgoing request body is rewritten before sending
          async fetch(input, init) {
            const opts = init ?? {}
            if (opts.body && typeof opts.body === "string") {
              try {
                const body = JSON.parse(opts.body)
                if (body.max_tokens !== undefined) {
                  // Reasoning models expect max_completion_tokens, not max_tokens
                  body.max_completion_tokens = body.max_tokens
                  delete body.max_tokens
                  opts.body = JSON.stringify(body)
                }
              } catch {
                // Non-JSON body: pass it through unchanged
              }
            }
            return fetch(input, {
              ...opts,
              timeout: false,
            })
          },
        }
      },
    },
  }
}

to ~/.config/opencode/plugin/max_completion_tokens.ts

And assuming your Azure proxy provider is still called gaisf-azure, as in the config you showed me, this should function as expected.

@rekram1-node (Collaborator)

You can completely override the fetch implementation for any provider via plugins; I just wanted to demonstrate that for you.
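For illustration, a minimal sketch of such an override (the provider id my-provider and the logging are placeholders; the plugin shape mirrors the snippet above):

import { Plugin } from "@opencode-ai/plugin"

// Hypothetical example: log every outgoing request for one provider,
// then delegate to the built-in fetch unchanged.
export const LoggingFetch: Plugin = async (ctx) => {
  return {
    auth: {
      provider: "my-provider", // placeholder provider id
      loader: async (getAuth, provider) => {
        return {
          async fetch(input, init) {
            console.log("outgoing request:", input)
            return fetch(input, init)
          },
        }
      },
    },
  }
}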

@junmediatek (Contributor, Author)

> Also, I'm not entirely sure this is the perfect fix … you should be able to add this [the plugin snippet quoted above] … this should function as expected

@rekram1-node

That is a good idea; I will try it. However, the standard solution is to modify the code, so I still need your help with more testing. If you encounter any other issues, feel free to contact me at any time.

@junmediatek (Contributor, Author)

Hi @rekram1-node,

The plugin can work around this issue, but the standard solution is still to modify the code, since the OpenAI API reference specifies max_completion_tokens for reasoning models. Additionally, how is the testing going for this fix?

@junmediatek (Contributor, Author)

Hi @rekram1-node, can this patch be merged?

@github-actions (Contributor)

Thanks for your contribution!

This PR doesn't have a linked issue. All PRs must reference an existing issue.

Please:

  1. Open an issue describing the bug/feature (if one doesn't exist)
  2. Add Fixes #<number> or Closes #<number> to this PR description

See CONTRIBUTING.md for details.
