
feat: add Avian as an LLM provider#7561

Merged

blackgirlbytes merged 1 commit into block:main from avianion:add-avian-provider on Mar 7, 2026

Conversation

@avianion
Contributor

Summary

Adds Avian as a built-in LLM provider for Goose. Avian provides a cost-effective, OpenAI-compatible inference API with streaming and function calling support.

Available Models

Model                   Context  Max Output  Input Cost  Output Cost
deepseek/deepseek-v3.2  164K     65K         $0.26/1M    $0.38/1M
moonshotai/kimi-k2.5    131K     8K          $0.45/1M    $2.20/1M
z-ai/glm-5              131K     16K         $0.30/1M    $2.55/1M
minimax/minimax-m2.5    1M       1M          $0.30/1M    $1.10/1M

Configuration

  • AVIAN_API_KEY (required) - API key from avian.io
  • AVIAN_HOST (optional) - defaults to https://api.avian.io/v1
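
In a shell environment, the two settings above amount to the following (the key value is a placeholder; the host line is only needed when overriding the default):

```shell
# Required: API key obtained from avian.io (placeholder value shown)
export AVIAN_API_KEY="sk-..."
# Optional: only set this when proxying the API; this value is the default
export AVIAN_HOST="https://api.avian.io/v1"
```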

Changes

  • crates/goose/src/providers/avian.rs - New provider using OpenAiCompatibleProvider (follows xAI pattern)
  • crates/goose/src/providers/mod.rs - Module declaration
  • crates/goose/src/providers/init.rs - Provider registration
  • documentation/docs/getting-started/providers.md - Documentation entry

Implementation

Uses the OpenAiCompatibleProvider base, following the same pattern as the xAI provider: Bearer token auth with the standard /chat/completions and /models endpoints.
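
For reference, the request shape the provider relies on is the standard OpenAI-compatible chat-completions call; a direct invocation against the endpoint described above would look like this (model ID taken from the table in this PR, payload is the usual chat-completions format rather than anything goose-specific):

```shell
# Sketch of a direct call to the Avian endpoint; requires AVIAN_API_KEY
# to be set and network access to api.avian.io.
curl -s https://api.avian.io/v1/chat/completions \
  -H "Authorization: Bearer $AVIAN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-v3.2",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": false
  }'
```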

cc @baxen @michaelneale

@avianion avianion requested a review from a team as a code owner February 27, 2026 04:04

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ae2fdc641e


Comment on lines +11 to +15
pub const AVIAN_KNOWN_MODELS: &[&str] = &[
"deepseek/deepseek-v3.2",
"moonshotai/kimi-k2.5",
"z-ai/glm-5",
"minimax/minimax-m2.5",


P2: Add canonical limits for Avian model IDs

These Avian model IDs are registered as provider/model strings, but there are no corresponding canonical mappings for provider avian, so ModelConfig::with_canonical_limits falls back to defaults instead of model-specific limits. In practice this means Avian requests use the generic 128k context and 4096 output-token cap (via max_output_tokens()), which can truncate responses for models that are advertised with much larger limits (for example minimax/minimax-m2.5).

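The fallback behavior the review describes can be illustrated with a small self-contained sketch. This is not goose's actual `ModelConfig` code; the function name, map, and entries are hypothetical, but the shape matches the description above: look up canonical per-model limits by (provider, model), and fall back to a generic 128k context / 4096 output-token cap when no entry exists.

```rust
use std::collections::HashMap;

// Generic defaults the review says are used when no canonical entry exists.
const DEFAULT_CONTEXT: usize = 128_000;
const DEFAULT_MAX_OUTPUT: usize = 4_096;

// Hypothetical lookup: (provider, model) -> (context_limit, max_output_tokens).
// The two "avian" entries are the kind of mappings the review asks to add,
// using the limits advertised in this PR's model table.
fn canonical_limits(provider: &str, model: &str) -> (usize, usize) {
    let known: HashMap<(&str, &str), (usize, usize)> = HashMap::from([
        (("avian", "deepseek/deepseek-v3.2"), (164_000, 65_000)),
        (("avian", "minimax/minimax-m2.5"), (1_000_000, 1_000_000)),
    ]);
    known
        .get(&(provider, model))
        .copied()
        .unwrap_or((DEFAULT_CONTEXT, DEFAULT_MAX_OUTPUT))
}
```

With no `avian` entries registered, every Avian model would take the `unwrap_or` branch, which is why responses from large-context models like minimax/minimax-m2.5 could be truncated at 4096 output tokens.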

Add Avian (https://avian.io) as a built-in LLM provider. Avian offers
a cost-effective OpenAI-compatible inference API with support for
streaming and function calling.

Available models:
- deepseek/deepseek-v3.2 (164K context, $0.26/$0.38 per 1M tokens)
- moonshotai/kimi-k2.5 (131K context, $0.45/$2.20 per 1M tokens)
- z-ai/glm-5 (131K context, $0.30/$2.55 per 1M tokens)
- minimax/minimax-m2.5 (1M context, $0.30/$1.10 per 1M tokens)

Configuration: AVIAN_API_KEY (required), AVIAN_HOST (optional)
Signed-off-by: Kyle D <[email protected]>
@avianion
Contributor Author

Hey @alexhancock, would love your review on this when you get a chance. Happy to address any feedback!

@avianion
Contributor Author

avianion commented Mar 5, 2026

Friendly follow-up — this PR is still active and ready for review. Would appreciate a look when you get a chance! cc @alexhancock

@avianion
Contributor Author

avianion commented Mar 5, 2026

Friendly follow-up — this PR is still active and ready for review. All feedback has been addressed. Would appreciate a look when you get a chance! cc @alexhancock

@avianion
Contributor Author

avianion commented Mar 5, 2026

Hey @zanesq @angiejones — friendly follow-up on this PR. Avian is an OpenAI-compatible inference provider that's already live and powering apps like ISEKAI ZERO. This is a lightweight integration (standard OpenAI-compatible endpoint) and we're happy to address any feedback or make adjustments. Would love to get this merged if you have a moment to review. Thanks!

@blackgirlbytes
Collaborator

Thanks for the contribution!

@blackgirlbytes blackgirlbytes added this pull request to the merge queue Mar 7, 2026
Merged via the queue into block:main with commit 472f3bf Mar 7, 2026
22 checks passed
@DOsinga
Collaborator

DOsinga commented Mar 7, 2026

Eh, sorry to have missed this, but can this not be a declarative provider? I don't think we want custom code for providers in our code base; if it is OpenAI-compatible, you should not need it.

jh-block added a commit that referenced this pull request Mar 9, 2026
…deps

* origin/main: (34 commits)
  fix: reduce server log verbosity — skip session in instrument, defaul… (#7729)
  fix: provider test infrastructure (#7738)
  fix: sanitize streamable HTTP extension names derived from URLs (#7740)
  refactor: derive GooseMode string conversions with strum (#7706)
  docs: Add Spraay Batch Payments MCP Extension Tutorial (#7525)
  fix: flake.nix (#7224)
  delete goose web (#7696)
  Add @angiejones as CODEOWNER for documentation (#7711)
  Add MLflow integration guide (#7563)
  docs: LM Studio availability (#7698)
  feat: add Avian as an LLM provider (#7561)
  Adds `linux-mcp-server` to the goose registry (#6979)
  fix: add #[serde(default)] to description field on 4 ExtensionConfig variants (#7708)
  feat: combine TUI UX from alexhancock/tui-goodness with publishing config from jackamadeo/package-tui (#7683)
  chore: cleanup old sandbox (#7700)
  Correct windows artifact (#7699)
  gh fall back (#7695)
  fix: restore smart-approve mode (#7690)
  fix: make TLS configurable in goosed agent via GOOSE_TLS env var (#7686)
  Update to rmcp 1.1.0 (#7619)
  ...

# Conflicts:
#	Cargo.lock
wpfleger96 added a commit that referenced this pull request Mar 9, 2026
* origin/main: (21 commits)
  fix(ui-desktop): unify path resolution around GOOSE_PATH_ROOT (#7335)
  fix: pass OAuth scopes to DCR and extract granted_scopes from token response (#7571)
  fix: write to real file if config.yaml is symlink (#7669)
  fix: preserve pairings when stopping gateway (#7733)
  fix: reduce server log verbosity — skip session in instrument, defaul… (#7729)
  fix: provider test infrastructure (#7738)
  fix: sanitize streamable HTTP extension names derived from URLs (#7740)
  refactor: derive GooseMode string conversions with strum (#7706)
  docs: Add Spraay Batch Payments MCP Extension Tutorial (#7525)
  fix: flake.nix (#7224)
  delete goose web (#7696)
  Add @angiejones as CODEOWNER for documentation (#7711)
  Add MLflow integration guide (#7563)
  docs: LM Studio availability (#7698)
  feat: add Avian as an LLM provider (#7561)
  Adds `linux-mcp-server` to the goose registry (#6979)
  fix: add #[serde(default)] to description field on 4 ExtensionConfig variants (#7708)
  feat: combine TUI UX from alexhancock/tui-goodness with publishing config from jackamadeo/package-tui (#7683)
  chore: cleanup old sandbox (#7700)
  Correct windows artifact (#7699)
  ...
wpfleger96 added a commit that referenced this pull request Mar 9, 2026
wpfleger96 added a commit that referenced this pull request Mar 9, 2026
…e-issue
