
fix(opencode): correct model fallback index tracking and config parsing#8669

Open
manascb1344 wants to merge 8 commits into anomalyco:dev from manascb1344:feature/model-fallback-1267

Conversation


@manascb1344 manascb1344 commented Jan 15, 2026

Fixes #1267

Summary

  • Add centralized fallbacks config with provider and model-specific fallback rules
  • Create SessionFallback module for fallback resolution logic
  • Remove per-agent models field and replace with global fallback configuration
  • Track attempted fallbacks to prevent infinite retry loops on same failing model

What Changed

packages/opencode/src/session/fallback.ts (NEW)

  • SessionFallback module for centralized fallback resolution
  • Provider-level and model-specific fallback rules
  • Priority system: model-specific fallbacks override provider-level
  • Fallback tracking to prevent retries on already-tried models
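The resolution order above can be sketched as follows. This is a rough illustration only: `resolveFallbacks`, `FallbacksConfig`, and the `providerModels` helper are hypothetical names, not the actual SessionFallback API.

```typescript
// Illustrative sketch only; the real SessionFallback module's API may differ.
interface FallbacksConfig {
  provider?: Record<string, string[]> // e.g. { anthropic: ["openai", "zai"] }
  models?: Record<string, string[]> // e.g. { "anthropic/claude-haiku-4-5": ["openai/gpt-5-nano"] }
}

// Resolve fallback candidates for a failing model, skipping already-attempted
// ones. Model-specific rules take priority over provider-level rules.
export function resolveFallbacks(
  config: FallbacksConfig,
  failing: string, // "providerID/modelID"
  attempted: Set<string>, // models already tried this session
  providerModels: (p: string) => string[], // assumed lookup helper
): string[] {
  const modelRules = config.models?.[failing]
  if (modelRules) return modelRules.filter((m) => !attempted.has(m))
  const provider = failing.split("/")[0]
  const providerRules = config.provider?.[provider] ?? []
  // Provider-level fallback: take models from each target provider in order.
  return providerRules.flatMap((p) => providerModels(p)).filter((m) => !attempted.has(m))
}
```

The `attempted` set is what prevents the infinite-retry loop: a model that already failed is never handed back as a candidate.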

packages/opencode/src/session/processor.ts

  • Refactored to use SessionFallback for error handling
  • Removed per-agent fallback logic
  • Cleaner separation of concerns between error handling and fallback resolution

packages/opencode/src/config/config.ts

  • Added global fallbacks config schema with provider and models sections
  • Model and provider fallback validation rules
  • Removed per-agent models field from config schema
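A dependency-free sketch of the validation rules described above. The real config.ts uses the project's schema library; `validateFallbacks` and its error messages are illustrative.

```typescript
// Hypothetical validator mirroring the PR's fallbacks schema; illustrative only.
type Fallbacks = {
  provider?: Record<string, string[]>
  models?: Record<string, string[]>
}

export function validateFallbacks(value: unknown): Fallbacks {
  if (typeof value !== "object" || value === null) throw new Error("fallbacks must be an object")
  const obj = value as Record<string, unknown>
  const out: Fallbacks = {}
  for (const key of ["provider", "models"] as const) {
    const section = obj[key]
    if (section === undefined) continue
    if (typeof section !== "object" || section === null) throw new Error(`fallbacks.${key} must be an object`)
    for (const [k, v] of Object.entries(section)) {
      if (!Array.isArray(v) || !v.every((x) => typeof x === "string"))
        throw new Error(`fallbacks.${key}.${k} must be an array of strings`)
      // model-specific keys use the "providerID/modelID" pattern
      if (key === "models" && !k.includes("/"))
        throw new Error(`fallbacks.models key "${k}" must be providerID/modelID`)
    }
    out[key] = section as Record<string, string[]>
  }
  return out
}
```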

packages/opencode/src/agent/agent.ts

  • Removed models field from Agent schema

packages/opencode/test/session/fallback.test.ts (NEW)

  • Provider fallback resolution tests
  • Model-specific fallback priority tests
  • Fallback tracking and prevention of retries

packages/opencode/test/config/config.test.ts

  • Fallbacks config validation tests

New Config Structure

{
  "fallbacks": {
    "provider": {
      "anthropic": ["openai", "zai"]
    },
    "models": {
      "anthropic/claude-haiku-4-5": ["openai/gpt-5-nano"]
    }
  }
}

How I Verified

  • bun run test - 668 tests pass, 1 skip, 0 fail (669 total across 46 files)
  • bun test test/session/fallback.test.ts test/agent/agent.test.ts test/config/config.test.ts - 78 tests pass, 0 fail
  • bun run typecheck - no errors
  • Verified fallback logic correctly cycles through available models on errors

- Extract tryFallbackModel() helper function to reduce nesting
- Break long lines for better readability
- Reduce nesting from 3 levels to 2 levels
- Improve separation of concerns
- Fix fallbackIndex never incrementing after successful model switch
- Add 'models' to knownKeys in config transform to prevent moving to options
- Simplify tryFallbackModel to take targetIndex directly
- Add tests for empty models array, options isolation, and native agent fallbacks
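The index-tracking fix described in the commit notes above can be sketched like this. `nextFallback` is a hypothetical helper, not the actual processor code; it shows why the index must advance after a successful switch.

```typescript
// Illustrative sketch of index-tracked fallback cycling; not the real processor.
export function nextFallback(
  fallbacks: string[], // ordered candidates from the fallbacks config
  attempted: Set<string>, // models already tried, to avoid retry loops
  fromIndex: number, // index to resume from after the previous switch
): { model: string; index: number } | undefined {
  for (let i = fromIndex; i < fallbacks.length; i++) {
    const model = fallbacks[i]
    if (attempted.has(model)) continue
    attempted.add(model)
    // Return i + 1 so a later failure resumes AFTER this model instead of
    // re-selecting it (the "fallbackIndex never incrementing" bug above).
    return { model, index: i + 1 }
  }
  return undefined // exhausted: surface the original error
}
```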
@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

No duplicate PRs found

@rekram1-node
Collaborator

Okay, this is decent, but I was wondering if this could be better...

I think the top-level config should have a key called "fallbacks" which is an object with one or two keys, something like:

{
  "fallbacks": {
    "provider": {
      "anthropic": ["openai", "zai"]
    },
    "models": {
      "anthropic/claude-haiku-4-5": ["openai/gpt-5-nano"]
    }
  }
}

Something like this. It shouldn't be on the agents config; I'd rather have it completely centralized, using the same providerID / modelID pattern we use everywhere else.

Idk if this is the cleanest, but thoughts?

@manascb1344
Author

Makes sense, centralized is cleaner.

A couple questions:

  1. If both provider and models match, does model-specific take priority?
  2. For provider fallbacks like "anthropic": ["openai"], how do we pick which OpenAI model to use?

- Add global fallbacks config with provider and model-specific rules
- Create SessionFallback module for fallback resolution logic
- Remove per-agent models field from Agent schema and config
- Model-specific fallbacks take priority over provider-level
- Track attempted fallbacks to avoid retries on same model

BREAKING CHANGE: Fallbacks are now configured globally:

Before:
  agent:
    build:
      models: [anthropic/claude-3-opus, openai/gpt-4o]

After:
  fallbacks:
    models:
      anthropic/claude-3-opus: [openai/gpt-4o]
    provider:
      anthropic: [openai]
@manascb1344
Author

I assumed the following while implementing it, please confirm if correct:

  1. Model-specific fallbacks take priority over provider-level fallbacks.
  2. For provider-level fallbacks (e.g. anthropic to openai), we pick the first available model from the target provider’s model list.
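Under assumption 2, provider-level resolution might look like this. `providerFallback` and the `providerModels` lookup are hypothetical names for illustration:

```typescript
// Hypothetical: pick the first available model of each fallback provider, in order.
export function providerFallback(
  providers: string[], // e.g. ["openai", "zai"]
  providerModels: (p: string) => string[], // assumed lookup of a provider's model list
): string | undefined {
  for (const p of providers) {
    const [first] = providerModels(p)
    if (first) return first // first available model of the first usable provider
  }
  return undefined // no provider had any models available
}
```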

@manascb1344
Author

@rekram1-node

@manascb1344
Author

@rekram1-node When can we have this feature merged?

opencode and others added 2 commits January 30, 2026 06:48
…ng sent back as assistant message content (anomalyco#11270)

Co-authored-by: opencode-agent[bot] <opencode-agent[bot]@users.noreply.github.com>
@broskees

broskees commented Feb 4, 2026

Will I be able to set this up for specific agents too? Or is this a global config?

@yassink

yassink commented Feb 4, 2026

Is this feature out?

@manascb1344
Author

Is this feature out?

Looks like it's in review by the maintainers.

@nwpr

nwpr commented Feb 7, 2026

Thanks for the work on model fallback and the refactor, this is a useful step forward.

I’d like to suggest a slightly different abstraction that could make this more flexible and predictable long-term: introducing “virtual models” as a first-class concept in the configuration.

Instead of only defining fallback behavior on concrete models or providers, a virtual model would act as an alias to a set of real provider-specific models, together with a selection strategy.

Key idea

  • Concrete models (e.g. anthropic/claude-opus-4.5) are always deterministic: selecting them always uses that exact model.
  • Virtual models encapsulate dynamic behavior and are explicitly opt-in.
  • The config defines:
    • the underlying models
    • how a model is chosen (round-robin, priority, sequential after rate limit, weighted, etc.)

Why this helps

  1. Clear semantics: Explicit model selection never changes behavior. Dynamic selection is only used when a virtual model is chosen.
  2. Operational control: Virtual models provide a single choke point for retries and switching policies, making it easier to reason about side effects, retries, and observability compared to implicit fallback on concrete models.
  3. Extensibility: New strategies can be added without complicating fallback rules.
  4. Consistency: Virtual models can be referenced anywhere a normal model can be used (global default, agent, session).

Example configuration

{
  "models": {
    "virtual/coding": {
      "type": "virtual",
      "strategy": "sequential",
      "models": [
        "anthropic/claude-opus-4.5",
        "openai/gpt-4.1",
        "openai/gpt-4o"
      ]
    },
    "virtual/fast": {
      "type": "virtual",
      "strategy": "round_robin",
      "models": [
        "openai/gpt-4o-mini",
        "openai/gpt-4.1-mini"
      ]
    }
  }
}

Semantics

  • Selecting anthropic/claude-opus-4.5 always uses that exact model.
  • Selecting virtual/coding delegates model choice to the virtual model’s strategy.
  • Possible strategies could include:
    • priority
    • round_robin
    • random
    • sequential (on rate limit, failure, etc.)
    • weighted

This aligns well with ongoing discussions around native fallback and multi-provider usage, and could build cleanly on top of the current implementation without breaking existing configs.



Development

Successfully merging this pull request may close these issues.

Support per rule model fallbacks for outages and credit depletion

5 participants