
[FEATURE]: Dynamic model selection for subagents via Task tool #6651

@mcking-07

Description


Feature hasn't been suggested before.

  • I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

Problem

When a primary agent invokes a subagent via the Task tool, it cannot dynamically choose which model the subagent uses. The subagent's model is currently determined by:

  • The agent's pre-configured model (in opencode.json)
  • Fallback to the parent session's model

This creates friction in two scenarios:

  • Users create duplicate agents with identical prompts just to vary the model - explore-quick, explore-standard, and explore-advanced all pointing to the same prompt file (see the sketch after this list).
  • The primary agent cannot assess task complexity and select an appropriate model: a trivial file search uses the same model as a complex architectural analysis, wasting either money or capability.
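
Today the only workaround is to duplicate the agent definition per model. A sketch of what that anti-pattern looks like in opencode.json (model IDs taken from the examples below; the three agents would additionally point at the same prompt file, omitted here for brevity):

{
  "agent": {
    "explore-quick": { "model": "anthropic/claude-haiku-4-5" },
    "explore-standard": { "model": "anthropic/claude-sonnet-4-5" },
    "explore-advanced": { "model": "anthropic/claude-opus-4-5" }
  }
}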

Solution A: Global Model Tiers

Add a model_tier parameter to the Task tool with three values: quick, standard, advanced. Users configure tier-to-model mappings globally.

Configuration

{
  "model_tiers": {
    "quick": { "model": "anthropic/claude-haiku-4-5" },
    "standard": { "model": "anthropic/claude-sonnet-4-5" },
    "advanced": { "model": "anthropic/claude-opus-4-5" }
  }
}

Task Tool Usage

Task(
  description = "Find config files",
  prompt = "List all JSON config files in the project",
  subagent_type = "explore",
  model_tier = "quick",
)

Why Not Raw Model IDs?

An alternative would be to expose a model parameter that accepts raw model IDs directly. This has drawbacks:

  • Training cutoff: the LLM may not know about models released after its training date
  • Model proliferation: providers like OpenRouter expose 100+ models - listing them bloats the tool description, omitting them leaves the LLM with no guidance
  • No validation: the LLM could specify an unavailable model, and the error would only surface at runtime

The tier abstraction solves all three: the LLM only assesses task complexity (which it can do), the mappings are user-configured and therefore guaranteed valid, and the token cost of the tool description is fixed (three enum values).
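
To make the fixed token cost concrete, here is a minimal sketch (an assumed shape, not opencode's actual Task tool schema) of how the parameter could be declared with zod; the existing parameter names are taken from the usage example above:

import { z } from "zod"

// Sketch only: the tier is a fixed three-value enum, so the tool description
// never has to enumerate concrete models or track provider catalogs.
const TaskParams = z.object({
  description: z.string().describe("Short description of the subtask"),
  prompt: z.string().describe("Prompt passed to the subagent"),
  subagent_type: z.string().describe("Which agent definition to run"),
  model_tier: z
    .enum(["quick", "standard", "advanced"])
    .optional()
    .describe("Relative task complexity; mapped to a concrete model from user config"),
})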


Solution B: Agent-Level Model Tiers

Add the same model_tier parameter, but configure tier-to-model mappings per-agent instead of globally.

Configuration

{
  "agent": {
    "explore": {
      "model_tiers": {
        "quick": { "model": "anthropic/claude-haiku-4-5" },
        "standard": { "model": "anthropic/claude-sonnet-4-5" },
        "advanced": { "model": "anthropic/claude-opus-4-5" }
      }
    },
    "code-review": {
      "model_tiers": {
        "quick": { "model": "anthropic/claude-sonnet-4-5" },
        "standard": { "model": "anthropic/claude-opus-4-5" },
        "advanced": { "model": "anthropic/claude-opus-4-5" }
      }
    }
  }
}

Task Tool Usage

Task(
  description = "Security-focused review",
  prompt = "Review auth.ts for vulnerabilities",
  subagent_type = "code-review",
  model_tier = "advanced",
)

This approach allows agent-specific mappings - quick for explore (a simple file-search agent) might map to Haiku, while quick for code-review might still need Sonnet.


Solution C: Unified Approach

Combine both solutions with layered resolution. Same model_tier parameter, but agent-level model_tiers override global tiers when defined.

Configuration

{
  "model_tiers": {
    "quick": { "model": "anthropic/claude-haiku-4-5" },
    "standard": { "model": "anthropic/claude-sonnet-4-5" },
    "advanced": { "model": "anthropic/claude-opus-4-5" }
  },
  "agent": {
    "code-review": {
      "model_tiers": {
        "quick": { "model": "anthropic/claude-sonnet-4-5" }
      }
    }
  }
}

Task Tool Usage

Same as Solutions A and B - the parameter is identical.

Resolution

  • explore with model_tier="quick" -> no agent override -> uses global tier -> Haiku
  • code-review with model_tier="quick" -> uses agent model_tiers -> Sonnet (override)
  • code-review with model_tier="advanced" -> no agent override -> falls back to global tier -> Opus

Model Selection Hierarchy

All solutions extend the existing hierarchy:

1. Agent's configured model              -> Explicit agent.model wins (tier ignored)
2. Agent-level model_tiers[tier]        -> If defined (Solution B/C)
3. Global model_tiers[tier]             -> If defined (Solution A/C)
4. Parent session's model               -> Fallback
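
A sketch of that resolution order in TypeScript (type and field names are illustrative, not opencode's actual internals):

type Tier = "quick" | "standard" | "advanced"
type TierMap = Partial<Record<Tier, { model: string }>>

interface AgentConfig {
  model?: string
  model_tiers?: TierMap
}

interface Config {
  model_tiers?: TierMap
  agent?: Record<string, AgentConfig>
}

function resolveModel(
  config: Config,
  agentName: string,
  tier: Tier | undefined,
  parentSessionModel: string,
): string {
  const agent = config.agent?.[agentName]
  // 1. An explicit agent.model always wins; the tier is ignored.
  if (agent?.model) return agent.model
  if (tier) {
    // 2. Agent-level tier mapping (Solution B/C).
    const agentTier = agent?.model_tiers?.[tier]?.model
    if (agentTier) return agentTier
    // 3. Global tier mapping (Solution A/C).
    const globalTier = config.model_tiers?.[tier]?.model
    if (globalTier) return globalTier
  }
  // 4. Fall back to the parent session's model.
  return parentSessionModel
}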

Implementation Notes

Solution A requires:

  • Task tool: add model_tier: z.enum(["quick", "standard", "advanced"]).optional()
  • Config: add model_tiers: { quick?: { model: string }, ... }
  • Model resolution: check global tier mapping before session fallback

Solution B requires:

  • Task tool: add model_tier: z.enum(["quick", "standard", "advanced"]).optional()
  • Agent config: add model_tiers: { quick?: { model: string }, ... }
  • Model resolution: check agent's tier mapping before session fallback

Solution C requires:

  • All of the above
  • Model resolution: check agent tier first, then global tier, then session fallback
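
For reference, the config-side addition shared by all three solutions could look roughly like this (a sketch of the proposed fields only, not of how they would be merged into opencode's existing config schema):

import { z } from "zod"

// Sketch of the proposed additions: each tier entry carries a model ID,
// and every tier is optional so users can map only the tiers they use.
const TierConfig = z.object({ model: z.string() })
const ModelTiers = z.object({
  quick: TierConfig.optional(),
  standard: TierConfig.optional(),
  advanced: TierConfig.optional(),
})

// Solution A: a top-level "model_tiers" key.
// Solution B/C: the same "model_tiers" key nested under "agent.<name>".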

Future Extensibility

The object structure for tier configuration allows future enhancements without breaking changes:

{
  "model_tiers": {
    "quick": {
      "model": "anthropic/claude-haiku-4-5",
      "temperature": 0.2,
      "max_tokens": 4000
    },
    "advanced": {
      "model": "anthropic/claude-opus-4-5",
      "temperature": 0.7
    }
  }
}

Potential future additions:

  • temperature - override default temperature for tier
  • max_tokens - limit output tokens for cost control
  • top_p - adjust sampling for tier
  • timeout - per-tier timeout settings

These are out of scope for the initial implementation, but the object structure already accommodates them.
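
For illustration, a possible future shape of a tier entry (only model would exist initially; the optional fields are the hypothetical additions listed above):

// Hypothetical future tier entry; optional fields can be added later
// without breaking configs that only set "model".
interface TierEntry {
  model: string
  temperature?: number // override the default temperature for this tier
  max_tokens?: number  // cap output tokens for cost control
  top_p?: number       // adjust sampling per tier
  timeout?: number     // per-tier timeout (units would need to be defined)
}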


Would any of these approaches be welcome? Is there a preferred direction, or alternative solutions already considered?
