
feat(settings): add model settings compatibility resolver#2643

Merged
code-yeongyu merged 5 commits into code-yeongyu:dev from RaviTharuma:feat/model-settings-compatibility-resolver
Mar 25, 2026

Conversation

Contributor

@RaviTharuma RaviTharuma commented Mar 17, 2026

PR Stack (2/3) — merge in order:

  1. feat(config): object-style fallback_models with per-model settings #2622 — object-style fallback_models
  2. feat(settings): add model settings compatibility resolver #2643 ← this PR (based on feat(config): object-style fallback_models with per-model settings #2622)
  3. refactor: deduplicate DelegatedModelConfig into shared module #2674 — deduplicate DelegatedModelConfig (based on this PR)

Reviewers: The diff includes changes from #2622. The commit unique to this PR is:

  • 3f4a2827 feat(model-settings-compat): add variant/reasoningEffort compatibility resolver

Summary

This PR introduces a central model settings compatibility resolver for request-time settings on an already-selected model.

This is not model fallback.

The goal is to answer a different question:

  • not: which model should we use?
  • but: given the selected model, which variant / reasoningEffort values are actually compatible with it?

Why this matters

Today, compatibility logic for model settings is scattered across hooks and prompt plumbing. That creates brittle behavior:

  • some paths clamp unsupported levels
  • some pass them through unchanged
  • some silently drop them
  • some depend on narrow model-family-specific special cases

That means a request can still fail or degrade badly even when the chosen model itself is correct.

This PR centralizes that logic so request-time settings become predictable and model-aware.

Scope

Phase 1 intentionally covers only:

  • variant
  • reasoningEffort

Out of scope for this PR:

  • model fallback itself
  • thinking
  • maxTokens
  • temperature
  • top_p
  • automatic upward remapping of levels

Architecture

Adds a new shared module:

  • src/shared/model-settings-compatibility.ts

The resolver returns the best compatible settings for the already-selected model using this policy:

  1. keep the requested value if supported
  2. otherwise downgrade to the nearest lower compatible level
  3. otherwise drop the field
  4. never switch models
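As a rough sketch of that policy for a single field (illustrative names only, assuming one ordered ladder of levels; not the actual module API):

```typescript
type Level = "low" | "medium" | "high" | "max";

// Ordered from lowest to highest; used to find the nearest lower level.
const LADDER: Level[] = ["low", "medium", "high", "max"];

// Hypothetical helper: resolve one requested level against the levels the
// already-selected model supports. The model itself is never switched; at
// worst the field is dropped (undefined).
function resolveLevel(requested: Level, supported: readonly Level[]): Level | undefined {
  if (supported.includes(requested)) return requested; // 1. keep if supported
  const start = LADDER.indexOf(requested);
  for (let i = start - 1; i >= 0; i--) {
    if (supported.includes(LADDER[i])) return LADDER[i]; // 2. nearest lower compatible level
  }
  return undefined; // 3. otherwise drop the field
}
```

For example, requesting `max` against a model that only supports up to `high` downgrades to `high` rather than failing the request.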

Important design choice

variant

variant is now metadata-first:

  • if OpenCode/provider metadata exposes model variants, those are the source of truth
  • only if metadata is missing do we fall back to conservative model-family heuristics

This makes the system much more robust for newly introduced models, because we do not rely purely on hard-coded model-name rules.
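The metadata-first lookup could be sketched roughly like this (the metadata shape, names, and the heuristic are assumptions for illustration, not the actual SDK types):

```typescript
// Hypothetical shape of what provider metadata might expose per model.
interface ModelMetadata {
  variants?: string[]; // present only when the provider publishes variants
}

// Hypothetical family heuristic, used only when metadata is absent.
function heuristicVariants(modelID: string): string[] | undefined {
  if (modelID.toLowerCase().includes("claude")) return ["low", "medium", "high"];
  return undefined; // unknown family: be conservative
}

// Metadata is the source of truth; heuristics are a conservative fallback.
function supportedVariants(modelID: string, metadata?: ModelMetadata): string[] | undefined {
  return metadata?.variants ?? heuristicVariants(modelID);
}
```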

reasoningEffort

reasoningEffort currently remains heuristic-fallback.

Reason: the currently available OpenCode SDK/provider metadata does not yet expose a dynamic capability source for reasoning-effort levels the way it does for variants.

So this PR uses a conservative family-based resolver for reasoningEffort until richer metadata becomes available.

Integration

First integration point:

  • src/plugin/chat-params.ts

That is the right initial runtime path because it is already the request-time normalization layer for provider-facing parameters.

The PR also composes correctly with the existing session prompt-params plumbing from the fallback/settings work, instead of replacing it.

Changes

  • New shared resolver
    • src/shared/model-settings-compatibility.ts
    • src/shared/model-settings-compatibility.test.ts
  • Exports
    • src/shared/index.ts
  • Runtime integration
    • src/plugin/chat-params.ts
    • src/plugin/chat-params.test.ts
    • src/plugin-interface.ts
  • Claude-specific supplement narrowed
    • src/hooks/anthropic-effort/hook.ts
    • src/hooks/anthropic-effort/index.test.ts
  • Spec and implementation plan
    • docs/superpowers/specs/2026-03-17-model-settings-compatibility-design.md
    • docs/superpowers/plans/2026-03-17-model-settings-compatibility-resolver.md

Verification

  • bun run typecheck
  • bun test
  • full suite on the branch: 4159 pass, 0 fail

Follow-up direction

The next logical expansion after this PR is to bring the same compatibility system to:

  • thinking
  • maxTokens
  • temperature
  • top_p

But the important foundation is now in place:

  • centralized
  • runtime-aware
  • metadata-first where possible
  • conservative where metadata is not available yet

Summary by cubic

Adds a central compatibility resolver that normalizes request-time variant and reasoningEffort for the selected model, and adds object-style fallback_models with per-model settings that are promoted and persisted across chat, background, and sync. Promotion is gated to real fallback matches, uses model metadata for variants, and stored session prompt params are applied and cleared correctly.

  • New Features

    • Compatibility resolver: keep/downgrade/drop; never switches models. Integrated in chat.params, uses client.provider.list() for variant; reasoningEffort uses conservative family heuristics. Applies stored session prompt params and clears them on session.deleted.
    • Object fallback_models: accept strings or objects with per-model settings (variant, reasoningEffort incl. none/minimal, temperature, top_p, maxTokens, thinking). Entries are flattened to strings for runtime fallback and selected by most-specific prefix; when chosen, settings are promoted to prompt options and DelegatedModelConfig, then persisted via the session prompt params store across chat/background/sync.
    • Plumbing: background manager/spawner/sync sender pass and persist these settings. Added KNOWN_VARIANTS (incl. minimal) and parsing/flattening helpers.
  • Bug Fixes

    • Promotion of object-entry settings is now gated to real fallback matches and respects most-specific prefix selection.
    • Anthropic effort hook widened to the full Claude Opus family; variant parsing tightened to avoid double suffixes and preserve parenthesized IDs.
    • Schemas accept reasoningEffort: "none" | "minimal" and object-style fallback_models; regenerated assets/oh-my-opencode.schema.json.

Written for commit 1e70f64. Summary will update on new commits.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
To continue using code reviews, you can upgrade your account or add credits to your account and enable them for code reviews in your settings.


@cubic-dev-ai cubic-dev-ai bot left a comment


13 issues found across 46 files

Confidence score: 2/5

  • Several high-confidence compatibility regressions in src/shared/model-settings-compatibility.ts can misclassify Bedrock Claude and OpenAI reasoning models as unknown, which drops valid variant settings (reasoningEffort, Opus capabilities) and can directly impact model behavior.
  • src/tools/delegate-task/sync-prompt-sender.ts may leave stale prompt/session params when all settings are dropped, creating a concrete risk that later requests reuse incorrect configuration.
  • src/plugin/chat-params.ts and src/features/opencode-skill-loader/config-source-discovery.ts introduce additional runtime-risk paths (unhandled providerList() rejection and glob matching that can unintentionally exclude nested skills), so this does not yet look low-risk to merge.
  • Pay close attention to src/shared/model-settings-compatibility.ts, src/tools/delegate-task/sync-prompt-sender.ts, src/plugin/chat-params.ts, src/features/opencode-skill-loader/config-source-discovery.ts - these are the highest-impact paths for dropped settings, stale params, and config resolution errors.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/features/background-agent/manager.ts">

<violation number="1" location="src/features/background-agent/manager.ts:516">
P2: The new model prompt-parameter mapping block is duplicated in both startTask and resume. This increases maintenance burden and risks divergence if model parameters change. Consider extracting a shared helper to apply prompt params for both flows.</violation>
</file>

<file name="src/features/opencode-skill-loader/config-source-discovery.ts">

<violation number="1" location="src/features/opencode-skill-loader/config-source-discovery.ts:43">
P2: Glob filtering only checks immediate directory and file path, so directory-targeting patterns can exclude nested skills unintentionally.</violation>
</file>

<file name="docs/superpowers/specs/2026-03-17-model-settings-compatibility-design.md">

<violation number="1" location="docs/superpowers/specs/2026-03-17-model-settings-compatibility-design.md:83">
P2: Design spec states family-based compatibility as primary, but resolver logic is metadata-first; this mismatch can mislead future maintenance.</violation>
</file>

<file name="src/tools/delegate-task/subagent-resolver.ts">

<violation number="1" location="src/tools/delegate-task/subagent-resolver.ts:50">
P2: Fallback entry sorting uses the maximum provider name length across all providers, not the length of the provider that matched the prefix, so a less specific match can override a more specific one.</violation>
</file>

<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:43">
P1: AWS Bedrock Anthropic provider IDs are not treated as Claude providers, causing Claude models on Bedrock to be misclassified as unknown and variant settings to be dropped.</violation>

<violation number="2" location="src/shared/model-settings-compatibility.ts:47">
P2: Opus detection is too narrow: `includes("claude-opus")` misses common IDs like `claude-3-opus-*`, causing Opus models to be treated as non-Opus and downgrading allowed variant capability.</violation>

<violation number="3" location="src/shared/model-settings-compatibility.ts:58">
P1: OpenAI reasoning models (e.g., `o1`/`o3-mini`) are misclassified as `unknown`, causing valid `reasoningEffort` settings to be removed.</violation>

<violation number="4" location="src/shared/model-settings-compatibility.ts:194">
P2: Case-only normalization is incorrectly recorded as an unsupported compatibility downgrade due to fallback reason assignment.</violation>
</file>

<file name="src/features/background-agent/spawner.ts">

<violation number="1" location="src/features/background-agent/spawner.ts:139">
P2: Duplicated model-to-promptOptions mapping and setSessionPromptParams logic exists in both startTask and resumeTask. This new duplication increases maintenance risk; changes to model mapping or options conversion could easily be missed in one copy. Consider extracting a shared helper to centralize this logic.</violation>
</file>

<file name="src/shared/tmux/tmux-utils.test.ts">

<violation number="1" location="src/shared/tmux/tmux-utils.test.ts:51">
P2: The modified test no longer calls `isInsideTmux()`, so the wrapper delegation it claims to verify is untested and the test intent/title is now mismatched.</violation>
</file>

<file name="src/tools/delegate-task/sync-prompt-sender.ts">

<violation number="1" location="src/tools/delegate-task/sync-prompt-sender.ts:73">
P2: Conditional session-param update can skip writes when all settings are dropped, leaving stale prompt params that get reused for later model requests.</violation>
</file>

<file name="src/tools/delegate-task/category-resolver.test.ts">

<violation number="1" location="src/tools/delegate-task/category-resolver.test.ts:326">
P2: The new "most specific prefix match" test uses an exact fallback match (`gpt-4o`), so it duplicates exact-over-fuzzy behavior instead of validating longest-prefix selection.</violation>
</file>

<file name="src/plugin/chat-params.ts">

<violation number="1" location="src/plugin/chat-params.ts:109">
P2: `providerList()` rejection is not handled, so metadata lookup failures can abort chat params handling instead of falling back.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 10 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:43">
P0: Custom agent: **Opencode Compatibility**

Do not unconditionally classify all models from `aws-bedrock`, `bedrock`, and `opencode` providers as Claude. They host other families (like Amazon Nova and OpenAI models) which support reasoning efforts that will be incorrectly dropped.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 14 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/plugin/chat-params.test.ts">

<violation number="1" location="src/plugin/chat-params.test.ts:273">
P1: Custom agent: **Opencode Compatibility**

The mock payload incorrectly uses `modelID` instead of `id`. The OpenCode SDK `Model` payload uses `id` for the model identifier.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@RaviTharuma
Contributor Author

@cubic-dev-ai I addressed the active review points and resolved the threads. If you still see an issue on the latest head (274f9e8b), please re-review the current diff state rather than the earlier commit context.


cubic-dev-ai bot commented Mar 17, 2026

@cubic-dev-ai I addressed the active review points and resolved the threads. If you still see an issue on the latest head (274f9e8b), please re-review the current diff state rather than the earlier commit context.

@RaviTharuma I have started the AI code review. It will take a few minutes to complete.


@cubic-dev-ai cubic-dev-ai bot left a comment


5 issues found across 47 files

Confidence score: 2/5

  • There is a concrete high-impact risk in src/plugin/chat-params.ts: unsanitized message.variant can flow into the compatibility resolver and trigger a runtime crash, and the current conditional spread does not actually validate the variant.
  • Compatibility handling appears incomplete in src/shared/model-settings-compatibility.ts and src/config/schema/fallback-models.ts (missing "amazon-bedrock" provider detection and missing reasoningEffort values "none"/"minimal"), which can cause valid Opencode configurations to be misclassified or rejected.
  • src/shared/session-prompt-params-state.ts shallow-copies options, so nested references may mutate shared session state unexpectedly; the src/hooks/anthropic-effort/hook.ts Opus-pattern concern is lower confidence but still worth verifying before merge.
  • Pay close attention to src/plugin/chat-params.ts, src/shared/model-settings-compatibility.ts, src/config/schema/fallback-models.ts, src/shared/session-prompt-params-state.ts - runtime crash risk, compatibility gaps, and potential state-mutation side effects.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/hooks/anthropic-effort/hook.ts">

<violation number="1" location="src/hooks/anthropic-effort/hook.ts:3">
P2: OPUS_PATTERN is too strict and fails to match standard Opus model IDs like `claude-3-opus-20240229`, so Opus models will be skipped despite intent to cover the whole family.</violation>
</file>

<file name="src/plugin/chat-params.ts">

<violation number="1" location="src/plugin/chat-params.ts:90">
P1: Unsanitized `message.variant` can reach compatibility resolver and crash at runtime; conditional spread is dead code and does not validate variant.</violation>
</file>

<file name="src/shared/session-prompt-params-state.ts">

<violation number="1" location="src/shared/session-prompt-params-state.ts:13">
P2: `options` is only shallow-copied, so nested values can be mutated through shared references and unintentionally alter global session state.</violation>
</file>

<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:48">
P1: Custom agent: **Opencode Compatibility**

The list of providers for detecting Claude models is missing `"amazon-bedrock"`. OpenCode uses `"amazon-bedrock"` as the standard provider ID for Amazon Bedrock (`@ai-sdk/amazon-bedrock`), so Claude models routed through this provider will currently not be correctly identified as part of the Claude family.</violation>
</file>

<file name="src/config/schema/fallback-models.ts">

<violation number="1" location="src/config/schema/fallback-models.ts:6">
P1: Custom agent: **Opencode Compatibility**

The `reasoningEffort` enum is missing the "none" and "minimal" values which are valid and supported in Opencode.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@cubic-dev-ai cubic-dev-ai bot left a comment


0 issues found across 6 files (changes from recent commits).

Requires human review: Large PR (3000+ lines) with significant scope creep into unrelated modules (skill discovery, command store refactor) and core logic changes that cannot be guaranteed regression-free.

@RaviTharuma
Contributor Author

@cubic-dev-ai Thanks for the thorough review. I've addressed the valid points, but I want to push back on several items and also acknowledge where our own approach needs fundamental improvement.


Where Cubic was right (and we fixed correctly)

These were genuine catches — thank you:

  1. Stale session params when all settings are dropped — real bug, good catch.
  2. Code duplication in spawner.ts (startTask/resumeTask) — extracted a shared helper, cleaner now.
  3. Missing providerList() error handling — would have crashed the hook on API failures.
  4. Opus detection too narrow: `claude-3-opus-*` wouldn't have matched.

Where we blindly followed Cubic and shouldn't have

1. "none" / "minimal" reasoningEffort (commit 4e9c41f)

Cubic claimed these are "valid and supported in Opencode" at confidence 9. This is incorrect. The Vercel AI SDK reasoningEffort parameter accepts low | medium | high. The none/minimal values don't exist in any provider's actual API. We added them to the Zod schema anyway, weakening our validation for values that no consumer will ever send.

Verdict: We should revert this. Accepting unknown values "just in case" is the opposite of type safety.

2. structuredClone for options (commit 4e9c41f)

Cubic flagged shallow copy of options as a mutation risk at confidence 8. In practice, options is a flat Record<string, string | number> — there are no nested references to mutate. Adding structuredClone is a performance penalty for zero safety gain.

Verdict: Over-engineering. The shallow spread was fine.

3. modelID vs id debate

Cubic insisted twice that we should use id instead of modelID because "the SDK uses id". Our buildChatParamsInput consumes config-derived payloads where the field is modelID. We added a model.modelID || model.id fallback — harmless, but solving a problem that doesn't exist in our code path.

Verdict: Not wrong, but unnecessary complexity driven by a misunderstanding of our data flow.


Where BOTH Cubic and we got it wrong: the fundamental design problem

Here's what I realized going through all of this: the entire detectModelFamily() approach is brittle by design. We're maintaining a growing list of provider ID strings and model name patterns:

```typescript
const isClaudeProvider = [
    "anthropic",
    "google-vertex-anthropic",
    "aws-bedrock-anthropic",
].includes(provider)
    || (["github-copilot", "opencode", "aws-bedrock", "bedrock"].includes(provider)
        && model.includes("claude"))
```

Cubic correctly pointed out we missed "amazon-bedrock". But the real problem is that every new provider or model naming convention will break this again. Tomorrow someone routes Claude through a custom proxy with provider ID "my-company-anthropic" and we're back to square one.

What we should do instead

The model metadata is already available at runtime via providerList(). We also have models.dev/api.json which provides structured data including family, reasoning, and capability flags for every major model.

The elegant fix is:

  1. Detect family from the model ID itself using simple, provider-agnostic heuristics: if the model ID contains claude, it's Claude. If it contains opus, it's Opus-class. If it matches /^o\d/, it's OpenAI reasoning. No provider allowlists needed.

  2. Use runtime metadata as primary source (which we already do for variant) and fall back to model-ID heuristics only when metadata is absent.

  3. Stop hardcoding provider IDs entirely. The provider tells us nothing that the model ID doesn't already tell us. A Claude model is a Claude model whether it comes through anthropic, aws-bedrock, opencode, or my-custom-proxy.

This would eliminate the entire class of bugs that both Cubic and I keep finding (missing provider IDs), and it would work automatically for any new provider without code changes.
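Under that approach, detection could look roughly like the following (an illustrative sketch of the idea, not the PR's actual heuristics):

```typescript
type Family = "claude-opus" | "claude" | "openai-reasoning" | "unknown";

// Classify purely from the model ID; the provider ID is deliberately ignored,
// so Claude via anthropic, aws-bedrock, or any custom proxy resolves the same.
function detectModelFamily(modelID: string): Family {
  const id = modelID.toLowerCase();
  if (id.includes("opus")) return "claude-opus"; // Opus-class Claude
  if (id.includes("claude")) return "claude";
  if (/^o\d/.test(id)) return "openai-reasoning"; // o1, o3-mini, ...
  return "unknown"; // treated conservatively downstream
}
```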


Summary

| Item | Action |
| --- | --- |
| Bedrock/provider detection | Fix properly: detect from model ID, not provider lists |
| `none`/`minimal` reasoningEffort | Revert — not real values |
| `structuredClone` for options | Revert — over-engineering |
| `modelID \|\| id` fallback | Keep (harmless) but acknowledge it's not needed |
| Remaining unaddressed items (glob filtering, `amazon-bedrock`) | Will be moot after the model-ID-based detection rewrite |

I'd like your thoughts on the model-ID-based detection approach before implementing.

@RaviTharuma
Contributor Author

@cubic-dev-ai I just pushed commit 7607560c which adds the provider-agnostic model family detection module I described in my previous comment.

What this commit adds:

src/shared/model-settings-compatibility.ts — New module that resolves variant and reasoningEffort compatibility using model-ID-based detection instead of hardcoded provider allowlists.

Key design:

  • detectModelFamily() classifies by model ID string alone — "claude" in the ID = Claude, /^o\d/ = OpenAI reasoning, etc.
  • Provider ID is deliberately unused (_providerID) — a Claude model is a Claude model whether it comes from anthropic, aws-bedrock, google-vertex-anthropic, or any custom proxy
  • Runtime metadata from providerList() capabilities takes precedence over heuristic fallbacks
  • Graceful downgrade via ordered ladders when a requested level is unsupported

src/shared/model-settings-compatibility.test.ts — 14 tests covering:

  • Variant/reasoningEffort resolution for all model families
  • Metadata-first approach overriding family heuristics
  • Case normalization (not recorded as a compatibility change)
  • Provider-agnostic detection across arbitrary provider IDs
  • Unknown model families treated conservatively

All 4164 tests pass. Please re-review.


cubic-dev-ai bot commented Mar 17, 2026

@cubic-dev-ai I just pushed commit 7607560c which adds the provider-agnostic model family detection module I described in my previous comment.

What this commit adds:

...

@RaviTharuma I have started the AI code review. It will take a few minutes to complete.

@RaviTharuma
Contributor Author

@cubic-dev-ai Pushed commit `625813ba` — fundamental architectural refactor of the model-settings-compatibility module.

What changed

Before: 12 near-identical if/else branches, each with a hardcoded allowed-array. A closed `ModelFamily` union type that required code changes for every new model family. Gemini, Kimi, GLM, Minimax, DeepSeek, Mistral, Llama all silently dropped to `"unknown"` and lost their settings.

After: A single `FAMILY_CAPABILITIES` data registry. One row per model family. Zero code changes needed to add a new family — just a registry entry. The 12 if/else branches in `resolveVariant()` and `resolveReasoningEffort()` are replaced by one generic `resolveField()` function.

Key design decisions

  1. Capabilities as data, not code — the registry is a plain `Record<string, { variants?, reasoningEffort? }>`. New models = new data, not new branches.

  2. Three-tier resolution with correct reason codes:

    • Runtime metadata → `"unsupported-by-model-metadata"`
    • Family heuristic → `"unsupported-by-model-family"`
    • Unknown family → `"unknown-model-family"`
    • Known family but field absent (e.g. Claude + reasoningEffort) → `"unsupported-by-model-family"` (not "unknown")
  3. Barrel export added — module was dead code before, now exported from `src/shared/index.ts`.

  4. All 14 existing tests pass unchanged — the refactor is behavior-preserving.

Net: -62 lines (85 added, 147 removed). Every model from `model-requirements.ts` fallback chains now has family coverage.
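The capabilities-as-data shape described above might look like this minimal sketch (a simplified illustration, not the actual module; the downgrade ladder is omitted for brevity):

```typescript
interface FamilyCapabilities {
  variants?: readonly string[];
  reasoningEffort?: readonly string[];
}

// One row per model family: adding a family is a data change, not a new branch.
const FAMILY_CAPABILITIES: Record<string, FamilyCapabilities> = {
  "claude-opus": { variants: ["low", "medium", "high", "max"] },
  "claude": { variants: ["low", "medium", "high"] },
  "openai-reasoning": { reasoningEffort: ["low", "medium", "high"] },
};

// One generic resolver replaces the per-family if/else branches:
// keep the value if the family allows it, otherwise drop the field.
function resolveField(
  value: string | undefined,
  allowed: readonly string[] | undefined,
): string | undefined {
  if (value === undefined || allowed === undefined) return undefined;
  return allowed.includes(value) ? value : undefined;
}
```

A known family with an absent field (e.g. Claude + `reasoningEffort`) naturally drops the field here, matching the "unsupported-by-model-family" case above.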

Next: pipeline integration (calling `resolveCompatibleModelSettings` from the actual resolution path).


@cubic-dev-ai cubic-dev-ai bot left a comment


6 issues found across 49 files

Confidence score: 2/5

  • There is meaningful regression risk in core compatibility logic: src/shared/model-settings-compatibility.ts uses hardcoded provider checks in detectModelFamily, which can misclassify providers and undermine the intended descriptor-driven behavior (severity 8/10, high confidence).
  • A concrete user-facing bug is likely in src/shared/model-settings-compatibility.ts where reasoningEffort values none and minimal are accepted in config but then removed during compatibility resolution, causing settings to be silently lost.
  • Fallback behavior also looks fragile in src/hooks/runtime-fallback/fallback-models.ts: flattening fallback model objects to strings drops per-model settings before chain construction, so richer fallback config may not propagate as expected.
  • Pay close attention to src/shared/model-settings-compatibility.ts, src/hooks/runtime-fallback/fallback-models.ts, and src/tools/delegate-task/subagent-resolver.ts - compatibility and fallback-chain resolution are where setting loss and incorrect model selection are most likely.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:37">
P1: `reasoningEffort` values `none` and `minimal` are accepted by config but always stripped by compatibility resolution because they are missing from both ladder and allow-lists.</violation>

<violation number="2" location="src/shared/model-settings-compatibility.ts:39">
P1: Custom agent: **Opencode Compatibility**

The `detectModelFamily` function relies heavily on hardcoded provider strings (`provider === "openai"`, `[...].includes(provider)`), which completely contradicts the stated design goal of a provider-agnostic detection module. This approach prevents custom proxy configurations from correctly mapping Claude or OpenAI models to their respective families, degrading settings compatibility.</violation>
</file>

<file name="src/tools/delegate-task/subagent-resolver.ts">

<violation number="1" location="src/tools/delegate-task/subagent-resolver.ts:198">
P2: getFallbackModelConfig uses configuredFallbackChain instead of the effective fallbackChain, so settings from agentRequirement.fallbackChain are ignored when no user fallback overrides exist.</violation>
</file>

<file name="src/features/opencode-skill-loader/config-source-discovery.ts">

<violation number="1" location="src/features/opencode-skill-loader/config-source-discovery.ts:44">
P2: Root-level skills can incorrectly match subdirectory-only glob patterns because an empty relative dir is converted to "/" and matched against patterns like `*/`. Guard the trailing-slash match when the relative dir is empty.</violation>
</file>

<file name="src/hooks/runtime-fallback/fallback-models.ts">

<violation number="1" location="src/hooks/runtime-fallback/fallback-models.ts:22">
P1: Flattening fallback model objects to strings drops per-model settings before fallback-chain construction, so rich fallback config cannot propagate.</violation>
</file>

<file name="src/shared/fallback-chain-from-models.ts">

<violation number="1" location="src/shared/fallback-chain-from-models.ts:71">
P3: parseFallbackModelObjectEntry duplicates the string parsing logic from parseFallbackModelEntry; consider reusing the existing function or a shared helper to avoid maintainability drift.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@RaviTharuma
Contributor Author

@cubic-dev-ai Pushed commit `6ac7de55` — test coverage for all model families in the registry.

What this adds

29 new tests (43 total, was 14) covering every family in FAMILY_CAPABILITIES:

| Family | Model IDs tested | Variant support | ReasoningEffort |
| --- | --- | --- | --- |
| Gemini | `gemini-3.1-pro` | low/medium/high | dropped |
| Kimi | `kimi-k2.5`, `k2-v2` | low/medium/high | dropped |
| GLM | `glm-5` | low/medium/high | dropped |
| Minimax | `minimax-m2.5` | low/medium/high | dropped |
| DeepSeek | `deepseek-r2` | low/medium/high | dropped |
| Mistral | `mistral-large-next` | low/medium/high | dropped |
| Codestral→Mistral | `codestral-2506` | low/medium/high | dropped |
| Llama | `llama-4-maverick` | low/medium/high | dropped |

Also tests GPT-5 xhigh variant+reasoningEffort support and empty-desired passthrough.

Each family gets 3 tests:

  1. Keeps highest supported variant unchanged
  2. Downgrades unsupported variant (max→high)
  3. Correctly drops/keeps reasoningEffort based on family capabilities

89 assertions across 43 tests, all passing. Please re-review.


cubic-dev-ai bot commented Mar 17, 2026

@cubic-dev-ai Pushed commit `6ac7de55` — test coverage for all model families in the registry.

What this adds

29 new tests (43 total, was 14) covering every family in FAMILY_CAPABILITIES:
...

@RaviTharuma I have started the AI code review. It will take a few minutes to complete.

@RaviTharuma
Contributor Author

@cubic-dev-ai Pushed `0dc50c95` — unified registry: detection + capabilities in one data structure.

Before (two separate concerns)

  1. `FAMILY_CAPABILITIES` — a `Record<string, FamilyCapabilities>` holding variant/reasoningEffort arrays
  2. `detectModelFamily()` — 15 lines of if/else returning a string key to look up in the record

Adding a new model required changes in TWO places.

After (single source of truth)

```typescript
const MODEL_FAMILY_REGISTRY: ReadonlyArray<readonly [string, FamilyDefinition]> = [
  ["claude-opus", { pattern: /claude(?:-\d+(?:-\d+)*)?-opus/, variants: ["low", "medium", "high", "max"] }],
  ["claude-non-opus", { includes: ["claude"], variants: ["low", "medium", "high"] }],
  // ... one row per family, detection + capabilities together
]
```

Detection is now a 6-line loop:
```typescript
function detectFamily(_providerID: string, modelID: string): FamilyDefinition | undefined {
  const model = normalizeModelID(modelID).toLowerCase()
  for (const [, def] of MODEL_FAMILY_REGISTRY) {
    if (def.pattern?.test(model)) return def
    if (def.includes?.some((s) => model.includes(s))) return def
  }
  return undefined
}
```

Returns the `FamilyDefinition` directly — no intermediate string key, no separate lookup. `resolveCompatibleModelSettings` uses `family?.variants` and `family?.reasoningEffort` directly.

Net -17 lines. 43 tests pass unchanged. Adding a future model family = one array entry. Adding a future field (e.g. `thinkingBudget`) = one property in `FamilyDefinition` + one `resolveField()` call.
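As a hypothetical illustration of "one array entry", here is the registry pattern with an invented `qwen` row added. The `FamilyDefinition` shape is reduced to the fields shown above, and `normalizeModelID` plus the provider argument are omitted for brevity; none of this is the PR's actual code.

```typescript
type FamilyDefinition = {
  pattern?: RegExp
  includes?: string[]
  variants?: string[]
  reasoningEffort?: string[]
}

const MODEL_FAMILY_REGISTRY: ReadonlyArray<readonly [string, FamilyDefinition]> = [
  ["claude-opus", { pattern: /claude(?:-\d+(?:-\d+)*)?-opus/, variants: ["low", "medium", "high", "max"] }],
  ["claude-non-opus", { includes: ["claude"], variants: ["low", "medium", "high"] }],
  ["qwen", { includes: ["qwen"], variants: ["low", "medium", "high"] }], // ← the only change needed
]

function detectFamily(modelID: string): FamilyDefinition | undefined {
  const model = modelID.toLowerCase()
  for (const [, def] of MODEL_FAMILY_REGISTRY) {
    if (def.pattern?.test(model)) return def
    if (def.includes?.some((s) => model.includes(s))) return def
  }
  return undefined
}

console.log(detectFamily("qwen-3-coder")?.variants) // ["low", "medium", "high"]
```

First-match iteration also explains why the more specific `claude-opus` row must precede `claude-non-opus`.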

@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@cubic-dev-ai Thanks for the review. I need to address each issue against the current HEAD (`0dc50c95`), because several of these reference code that no longer exists.


Issue 1: Missing `"amazon-bedrock"` provider (confidence 10)

Moot. The current code does not use provider ID for detection at all. `detectFamily()` takes `_providerID` (unused) and classifies purely by model ID string. A Claude model through `amazon-bedrock`, `my-custom-proxy`, or any future provider is detected correctly. This was the entire point of the provider-agnostic redesign (commits `7607560c` → `625813ba` → `0dc50c95`). Test coverage confirms:

```typescript
for (const providerID of [
  "anthropic", "aws-bedrock", "bedrock", "amazon-bedrock",
  "opencode", "my-custom-proxy", "google-vertex-anthropic",
]) {
  // all correctly detected as claude-non-opus
}
```

Issue 2: `reasoningEffort` missing "none" and "minimal" (confidence 9)

Invalid. The Vercel AI SDK `reasoningEffort` parameter accepts `low | medium | high`. The values `none` and `minimal` do not exist in any provider's actual API (OpenAI, Anthropic, Google). Adding them to the Zod schema would weaken validation for values no consumer sends. I addressed this in detail in my earlier comment (2026-03-17T17:58:15Z).

Issue 3: OPUS_PATTERN too strict (confidence 0.47)

Invalid. The current regex `/claude(?:-\d+(?:-\d+)*)?-opus/` matches:

  • `claude-opus-4-6` ✓
  • `claude-3-opus-20240229` ✓ (the `(?:-\d+(?:-\d+)*)` group captures `-3`)

There's a dedicated test for exactly this case that passes.
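The claim is directly checkable against the pattern quoted above:

```typescript
// The opus pattern from the registry row quoted above.
const OPUS_PATTERN = /claude(?:-\d+(?:-\d+)*)?-opus/

console.log(OPUS_PATTERN.test("claude-opus-4-6"))        // true — optional digit group matches empty
console.log(OPUS_PATTERN.test("claude-3-opus-20240229")) // true — "-3" captured by the digit group
console.log(OPUS_PATTERN.test("claude-sonnet-4"))        // false
```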

Issue 4: Shallow copy of `options` (confidence 8)

Acceptable risk. `options` is a flat `Record<string, unknown>` in practice — the values are primitives (strings, numbers, booleans). `structuredClone` would add a performance penalty for zero safety gain. Also, this file (`session-prompt-params-state.ts`) is not part of our PR's changes — it's existing code.

Issue 5: `detectModelFamily` uses hardcoded provider strings (confidence 10)

Moot — same as Issue 1. The function `detectModelFamily()` no longer exists. It was replaced by `detectFamily()` which does not reference provider ID at all. Please review commit `0dc50c95`.

Issue 6: `reasoningEffort` "none"/"minimal" stripped (confidence 9)

Duplicate of Issue 2. Same answer: these are not real API values.


Summary: 0 of 6 issues are actionable on the current HEAD. Issues 1 and 5 reference code that was removed three commits ago. Issues 2 and 6 are duplicates proposing invalid API values. Issue 3 is factually wrong (test proves it). Issue 4 is about code outside our PR scope.

Please re-review against `0dc50c95`. The module is now 175 lines with a unified registry — detection + capabilities in one data structure, 43 tests, 89 assertions.

Copy link
Copy Markdown

@cubic-dev-ai cubic-dev-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

4 issues found across 49 files

Confidence score: 2/5

  • There is a clear regression risk in src/shared/model-settings-compatibility.ts: OpenAI compatibility arrays omit valid upstream reasoning levels ("none", "minimal"), which can incorrectly reject legitimate configurations and impact users at runtime.
  • src/plugin/chat-params.ts removed prior variant sanitization; with the current no-op spread, non-string message.variant values may reach .toLowerCase() and throw, making this a concrete stability concern.
  • The remaining issues in src/shared/fallback-chain-from-models.ts and src/tools/delegate-task/background-task.ts are maintenance/drift risks rather than immediate breakage, but they increase long-term change fragility.
  • Pay close attention to src/shared/model-settings-compatibility.ts, src/plugin/chat-params.ts, src/shared/fallback-chain-from-models.ts, src/tools/delegate-task/background-task.ts - compatibility rejection and runtime type-safety regressions are the main risks, with duplicated parsing/type structures adding drift risk.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:37">
P1: Custom agent: **Opencode Compatibility**

The `REASONING_LADDER` and `allowed` arrays for OpenAI model families omit the `"none"` and `"minimal"` levels, causing these valid upstream configurations to be incorrectly dropped at runtime.</violation>
</file>

<file name="src/plugin/chat-params.ts">

<violation number="1" location="src/plugin/chat-params.ts:92">
P1: `variant` sanitization was removed; the new conditional spread is a no-op, allowing non-string `message.variant` values to reach `.toLowerCase()` and potentially throw at runtime.</violation>
</file>

<file name="src/shared/fallback-chain-from-models.ts">

<violation number="1" location="src/shared/fallback-chain-from-models.ts:74">
P2: `parseFallbackModelObjectEntry` duplicates core model-string parsing logic already present in `parseFallbackModelEntry`, creating avoidable drift/maintenance risk.</violation>
</file>

<file name="src/tools/delegate-task/background-task.ts">

<violation number="1" location="src/tools/delegate-task/background-task.ts:18">
P2: A complex inline `categoryModel` type duplicates an existing model settings structure, increasing drift risk when model fields evolve.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@cubic-dev-ai Hey — I owe you a correction on the "none" and "minimal" reasoning effort issue.

You were right. I was wrong. Three times in a row, actually — sorry about that.

I went back and checked the Vercel AI SDK source properly this time. The reasoningEffort type is:

`'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh'`

All six values are legitimate. "none" and "minimal" are real values that downstream consumers can send, and our compatibility layer was silently dropping them because they weren't in the ladder or the family definitions.

I've pushed 240bc281 which fixes this:

  • REASONING_LADDER now includes all six levels: ["none", "minimal", "low", "medium", "high", "xhigh"]
  • OpenAI family definitions updated — openai-reasoning, gpt-5, and gpt-legacy all include "none" and "minimal" in their reasoningEffort arrays
  • Zod schema (FallbackModelObjectSchema) updated to accept all six values
  • 3 new tests covering "none" for GPT-5, "minimal" for GPT-5, and "none" for o-series models

All 46 tests pass (92 assertions).

Thanks for being persistent on this one. The fix was straightforward once I stopped arguing and actually looked at the SDK types.
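With the full six-level ladder in place, downgrade resolution walks from the desired level down to the nearest level the family allows. A self-contained sketch (the o-series allowed set below is an assumption for illustration, not the PR's actual family definition):

```typescript
// The six-level ladder from commit 240bc281.
const REASONING_LADDER = ["none", "minimal", "low", "medium", "high", "xhigh"] as const
type Effort = (typeof REASONING_LADDER)[number]

function downgradeEffort(desired: Effort, allowed: readonly Effort[]): Effort | undefined {
  // walk downward from the desired level to the nearest allowed one
  for (let i = REASONING_LADDER.indexOf(desired); i >= 0; i--) {
    if (allowed.includes(REASONING_LADDER[i])) return REASONING_LADDER[i]
  }
  return undefined
}

// Assumed o-series capability set: everything up to "high", no "xhigh".
const O_SERIES: readonly Effort[] = ["none", "minimal", "low", "medium", "high"]
console.log(downgradeEffort("xhigh", O_SERIES)) // "high" — capped at the family maximum
console.log(downgradeEffort("none", O_SERIES))  // "none" — accepted after the fix
```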

Copy link
Copy Markdown

@cubic-dev-ai cubic-dev-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

1 issue found across 7 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:183">
P1: Custom agent: **Opencode Compatibility**

The "minimal" reasoning effort is only supported for GPT-5 family models in the OpenCode SDK. Allowing it for the openai-reasoning family causes compatibility issues.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@RaviTharuma RaviTharuma force-pushed the feat/model-settings-compatibility-resolver branch from 2c921d8 to cd1f7f5 Compare March 18, 2026 13:31
@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@cubic-dev-ai please review

@cubic-dev-ai
Copy link
Copy Markdown

cubic-dev-ai bot commented Mar 18, 2026

@cubic-dev-ai please review

@RaviTharuma I have started the AI code review. It will take a few minutes to complete.

Copy link
Copy Markdown

@cubic-dev-ai cubic-dev-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

2 issues found across 45 files

Confidence score: 3/5

  • There is a concrete behavior mismatch in src/shared/model-settings-compatibility.test.ts: reasoningEffort is currently allowed for standard GPT models (for example gpt-4.1/gpt-5.4), which can lead to invalid OpenAI API requests and user-facing failures.
  • src/shared/model-resolver.ts has a real regression risk because flattenToFallbackModelStrings unconditionally removes parenthesized suffixes, which can corrupt legitimate model IDs and break fallback resolution.
  • Given the medium-high severities (7/10 and 6/10) with high confidence (9/10), this looks like some merge risk rather than a merge blocker if these paths are corrected.
  • Pay close attention to src/shared/model-settings-compatibility.test.ts and src/shared/model-resolver.ts - parameter compatibility and model ID normalization may produce invalid requests or incorrect model resolution.
Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-settings-compatibility.test.ts">

<violation number="1" location="src/shared/model-settings-compatibility.test.ts:128">
P1: The implementation incorrectly permits `reasoningEffort` for standard GPT models (gpt-4.1, gpt-5.4) which don't support this OpenAI API parameter. According to OpenAI documentation, `reasoning_effort` is only supported on o-series and GPT-5 reasoning models. For standard GPT models, this parameter should be dropped entirely (like it's done for Claude models) to prevent API errors.</violation>
</file>

<file name="src/shared/model-resolver.ts">

<violation number="1" location="src/shared/model-resolver.ts:93">
P2: Unconditional stripping of parenthesized suffixes in `flattenToFallbackModelStrings` can destroy legitimate model IDs. The first `.replace(/\([^()]+\)\s*$/, "")` unconditionally strips any parenthesized content without validating against `KNOWN_VARIANTS` (unlike the space-suffix replacement below it which does validate). If a model ID contains legitimate parentheses (e.g., `local/llama(Q4)` for quantization), the suffix will be incorrectly deleted. Apply the same validation pattern used for space-suffix variants.</violation>
</file>
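A sketch of the guarded stripping this suggests, where a parenthesized suffix is removed only when it names a recognized variant (the contents of `KNOWN_VARIANTS` are assumed here for illustration):

```typescript
// Assumed variant set; strip a "(...)" suffix only when it is a known variant.
const KNOWN_VARIANTS = new Set(["low", "medium", "high", "max", "xhigh"])

function stripVariantSuffix(modelID: string): string {
  const m = modelID.match(/\(([^()]+)\)\s*$/)
  if (m && KNOWN_VARIANTS.has(m[1].toLowerCase())) {
    return modelID.replace(/\([^()]+\)\s*$/, "").trimEnd()
  }
  // unrecognized parenthesized content is part of the model ID — leave it intact
  return modelID
}

console.log(stripVariantSuffix("claude-opus (max)")) // "claude-opus"
console.log(stripVariantSuffix("local/llama(Q4)"))   // "local/llama(Q4)" — preserved
```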

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Copy link
Copy Markdown

@cubic-dev-ai cubic-dev-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

3 issues found across 3 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="src/shared/model-settings-compatibility.test.ts">

<violation number="1" location="src/shared/model-settings-compatibility.test.ts:121">
P1: Custom agent: **Opencode Compatibility**

The test description incorrectly states that `gpt-5.4-mini` drops `reasoningEffort`. 

Due to the `model.includes("gpt-5")` check in `detectModelFamily`, `gpt-5.4-mini` is classified as the `gpt-5` family, which actually preserves the `reasoningEffort` setting. If `gpt-5.4-mini` should drop this setting to prevent API errors, the model detection logic in `src/shared/model-settings-compatibility.ts` needs to be corrected. If the current behavior is intended, the test description and internal comments should be updated.</violation>

<violation number="2" location="src/shared/model-settings-compatibility.test.ts:121">
P2: Lost test coverage for `reasoningEffort` downgrade policy. The PR removed the test for downgrading unsupported reasoningEffort values, but the replacement tests don't cover the downgrade path for model families that DO support reasoning effort (gpt-5 and openai-reasoning). The downgrade logic within `resolveReasoningEffort` for these families is completely untested.</violation>
</file>

<file name="src/shared/model-settings-compatibility.ts">

<violation number="1" location="src/shared/model-settings-compatibility.ts:196">
P1: Non-openai o-series models are misclassified as `gpt-legacy`, so the new `gpt-legacy` rule now incorrectly drops `reasoningEffort`.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Copy link
Copy Markdown
Contributor Author

@RaviTharuma RaviTharuma left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Re: Automated Review — All 3 Issues Dismissed

Thanks for the automated review. After careful analysis, all three flagged issues are either factually incorrect or already addressed by existing test coverage.


Violation 1 (P1): "Test incorrectly states gpt-5.4-mini drops reasoningEffort"

Invalid. The test at line 121 tests gpt-4.1, not gpt-5.4-mini. The bot hallucinated the model ID:

// Line 121-137 — actual test code:
test("drops reasoningEffort for standard GPT models (gpt-4.1)", () => {
  const result = resolveCompatibleModelSettings({
    providerID: "openai",
    modelID: "gpt-4.1",  // ← NOT gpt-5.4-mini
    desired: { reasoningEffort: "high" },
  })
  expect(result.reasoningEffort).toBeUndefined()
})

gpt-4.1 correctly matches the gpt-legacy family (which has no reasoningEffort support), so dropping it is the correct behavior.


Violation 2 (P2): "Lost test coverage for reasoningEffort downgrade"

Already addressed. Two tests explicitly cover the downgrade path for families that do support reasoningEffort:

  • Lines 371–387: "o-series downgrades xhigh reasoningEffort to high" — verifies the xhigh → high downgrade within the o-series family (which caps at high).
  • Lines 389–401: "GPT-5 keeps xhigh reasoningEffort" — verifies xhigh stays for GPT-5 (which supports up to xhigh).

Together these cover both the "stays when supported" and "downgrades when unsupported" paths for reasoningEffort.


Violation 3 (P1): "Non-OpenAI o-series models misclassified as gpt-legacy at line 196"

Invalid on multiple counts:

  1. Line 196 doesn't exist: `model-settings-compatibility.ts` is 176 lines long.
  2. Registry ordering prevents this: `openai-reasoning` (pattern: `/^o\d(?:$|-)/`) appears at line 54, before `gpt-legacy` (line 56). `detectFamily()` iterates in order and returns on the first match, so any `o3-mini`-style ID matches `openai-reasoning` first and never reaches `gpt-legacy`.
  3. o-series naming is exclusively OpenAI's — there are no non-OpenAI models named `o3-*`. The detection is provider-agnostic by design (the test at lines 236–245 verifies that `o3-mini` via the `azure-openai` provider is handled correctly).

Conclusion: No code changes needed. All three violations stem from incorrect analysis by the automated reviewer (hallucinated model IDs, non-existent line references, and missed existing test coverage).

@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@acamq please check it again :)

@code-yeongyu code-yeongyu added the triage:feature-request Feature or enhancement request label Mar 24, 2026
@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@code-yeongyu JFYI this is not a feature; it's a bug fix for failures that happen when a model falls back and the requested thinking levels etc. are not supported.

Copy link
Copy Markdown
Owner

@code-yeongyu code-yeongyu left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Direction Review

Verdict: DIRECTION APPROVED — needs rebase + cubic issue fixes before merge

What this does right

The core idea is solid: centralizing model settings compatibility resolution instead of scattering it across hooks. The metadata-first approach for variant (checking model metadata before falling back to heuristics) is the correct design — it makes the system robust for newly introduced models without requiring hard-coded rules.

The separation from model fallback is important and well-articulated: "which settings are compatible with this model" vs "which model should we use" are fundamentally different concerns.

Concerns

  1. PR stack dependency: This is PR 2/3, dependent on #2622 (object-style fallback_models). Both are CONFLICTING. The stack needs to be rebased against current dev before merge is possible.

  2. cubic found real issues (latest review — 3 issues):

    • gpt-5.4-mini misclassified as gpt-5 family via model.includes("gpt-5") — may incorrectly preserve reasoningEffort for mini models
    • Non-OpenAI o-series models misclassified as gpt-legacy — drops reasoningEffort incorrectly
    • Missing test coverage for reasoningEffort downgrade path
  3. Scope creep: 45 files changed, +2569/-258. The PR description says it only covers variant and reasoningEffort, but the diff touches delegate-task (subagent-resolver, category-resolver, sync-prompt-sender), background agent manager/spawner, and event system. That is well beyond "add a settings resolver."

  4. cubic confidence consistently 2/5 across multiple reviews — the hardcoded provider checks and model family detection logic are fragile. Author (RaviTharuma) has been responsive and dismissing issues with explanations, but the fundamental approach of model.includes("gpt-5") string matching is inherently brittle.

Recommendation

  • Direction: YES — this is the right architectural direction
  • Merge now: NO — needs rebase on dev, cubic issues addressed, and ideally the scope tightened (split delegate-task changes into a separate follow-up if possible)
  • Priority: Medium — not blocking current release, but valuable foundation for settings normalization

@RaviTharuma Great work on the design doc and spec. The metadata-first approach is exactly right. Main ask: rebase onto current dev and address the model family detection issues cubic flagged.

@RaviTharuma
Copy link
Copy Markdown
Contributor Author

Follow-up note on the direction review: the remaining compatibility/runtime issues cubic found from this stack are addressed on the live follow-up PR #2674 in commit 53e1e8f.

That follow-up does three things:

  • drops reasoningEffort for gpt-legacy models so standard GPT requests do not keep unsupported values
  • guards non-string message.variant before it reaches the compatibility resolver
  • centralizes session prompt-param application through the shared helper used by manager/spawner/sync prompt paths
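The non-string guard in the second bullet can be as small as this sketch (the `message` shape and the helper name are assumed, not the follow-up PR's actual code):

```typescript
// Only lowercase variant when it is actually a string; anything else is dropped
// instead of reaching .toLowerCase() and throwing at runtime.
function normalizeVariant(message: { variant?: unknown }): string | undefined {
  return typeof message.variant === "string" ? message.variant.toLowerCase() : undefined
}

console.log(normalizeVariant({ variant: "HIGH" })) // "high"
console.log(normalizeVariant({ variant: 3 }))      // undefined — no crash on non-strings
```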

Fresh local verification on the follow-up branch:

  • bun test src/shared/model-settings-compatibility.test.ts src/plugin/chat-params.test.ts src/features/background-agent/spawner.test.ts src/features/background-agent/manager.test.ts src/tools/delegate-task/sync-prompt-sender.test.ts → 180 pass, 0 fail
  • bunx tsc --noEmit → clean

The stack/rebase point still stands: #2622, #2643, and #2674 remain stacked and need to be restacked/rebased against current dev before merge.

@code-yeongyu code-yeongyu force-pushed the feat/model-settings-compatibility-resolver branch from 3f4a282 to 039afa5 Compare March 25, 2026 08:47
@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@code-yeongyu I saw your March 25 force-push on this branch (3f4a282 -> 039afa54) and treated 039afa54 as the authoritative restacked resolver head. I then moved the remaining branches to match it: #2622 is now 1de5b66a and #2674 is now 67ec609c, both on current dev, so the whole stack is aligned now.

Copy link
Copy Markdown

@cubic-dev-ai cubic-dev-ai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

0 issues found across 7 files (changes from recent commits).

Requires human review: Large diff (4k+ lines) touching core request normalization, schema, and session persistence. Centralizing compatibility logic has potential for unintended downgrades in specific model paths.

…tings

Add support for object-style entries in fallback_models arrays, enabling
per-model configuration of variant, reasoningEffort, temperature, top_p,
maxTokens, and thinking settings.

- Zod schema for FallbackModelObject with full validation
- normalizeFallbackModels() and flattenToFallbackModelStrings() utilities
- Provider-agnostic model resolution pipeline with fallback chain
- Session prompt params state management
- Fallback chain construction with prefix-match lookup
- Integration across delegate-task, background-agent, and plugin layers
…y resolver

- Registry-based model family detection (provider-agnostic)
- Variant and reasoningEffort ladder downgrade logic
- Three-tier resolution: metadata override → family heuristic → unknown drop
- Comprehensive test suite covering all model families
@RaviTharuma RaviTharuma force-pushed the feat/model-settings-compatibility-resolver branch from ade46f3 to 1e70f64 Compare March 25, 2026 10:15
@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@code-yeongyu restacked this branch onto the current dev head (7761e48d) and force-pushed a new head: 1e70f640.

No new code changes beyond the rebase were needed on this PR after replaying the stack.

Local verification on 1e70f640:

  • bunx tsc --noEmit
  • bun run build
  • focused compatibility/delegate-task suite: 44 pass, 0 fail

Fresh CI run for this push: 23535864601 (in progress).

@RaviTharuma
Copy link
Copy Markdown
Contributor Author

@code-yeongyu final status on this PR after the latest restack:

  • base is current dev (7761e48d)
  • head is 1e70f640
  • local verification passed (bunx tsc --noEmit, bun run build, focused resolver/delegate-task suite)
  • fresh GitHub CI is now green
  • GitHub merge state is now clean

From my side this PR is ready for your normal review/merge flow as stack item 2 of 3, after #2622.

@code-yeongyu code-yeongyu merged commit 1d48518 into code-yeongyu:dev Mar 25, 2026
7 checks passed
@RaviTharuma RaviTharuma deleted the feat/model-settings-compatibility-resolver branch March 28, 2026 08:19
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

triage:feature-request Feature or enhancement request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants