
v0.50.224: legacy @provider session models, Docker Hindsight dependency#1131

Merged
nesquena-hermes merged 2 commits into master from stage on Apr 27, 2026

Conversation

@nesquena-hermes
Collaborator

v0.50.224 — 2 bugfix PRs

Batch reviewed, fixed, and tested end-to-end on top of v0.50.223.

Included PRs

| PR | Title | Author | Absorb fixes |
|---|---|---|---|
| #1129 | Fix legacy @provider:model session models | @franksong2702 | CHANGELOG conflict resolved (both entries kept) |
| #1130 | Fix Hindsight dependency in Docker WebUI venv | @franksong2702 | Same CHANGELOG conflict resolved |

Gate results

  • pytest: 2579 passed, 0 failed (2572 baseline + 7 new)
  • QA harness Phase 1: 20/20 ✅
  • QA harness Phase 2: 11/11 ✅

What ships

Owner

@nesquena left a comment


Review — end-to-end ✅ (clean approve)

What this ships

v0.50.224 batch — two contributor PRs from @franksong2702 absorbed by the agent on top of v0.50.223:

| PR | Area | Files |
|---|---|---|
| #1129 (#1128) | Legacy @provider:model session models flow through stale-model recovery | api/routes.py:219-256, 282-309 |
| #1130 (#926) | Docker WebUI venv auto-installs hindsight-client | docker_init.bash:288-301, 339 |

Traced against upstream hermes-agent

Pulled a fresh nousresearch/hermes-agent tarball. The @provider:model syntax is a WebUI-only convention: `grep -rn "@.*:.*model" /tmp/hermes-agent-fresh/hermes_cli/` returns only Python decorators (@dataclass, @contextmanager), no actual @provider:model strings. The CLI never writes this prefix, so this PR is purely WebUI-internal cleanup of legacy session.json metadata. The _resolve_compatible_session_model function strips the @ prefix BEFORE the agent receives the model ID, so the agent's already-clean model routing is untouched. ✅ no cross-tool risk.

Hindsight-client is an optional Hermes Agent memory provider; the install is scoped to the WebUI's container venv. The error_exit on install failure is appropriate for Docker init (fail-fast over silent degradation).

End-to-end trace

Legacy @provider:model normalization (highest-stakes)

api/routes.py:282-309 adds an early branch in _resolve_compatible_session_model:

```python
if model.startswith("@") and ":" in model:
    provider_hint, bare_model = model[1:].split(":", 1)
    provider_raw = provider_hint.strip().lower()
    provider_normalized = _normalize_provider_id(provider_raw)
    bare_model = bare_model.strip()
    if not provider_raw or not bare_model:
        return model, False

    raw_provider_ids, normalized_provider_ids = _catalog_provider_id_sets(catalog)
    hint_matches_active = ...
    if hint_matches_active:
        return bare_model, True        # active hint → strip
    if _catalog_has_provider(...):
        return model, False            # routable → keep
    if _model_matches_active_provider_family(bare_model, active_provider):
        return bare_model, True        # family match → strip + use bare
    if default_model:
        return default_model, True     # fallback
    return model, False
```

Subtle correctness check passed: _model_matches_active_provider_family relies on _normalize_provider_id("gpt") returning "openai" (so a persisted @copilot:gpt-5.5 with active openai-codex correctly resolves to bare gpt-5.5). Verified _PROVIDER_ALIASES["gpt"]="openai" at api/routes.py:22 and the alias map is checked before the prefix loop, so gpt/claude/gemini all normalize to their canonical providers.

Behavioural harness — 7 scenarios, all pass:

```text
1. @copilot:gpt-5.5 (copilot hidden, active openai-codex) → ('gpt-5.5', True)            ✅ strip
2. @openai-codex:gpt-5.4-mini (active hint)                 → ('gpt-5.4-mini', True)      ✅ strip redundant
3. @copilot:gpt-5.4 (copilot in catalog)                    → ('@copilot:gpt-5.4', False) ✅ preserve
4. @copilot:claude-opus-4.6 (family mismatch)               → ('gpt-5.5', True)           ✅ default fallback
5. @nocolon                                                 → ('@nocolon', False)         ✅ unchanged
6. @:gpt-5.5  (empty hint)                                  → ('@:gpt-5.5', False)        ✅ unchanged
7. @copilot:  (empty model)                                 → ('@copilot:', False)        ✅ unchanged
```

Docker Hindsight install

docker_init.bash:288-301 defines ensure_hindsight_client_docker_dependency():

```bash
if uv pip show hindsight-client >/dev/null 2>&1; then
  echo "-- hindsight-client already installed"
else
  uv pip install "${_hindsight_client_requirement}" --trusted-host pypi.org --trusted-host files.pythonhosted.org \
    || error_exit "Failed to install hindsight-client"
fi
```

Called at line 339 — outside the .deps_installed fast-restart guard, so existing two-container Docker venvs self-heal on next restart. The _hindsight_client_requirement="hindsight-client>=0.4.22" literal has no user input; quoting is correct. uv pip show is a fast no-op once installed, so the cost on every restart is negligible.
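The guard placement can be reproduced as a minimal self-contained sketch; the directory layout and the stub function bodies are assumptions, not docker_init.bash itself:

```shell
# Minimal sketch of the call placement the review verifies: the ensure hook is
# called OUTSIDE the .deps_installed fast-restart guard, so even a fast restart
# still runs it. Layout and stub bodies are assumptions for illustration.
VENV_DIR="$(mktemp -d)"

install_main_deps() { echo "full install"; }
ensure_hindsight_client_docker_dependency() { echo "hindsight ensured"; }

# First start: guard open, full install runs, marker written.
# Fast restart: marker present, this whole block is skipped.
if [ ! -f "$VENV_DIR/.deps_installed" ]; then
  install_main_deps
  touch "$VENV_DIR/.deps_installed"
fi

# Outside the guard: runs on every start, healing venvs created before the fix.
ensure_hindsight_client_docker_dependency
```

Because the real ensure function begins with a `uv pip show` check, running it on every container start stays cheap once the package is present.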

Other audit — things that are correct already

  • CI green: ✅ test (3.11), ✅ test (3.12), ✅ test (3.13).
  • No XSS / no innerHTML changes: this PR is server-side routing + bash, no client templates touched.
  • No SSRF / no command injection: bash quoting checked, no eval, no shell=True in Python.
  • Test for fast-restart self-heal: test_926_hindsight_install_runs_after_fast_restart_guard locks the ordering of the guard's closing `fi` followed by the `ensure_hindsight_client_docker_dependency` call, keeping it after the .deps_installed guard so a future refactor can't accidentally hide the call inside the fast-restart skip.
  • No requirements.txt pollution: test_926_hindsight_dependency_stays_docker_specific ensures hindsight-client stays out of requirements.txt (local non-Docker bootstrap shouldn't pull this optional dep).
  • _resolve_compatible_session_model is webui-only: agent CLI doesn't reference it, no parameter sharing.
  • Type guards on the @ branch: if not provider_raw or not bare_model: returns early on malformed input; model.startswith("@") and ":" in model short-circuits the branch entirely for non-prefixed models.

Edge-case trace

| Scenario | Expected | Actual |
|---|---|---|
| @copilot:gpt-5.5 with copilot hidden, active openai-codex | strip hint → bare gpt-5.5 | ✅ harness PASS |
| @openai-codex:gpt-5.4-mini with active openai-codex | strip redundant hint | ✅ harness PASS |
| @copilot:gpt-5.4 with copilot routable | preserve as-is | ✅ harness PASS |
| @copilot:claude-opus-4.6 family mismatch | default model fallback | ✅ harness PASS |
| Malformed @nocolon | unchanged (regex gate fails) | ✅ harness PASS |
| Empty hint @:model | unchanged (early return on empty hint) | ✅ harness PASS |
| Empty model @hint: | unchanged (early return on empty bare_model) | ✅ harness PASS |
| Existing slash-prefixed openai/gpt-5.4 | unaffected (separate branch at routes.py:312+) | ✅ trace verified |
| Cross-tool: agent CLI session reads same model field | sees normalized bare model, no @ | ✅ no @provider: strings written by agent |
| Docker first-time start | install runs in main-deps block AND ensure call (idempotent) | uv pip show short-circuits second call |
| Docker fast restart, hindsight-client missing | install runs (outside .deps_installed guard) | ✅ test locks ordering |
| Docker fast restart, hindsight-client present | "already installed" log, no install | ✅ trace verified |

Tests

  • PR's own test files: 4 new tests in test_provider_mismatch.py (legacy @provider scenarios), 3 new tests in test_issue926_hindsight_docker_dependency.py. All pass.
  • CI on PR: ✅ test (3.11), ✅ test (3.12), ✅ test (3.13).
  • Local full suite: 2531 passed, 47 skipped, 0 PR-related failures (1 pre-existing macOS-only deselect).
  • Behavioural harness for @provider:model normalization: 7/7 scenarios pass (output above).

Minor observations (non-blocking)

  • The error message "Failed to install hindsight-client" doesn't mention pypi.org URL or version, which would help a sysadmin troubleshoot a network-blocked install. Minor — error_exit likely already prints context.
  • The 4 new _resolve_compatible_session_model tests cover happy-path well; could add a regression test for the "empty hint / empty model" malformed-input edge cases (verified by harness here, but no Python test).
  • _catalog_provider_id_sets is called once at line 290 to build the sets — fine for the single call, but if this branch ever grew to need them in multiple places it'd be worth caching across calls. Pure micro-perf nit.

Recommendation

Approved. Two scoped contributor fixes. The higher-stakes change (legacy @provider:model normalization) cleanly traces through _resolve_compatible_session_model with all four routing branches verified plus three malformed-input edge cases; there is no cross-tool agent regression (the @ prefix is WebUI-only and is stripped before the agent sees the model); and the Docker hindsight install is correctly placed outside the fast-restart guard so existing containers self-heal. CI green. Parked at approval, ready for the release agent's merge/tag pipeline.

@nesquena-hermes nesquena-hermes merged commit 69bf287 into master Apr 27, 2026
3 checks passed
@nesquena-hermes nesquena-hermes deleted the stage branch April 27, 2026 01:47
JKJameson pushed a commit to JKJameson/hermes-webui that referenced this pull request Apr 29, 2026
…cy (nesquena#1131)

* Fix legacy at-provider session models

* Fix Hindsight dependency in Docker WebUI venv

---------

Co-authored-by: Frank Song <[email protected]>


Development

Successfully merging this pull request may close these issues.

  • bug(sessions): legacy @provider:model session models bypass stale-model recovery
  • Hindsight memory provider not available (no module found)
