
Sprint 15: Session Projects + Code Copy + Tool Card Toggle#11

Merged
nesquena merged 1 commit into master from sprint-15-session-projects
Apr 2, 2026

Conversation

@nesquena nesquena commented Apr 2, 2026

Summary

  • Session projects: Named groups for organizing sessions. Project filter bar (chips) between search and session list. Create/rename/delete projects, assign sessions via folder icon dropdown picker. projects.json storage, project_id on Session model. 5 new API endpoints (GET /api/projects, POST create/rename/delete, POST /api/session/move).
  • Code block copy button: Every code block gets a "Copy" button in the language header bar (or top-right for plain blocks). Clipboard API with "Copied!" feedback.
  • Tool card expand/collapse: Messages with 2+ tool cards get an "Expand all / Collapse all" toggle above the card group.
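The `projects.json` storage and CRUD helpers mentioned above could look roughly like the following sketch. Function names, the record shape, and the default path are illustrative assumptions, not the actual `api/models.py`/`api/routes.py` API:

```python
import json
import os
import uuid

PROJECTS_FILE = "projects.json"  # mirrors the PROJECTS_FILE constant noted above


def load_projects(path=PROJECTS_FILE):
    """Return the project list, tolerating a missing file (fresh installs)."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)


def save_projects(projects, path=PROJECTS_FILE):
    """Persist the full project list as pretty-printed JSON."""
    with open(path, "w") as f:
        json.dump(projects, f, indent=2)


def create_project(name, path=PROJECTS_FILE):
    """Append a new named project and persist it; returns the new record."""
    projects = load_projects(path)
    project = {"id": uuid.uuid4().hex, "name": name}  # hypothetical record shape
    projects.append(project)
    save_projects(projects, path)
    return project
```

Rename and delete would follow the same load-mutate-save pattern, with sessions pointing at projects via the `project_id` field rather than projects embedding session lists.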

Changes

| Area | Files | What |
| --- | --- | --- |
| Backend | api/config.py, api/models.py, api/routes.py | PROJECTS_FILE constant, Session.project_id field, project CRUD helpers, 5 new endpoints |
| Frontend | static/sessions.js | Project bar, picker dropdown, project CRUD UI, session assignment |
| Frontend | static/ui.js | addCopyButtons() for code blocks, tool card expand/collapse |
| Frontend | static/style.css | 20 new CSS rules for projects, copy button, toggle |
| Tests | tests/test_sprint15.py | 13 new tests covering project CRUD, session move, backward compat |
| Docs | CHANGELOG.md, SPRINTS.md, ROADMAP.md | v0.17 entry, sprint history, parity tables |

Test plan

  • pytest tests/test_sprint15.py — 13/13 passing
  • pytest tests/ — 214 passing, 23 pre-existing failures, 0 regressions
  • Manual: create project, assign sessions, filter by project, rename, delete
  • Manual: code block copy buttons work (clipboard + "Copied!" feedback)
  • Manual: tool card expand/collapse toggle with 2+ cards
  • Manual: old sessions without project_id load correctly
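The backward-compat item in the test plan boils down to one invariant: session files written before `project_id` existed must still load, with the field defaulting to `None`. A minimal sketch of that kind of test, with an illustrative `Session` shape (not the real `api/models.py` model):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Session:
    id: str
    title: str
    project_id: Optional[str] = None  # absent in pre-v0.17 session files


def session_from_dict(data: dict) -> Session:
    """Build a Session from a stored dict; .get() tolerates old files."""
    return Session(
        id=data["id"],
        title=data["title"],
        project_id=data.get("project_id"),
    )


def test_old_session_without_project_id_loads():
    old = {"id": "abc", "title": "Old chat"}  # no project_id key
    s = session_from_dict(old)
    assert s.project_id is None
```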

🤖 Generated with Claude Code

Session projects: named groups for organizing sessions. Project filter
bar with chips between search and session list. Create/rename/delete
projects, assign sessions via folder icon dropdown. Stored in
projects.json, project_id on Session model. 5 new API endpoints.

Code block copy button: every code block gets a Copy button in the
language header (or top-right for plain blocks). Clipboard API with
"Copied!" feedback.

Tool card expand/collapse: messages with 2+ tool cards get an
"Expand all / Collapse all" toggle above the card group.

13 new tests (237 total), all passing. No regressions.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@nesquena nesquena force-pushed the sprint-15-session-projects branch from 910f11e to 1a47938 Compare April 2, 2026 07:12
@nesquena nesquena merged commit ebdd955 into master Apr 2, 2026
nesquena added a commit that referenced this pull request Apr 2, 2026
Covers PRs #11, #13, #14, #15: Sprint 15 features, security hardening,
OpenRouter routing fix, project picker UX fixes.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
@nesquena-hermes nesquena-hermes deleted the sprint-15-session-projects branch April 2, 2026 22:18
Ola-Turmo pushed a commit to Ola-Turmo/hermes-webui that referenced this pull request Apr 9, 2026
Sprint 15: Session Projects + Code Copy + Tool Card Toggle
Ola-Turmo pushed a commit to Ola-Turmo/hermes-webui that referenced this pull request Apr 9, 2026
Covers PRs nesquena#11, nesquena#13, nesquena#14, nesquena#15: Sprint 15 features, security hardening,
OpenRouter routing fix, project picker UX fixes.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
JKJameson pushed a commit to JKJameson/hermes-webui that referenced this pull request Apr 25, 2026
Sprint 15: Session Projects + Code Copy + Tool Card Toggle
JKJameson pushed a commit to JKJameson/hermes-webui that referenced this pull request Apr 25, 2026
Covers PRs nesquena#11, nesquena#13, nesquena#14, nesquena#15: Sprint 15 features, security hardening,
OpenRouter routing fix, project picker UX fixes.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
roadhero referenced this pull request in fox-in-the-box-ai/hermes-webui May 5, 2026
Adds a "Local Ollama" tile to Settings → Providers that auto-detects
a host-side Ollama daemon, lists installed models, and lets the user
pick one with a single click. Routes through the existing `custom`
OpenAI-compat path — no hermes-agent change needed.

- api/ollama.py — detection probe ordered host.docker.internal →
  localhost (10s cache); /api/tags wrapper that flattens the response
  into name/size/params/quantization; use_model() that writes
  model.{provider:custom,base_url,name} into config.yaml and triggers
  the gateway hot-reload added in v0.2.0 PR nesquena#61.
- routes.py — registers GET /api/ollama/{status,models},
  POST /api/ollama/{refresh,use-model}.
- static/index.html — tile in Settings → Providers with status dot,
  model list, refresh button. Distinct empty/not-found states with
  clear next-step guidance (install Ollama / pull a model).
- static/panels.js — loadOllamaLocal() on Settings open;
  refreshOllamaLocal() / useOllamaModel() handlers.

The fitb container needs `--add-host=host.docker.internal:host-gateway`
on Linux for the probe to reach the host. The parent repo adds that
flag in Electron's docker-manager.js, install.sh, and the README's
docker-run example.

Phase 1+2 of issue fox-in-the-box-ai/fox-in-the-box#66; phase 3
(pull/delete UI) and phase 4 (onboarding integration) tracked
separately as nesquena#67 and #11 respectively.
roadhero referenced this pull request in fox-in-the-box-ai/hermes-webui May 5, 2026
Issue #11 Option B — augments the existing 3-step button wizard
without replacing it.

- Skip CTA on every step. POST /api/setup/skip marks onboarding
  complete without collecting an API key. Provides an exit hatch
  for users who'll configure providers later from Settings, or
  who'll rely on local Ollama (nesquena#66).
- Externalized welcome text. Read from /data/config/onboarding.md
  via GET /api/setup/welcome; default copy ships from the
  default-configs sweep. Edit the file in place to customize the
  greeting per install.
- State alignment. /api/setup/{complete,skip} now write both
  onboarding.json (the redirect middleware's flag) AND
  settings.json:onboarding_completed (the hermes-webui settings
  surface). onboarding_complete() reads either, so future code
  paths flipping settings.json directly are honoured.
- Local Ollama fast-path on Welcome (depends on nesquena#66). When a
  host-side Ollama daemon is detected with at least one model
  installed, the Welcome step surfaces a "Use <model name>" CTA
  that activates the model server-side and drops the user
  straight into chat — no API key step. Falls back gracefully
  to the existing button wizard when Ollama isn't detected.

Conversational onboarding "regardless of provider status" is
tracked separately as nesquena#69 (depends on #9 — llama.cpp fallback).
roadhero referenced this pull request in fox-in-the-box-ai/hermes-webui May 5, 2026
Bridges the v0.4.0 download manager (#10), the bundled llama.cpp
binary (Dockerfile change), and the existing streaming.py
fallback_model machinery. When the user opts into Settings →
Providers → "Local fallback" and the GGUF is on disk, transient
remote-provider failures (5xx, 429, connection errors) silently
retry through the on-device model — no chat interruption.

api/local_fallback.py (new):
- is_enabled() / set_enabled() — single bool in settings.json,
  mirrors the existing onboarding_completed pattern (#11 alignment).
- supervisor_status() / start_llama_server() / stop_llama_server() —
  thin wrappers around `supervisorctl`. Soft-fail with status
  NO_SUPERVISOR when the binary isn't on PATH (upstream Hermes,
  dev environments) so the rest of the module still works.
- get_fallback_endpoint() — returns the routing dict the streaming.py
  hook reads, or None if the user is opted out OR the server isn't
  RUNNING. Two-condition gate prevents silent failover to a non-
  functional local model.
- should_failover() classifier — 5xx + 429 + connection-error
  substrings are eligible; 401/403/404 + auth/quota/billing/
  model-not-found substrings are NEVER eligible (config errors must
  not be masked by a local model). NEVER list takes precedence.
- enable() / disable() — orchestration. enable() persists the flag,
  triggers #10's download_start, and either starts llama-server
  immediately (model already on disk) or spawns a background watcher
  that starts it once the file lands (handles 2.5 GB long downloads
  without blocking the toggle-on POST). disable() persists the flag
  and stops llama-server; preserves the downloaded model file.
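The `should_failover()` precedence rule above (NEVER list beats the eligible list) is the safety-critical piece, so here is a minimal sketch of it. The exact substrings and code sets are illustrative assumptions; only the 5xx/429/connection-error eligibility and the NEVER-wins ordering come from the commit message:

```python
# Eligible: transient remote failures worth retrying on-device.
ELIGIBLE_SUBSTRINGS = ("connection error", "connection refused", "timed out")
ELIGIBLE_CODES = {429, 500, 502, 503, 504}
# NEVER eligible: config errors that a local model must not mask.
NEVER_SUBSTRINGS = ("auth", "quota", "billing", "model not found")
NEVER_CODES = {401, 403, 404}


def should_failover(status_code=None, message=""):
    """Return True only for transient remote failures; NEVER list wins."""
    msg = message.lower()
    if status_code in NEVER_CODES:
        return False
    if any(s in msg for s in NEVER_SUBSTRINGS):
        return False  # checked before eligibility, so NEVER takes precedence
    if status_code is not None and (status_code in ELIGIBLE_CODES or status_code >= 500):
        return True
    return any(s in msg for s in ELIGIBLE_SUBSTRINGS)
```

Note the ordering: a 500 whose body mentions billing still refuses to fail over, which is exactly the "config errors must not be masked" guarantee.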

api/streaming.py:
- Surgical 12-line addition just after the existing fallback_model
  resolution at line 1768. If local_fallback enabled AND the live
  endpoint is up, override _fallback_resolved with the on-device
  config. The agent's existing rate-limit-recovery / 5xx-retry path
  uses fallback_model, so we get failover for free without touching
  the exception classifier or retry orchestration. Failure modes are
  silent (try/except, falls through to original config-driven
  fallback or none).
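The shape of that override is a two-condition gate; a hedged sketch (the function name and argument shapes are illustrative, not the actual streaming.py code):

```python
def resolve_fallback(config_fallback, local_enabled, local_endpoint):
    """Prefer the local endpoint only when the user opted in AND the
    local server is reachable; otherwise keep the config-driven fallback
    (which may be None), so remote behaviour is unchanged by default."""
    if local_enabled and local_endpoint is not None:
        return local_endpoint
    return config_fallback
```

Because the agent's existing retry path consumes whatever `fallback_model` resolves to, swapping the resolved value is enough to reroute retries without touching the retry orchestration itself.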

api/config.py:
- Adds `local_fallback_enabled` to _SETTINGS_DEFAULTS (default False)
  AND _SETTINGS_BOOL_KEYS (so save_settings preserves it). Without
  these the toggle would silently drop on save — caught during
  Phase 4 testing.

api/routes.py:
- GET /api/local-fallback/status (full snapshot)
- POST /api/local-fallback/enable
- POST /api/local-fallback/disable

UI (static/index.html + panels.js):
- New "Local fallback" tile in Settings → Providers, below the Local
  Ollama tile. Status dot (gray=off, green=running/downloading), one-
  line status text, native checkbox toggle.
- _refreshLocalFallback() polls /api/local-fallback/status every 2.5s
  while Settings is open — catches downloading→ready state transitions
  without forcing the user to refresh the page.
- ui_state-driven copy:
    disabled         → "Off — toggle on to download (~2.5 GB) and …"
    needs-download   → "Will start downloading (~2.5 GB) …"
    downloading      → "Downloading 1.2 GB / 2.5 GB (48%)" (live)
    starting/warming → "Starting llama-server…"
    ready            → "Ready — your provider failures will silently…"
    no-supervisor    → "Supervisor not available — only works in Fox"

Verified e2e against a freshly built test container with a deliberately
broken MODEL_URL_PHI4MINI:
- Toggle OFF → ON: enabled flips, model download starts (status
  reaches 'failed' because of the bogus URL — that's the test signal),
  ui_state correctly transitions disabled → downloading → needs-download
- Toggle ON → OFF: enabled flips back, settings.json persisted,
  supervisorctl status confirms llama-server STOPPED
- Settings.json contains "local_fallback_enabled": false after the
  off-toggle (persistence verified)

What's NOT in this commit (deferred to a follow-up):
- Reactive modal for users who haven't opted in when a chat fails.
  Today they see the existing remote-provider error; a small UX
  improvement would offer "Try local model? (one-time download)".
  Documented in #9's Phase 1 plan as a v0.4.2 polish item.
- Recovery banner ("Remote provider is back — switch?") — same
  reasoning, polish for v0.4.2.