
Session Analysis Report #61

@holstein13

Description


What

A new Session Analysis Report tab that provides a comprehensive post-session breakdown of any Claude Code session. Open it from the session toolbar to get instant analysis of how a session performed across cost, tokens, tools, timing, quality, and more.

The report is generated entirely client-side by analyzeSession() — a TypeScript engine that processes the raw session data already loaded by the app. No API calls, no AI inference, instant results.
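
To illustrate the "deterministic, client-side" design, here is a minimal sketch of an aggregation-style analyzer. The real `analyzeSession()` lives in `src/renderer/utils/sessionAnalyzer.ts`; the input/output shapes and the `analyzeSessionSketch` name below are illustrative assumptions, not the actual types.

```typescript
// Minimal sketch of a deterministic, client-side analyzer: a pure function
// over session data already in memory. No network, no AI inference.

interface SessionEvent {
  type: "tool_call" | "message"; // illustrative event kinds
  costUsd: number;
  tokens: number;
}

interface SessionSummary {
  totalCostUsd: number;
  totalTokens: number;
  eventCount: number;
}

function analyzeSessionSketch(events: SessionEvent[]): SessionSummary {
  // Pure aggregation: fold every event into running totals.
  return events.reduce<SessionSummary>(
    (acc, e) => ({
      totalCostUsd: acc.totalCostUsd + e.costUsd,
      totalTokens: acc.totalTokens + e.tokens,
      eventCount: acc.eventCount + 1,
    }),
    { totalCostUsd: 0, totalTokens: 0, eventCount: 0 }
  );
}
```

Because the function is pure and synchronous, results render instantly and the same session always produces the same report.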

Report Sections

| Section | What It Shows |
| --- | --- |
| Overview | Session duration, total cost, token count, model used, context health |
| Cost Analysis | Total spend, cost per commit, cost per line changed, subagent cost share |
| Token Economics | Input/output/cache breakdown, cache efficiency %, read-to-write ratio |
| Tool Usage | Per-tool call counts, success rates, health assessment per tool |
| Timeline | Active vs. idle time, model switches, session pacing |
| Quality Signals | Prompt quality, startup overhead, file read redundancy, test progression |
| Friction Points | Permission denials, retry patterns, thrashing signals |
| Git Activity | Commits, lines changed, files touched |
| Subagents | Subagent count, tokens, duration, cost per subagent |
| Errors | Error breakdown by type |
| Insights | Key takeaways and notable patterns |
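
As an example of how the Token Economics figures could be derived, here is one plausible formulation of the cache efficiency and read-to-write metrics. The exact formulas and field names used by the analyzer are assumptions here.

```typescript
// Illustrative derivation of two Token Economics metrics.
// Field names and formulas are assumed, not taken from the real analyzer.

interface TokenCounts {
  input: number;     // fresh (uncached) input tokens
  output: number;    // generated output tokens
  cacheRead: number; // input tokens served from the prompt cache
}

// Cache efficiency: share of input-side tokens served from cache.
function cacheEfficiencyPct(t: TokenCounts): number {
  const inputSide = t.input + t.cacheRead;
  return inputSide === 0 ? 0 : (100 * t.cacheRead) / inputSide;
}

// Read-to-write ratio: input-side tokens consumed per output token.
function readToWriteRatio(t: TokenCounts): number {
  return t.output === 0 ? 0 : (t.input + t.cacheRead) / t.output;
}
```

A session with 200 fresh input tokens, 800 cache reads, and 100 output tokens would score 80% cache efficiency and a 10:1 read-to-write ratio under these definitions.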

Interpretive Assessment Badges

Every metric section includes color-coded severity badges (green/amber/red/gray) computed from threshold-based rules, turning raw numbers into at-a-glance signals:

  • Cost: "Efficient" / "Normal" / "Expensive" / "Red Flag" for cost per commit and cost per line
  • Cache: "Good" / "Concerning" for cache efficiency and R/W ratio
  • Tool Health: "Healthy" / "Degraded" / "Unreliable" per tool and overall
  • Idle Time: "Efficient" / "Moderate" / "High Idle"
  • Thrashing: "None" / "Mild" / "Severe" based on signal count
  • Startup Overhead: "Normal" / "Heavy"
  • File Read Redundancy: "Normal" / "Wasteful"
  • Model Mismatch Detection: Flags when Opus is used for mechanical tasks (rename, lint, format) or read-only tasks (search, explore) — suggests cheaper model alternatives
  • Switch Pattern Recognition: Detects Sonnet→Opus→Sonnet as "Opus Plan Mode" vs generic manual switching
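
The threshold-based badge rules above can be sketched as a small pure function. The centralized utility is `src/renderer/utils/reportAssessments.ts`; the dollar thresholds and function name below are made up for illustration.

```typescript
// Sketch of a threshold-based severity rule, mapping a raw metric to one of
// the color-coded badges. Thresholds here are illustrative, not the real ones.

type Severity = "green" | "amber" | "red" | "gray";

interface Badge {
  label: string;
  severity: Severity;
}

// Example rule: cost per commit. A null metric (no commits) gets a gray badge.
function assessCostPerCommit(costUsd: number | null): Badge {
  if (costUsd === null) return { label: "No commits", severity: "gray" };
  if (costUsd < 1) return { label: "Efficient", severity: "green" };
  if (costUsd < 5) return { label: "Normal", severity: "green" };
  if (costUsd < 20) return { label: "Expensive", severity: "amber" };
  return { label: "Red Flag", severity: "red" };
}
```

Keeping each rule as a pure metric-to-badge function makes the assessments trivially unit-testable, which is how a large test count for the assessments layer stays cheap to maintain.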

Why This Is Useful

  1. Visibility into session efficiency — See at a glance whether a session was cost-effective, whether tools were reliable, whether the AI spent too much time idle or thrashing.
  2. Actionable cost optimization — Model mismatch detection and cost-per-commit analysis help users tune their model selection and prompting strategy. For example, if Opus is being used to rename files, the report flags it and suggests using Haiku instead.
  3. No AI required — All analysis is deterministic and runs instantly. Previously this kind of interpretation required running an AI-powered report skill. Now it's free and immediate.
  4. Pattern recognition — Detects behavioral patterns like Opus plan mode switching, test regression trajectories, and startup overhead that are hard to spot manually.
  5. Quality feedback loop — Prompt quality assessment, friction rate tracking, and file read redundancy help users improve their prompting habits over time.
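
As an example of the pattern recognition point, the Sonnet→Opus→Sonnet "Opus Plan Mode" check could look like the sketch below. The real detection logic in `sessionAnalyzer.ts` may differ; the names here are illustrative.

```typescript
// Illustrative switch-pattern detector: collapse the per-message model
// sequence into runs, then match the Sonnet→Opus→Sonnet signature.

type ModelFamily = "sonnet" | "opus" | "haiku";

type SwitchPattern = "opus-plan-mode" | "manual-switching" | "none";

function classifySwitchPattern(models: ModelFamily[]): SwitchPattern {
  // Collapse consecutive duplicates: ["sonnet","sonnet","opus"] -> ["sonnet","opus"].
  const runs = models.filter((m, i) => i === 0 || m !== models[i - 1]);
  if (runs.length < 2) return "none"; // zero or one model: no switching at all
  // Plan in Opus, then drop back to Sonnet for execution.
  if (runs.length === 3 && runs[0] === "sonnet" && runs[1] === "opus" && runs[2] === "sonnet") {
    return "opus-plan-mode";
  }
  return "manual-switching";
}
```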

Implementation

  • src/renderer/utils/sessionAnalyzer.ts — Main analysis engine (~1300 lines)
  • src/renderer/utils/reportAssessments.ts — Centralized threshold/severity/color utility
  • src/renderer/types/sessionReport.ts — Type definitions for all report sections
  • src/renderer/components/report/ — Report tab and 11 section components
  • 712 tests passing, including 104 for the analyzer and assessments

PR

#60
