A rich, informative status bar for Claude Code that shows everything you need at a glance.
```
[Opus 4.6 · high] | 1M [██░░░░░░░░] 6% ~192p | Cache:99% | ⏱ 25m | 5h [█░░░░░░░░░] 8% · 7d [█████████░] 91% ~9p ↻3h
```
| Segment | Example | Description |
|---|---|---|
| Model | `[Opus 4.6 · high]` | Active model + reasoning effort level |
| Context | `1M [██░░░░░░░░] 6% ~192p` | Context window size, usage bar, % used, estimated prompts remaining |
| Cache | `Cache:99%` | Prompt cache hit rate (green ≥80%, yellow ≥50%, red <50%) |
| Duration | `⏱ 25m` | How long the current session has been running |
| 5h Limit | `5h [█░░░░░░░░░] 8% ↻4h` | 5-hour rate limit usage with reset timer |
| 7d Limit | `7d [█████████░] 91% ~9p ↻3d` | 7-day rate limit with remaining prompts estimate and reset timer |
Estimates how many more prompts will fit before the context window is full, based on your average token consumption per prompt in the current session. The estimate resets with each new session.
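The arithmetic behind the estimate is straightforward: divide the unused portion of the context window by the session's average token cost per prompt. A minimal sketch (the function and parameter names are illustrative, not the script's actual API):

```python
from typing import Optional

def estimate_remaining_prompts(context_size: int, tokens_used: int,
                               prompt_count: int) -> Optional[int]:
    """Estimate how many more prompts fit in the context window.

    Hypothetical helper: divides remaining context by the session's
    average tokens consumed per prompt so far.
    """
    if prompt_count == 0 or tokens_used <= 0:
        return None  # no usage history yet this session
    avg_per_prompt = tokens_used / prompt_count
    remaining = context_size - tokens_used
    return max(0, int(remaining // avg_per_prompt))

# 1M window, 60K tokens over 5 prompts -> 12K avg -> ~78 prompts left
print(estimate_remaining_prompts(1_000_000, 60_000, 5))  # → 78
```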
Color thresholds (based on MRCR v2 benchmarks and context rot research):
- 🟢 Green (0-20%): ~93%+ accuracy, no meaningful degradation
- 🟡 Yellow (20-40%): ~89-93% accuracy, mild degradation on complex reasoning
- 🔴 Red (40%+): Below ~89% accuracy, significant degradation, consider `/clear`
Opus 4.6 loses ~2% effectiveness per 100K tokens. These thresholds apply to both Opus 4.6 and Sonnet 4.6 (same architecture).
Shows what percentage of input tokens came from cache vs. being processed fresh. Higher = more efficient, lower cost on API plans.
- 🟢 ≥80% — Excellent, most tokens cached
- 🟡 50-79% — Moderate
- 🔴 <50% — Low cache efficiency (normal after `/clear` or a new session)
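Computing the rate is just a ratio of cached to total input tokens. A sketch, with the caveat that the field names below follow the Anthropic API usage object and are an assumption about what Claude Code passes on stdin:

```python
def cache_hit_rate(usage: dict) -> float:
    """Percent of input tokens served from the prompt cache.

    Field names mirror the Anthropic API usage object
    (assumed, not verified against Claude Code's stdin JSON).
    """
    cached = usage.get("cache_read_input_tokens", 0)
    fresh = (usage.get("input_tokens", 0)
             + usage.get("cache_creation_input_tokens", 0))
    total = cached + fresh
    return 100.0 * cached / total if total else 0.0

usage = {"cache_read_input_tokens": 9_900,
         "input_tokens": 50,
         "cache_creation_input_tokens": 50}
print(f"Cache:{cache_hit_rate(usage):.0f}%")  # → Cache:99%
```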
~9p — Estimated remaining prompts before hitting the limit. Calculated from your historical usage patterns across all sessions. Gets more accurate over time.
↻3h — Time until the limit resets to zero. Both limits reset completely at the shown time:
- 5-hour limit: Resets every ~5 hours on a fixed timer
- 7-day limit: Resets every ~7 days on a fixed timer
↻ always shows the time until the limit resets to zero. ~Np only appears when yellow or red (≥50% used).
Color thresholds:
- 🟢 <50% — Plenty of room
- 🟡 50-80% — Getting there
- 🔴 >80% — Running low
The status line records rate limit percentages after each prompt in ~/.claude/data/rate_history.jsonl. It groups data by session, calculates how much each prompt consumed on average, and extrapolates remaining prompts. The estimate accounts for your mix of light prompts and heavy autonomous workflows.
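The grouping-and-extrapolation step described above can be sketched as follows. The record fields (`"session"`, `"pct"`) are illustrative, not the script's actual schema:

```python
import json
from collections import defaultdict
from pathlib import Path
from typing import Optional

def remaining_prompts(history_path: Path, current_pct: float) -> Optional[int]:
    """Extrapolate prompts left before a rate limit reaches 100%.

    Sketch of the approach: group records by session, average the
    per-prompt increase in the limit percentage, then divide the
    remaining headroom by that average.
    """
    by_session = defaultdict(list)
    for line in history_path.read_text().splitlines():
        if line.strip():
            rec = json.loads(line)
            by_session[rec["session"]].append(rec["pct"])
    deltas = []
    for pcts in by_session.values():
        # Per-prompt consumption = increase between consecutive records
        deltas += [b - a for a, b in zip(pcts, pcts[1:]) if b > a]
    if not deltas:
        return None  # not enough history to extrapolate
    avg = sum(deltas) / len(deltas)
    return int((100.0 - current_pct) // avg)
```

Averaging across sessions is what lets the estimate blend light interactive prompts with heavy autonomous runs.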
- Claude Code v2.1.83+
- Python 3.9+ on your `PATH` as `python3` (stdlib only — no pip packages)
```bash
git clone https://github.com/egerev/claude-status-line.git
cd claude-status-line
chmod +x install.sh && ./install.sh
```

Then restart Claude Code.
```bash
chmod +x uninstall.sh && ./uninstall.sh
```

```
~/.claude/
├── status_lines/
│   └── status_line.py            # Status bar renderer
├── hooks/
│   └── hook_prompt_submit.py     # Session tracking hook
├── data/
│   ├── sessions/                 # Per-session data (auto-created)
│   └── rate_history.jsonl        # Rate limit history (grows over time)
└── settings.json                 # Claude Code config (patched by installer)
```
Edit `~/.claude/status_lines/status_line.py`:

```python
# Context degradation thresholds (% of context window)
CTX_YELLOW = 10   # yellow at 10%
CTX_RED = 20      # red at 20%

# Rate limit thresholds (% used)
RATE_YELLOW = 50  # yellow at 50%
RATE_RED = 80     # red at 80%

BAR_WIDTH = 10    # number of blocks in progress bars
```

Each segment is added via `parts.append(...)` in the `generate()` function. Comment out or rearrange as needed.
- No context cost — Scripts run as external processes, not inside Claude's context window
- Minimal latency — Hook adds ~50ms per prompt (Python startup + JSON write)
- No network calls — Everything is local file I/O
- Safe — All errors are silently caught, never breaks Claude Code
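The "never breaks Claude Code" guarantee comes down to wrapping all work in a catch-all. A sketch of the pattern (the wrapper is hypothetical, not the script's code):

```python
from typing import Callable

def safe_status(render: Callable[[], str]) -> str:
    """Run a status renderer, swallowing any error so a bug in the
    script can never crash Claude Code (hypothetical wrapper)."""
    try:
        return render()
    except Exception:
        return ""  # a blank status line beats a traceback

print(safe_status(lambda: "[Opus 4.6 · high] | ⏱ 25m"))
print(safe_status(lambda: 1 / 0))  # error swallowed → empty line
```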
- Claude Code v2.1.83+ (for `statusLine` and `rate_limits` support in stdin JSON)
- Claude Max subscription (for rate limit data; API users see cost instead)
- Works on macOS, Linux, Windows (WSL)
MIT