Monitor your Caddy server in real time: per-host traffic, latency percentiles, status codes, and more. When FrankenPHP is detected, unlock per-thread introspection, worker management, and memory tracking.
Caddy exposes rich metrics through its admin API and Prometheus endpoint, but reading raw Prometheus text or setting up a full Grafana stack just to glance at traffic is heavy. Ember gives you a zero-config, read-only terminal dashboard that connects to Caddy's admin API and works out of the box. No extra infrastructure, no YAML to write: just `ember` and you're monitoring.
## Caddy Monitoring
- Per-host traffic table with RPS, average latency, status codes, and sparklines
- Latency percentiles (P50, P90, P95, P99) and Time-to-First-Byte per host
- Sorting, filtering, and full-screen ASCII graphs (CPU, RPS, RSS)
- Config Inspector tab: browse the live Caddy JSON config as a collapsible tree
- Automatic Caddy restart detection
## FrankenPHP Introspection
- Per-thread state, method, URI, duration, and memory tracking (method, URI, duration, memory, and request count require FrankenPHP 1.13+)
- Worker management with queue depth and crash monitoring
- Graphs for queue depth and busy threads
- Automatic detection and recovery when FrankenPHP starts or stops
## Integration & Operations
- Prometheus metrics export (`/metrics`) with optional basic auth and health endpoint (`/healthz`)
- Daemon mode for headless operation, with error throttling and TLS certificate reload via `SIGHUP`
- JSON output mode for scripting, with `--once` for single snapshots
- Quick health check: `ember status` (text or `--json`) for a one-line Caddy summary
- Readiness gate: `ember wait` blocks until Caddy is up (`-q` for silent scripting)
- Deployment validation: `ember diff before.json after.json` compares snapshots
- Zero-config setup: `ember init` checks Caddy, enables metrics, and warns about missing host matchers
- TLS and mTLS support for secured Caddy admin APIs
- Environment variable configuration (`EMBER_ADDR`, `EMBER_EXPOSE`, ...) for container deployments
- `NO_COLOR` env var support (no-color.org)
- Lightweight: ~15 MB RSS, ~0.3 ms per poll cycle with 100 threads and 10 hosts (benchmarks)
- Cross-platform binaries (Linux, macOS, Windows), Homebrew tap, and Docker image
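The readiness gate, one-shot snapshots, and `ember diff` compose naturally into a deployment hook. A hedged sketch (the flag that selects JSON output on the bare `ember` command is assumed here to be `--json`; check the CLI reference for the exact spelling):

```shell
#!/bin/sh
set -e
ember wait -q                       # block until Caddy's admin API responds
ember --json --once > before.json   # one-shot snapshot before the change
# ... deploy new code / reload Caddy here ...
ember --json --once > after.json    # one-shot snapshot after the change
ember diff before.json after.json   # surface regressions between snapshots
```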
With Homebrew:

```shell
brew install alexandre-daubois/tap/ember
```

macOS: if Gatekeeper blocks the binary on first run, remove the quarantine attribute with `xattr -d com.apple.quarantine $(which ember)`, or allow it manually in System Settings → Privacy & Security.

Or with Go:

```shell
go install github.com/alexandre-daubois/ember/cmd/ember@latest
```

Or with Docker (runs in daemon mode by default, see Docker docs):

```shell
docker run --rm --network host ghcr.io/alexandre-daubois/ember
```

You can also download the latest binaries from the release page. If you use this method, don't forget to check for updates regularly!
Make sure Caddy is running with the admin API enabled (it is by default). Then:

```shell
ember init
```

This checks your Caddy setup and enables metrics via the admin API if needed (no restart required). Once ready:

```shell
ember
```

Ember connects to the Caddy admin API and auto-detects FrankenPHP if present.

For a quick one-line health check:

```shell
ember status
```

Ember polls the Caddy admin API and Prometheus metrics endpoint at a regular interval (default: 1s), computes deltas and derived metrics (RPS, percentiles, error rates), and renders them through one of several output modes: an interactive Bubble Tea TUI (default), streaming JSONL, a headless daemon with Prometheus export, or a one-shot status command.
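Conceptually, the delta step amounts to subtracting successive readings of Prometheus counters. A minimal standalone sketch of that idea, using two canned metric samples in place of a live endpoint (the metric name and values are illustrative, not Ember's actual parsing code):

```shell
#!/bin/sh
# Sum all samples of a request counter from a Prometheus text dump.
sum() { awk '/^caddy_http_requests_total/ {s+=$2} END {print s+0}'; }

# First "poll" (illustrative values).
a=$(sum <<'EOF'
caddy_http_requests_total{handler="reverse_proxy"} 120
caddy_http_requests_total{handler="file_server"} 30
EOF
)

# Second "poll", one interval later.
b=$(sum <<'EOF'
caddy_http_requests_total{handler="reverse_proxy"} 180
caddy_http_requests_total{handler="file_server"} 45
EOF
)

# Counter difference over the interval; divide by the poll interval for RPS.
echo "requests in interval: $((b - a))"   # prints: requests in interval: 75
```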
Full documentation is available in the docs/ directory:
- Getting Started: Install and first run
- Caddy Configuration: Caddyfile requirements
- Caddy Dashboard: Per-host traffic and latency
- FrankenPHP Dashboard: Thread introspection and workers
- CLI Reference: Flags, keybindings, shell completions
- JSON Output: Streaming JSONL for scripting
- Prometheus Export: Metrics, health checks, daemon mode
- Docker: Container usage
- AI Agent Skills: Skills for AI coding agents
- Troubleshooting: Common issues and solutions
See CONTRIBUTING.md for development setup, architecture overview, and guidelines.
MIT

