Terminal-first network observability for operators who live in the shell.
Wirescope is a live and historical network inspection tool built for the moment when "the network feels wrong" is not enough and you need proof.
It is designed to give one operator a fast, honest view of what is happening on a host or segment:
- live interface health and throughput
- top talkers by bytes, packets, and connection count
- DNS activity with query/response context
- active connection tables with drill-down detail
- protocol mix and directional traffic breakdown
- historical search backed by indexed metadata
- a metadata-first storage boundary that keeps raw packet payloads out of SQLite
Wirescope v1.0.0 is released. The core product is shipped and ready for real-world deployment feedback.
The repo contains:
- the full Cobra CLI surface
- gopacket/pcap-based interface discovery and live capture
- a pure Go dissection and replay path
- in-memory aggregation with rolling windows
- a full-screen Bubble Tea terminal UI
- SQLite metadata persistence with session/DNS/alert history
- file-backed alert rules with 7 rule types
- JSON/CSV export for all query types
- webhook notification
- Prometheus capture/flush/alert metrics
- a read-only Astro + Vue browser companion in frontend/apps/web with same-origin JSON/CSV export downloads for history, DNS, and alerts
- an in-progress OpenTUI terminal frontend in frontend/apps/tui
- repo-local PCAP writer and manifest helpers
- soak-tested persistence, documented benchmarks, and reproducible release builds
Current shipped architecture:
- Go owns the v1.0 runtime: live capture via gopacket/pcap, packet parsing, aggregation, filtering, persistence orchestration, exporters, alerts, and the terminal UI
- SQLite is the v1.0 metadata store for the current single-node product shape, with plain SQL migrations and raw-SQL query paths
- raw PCAP stays on disk first, with object storage as the later cold-tier option
That split is deliberate for the shipped release. Raw packets are too expensive and awkward to treat like database rows. SQLite answers operator questions about flows, sessions, DNS, and alert history in the current product. Packet blobs remain packet blobs.
Rewrite note: the active rewrite plan now targets a unified Bun + TypeScript + Astro + Vue + Elysia + Zod + PostgreSQL stack with Docker Compose and Caddy. See BUILD.md for the phased cutover plan.
The repo root now contains the Phase 2 rewrite persistence foundation:
- apps/api/ - Elysia placeholder API plus the Postgres migration runner for /health, /api/v1/runtime, and /api/v1/overview
- apps/api/migrations/ - SQL-first Postgres schema files for the rewrite metadata lane, including replay-import compatibility for shipped Go-backed datasets
- apps/api/src/db/import-sqlite.ts - SQLite-to-Postgres import tool with dry-run support for current metadata stores
- apps/api/src/db/replay-smoke.ts - replay-backed verification lane that rebuilds fixture data through the Go runtime, imports it into disposable Postgres, and checks table counts
- apps/web/ - Astro + Vue shell that fetches rewrite runtime data same-origin through Caddy
- packages/contracts/ - shared Zod schemas for the rewrite backend and web shell
- deploy/ - Docker Compose and Caddy wiring for the rewrite workspace
That foundation is intentionally honest about scope:
- the shipped product is still the Go + SQLite line described below
- the existing browser companion in frontend/apps/web is still the current shipped web surface
- the Bun rewrite workspace now carries schema truth and migrations, but it is not the default operator path yet
Most network tools force a bad trade:
- live tools show what is happening now, but forget it a minute later
- packet capture tools keep everything, but turn simple questions into forensic labor
- SaaS observability platforms centralize the answer, the cost, and the trust problem all at once
Wirescope is aimed at the middle that operators actually need:
- keep the fast terminal workflow
- preserve useful history
- store searchable metadata in a real database
- keep raw capture outside the database
- make drill-down possible without pretending every packet belongs in SQL
+------------------------------------+ +------------------------------+
| terminal operator surface | | browser companion |
| CLI + Bubble Tea style terminal UI | | Astro + Vue read-only web UI |
+------------------+-----------------+ +--------------+---------------+
| same core | polls/proxies
+----------------+--------------------+
|
+-------------v-------------+
| Go core |
| gopacket/pcap capture |
| command dispatch |
| live aggregation |
| filters and query path |
| TUI state + exporters |
| web JSON API serving |
| persistence orchestration |
+-------------+-------------+
| batched metadata
+--------------------+--------------------+
| |
+---------v-------------------+ +------------------v-------------------+
| eBPF (cilium/ebpf) | | SQLite metadata / PCAP storage |
| Linux-only, opt-in | | sessions/flows, DNS/alerts, disk |
| process/cgroup attribution | | local disk first, object store later |
+-----------------------------+ +--------------------------------------+
# Install Mage once, then build the binary (requires Go 1.26+ and libpcap)
go install github.com/magefile/[email protected]
mage build
BIN=./.tmp/build/wirescope
# List available interfaces
$BIN interfaces
# Start the live terminal UI
sudo $BIN live --iface en0
# Replay a pcap file
$BIN capture replay testdata/pcap/mixed-traffic.pcap
# Run diagnostics
$BIN --config configs/wirescope.example.json doctor

Paths inside configs/wirescope.example.json resolve relative to that file, so the checked-in example works even when you invoke Wirescope from another directory.
mage frontend:sync
mage frontend:dev
# In another terminal, serve the Go contract the web app consumes:
./.tmp/build/wirescope web

The Astro + Vue app runs on 127.0.0.1:4321 by default. It expects the Go API at http://127.0.0.1:4080 unless WIRESCOPE_BACKEND_URL is set.
go build -o .tmp/build/wirescope ./cmd/wirescope
./.tmp/build/wirescope web
# In another terminal:
cd frontend
bun run dev:tui

The OpenTUI app is a sibling Bun app that talks to the same Go frontend contract over loopback HTTP. It defaults to http://127.0.0.1:4080 and can be pointed elsewhere with WIRESCOPE_TUI_BACKEND_URL.
This is currently an in-progress terminal frontend, not the shipped Bubble Tea cutover. The app now renders overview, talkers, connections, protocols, DNS, alerts, processes, and history panes against the Go frontend contract, with client-side filter input in the terminal. Bubble Tea remains the production terminal workflow. See docs/opentui-spike.md for the Phase 3 proof that preceded the current Phase 4 work.
Go owns the entire runtime: capture, dissection, aggregation, persistence, CLI, TUI, web API, and eBPF loading. The capture path uses gopacket/gopacket with gopacket/pcap for libpcap bindings. This gives a single binary with no subprocess coordination overhead.
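For flavor, here is a minimal sketch of the kind of zero-dependency parsing the pure Go dissection path implies: decoding the fixed 20-byte portion of an IPv4 header with only the standard library. The struct and function names are illustrative, not Wirescope's actual API.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

// ipv4Header holds the fields an aggregator typically needs; the real
// Wirescope dissector tracks more (this struct is illustrative only).
type ipv4Header struct {
	Protocol uint8
	TotalLen uint16
	Src, Dst netip.Addr
}

// parseIPv4 decodes the fixed 20-byte portion of an IPv4 header.
func parseIPv4(b []byte) (ipv4Header, error) {
	if len(b) < 20 || b[0]>>4 != 4 {
		return ipv4Header{}, fmt.Errorf("not an IPv4 header")
	}
	src, _ := netip.AddrFromSlice(b[12:16])
	dst, _ := netip.AddrFromSlice(b[16:20])
	return ipv4Header{
		Protocol: b[9],
		TotalLen: binary.BigEndian.Uint16(b[2:4]),
		Src:      src,
		Dst:      dst,
	}, nil
}

func main() {
	// Minimal header: version 4, IHL 5, total length 40, protocol 6 (TCP),
	// 10.0.0.12 -> 10.0.0.1.
	raw := []byte{
		0x45, 0x00, 0x00, 0x28, 0x00, 0x00, 0x00, 0x00,
		0x40, 0x06, 0x00, 0x00, 10, 0, 0, 12, 10, 0, 0, 1,
	}
	h, err := parseIPv4(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(h.Src, "->", h.Dst, "proto", h.Protocol, "len", h.TotalLen)
}
```

In the real capture path gopacket handles this layer walking; the point of hand-rolling it here is only to show why a single Go binary needs no subprocess coordination.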
The shipped TUI remains a Bubble Tea style interface: fast keyboard navigation, stable panels, strong filtering, and a layout that still feels like a terminal tool instead of a web app trapped in a tty. The repo now also contains an in-progress OpenTUI app in frontend/apps/tui that consumes the stable Go frontend contract over loopback HTTP as the current Phase 4 migration lane.
SQLite should store by default:
- capture runs and interfaces
- flow/session metadata
- minute-level rollups and historical counters
- DNS transactions and resolution history
- alert events and operator-visible history
- references to PCAP segment manifests when the raw-packet lane is wired into the runtime
SQLite stays first for Wirescope's metadata and history path while the product remains single-node and local-first.
SQLite should not store raw packet payloads. That boundary keeps search fast, schema sane, and retention practical.
On Linux, Wirescope can attribute network activity to specific processes and cgroups using eBPF TC classifiers and kprobes. The eBPF programs are loaded via cilium/ebpf and ring buffer events are read directly in Go. See ebpf/README.md for details.
The database side holds:
- session and flow metadata
- time-bucket rollups
- DNS query/response records
- alert events
- capture run manifests
- PCAP segment manifests and lookup pointers when the raw-packet writer is enabled in the runtime
The on-disk PCAP side holds:
- rotated raw packet segments
- compressed archival packet data
- large binary artifacts that only need manifest lookup in the database
This is the key design choice in Wirescope. The database answers: what happened, when, where, and how much? PCAP answers: show me the bytes for that time window.
The initial schema direction is intentionally typed and queryable, not JSON soup.
Core records for the first real implementation:
- capture_nodes
- capture_interfaces
- capture_runs
- pcap_segments
- sessions
- session_minute_rollups
- dns_transactions
- alert_events
Key principles:
- keep typed, relational columns in SQLite with explicit SQL migrations
- use obvious timestamp and address representations that stay readable in SQLite and keep migrations simple
- partition time-heavy tables by time before they become a cleanup problem
- keep raw counters in typed columns
- prefer recomputable rollups over clever write-time magic
- keep operator queries obvious enough to debug at 2am
Wirescope is not a packet-analysis kitchen sink. It is a sharp single-node observability tool with durable history.
Wirescope ships with:
- single-host capture on selected interfaces
- live terminal dashboard
- read-only browser companion for overview, runtime, live visibility, DNS, alerts, history, and safe JSON/CSV download exports
- throughput, top talkers, connection tables, DNS feed, and protocol breakdown
- SQLite-backed historical drill-down
- repo-local PCAP writer and manifest helpers alongside SQLite-backed historical drill-down
- alert rules for common operator thresholds and spikes
- JSON/CSV export plus Prometheus metrics for Wirescope itself
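A hedged sketch of the dual-format export idea, emitting the same illustrative session rows as both JSON and CSV with only the standard library. The column names here are assumptions, not Wirescope's actual export schema.

```go
package main

import (
	"bytes"
	"encoding/csv"
	"encoding/json"
	"fmt"
	"strconv"
)

// sessionRow is an illustrative export record; real export columns may differ.
type sessionRow struct {
	Src   string `json:"src"`
	Dst   string `json:"dst"`
	Bytes int64  `json:"bytes"`
}

// exportJSON renders rows as a JSON array, in the spirit of --format json.
func exportJSON(rows []sessionRow) (string, error) {
	b, err := json.Marshal(rows)
	return string(b), err
}

// exportCSV renders the same rows with a header line, as --format csv would.
func exportCSV(rows []sessionRow) (string, error) {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	w.Write([]string{"src", "dst", "bytes"})
	for _, r := range rows {
		w.Write([]string{r.Src, r.Dst, strconv.FormatInt(r.Bytes, 10)})
	}
	w.Flush()
	return buf.String(), w.Error()
}

func main() {
	rows := []sessionRow{{Src: "10.0.0.12", Dst: "10.0.0.1", Bytes: 4096}}
	j, _ := exportJSON(rows)
	c, _ := exportCSV(rows)
	fmt.Println(j)
	fmt.Print(c)
}
```

Keeping both formats as projections of one typed row is what makes "JSON/CSV export for all query types" cheap to maintain.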
Wirescope does not try to be:
- a full IDS/IPS platform
- a distributed multi-node sensor fleet
- a packet warehouse in any database
- an ML anomaly product
- a GUI-first dashboard suite
The operator workflow looks like this:
wirescope live --iface en0
wirescope history --since 1h --host 10.0.0.12
wirescope dns --since 30m --qname contains internal
wirescope alerts --open
wirescope export sessions --since 24h --format json

The full-screen TUI makes it easy to:
- pivot from interface totals to a single host
- pivot from a host to its active sessions
- pivot from a session to related DNS activity
- pivot from live rows to historical windows
- keep the common path metadata-first, with the repo's PCAP helpers available for packet-level retention work
.
├── apps/
│ ├── api/ # repo-root Elysia rewrite API and migration runner
│ ├── api/migrations/ # SQL-first Postgres rewrite migrations
│ └── web/ # repo-root Astro + Vue rewrite shell
├── cmd/wirescope/ # Go CLI entrypoint
├── internal/
│ ├── agg/ # live aggregation and rollups
│ ├── alert/ # alert evaluation and dispatch path
│ ├── app/ # runtime orchestration
│ ├── capture/ # gopacket/pcap capture backend and ingest coordination
│ │ └── ebpf/ # eBPF process attribution (cilium/ebpf, Linux-only)
│ ├── config/ # config loading and validation
│ ├── export/ # JSON/CSV and related export paths
│ ├── history/ # historical query surface
│ ├── metrics/ # counters and self-observability
│ ├── pcap/ # raw packet retention helpers
│ ├── protocol/ # packet and protocol handling
│ ├── store/ # SQLite-backed persistence
│ ├── tui/ # terminal UI work
│ ├── version/ # build/version reporting
│ └── web/ # read-only browser companion and API
├── deploy/ # rewrite Docker Compose and Caddy scaffold
├── docs/ # architecture, install, operations, retention, benchmarks
├── ebpf/
│ ├── bpf/src/ # eBPF C source (TC classifier, kprobes)
│ └── README.md # eBPF architecture and build docs
├── frontend/ # Bun workspace for the shipped read-only companion, OpenTUI app, and current TS contract helpers
├── magefile.go # canonical task runner and developer workflow
├── migrations/ # SQLite migration files
├── packages/
│ └── contracts/ # rewrite Zod contracts package
└── testdata/pcap/ # capture fixtures
- Installation - requirements, build instructions, permissions, configuration
- Operations - deployment patterns, service-manager guidance, feedback checklist
- Retention and storage - storage model, sizing guidance, drill-down workflow
- Architecture - data flow, ownership split, storage boundaries
- Rewrite persistence foundation - Postgres schema, migrations, index posture, and retention boundary for the rewrite
- Rewrite Phase 0 baseline - fixture-backed baseline outputs plus the explicit keep, defer, and drop allowance for the first rewritten release
- OpenTUI Phase 3 spike - proof notes, launch model, and verification evidence for the viability work that preceded the current terminal migration lane
- Benchmarks - measured performance on Apple M5
- Changelog - release history
mage verify
# Shipped Go + frontend companion lanes:
mage check
mage frontend:verify
# Rewrite scaffold lane:
bun install
docker compose -f deploy/docker-compose.yml up -d postgres
export DATABASE_URL=postgres://wirescope:[email protected]:15432/wirescope
bun run db:migrate
bun run import:sqlite -- --dry-run /path/to/wirescope.db
bun run replay:smoke
bun run verify
docker compose -f deploy/docker-compose.yml up --build

Wirescope v1.0.0 is a complete single-node network observability tool with live terminal UI, durable SQLite history, alert evaluation, export workflows, Prometheus self-observability, soak-tested persistence, documented benchmarks, and opt-in eBPF-based per-process network attribution on Linux. The Bun-managed Astro + Vue browser companion lives alongside the terminal workflow as a read-only companion, not a replacement, and now includes same-origin JSON/CSV download exports for session history, DNS history, and alert history. The OpenTUI app in frontend/apps/tui is currently an early Phase 4 terminal frontend that proves pane rendering, protocol coverage, and filter flows against the Go contract without claiming the Bubble Tea cutover is done.
Today that means:
- one capture node per install
- Go owns capture, aggregation, persistence, export, alerting, and the JSON API
- SQLite stores searchable metadata, while raw-packet retention remains an explicit on-disk boundary and repo-local helper lane
- Linux can opt into eBPF-based process and cgroup attribution
- future expansion stays intentionally narrow until operator feedback clearly earns it
The next correct move is to run Wirescope on real hosts, gather operator feedback, and fix friction in the shipped surface before broadening scope.
Focus areas:
- deployment and permission ergonomics
- alert noise and rule tuning
- query latency and storage growth under real capture loads
- missing filters or pivots that block normal incident-response workflows
Use docs/operations.md as the current operating reference for post-1.0 deployments.
GPL-3.0. See LICENSE.