Peer into latent space.
Terminal UI for visualizing high-dimensional text embeddings via dimensionality reduction. It embeds text with Ollama's `nomic-embed-text` model (768D vectors), persists the vectors to a Qdrant vector database over gRPC, and projects them to 2D using PCA (SVD-based) or UMAP for nonlinear manifold approximation. Clustering via HDBSCAN reveals semantic structure without having to specify k. Built with Bubble Tea and Lipgloss.
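As a rough sketch of the embedding step, the snippet below requests a single embedding from a local Ollama server using its `/api/embeddings` endpoint and the `nomic-embed-text` model. It uses only the Go standard library and is an illustration of the pipeline's first stage, not this repo's actual code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// embedRequest and embedResponse mirror Ollama's /api/embeddings JSON shape.
type embedRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
}

type embedResponse struct {
	Embedding []float64 `json:"embedding"` // 768 floats for nomic-embed-text
}

// embed sends one text to the local Ollama server and returns its vector.
func embed(text string) ([]float64, error) {
	body, err := json.Marshal(embedRequest{Model: "nomic-embed-text", Prompt: text})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post("http://localhost:11434/api/embeddings", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out embedResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Embedding, nil
}

func main() {
	vec, err := embed("peer into latent space")
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %d-dimensional embedding\n", len(vec)) // expect 768
}
```

Each returned vector is then stored in Qdrant and reduced to 2D for display in the TUI.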
Prerequisites:

- Ollama serving `nomic-embed-text` on `localhost:11434`
- Qdrant running on `localhost:6334` (gRPC)
Install with the install script or `go install`:

- `curl -sSL https://raw.githubusercontent.com/alDuncanson/latent/main/install.sh | bash`
- `go install github.com/alDuncanson/latent@latest`

Usage:

- `latent` starts the TUI
- `latent dataset.csv` imports from CSV (requires a `text` column; see the loader sketch below)
- `latent dataset.json` imports from JSON (an array of strings or `{text: ...}` objects)
- `latent --preload` seeds with a demo word list
- `latent --hf-dataset stanfordnlp/imdb --hf-split test --hf-max-rows 50` imports from a Hugging Face dataset
- `latent --hf-dataset rajpurkar/squad --hf-column question --hf-max-rows 200` imports a specific column

Hugging Face flags: `--hf-dataset`, `--hf-split` (default: `train`), `--hf-column` (default: `text`), `--hf-max-rows` (default: 100).
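For concreteness, here is a minimal sketch (not the repo's import code) of what "requires a `text` column" means for CSV input: a header row containing `text`, with one document per subsequent row. The file name and helper are illustrative assumptions.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"os"
)

// readTextColumn returns the values of the `text` column from a CSV file,
// mirroring the shape `latent dataset.csv` expects.
func readTextColumn(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		return nil, err
	}
	if len(rows) == 0 {
		return nil, fmt.Errorf("empty CSV")
	}

	// Locate the `text` column in the header row; other columns are ignored.
	col := -1
	for i, name := range rows[0] {
		if name == "text" {
			col = i
			break
		}
	}
	if col == -1 {
		return nil, fmt.Errorf("no `text` column in header")
	}

	var texts []string
	for _, row := range rows[1:] {
		if col < len(row) {
			texts = append(texts, row[col])
		}
	}
	return texts, nil
}

func main() {
	texts, err := readTextColumn("dataset.csv")
	if err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d rows\n", len(texts))
}
```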
