Install • Quick Start • Why Gokin? • Features • Providers • Config • Contribute
Most AI coding tools are closed-source, route your code through third-party servers, and give you zero control over what gets sent to the model. Gokin was built with a different goal: a fast, secure, zero-telemetry CLI where your code goes directly to the provider you choose — and nothing else leaves your machine.
This matters especially when you work with multiple LLM providers across different jurisdictions (DeepSeek, GLM, Kimi, MiniMax, Gemini, Claude, OpenAI). Gokin ensures that secrets, credentials, and sensitive code are automatically redacted before reaching any model, TLS is enforced on every connection, and no proxy or middleware ever touches your data. You pick the provider — Gokin handles the rest.
| Feature | Gokin | Claude Code | Cursor |
|---|---|---|---|
| Price | Free → Pay-per-use | $20+/month | $20+/month |
| Providers | 8 (Gemini, Claude, OpenAI, DeepSeek, GLM, Kimi, MiniMax, Ollama) | 1 (Claude) | Multi |
| Offline | ✅ Ollama | ❌ | ❌ |
| Tools | 54 | ~30 | ~30 |
| Multi-agent | ✅ 5 parallel | Basic | ❌ |
| Direct API | ✅ Zero proxies | ✅ | ❌ Routes through Cursor servers |
| Security | ✅ TLS 1.2+, secret redaction (24 patterns), sandbox, 3-level permissions | Basic | Basic |
| Open Source | ✅ | ❌ | ❌ |
| Self-hosting | ✅ | ❌ | ❌ |
Choose your price tier:
| Stack | Cost | Best For |
|---|---|---|
| Gokin + Ollama | 🆓 Free | Privacy, offline, no API costs |
| Gokin + Gemini Flash | 🆓 Free tier | Fast iterations, prototyping |
| Gokin + DeepSeek | ~$1/month | Daily coding, best value |
| Gokin + Kimi | Pay-per-use | Fast reasoning, 256K context |
| Gokin + MiniMax | Pay-per-use | 1M context, strong coding |
| Gokin + Claude | Pay-per-use | Complex reasoning |
| Gokin + OpenAI | Pay-per-use | Codex models, o3 reasoning |
# Quick install
curl -fsSL https://raw.githubusercontent.com/ginkida/gokin/main/install.sh | sh

# Or build from source
git clone https://github.com/ginkida/gokin.git
cd gokin
go build -o gokin ./cmd/gokin
./gokin --setup

- Go 1.25+ (build from source)
- One AI provider (see Providers below)
# Launch with interactive setup
gokin --setup
# Or set API key and run
export GEMINI_API_KEY="your-key"
gokin

Then just talk naturally:
> Explain how auth works in this project
> Add user registration endpoint with validation
> Run the tests and fix any failures
> Refactor this module to use dependency injection
> Create a PR for these changes
- Semantic Search — Find code by meaning, not just keywords
- Code Graph — Dependency visualization
- Multi-file Analysis — Understand entire modules
- Files: read, write, edit, diff, batch, copy, move, delete
- Search: glob, grep, semantic_search, tree, code_graph
- Git: status, commit, diff, branch, log, blame, PR
- Run: bash, run_tests, ssh, env
- Plan: todo, task, enter_plan_mode, coordinate
- Memory: memorize, shared_memory, pin_context
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Explore   │────▶│   General   │────▶│    Bash     │
│   (read)    │     │   (write)   │     │  (execute)  │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       └───────────────────┴───────────────────┘
                           │
                    [Progress UI]
- Up to 5 parallel agents
- Shared memory between agents
- Automatic task decomposition
- 3-level permissions: Low (auto), Medium (ask once), High (always ask)
- Sandbox mode for bash commands
- Diff preview before applying changes
- Undo/Redo for all file operations
- Audit logging
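The permission ladder described above can be sketched in Go roughly like this (a minimal illustration; `Risk`, `Gate`, and `Allowed` are invented names for this example, not Gokin's internal API):

```go
package main

import "fmt"

// Risk mirrors the 3-level model: Low auto-approves, Medium asks once
// and remembers the answer, High asks on every invocation.
type Risk int

const (
	Low Risk = iota
	Medium
	High
)

type Gate struct {
	approved map[string]bool // remembered "ask once" decisions
}

func NewGate() *Gate { return &Gate{approved: make(map[string]bool)} }

// Allowed reports whether a tool call may proceed; ask stands in for
// the interactive prompt.
func (g *Gate) Allowed(tool string, r Risk, ask func(string) bool) bool {
	switch r {
	case Low:
		return true // auto-approve, e.g. read-only tools
	case Medium:
		if ok, seen := g.approved[tool]; seen {
			return ok // already answered once
		}
		ok := ask(tool)
		g.approved[tool] = ok
		return ok
	default: // High
		return ask(tool)
	}
}

func main() {
	g := NewGate()
	asks := 0
	yes := func(string) bool { asks++; return true }

	g.Allowed("read_file", Low, yes)     // no prompt
	g.Allowed("write_file", Medium, yes) // prompts once...
	g.Allowed("write_file", Medium, yes) // ...then remembered
	g.Allowed("bash", High, yes)         // always prompts
	g.Allowed("bash", High, yes)
	fmt.Println(asks) // prints 3
}
```

The "ask once" memory is per-tool here; a real implementation would also scope decisions per session.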
┌──────────┐          ┌──────────────────┐
│  Gokin   │ ──TLS──▶ │   Provider API   │
│ (local)  │          │  (Gemini/Claude/ │
│          │ ◀──TLS── │  DeepSeek/etc.)  │
└──────────┘          └──────────────────┘
No intermediary servers. No Vercel. No telemetry proxies.
Your API key, your code, your conversation — direct.
Some CLI tools route requests through their own proxy servers (Vercel Edge, custom gateways) for telemetry, analytics, or API key management. Gokin does none of this. Every API call goes directly from your machine to the provider's endpoint. You can verify this — it's open source.
LLM tool calls can accidentally expose secrets found in your codebase. Gokin automatically redacts them before they reach the model or your terminal:
| Category | Examples |
|---|---|
| API keys | AKIA..., ghp_..., sk_live_..., AIza... |
| Tokens | Bearer tokens, JWT (eyJ...), Slack/Discord tokens |
| Credentials | Database URIs (postgres://user:pass@...), Redis, MongoDB |
| Crypto material | PEM private keys, SSH keys |
24 regex patterns, applied to every tool result and audit log. Handles any data type — strings, maps, typed slices, structs. Custom patterns supported via API.
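A minimal sketch of the mechanism, with a handful of illustrative patterns (these exact regexes are assumptions in the spirit of the table above, not Gokin's actual 24):

```go
package main

import (
	"fmt"
	"regexp"
)

// secretPatterns is an illustrative subset of the redaction rules.
var secretPatterns = []*regexp.Regexp{
	regexp.MustCompile(`AKIA[0-9A-Z]{16}`),               // AWS access key ID
	regexp.MustCompile(`ghp_[A-Za-z0-9]{36}`),            // GitHub PAT
	regexp.MustCompile(`sk_live_[A-Za-z0-9]{24,}`),       // Stripe live key
	regexp.MustCompile(`eyJ[\w-]+\.[\w-]+\.[\w-]+`),      // JWT
	regexp.MustCompile(`postgres://[^:\s]+:[^@\s]+@\S+`), // DB URI with password
}

// Redact replaces any match with a placeholder before the text
// reaches the model or the terminal.
func Redact(s string) string {
	for _, p := range secretPatterns {
		s = p.ReplaceAllString(s, "[REDACTED]")
	}
	return s
}

func main() {
	fmt.Println(Redact("export AWS_KEY=AKIAIOSFODNN7EXAMPLE"))
	// prints: export AWS_KEY=[REDACTED]
}
```

Running every tool result through one function like this, rather than redacting at individual call sites, is what makes the guarantee easy to audit.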
| Layer | What it does |
|---|---|
| TLS 1.2+ enforced | No weak ciphers, certificate verification always on |
| Sandbox mode | Bash runs in isolated namespace (Linux), safe env whitelist (~35 vars) — API keys never leak to subprocesses |
| Command validation | 50+ blocked patterns: fork bombs, reverse shells, rm -rf /, credential theft, env injection |
| SSH validation | Host allowlist, loopback blocked, username injection prevention |
| Path validation | Symlink resolution, directory traversal blocked, TOCTOU prevention |
| SSRF protection | Private IPs, loopback, link-local blocked; all DNS results checked |
| Audit trail | Every tool call logged with sanitized args |
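The path-validation layer, for example, boils down to resolving a candidate path against the workspace root and rejecting anything that escapes it. A simplified Go sketch (Gokin's real check additionally resolves symlinks and guards against TOCTOU races; `withinRoot` is an illustrative name):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinRoot resolves candidate relative to root and returns an error
// if the resolved path lands outside the workspace (e.g. via "..").
func withinRoot(root, candidate string) (string, error) {
	abs, err := filepath.Abs(filepath.Join(root, candidate))
	if err != nil {
		return "", err
	}
	rootAbs, err := filepath.Abs(root)
	if err != nil {
		return "", err
	}
	// Separator-aware prefix check: "/workspace2" must not pass for
	// root "/workspace".
	if abs != rootAbs && !strings.HasPrefix(abs, rootAbs+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes workspace: %s", candidate)
	}
	return abs, nil
}

func main() {
	if _, err := withinRoot("/workspace", "../etc/passwd"); err != nil {
		fmt.Println("blocked:", err)
	}
	if p, err := withinRoot("/workspace", "src/main.go"); err == nil {
		fmt.Println("allowed:", p)
	}
}
```

Appending the separator before the prefix check is the classic pitfall here; without it, a sibling directory that merely shares the root's name as a prefix would be accepted.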
- API keys loaded from env vars or local config (~/.config/gokin/config.yaml)
- Keys are masked in all UI displays (sk-12****cdef)
- Keys are never included in conversation history or tool results
- Ollama mode: zero network calls — fully airgapped
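The masking format shown above (sk-12****cdef) suggests a prefix/suffix scheme; a sketch of it (the 5-character prefix and 4-character suffix split is inferred from the example, not read from Gokin's source):

```go
package main

import "fmt"

// maskKey keeps a short prefix and suffix of an API key and hides the
// middle; keys too short to mask safely are hidden entirely.
func maskKey(k string) string {
	if len(k) <= 9 {
		return "****"
	}
	return k[:5] + "****" + k[len(k)-4:]
}

func main() {
	fmt.Println(maskKey("sk-1234567890abcdef")) // prints sk-12****cdef
}
```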
> Remember we use PostgreSQL with pgx driver
> What were our database conventions?
- Project-specific memories
- Auto-inject relevant context
- Stored locally (your data stays yours)
| Provider | Models | Auth | Notes |
|---|---|---|---|
| Gemini | 3.1-pro, 3-flash, 2.5-pro | API key / OAuth | Free tier, native tools |
| Anthropic | Opus 4, Sonnet 4.5, Haiku | API key | Best reasoning |
| OpenAI | GPT-5.3 Codex, o3, o4-mini | OAuth | Codex models |
| DeepSeek | Chat, Reasoner | API key | Best price/quality |
| Kimi | K2.5, K2 Thinking Turbo, K2 Turbo | API key | Fast reasoning, 256K context |
| MiniMax | M2.7, M2.5 | API key | 200K context, strong coding |
| GLM | GLM-5, GLM-4.7 | API key | Budget option |
| Ollama | Any local model | None | 100% offline |
Switch anytime:
> /provider gemini
> /model 3-flash
> /provider anthropic
> /model sonnet
> /provider openai
> /oauth-login openai
| Command | Description |
|---|---|
| `/login <provider> <key>` | Set API key |
| `/oauth-login <provider>` | OAuth login (Gemini, OpenAI) |
| `/provider <name>` | Switch provider |
| `/model <name>` | Switch model |
| `/plan` | Enter planning mode |
| `/save` / `/load` | Session management |
| `/commit [-m "msg"]` | Git commit |
| `/pr --title "..."` | Create GitHub PR |
| `/undo` | Undo last file change |
| `/theme` | Switch UI theme |
| `/help` | Show all commands |
| Key | Action |
|---|---|
| `Enter` | Send message |
| `Ctrl+C` | Interrupt |
| `Ctrl+P` | Command palette |
| `↑`/`↓` | History |
| `Tab` | Autocomplete |
| `?` | Show help |
Location: ~/.config/gokin/config.yaml
api:
  gemini_key: "your-key"
  active_provider: "gemini"
model:
  name: "gemini-3-flash-preview"

Full configuration reference:

api:
  gemini_key: ""
  anthropic_key: ""
  deepseek_key: ""
  glm_key: ""
  kimi_key: ""
  minimax_key: ""
  openai_oauth:            # OAuth-only provider
    access_token: ""
    refresh_token: ""
  active_provider: "gemini"
  ollama_base_url: "http://localhost:11434"
retry:
  max_retries: 10
  retry_delay: 1s
  http_timeout: 120s
  stream_idle_timeout: 30s
providers:
  anthropic:
    http_timeout: 5m
    stream_idle_timeout: 120s
  deepseek:
    http_timeout: 5m
    stream_idle_timeout: 120s
  minimax:
    http_timeout: 5m
    stream_idle_timeout: 120s
  kimi:
    http_timeout: 5m
    stream_idle_timeout: 120s
model:
  name: "gemini-3-flash-preview"
  temperature: 1.0
  max_output_tokens: 8192
  enable_thinking: false   # Anthropic extended thinking
tools:
  timeout: 2m
  model_round_timeout: 5m
  bash:
    sandbox: true
    allowed_dirs: []
permission:
  enabled: true
  default_policy: "ask"    # allow, ask, deny
plan:
  enabled: true
  require_approval: true
ui:
  theme: "dark"            # dark, macos, light
  stream_output: true
  markdown_rendering: true

gokin/
├── cmd/gokin/         # CLI entry point
├── internal/
│   ├── app/           # Orchestrator & message loop
│   ├── agent/         # Multi-agent system
│   ├── client/        # 8 API providers
│   ├── tools/         # 54 built-in tools
│   ├── ui/            # Bubble Tea TUI
│   ├── config/        # YAML config
│   ├── permission/    # 3-level security
│   ├── memory/        # Persistent memory
│   ├── semantic/      # Embeddings & search
│   └── ...
~120K LOC • 100% Go • Production-ready
Contributions welcome! See CONTRIBUTING.md for:
- Development setup
- Code style guide
- Pull request process
# Dev setup
git clone https://github.com/ginkida/gokin.git
cd gokin
go mod download
go build -o gokin ./cmd/gokin
# Test
go test -race ./...
# Format
go fmt ./...
go vet ./...

MIT — Use freely, modify, distribute.
- Bubble Tea — TUI framework
- Gemini API — Google AI SDK
- Lipgloss — Terminal styling
Made with ❤️ by developers, for developers

