Self-hosted AI builder · Agent workflow operator · Web / CLI / Telegram automation
I build AI systems that live inside real workflows: self-hosted workspaces, Codex bridges, token accounting services, and small infrastructure pieces that make agent execution easier to run and maintain.
- Psyche: a self-hosted AI workspace for multi-provider chat, provider and model management, attachments, and production-ready deployment.
- teledex: a lightweight Telegram bridge that turns persistent Codex sessions into a practical remote workflow with bound directories and live progress previews.
- token-account: a persistent token usage service for Codex sessions with incremental sync, multi-device aggregation, and a live dashboard.
- codex-bark-notify-hook: a small notification adapter that turns Codex notify callbacks into Bark pushes or webhook relays with deduplication and compressed summaries.
- Moving AI from "can answer" to "can plug into real workflows and keep operating"
- Building self-hosted setups where models, providers, data, and deployment all stay under control
- Connecting Web, CLI, and Telegram entry points so the same agent capability can be reused across contexts
- Using lightweight systems to validate product direction quickly, then hardening what proves useful
- Self-hosted AI products and multi-model integration
- Codex and agent-oriented engineering workflows
- Telegram, Web, and CLI automation bridges
- Service delivery with Fastify and FastAPI
- Token accounting, observability, and usage analytics
- Quant research and tooling infrastructure
- Frontend: React, TypeScript, Vite, Tailwind CSS
- Backend: Fastify, FastAPI, Node.js, Python
- Data and deployment: PostgreSQL, SQLite, Docker, PM2, Nginx
- AI integration: Codex, OpenAI-compatible APIs, Anthropic-compatible workflows
- How to turn self-hosted AI apps into maintainable products
- How to plug Codex and agents into real engineering workflows
- Remote control patterns, automation bridges, and observability design
- The tradeoffs between fast MVPs and long-term maintainability
I would rather ship a working system first, then refine it into something worth maintaining for the long run.
