currently breaking things — intentionally
Backend · Applied AI · Node.js · TypeScript · Python
SDE-3 focused on the backend–AI intersection. I build things like RAG pipelines, chatbot systems, hybrid decision engines, and API infrastructure — then I take them apart to understand why they work (or don't).
I care about applied AI, not theoretical AI. If it doesn't run on real data with real edge cases, I'm not that interested. I also teach as I build — workshops, demos, writing — because explaining something badly is usually a sign you don't understand it yet.
Over 5 years in. Still wrong about things often enough to keep it interesting.
- RAG systems that don't hallucinate on your specific domain data
- Hybrid architectures — decision trees + LLMs, where determinism matters
- API infrastructure experiments: Kong plugins, mock Direct Line, gateway patterns
- OCR pipelines that handle real document messiness (Tesseract + fuzzy search via Fuse.js)
- Teaching by building — live demos where the code is written in front of people, bugs and all
RAG Demo System
End-to-end retrieval-augmented generation over domain-specific documents. Built to demonstrate what actually goes wrong when you swap out embeddings, chunk sizes, or retrieval strategies.
Python LLM vector search Azure
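One of the knobs the demo varies is chunking. As a minimal sketch (in TypeScript here, though the project itself is Python): fixed-size chunks with overlap, where `chunkSize` and `overlap` are illustrative values, not the ones the project uses.

```typescript
// Fixed-size chunking with overlap: one of the retrieval knobs the demo
// swaps out. Values below are illustrative defaults, not the project's.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    // step forward by chunkSize minus overlap so adjacent chunks share context
    start += chunkSize - overlap;
  }
  return chunks;
}
```

Varying `chunkSize` and `overlap` against the same query set is the quickest way to see retrieval quality shift.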
Hybrid Chatbot — Decision Tree + AI
A chatbot that routes structured, predictable queries through deterministic logic and falls back to an LLM only when needed. Cheaper, faster, and more auditable than pure LLM for constrained domains.
Node.js TypeScript Azure Bot Service Direct Line
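The routing idea can be sketched in a few lines: match structured queries against deterministic rules first, and hit the LLM only when nothing matches. The intent table and `callLlm` below are stand-ins, not the project's actual code.

```typescript
// Hybrid routing sketch: deterministic rules first, LLM fallback last.
// Routes and callLlm are hypothetical stand-ins for illustration.
type Handler = (query: string) => string;

const deterministicRoutes: Array<{ pattern: RegExp; handle: Handler }> = [
  { pattern: /\border status\b/i, handle: () => "Checking order status..." },
  { pattern: /\bbusiness hours\b/i, handle: () => "We're open 9-5, Mon-Fri." },
];

async function route(
  query: string,
  callLlm: (q: string) => Promise<string>
): Promise<string> {
  for (const { pattern, handle } of deterministicRoutes) {
    if (pattern.test(query)) return handle(query); // cheap, fast, auditable path
  }
  return callLlm(query); // LLM only for the unstructured long tail
}
```

Every deterministic hit is a token you didn't spend and an answer you can audit, which is the whole argument for the hybrid design in constrained domains.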
Kong Plugin Experiments
Custom API gateway plugins for request transformation, auth middleware, and traffic control — built while figuring out what Kong's plugin architecture actually lets you do (and what it quietly refuses).
Kong API design Lua Node.js
OCR + Fuzzy Search Pipeline
Document extraction pipeline using Tesseract, with KQL-based querying and Fuse.js for tolerant matching when OCR output isn't clean. Which is always.
Tesseract Fuse.js KQL TypeScript
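The project leans on Fuse.js for tolerant matching; as a hand-rolled stand-in (not the Fuse.js API), here is the core idea: score candidates by edit distance so a noisy OCR token like "lnvoice" still finds "invoice".

```typescript
// Levenshtein distance via dynamic programming: the number of single-character
// edits (insert, delete, substitute) needed to turn one string into the other.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
  return dp[a.length][b.length];
}

// Return candidates within a tolerance, best match first. A crude stand-in
// for what a library like Fuse.js does with its threshold option.
function fuzzyFind(query: string, candidates: string[], maxDistance = 2): string[] {
  return candidates
    .map(c => ({ c, d: editDistance(query.toLowerCase(), c.toLowerCase()) }))
    .filter(({ d }) => d <= maxDistance)
    .sort((x, y) => x.d - y.d)
    .map(({ c }) => c);
}
```

A tolerance of one or two edits covers the most common OCR misreads (l/i, 0/o) without matching everything to everything.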
- Build first, clean up second. A working prototype beats a clean plan that never ships.
- Break things on purpose. If I don't know where the system fails, I don't know what I built.
- Prefer boring solutions. The clever approach is usually the one you regret maintaining.
- Teach through demos, not slides. Writing code in front of people is the fastest way to find out what you actually understand.
- Comfort with "I don't know" is a feature, not a bug. The interesting work lives past that line.
The gap between demo and production is where the actual engineering is.
I write at whoisnp.me — it's called "brain overflow buffer" for a reason. Mostly experiments, breakdowns of systems I'm working on, and things I figured out the hard way.
- Live workshop demos — code written in the room, no pre-baked scripts
- Learning in public: what I tried, what broke, what I'd do differently
- Occasional deep dives into backend + AI system patterns
No pitches. If you're building something in the AI + backend space, or want to talk systems — reach out.


