The Jozu Blog
Package the Agent, Not Just the Model: Native Skills Support in KitOps
Package agent skills, configs, and model weights as versioned ModelKits with KitOps v1.12.0. Push to Jozu Hub and serve locally with Jozu Rapid Inference Containers.
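For a feel of what that packaging looks like, here is a minimal Kitfile sketch. The package name and file paths are hypothetical, and the dedicated skills syntax added in v1.12.0 may differ from what is shown; this sketch simply carries skill and config files as code artifacts alongside the weights.

```yaml
# Hypothetical Kitfile for an agent ModelKit.
# Paths and names are illustrative; see the KitOps docs
# for the exact v1.12.0 skills syntax.
manifestVersion: "1.0"
package:
  name: coding-agent          # hypothetical package name
  version: 1.0.0
model:
  name: agent-llm
  path: ./weights/model.gguf  # quantized weights, illustrative path
code:
  - path: ./skills            # agent skill definitions
    description: Agent skills packaged next to the model
  - path: ./config            # agent configuration files
    description: Runtime configuration for the agent
```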
Running a Local Coding Agent with OpenCode and Jozu Rapid Inference Containers (RICs)
Learn how to package a quantized LLM as a ModelKit, deploy it locally with a Jozu Rapid Inference Container, and connect it to OpenCode to run a fully private AI coding agent on your own hardware.
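As a rough sketch of the serving step: the image reference, port, and model name below are hypothetical placeholders, and the sketch assumes the Rapid Inference Container exposes an OpenAI-compatible chat completions endpoint, as most local inference servers do. OpenCode's provider configuration would then point at the same base URL.

```sh
# Run the Rapid Inference Container locally.
# Image reference and port are hypothetical placeholders.
docker run -d -p 8000:8000 jozu.ml/my-org/coding-llm:latest

# Smoke-test the endpoint, assuming an OpenAI-compatible API.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "coding-llm", "messages": [{"role": "user", "content": "Hello"}]}'
```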
Signing Is Not Enough: Why AI Artifact Provenance Needs to Be a Graph
Signing your AI models isn't enough. Learn why fine-tuned model provenance requires graph traversal, not just attestations, to close the supply-chain gap.
Claude Managed Agents: What It Solves and What Enterprises Still Need
Prompt Drift Is the New Shadow Deploy
Your model didn't change. Your prompt did. Can you prove exactly what ran in production? KitOps v1.11 treats prompts as first-class release artifacts, closing the supply-chain gap between behavior changes and governed deployments.
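A minimal sketch of what that looks like in practice, assuming a standard KitOps pack-and-push workflow; the repository name and tag are hypothetical, and the dedicated prompt fields in v1.11 may go beyond what is shown here.

```sh
# Hypothetical flow: only prompts/system.txt changed between releases.
# Repacking pins the behavior change to an immutable, auditable version.
kit pack . -t jozu.ml/my-org/support-agent:v2
kit push jozu.ml/my-org/support-agent:v2

# Later, during an audit, retrieve exactly what ran in production:
kit pull jozu.ml/my-org/support-agent:v2
```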
Deploy LLMs On-Prem: From Docker Model Runner to Kubernetes with Jozu Hub
Learn how to extract a model from Docker Model Runner, package it as a versioned ModelKit with KitOps, push to Jozu Hub, and deploy to Kubernetes using auto-generated Rapid Inference Containers, with full governance and audit trails.
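Condensed to its essentials, the flow might look like the sketch below. It assumes the model weights have already been extracted from Docker Model Runner into the working directory (the post walks through that step), and the registry path, tag, and deployment manifest are hypothetical.

```sh
# Authenticate, then package and publish the extracted model.
# Repository name and tag are illustrative placeholders.
kit login jozu.ml
kit pack . -t jozu.ml/my-org/on-prem-llm:v1
kit push jozu.ml/my-org/on-prem-llm:v1

# Jozu Hub can generate a Rapid Inference Container for the ModelKit;
# deploying it then looks like any other container workload.
kubectl apply -f deployment.yaml   # references the generated RIC image
```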