This repository gathers notes and other material for a work-in-progress workflow I'm developing, one that attempts to work in harmony with Claude Code, Codex, and similar AI agents.
GitHub repositories are outstanding mechanisms for gathering the material AI agents need to "do their thing."
I am coming to think of them less as traditional code repos and more as baskets.
In this model, the repo-as-launchpad has two discrete areas: a workspace, where the user and the AI agent exchange information, and a basket, where the user gathers foundational materials that persist throughout the project lifecycle (which may be open-ended).
As an example, consider a workspace for systems administration.
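A hypothetical folder layout for such a repo (every directory name here is illustrative, not prescriptive) might look like:

```
sysadmin-workspace/
├── workspace/          # user ↔ agent exchange area
│   ├── tasks/          # briefs for jobs the agent should tackle
│   └── outputs/        # scripts, reports, and other deliverables
└── basket/             # persistent foundational material
    ├── CONTEXT.md      # human-voice project context
    ├── inventory/      # server lists, network diagrams
    └── runbooks/       # standing procedures and conventions
```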
To the extent that anything at all can be considered traditional in this very new field, AI systems leverage a few key ingredients to deliver optimal results:
- External knowledge: information not in the training data, typically supplied via retrieval-augmented generation (RAG)
- Contextual data about purpose
- A system prompt containing some of the above, plus stylistic direction
- Technical parameters such as temperature and other sampling settings
Vendors are developing agentic CLIs that have won justified praise for simplifying the AI stack considerably. See: Codex, Gemini, Qwen.
An emerging standard is a single markdown file (like CLAUDE.md) that the agent parses automatically.
Developers are quickly recognizing that maintaining a separate rules file per model is unsustainable, and are creating unified rulesets, or tooling that exposes the same rules to multiple CLIs (or generates per-framework versions).
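One low-tech way to unify rulesets is a single canonical file symlinked under each CLI's expected name. The choice of `AGENTS.md` as the canonical file, and the exact per-CLI file names, are assumptions here, not a fixed standard:

```shell
# Keep one canonical ruleset; expose it under each CLI's expected file name.
# AGENTS.md as the canonical source is an assumption, adjust to taste.
ln -sf AGENTS.md CLAUDE.md   # Claude Code
ln -sf AGENTS.md GEMINI.md   # Gemini CLI
```

Edits to the canonical file then propagate to every agent automatically.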
Traditional context delivery methods are complicated: text is chunked, embedded as vectors, and retrieved at query time.
However, LLMs are increasingly adept at handling large context loads delivered from both system and user prompting.
Much as JSON data stores and PostgreSQL databases serve different purposes at vastly different levels of scale, a single markdown file can be a simple yet effective mechanism for providing the kind of short-but-sweet context that grounds better inference.
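As a minimal sketch of this lighter-weight approach (the file name, default wording, and function name are my own illustrations, not a prescribed API), the whole markdown file can simply be read and prepended to the system prompt, with no chunking or vector store involved:

```python
from pathlib import Path

def build_system_prompt(context_path: str = "CONTEXT.md",
                        style: str = "Answer concisely.") -> str:
    """Prepend an entire project-context markdown file to the system
    prompt, instead of chunking and embedding it for retrieval."""
    context = Path(context_path).read_text(encoding="utf-8")
    return f"{style}\n\n# Project context\n\n{context}"
```

The resulting string is passed as the system prompt of whatever model API you use; at typical context-file sizes this fits comfortably in modern context windows.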
Why both?
I created this repo to share my thinking on this and to explain the workflow I have started adopting.
AI agents do best with information that is rigidly organized and stripped of filler. This is not, however, how most humans like to express their ideas.
Large language models (LLMs) are very adept at converting language between formats—maybe we can think of "human speak" and "robot speak" as just two dialects of English (or whatever language you speak).
Rather than trying to write like a robot for robots, CONTEXT.md is a place for humans to describe their ideas as colorfully, passionately, and at length as they wish.
A CONTEXT.md can be captured using speech-to-text tools (in fact, this is how I most often create them). It can then be refined and edited, so a very rough speech transcription is cleaned up just a little. And then it can be ported to its agent-briefing equivalent.
To convert a context file to an agent brief, just about any approach can be used:
- An AI assistant
- A slash command (a selection is included in this repo).
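As an illustration of the slash-command route, a custom Claude Code command is just a markdown file under `.claude/commands/`. The file name, wording, and output file below are my own sketch, not the versions shipped in this repo:

```markdown
<!-- .claude/commands/brief.md — invoked in Claude Code as /brief -->
Read CONTEXT.md and rewrite it as an agent brief: rigidly organized,
stripped of filler, with clear sections for purpose, constraints, and
tasks. Save the result as AGENT_BRIEF.md (name is illustrative).
```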
Daniel Rosehill (danielrosehill.com)
Comments: [email protected]