## Overview
Bring Design Thinking (DT) coaching capabilities to HVE Core as a pre-RPI workflow for Problem Discovery. This epic tracks the end-to-end implementation from coaching foundation through packaging and distribution.
## Architecture Decision: Approach C — Single Coach + Knowledge Modules

A single `dt-coach.agent.md` maintains continuous conversational identity while loading method-specific knowledge through tiered instructions. This approach was selected over:
- Approach A (Lightweight Orchestrator): Destroys identity continuity and hint state per subagent dispatch.
- Approach B (Heavyweight Task Agents): Fragments experience across mandatory handoffs.
Approach C mirrors the validated DT4HVE coaching pattern: one coach, 19 specialized hats, identity continuity through method transitions.
## Core Components

| Component | Description |
| --- | --- |
| DT Coach Agent | Single conversational agent with Think/Speak/Empower philosophy |
| 9-Method Curriculum | Problem Space (1-3), Solution Space (4-6), Implementation Space (7-9) |
| Tiered Instructions | Ambient (always loaded), Method (auto-loaded per method), On-demand (deep expertise via `read_file`) |
| Progressive Hint Engine | 4-level escalation from broad direction to direct detail |
| Industry Context Layer | Pluggable industry profiles for cross-industry coaching |
| Coaching State | YAML-based session persistence at `.copilot-tracking/dt/{project-slug}/` |
| DT→RPI Handoff | Tiered handoff system with exit prompts at each DT space boundary, a handoff contract, and subagent-assisted artifact compilation |
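To make the Coaching State component concrete, a session file could look like the sketch below. The filename, field names, and values are hypothetical illustrations, not a defined schema; only the storage path, the three spaces, the 9-method curriculum, the 4-level hint engine, and the industry profiles come from this document.

```yaml
# Hypothetical coaching-state sketch — field names are illustrative, not a defined schema.
# Stored under .copilot-tracking/dt/{project-slug}/, e.g. for a project slugged "checkout-redesign".
project_slug: checkout-redesign
space: problem              # problem | solution | implementation
method: 2                   # current method in the 9-method curriculum
hint_level: 1               # progressive hint engine: 1 (broad direction) .. 4 (direct detail)
industry_profile: retail    # pluggable industry context layer
completed_methods: [1]
session_notes:
  - "Stakeholder map drafted; two interviews pending"
```

Keeping state in a small YAML file like this is what lets the coach maintain session continuity independently of RPI's forward pipeline.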
## Five Critical Gaps Addressed

- Problem-Space Gap — No HVE Core agents address stakeholder discovery or problem framing. The DT coach creates the pre-RPI workflow.
- Too-Nice Prototype — Production-grade output prevents iteration. Per-method quality constraints enforce lo-fi deliverables in early methods.
- Linear vs. Iterative — RPI runs a forward pipeline while DT relies on fluid iteration. The coach maintains independent session state to bridge the two.
- Coaching vs. Directing — A new interaction model built on questions, observations, choices, and capability building.
- User Centricity vs. Code Centricity — The coach guides thinking through conversation, not code generation.
## Phase Overview

| Phase | Scope | Priority |
| --- | --- | --- |
| Phase 1: Coaching Foundation | Ambient instructions, Problem Space method instructions | P0 |
| Phase 2: Coach Agent | Agent definition, entry prompts, state protocol | P0 |
| Phase 3: Problem Space Methods | Deep instructions, industry templates, handoff contract, Problem Space exit handoff, subagent workflow | P1 |
| Phase 4: Solution Space Methods | Methods 4-6 instructions, deep files, Solution Space exit handoff | P1/P2 |
| Phase 5: Implementation Space Methods | Methods 7-9 instructions, deep files, Implementation Space exit handoff | P2 |
| Phase 6: Packaging and Distribution | Collection manifest, extension, integration | P1/P2 |
| Phase 7: Learning Tutor | V2 teaching agent with syllabi and assessment | P3 |
## Token Budget Estimates

| Scenario | Token Range |
| --- | --- |
| Base session (ambient only) | ~6,300-7,700 |
| Active method (ambient + one method) | ~9,800-12,700 |
| Method transition (ambient + two methods) | ~12,000-15,000 |
## Artifact Creation Process: RPI Pipeline

All DT artifacts across every phase are created using the RPI pipeline. Each phase's deliverables flow through: `/task-research` → `/task-plan` → authoring → `/task-review`. The authoring step varies by file type:

- AI artifacts (`.instructions.md`, `.prompt.md`, `.agent.md`, `SKILL.md`): Use `/prompt-build` — the prompt-builder agent includes built-in Prompt Quality Criteria validation and sandbox testing specific to these file types.
- Non-AI artifacts (YAML manifests, documentation, configuration): Use `/task-implement` — the standard implementation pipeline.

All AI artifacts additionally follow `.github/instructions/prompt-builder.instructions.md` authoring standards. Individual leaf issues contain a "How to Build This File" section with phase-by-phase instructions and starter prompts tailored to each deliverable.
This is a cross-cutting process requirement — individual phase issues inherit this workflow without needing separate process issues.
## Validation Strategy
Validation occurs per-issue at backlog creation time rather than as a separate testing phase. Each issue's success criteria specify verifiable completion indicators checked during task-reviewer validation. No dedicated testing issues are planned; the RPI pipeline's built-in review step provides continuous quality assurance.
## DT Prompt Strategy: Start Light, Graduate if Needed

Phases 1-3 use DT-aware instruction files (with `applyTo` globs on `.copilot-tracking/dt/` paths) to modify RPI agent behavior in DT context. This is the lightest-path approach — RPI's architecture supports contextual behavior modification through instruction files without requiring parallel prompt suites. After Phase 3 handoff integration, evaluate whether richer DT-native prompt variants (`dt-empathize.prompt.md`, `dt-define.prompt.md`, etc.) are warranted. Graduate to the richest path only if the lightest path proves insufficient.
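As a sketch of the lightest path, a DT-aware instruction file might look like the following. The filename and body text are hypothetical; the `applyTo` front-matter key is the standard mechanism for scoping a `.instructions.md` file to matching paths.

```markdown
---
applyTo: ".copilot-tracking/dt/**"
---

<!-- Hypothetical DT-context instruction file; contents are illustrative only. -->
When working under a DT session directory:

- Preserve coaching state files; never regenerate them from scratch.
- Prefer questions, observations, and choices over direct implementation.
- Respect per-method quality constraints (e.g. lo-fi output in early methods).
```

Because the glob scopes these rules to DT session paths, RPI agents behave normally everywhere else — which is what makes this path lighter than maintaining parallel prompt suites.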
## Source Material
## Success Criteria
- `collections/design-thinking.collection.yml` with `path` and `kind` fields
- `collections/hve-core-all.collection.yml` with `path` and `kind` fields
- `npm run plugin:generate` succeeds after all collection manifest updates
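A minimal sketch of what the first criterion might look like, assuming a simple list-of-items manifest; the actual schema is defined by HVE Core's collection tooling, and the item paths below are hypothetical.

```yaml
# collections/design-thinking.collection.yml — hypothetical sketch, not the real schema.
# Only the `path` and `kind` fields are taken from the success criteria above.
name: design-thinking
items:
  - path: agents/dt-coach.agent.md
    kind: agent
  - path: instructions/dt-ambient.instructions.md
    kind: instruction
```

Whatever the real schema turns out to be, `npm run plugin:generate` succeeding after the manifest update is the verifiable completion check.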