
Commit 9f7378f

feat(instructions): create DT curriculum content (9 modules) (#690)
## Summary

Creates 9 curriculum instruction files for the DT learning tutor — one per design thinking method across all three spaces. Each module provides structured teaching content that the tutor loads when delivering methodology education.

## Changes

### New Files (9 curriculum modules)

| File | Method | Space |
|------|--------|-------|
| `dt-curriculum-01-scoping.instructions.md` | Scope Conversations | Problem |
| `dt-curriculum-02-research.instructions.md` | Design Research | Problem |
| `dt-curriculum-03-synthesis.instructions.md` | Input Synthesis | Problem |
| `dt-curriculum-04-brainstorming.instructions.md` | Brainstorming | Solution |
| `dt-curriculum-05-concepts.instructions.md` | User Concepts | Solution |
| `dt-curriculum-06-prototypes.instructions.md` | Low-Fidelity Prototypes | Solution |
| `dt-curriculum-07-testing.instructions.md` | High-Fidelity Prototypes | Implementation |
| `dt-curriculum-08-iteration.instructions.md` | User Testing | Implementation |
| `dt-curriculum-09-handoff.instructions.md` | Iteration at Scale | Implementation |

### Module Structure (consistent across all 9)

Each curriculum file contains:

- **Module Overview** — Method purpose in teaching voice
- **Key Concepts** — 3-4 concepts with definition, relevance, and common misconception
- **Techniques** — 2-3 techniques explained for learning (what, when, how, pitfalls)
- **Comprehension Checks** — 3-4 conceptual and scenario-based questions
- **Practice Exercises** — 1-2 exercises using the manufacturing reference scenario
- **Learner Level Adaptations** — Beginner, intermediate, and advanced guidance

### Collection Updates

- Registered all 9 files in `design-thinking.collection.yml` (22 → 30 items)
- Registered all 9 files in `hve-core-all.collection.yml` (108 → 117 items)
- Added description bullets to `design-thinking.collection.md`
- Plugin outputs regenerated

### Design Decisions

- **Teaching voice** distinct from coach facilitation voice — explains concepts rather than prescribing coaching actions
- **applyTo pattern** `**/.copilot-tracking/dt/**/curriculum-{NN}*` — auto-loads when tutor creates tracking artifacts for a specific module
- **Manufacturing scenario** used throughout practice exercises for continuity
- **Progressive complexity** — earlier modules have simpler checks; later modules connect across methods

## Validation

- `npm run plugin:generate` — 0 errors
- `npm run lint:frontmatter` — 0 errors
- `npm run lint:collections-metadata` — 0 errors
- `npm run spell-check` — 0 issues

## Related

Closes #617
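As a rough sketch of how a module gets wired in, the excerpt below shows one plausible collection registration. The actual `design-thinking.collection.yml` schema is not part of this diff, so the field names and directory prefix here are assumptions, not the repository's real format.

```yaml
# Hypothetical excerpt of design-thinking.collection.yml.
# Field names and the instructions/ path prefix are assumed for illustration only;
# the real schema is not shown in this commit.
items:
  - path: instructions/dt-curriculum-01-scoping.instructions.md
  - path: instructions/dt-curriculum-02-research.instructions.md
  # ...modules 03 through 09 are registered the same way
```

With that registration in place, each module's `applyTo` glob scopes when its content loads: a tracking artifact at an illustrative path such as `docs/.copilot-tracking/dt/session-a/curriculum-01-plan.md` would match `**/.copilot-tracking/dt/**/curriculum-01*` and pull in Module 1.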
1 parent 41088d8 commit 9f7378f

32 files changed: 574 additions & 66 deletions
dt-curriculum-01-scoping.instructions.md
Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
---
description: 'DT Curriculum Module 1: Scope Conversations — concepts, techniques, checks, and exercises'
applyTo: '**/.copilot-tracking/dt/**/curriculum-01*'
---

# DT Curriculum Module 1: Scope Conversations

Scope conversations are the entry point to the Problem Space. They exist to ensure teams understand the actual problem before pursuing solutions. This module teaches learners how to move from an initial request to a validated problem frame through structured stakeholder dialogue.

## Key Concepts

*Frozen vs fluid requests* — A frozen request names a specific solution ("Build me a quality dashboard"). A fluid request describes a vague desire ("We want to use AI somehow"). Classification determines the conversation strategy: frozen requests need unfreezing through assumption surfacing, while fluid requests need scoping through progressive focusing. Learners often assume frozen requests are clearer and therefore better, but they frequently mask the real problem.

*Stakeholder ecosystem mapping* — Identifying primary stakeholders (decision makers, budget holders, daily users), secondary stakeholders (influencers, supporters, resistors), and hidden stakeholders (compliance, regulatory, union, IT security). The goal is discovering who the solution affects, not just who requested it. A common misconception is that the person requesting work represents all affected users.

*Constraint discovery* — Uncovering physical environment, operational workflow, and technical reality constraints that could invalidate solution assumptions before significant effort is invested. Learners tend to treat constraints as obstacles rather than as design parameters that shape viable solutions.

*Problem space discipline* — Scope conversations deliberately avoid solution discussions. Premature solutioning anchors thinking on approaches that address symptoms rather than root causes. Learners frequently conflate understanding the problem with being slow or indecisive.

## Techniques

*Progressive questioning* moves from broad context ("Tell me about your current process") to specific constraints ("What happens during night shifts when a machine stops?"). Each answer narrows the scope while revealing assumptions. Good output is a set of validated problem dimensions; a common pitfall is leading questions that confirm existing assumptions.

*Frozen request unfreezing* starts with the stated solution, then works backward: "What problem does the dashboard solve?" → "How do you know that problem exists?" → "Who experiences it most?" Each step peels back solution-first thinking. The pitfall is challenging the request too aggressively, which damages trust.

*Stakeholder tier mapping* categorizes every person or group the solution touches into primary, secondary, and hidden tiers, then identifies gaps — voices not yet heard. Good output is a map with at least one hidden stakeholder surfaced.

## Comprehension Checks

1. A plant manager asks "Build us a real-time quality dashboard for the production floor." Is this request frozen or fluid? What would your first question be, and why?
2. Why must scope conversations stay in the problem space even when stakeholders push for solution discussions?
3. A team identified operators and supervisors as stakeholders but no one else. What category of stakeholders are they likely missing, and what risks does that create?
4. How does constraint discovery during scoping differ from constraint discovery during prototyping?

## Practice Exercises

*Exercise: Classify and unfreeze* — Given the manufacturing scenario request "Build a quality dashboard," write three progressive questions that move from the stated solution toward the underlying problem. Each question should reveal a different assumption embedded in the original request.

*Exercise: Hidden stakeholder search* — Using the manufacturing plant context (day shifts outperform night shifts on first-pass yield), identify at least two hidden stakeholders not mentioned in the initial request. For each, explain what perspective they bring that primary stakeholders cannot.

## Learner Level Adaptations

Beginners should focus on the frozen-vs-fluid distinction and basic progressive questioning. Intermediate learners benefit from comparing different questioning strategies and understanding how stakeholder mapping connects forward to Method 2 research planning. Advanced learners should explore edge cases where scope conversations reveal the original request should be abandoned entirely, and critique how power dynamics between stakeholders shape which problems get prioritized.
dt-curriculum-02-research.instructions.md
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
---
description: 'DT Curriculum Module 2: Design Research — concepts, techniques, checks, and exercises'
applyTo: '**/.copilot-tracking/dt/**/curriculum-02*'
---

# DT Curriculum Module 2: Design Research

Design research bridges stakeholder assumptions and user reality within the Problem Space. Where scope conversations collect secondhand accounts from decision makers, design research generates firsthand evidence from the people who experience the problem daily. This module teaches learners how to observe, interview, and validate in context.

## Key Concepts

*Genuine need discovery* — Uncovering actual problems users face rather than confirming assumed needs. Research questions must be open-ended and curiosity-driven ("Walk me through your process when X happens") rather than leading ("Don't you think a dashboard would help?"). Learners often confuse validating a hypothesis with genuine discovery; validation seeks confirmation while discovery seeks surprise.

*Environmental context understanding* — Physical, technical, and organizational constraints affect every aspect of solution design. A solution that works in a quiet office may fail on an 85-90 dB production floor. Learners tend to research user needs in isolation from the environment where those needs occur, missing critical constraints.

*Universal discovery sequence* — Environmental observation → workflow interviews → constraint validation → unmet need exploration. This progression builds understanding from broad context to specific gaps. Learners often skip observation and jump directly to interviews, losing the ability to notice things users have normalized and stopped mentioning.

*Stakeholder-to-user bridge* — Moving from what stakeholders believe about users (Method 1 output) to what users actually experience. The gap between these perspectives is often significant and always informative. A common misconception is that stakeholder descriptions are close enough to user reality that direct user engagement is optional.

## Techniques

*Contextual inquiry* combines observation and interview at the user's actual work location during real work. The researcher watches, asks questions about observed behaviors, and notes environmental factors. Good output includes both stated needs and unstated workarounds. The pitfall is conducting interviews in conference rooms detached from the actual work environment.

*Environmental observation* involves documenting physical conditions, workflow patterns, and tool usage before asking any questions. Watch for 15-30 minutes without interrupting. Good output is a constraint inventory the user never articulated because they have adapted to limitations. The pitfall is observing too briefly to see natural variation.

*Cross-source validation* compares findings from different research methods and different user groups. Agreement strengthens a finding; disagreement reveals context-dependent factors worth exploring. Good output is a set of validated themes with confidence levels. The pitfall is treating one interview as representative of all users.

## Comprehension Checks

1. A team conducted 10 phone interviews with operators and concluded they need a mobile dashboard. What critical research step did they skip, and what constraints might they have missed?
2. Why does the universal discovery sequence place environmental observation before workflow interviews?
3. A researcher asks "Would a voice-controlled system help you during repairs?" Explain why this is a leading question and rewrite it as a discovery question.
4. When stakeholder descriptions from Method 1 contradict user observations in Method 2, which should take precedence and why?

## Practice Exercises

*Exercise: Observation plan* — Design a 30-minute environmental observation protocol for the manufacturing plant floor during a night shift. Identify what you would document (noise levels, lighting, hand conditions, tool usage, communication patterns) and explain why each observation matters for solution design.

*Exercise: Leading question conversion* — Convert these leading questions into discovery questions: (a) "Wouldn't a touchscreen kiosk help?" (b) "Do you have trouble finding information in manuals?" (c) "Would you prefer voice commands over typing?" For each conversion, explain what the discovery version can reveal that the leading version cannot.

## Learner Level Adaptations

Beginners should focus on the distinction between leading and discovery questions and the importance of environment-based research.

Intermediate learners benefit from comparing contextual inquiry with remote research methods and understanding how environmental constraints discovered here feed into brainstorming constraints in Method 4.

Advanced learners should explore ethical dimensions of research (power dynamics with observed workers, consent in workplace settings) and analyze how research design choices bias the findings.
dt-curriculum-03-synthesis.instructions.md
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
---
description: 'DT Curriculum Module 3: Synthesis — concepts, techniques, checks, and exercises'
applyTo: '**/.copilot-tracking/dt/**/curriculum-03*'
---

# DT Curriculum Module 3: Input Synthesis

Input synthesis is the Problem Space exit point — the bridge between raw research data and actionable direction for the Solution Space. This module teaches learners how to transform scattered observations, interview notes, and field data into coherent themes that frame problems without prescribing solutions.

## Key Concepts

*Multi-source pattern recognition* — Identifying themes that appear across different types of research data (interviews, observations, environmental audits, existing reports). Patterns that emerge from only one source may be artifacts of research method rather than genuine findings. Learners often anchor on the most vivid or recent data point rather than looking across sources for convergent evidence.

*Theme development progression* — Individual data points become actionable themes through a specific evolution: fragment → supporting evidence from other sources → unified theme → actionable direction. Rushing from fragment to direction produces themes that sound reasonable but lack the evidence base to survive scrutiny. Learners commonly force themes too early, grouping loosely related points under a convenient label.

*Context preservation* — Maintaining domain-specific nuances and environmental factors while abstracting to themes. "Workers struggle with information access" loses critical detail compared to "Night-shift operators spend 10-15 minutes locating manual sections while machines sit idle in 85 dB environments with greasy hands." Learners tend to abstract too aggressively, losing the constraints that make themes actionable.

*Solution-ready problem statements* — Framing discovered themes as clear direction for brainstorming without dictating specific approaches. "Operators need immediate access to repair procedures in hands-free, high-noise environments" enables creative solutions; "Operators need a voice-controlled repair guide" prescribes one. Learners frequently embed solutions in their problem statements without realizing it.

## Techniques

*Affinity clustering* groups individual research data points by natural similarity. Work with physical or virtual cards — one observation per card — and group by emerging themes rather than predetermined categories. Good output is 4-7 theme clusters with clear boundaries. The pitfall is creating categories first and sorting data into them, which forces findings into preconceived frameworks.

*Cross-source validation* tests whether a theme appears in interview data, observation data, and existing metrics. Themes supported by all three carry higher confidence than single-source themes. Good output is a confidence-weighted theme list. The pitfall is discarding themes that appear in only one source without investigating whether the other sources simply did not capture that dimension.

*Stakeholder perspective balancing* ensures synthesis does not over-represent the loudest or most accessible voices. Count how many data points come from each stakeholder group and check for missing perspectives. Good output is a coverage map showing which groups informed which themes. The pitfall is letting management perspectives dominate when frontline workers provided different signals.

## Comprehension Checks

1. A team has 40 interview transcripts and grouped them into 3 themes in one hour. What risks does this speed suggest about their synthesis process?
2. Why does synthesis happen before brainstorming rather than during it? What goes wrong when teams try to synthesize and ideate simultaneously?
3. A synthesis produced the theme "Workers want better technology." Explain what is wrong with this framing and rewrite it as a solution-ready problem statement using manufacturing scenario details.
4. Research found that day-shift operators rarely mentioned information access problems while night-shift operators cited it repeatedly. How should synthesis handle this discrepancy?

## Practice Exercises

*Exercise: Theme from fragments* — Given these manufacturing research fragments, develop one unified theme: (a) "Night-shift workers take 10-15 minutes to find the right manual section," (b) "Day-shift workers ask experienced colleagues instead of using manuals," (c) "Maintenance logs show faster resolution times during shifts with senior operators present." Write the theme as a solution-ready problem statement that preserves environmental context.

*Exercise: Bias check* — Review this stakeholder data distribution and identify the gap: 12 data points from plant managers, 8 from day-shift supervisors, 3 from night-shift operators, 0 from temporary workers. What perspective is missing, why does it matter, and what research method from Module 2 would fill the gap?

## Learner Level Adaptations

Beginners should focus on the difference between premature theme forcing and evidence-based pattern recognition, and practice writing solution-ready problem statements.

Intermediate learners benefit from comparing affinity clustering with top-down categorization and understanding how synthesis quality directly affects brainstorming scope in Method 4.

Advanced learners should explore how organizational power dynamics distort synthesis (whose data gets weighted more), analyze cases where contradictory themes are both valid, and critique the boundary between sufficient synthesis and analysis paralysis.
