 ---
-name: Case
-about: Product-level description in stakeholder language - defines system boundaries and value propositions
+name: Case Brief
+about: Product-level overview for stakeholders - defines the engineered system and acceptance criteria
 title: '[CASE] [Brief descriptive title]'
 labels: 'case'
 assignees: ''
-
 ---

-[Case writing rules:
+[Case Brief writing rules:
 - Product Focus: Define agent value and experience, not technical implementation
+- Problem Analysis: Lives in the Case Product Specification - the Brief references the Spec for context
 - Agent Priority: List agents by % importance with Human/System distinction (even if similar to other cases)
-- Target System Clarity: Explicitly identify target system and distinguish from consumer/dependency systems
-- System Boundaries: Explicitly state what's included/excluded from this case
-- Integration Points: Technical interfaces (Clearly separate from project dependencies)
 - Basic English: Write for non-native speakers, avoid complex technical terms
-- Scope Limit: Target achievable milestones of 3–6 months only
-- Agent-Driven: Focus on agent behaviour and adoption, rather than system performance
-- Experiment Mapping: Link agent outcomes to experiments when available, update during experiment planning phase]
-
-## Target System
-
-[Which system does this case address and what is its position in the larger system architecture?]
-
-## Problem Statement
-
-[Describe the agent/business problem this case solves. What needs to change in the current state and why? Focus on WHAT agents need and WHY it matters. Leave technical details for experiments]
-
-### Current Agent Experience
-
-[What agents experience today that needs improvement]
-
-### Desired Agent Experience
-
-[What agents should be able to do after this case is complete]
+- Stakeholder Language: This brief is for business/product stakeholders
+- Minimal Content: Engineers need context to understand "what" and "why", not extensive product analysis
+- System Boundaries: Explicitly state what's included/excluded
+- Link to Details: Extended analysis lives in Coda; link to it from here
+- Scenario-Driven: Focus on agent behavior and acceptance, not system performance
+- Scope Limit: Target 3-6 month achievable milestones
+- Experiment Mapping: Link acceptance criteria to implementing experiments
+- Metrics Cascade: Case Product Specification defines product success metrics → Case Brief translates them into verifiable acceptance criteria → Experiments validate technical implementation
+- Link, Don't Duplicate: Reference specifications, don't copy content]

-## Value Proposition
+## Engineered System

-[Clear business / agent value that this case provides]
+[Which specific system/subsystem are we building, and what is its position in the larger system architecture?]

-## Agent Analysis
+*For detailed system context and complete problem analysis, including current/desired agent experience, see: [Case Product Specification](link-to-coda)*

-[Map all agents (human and system) by priority with percentages. Focus on WHO / WHAT will interact with or benefit from the Target System]
+## Agent Priority Overview

-**Agent Priority Overview**: [e.g., "Primary: 60% Developers; Secondary: 30% Monitoring Systems; Minimal: 10% End Users"]
+[High-level stakeholder priorities with percentages. Detailed analysis lives in the Case Product Specification in Coda.]

-[Optional: Include an evaluation / justification for why these priorities make sense for this case]
+**Priority Distribution:** [e.g., "Primary: 60% Developers; Secondary: 30% System Integrators; Minimal: 10% End Users"]

-*Note: Initially, experiment links may show "TBD - [description]". Update with actual experiment links during experiment planning phase. When experiment scope changes significantly, review and update corresponding agent outcomes.*
+**Rationale:** *Optional* [1-2 sentence justification for these priorities]

-### [Primary Agent Name] ([X%] - Primary)
-- **Agent Type**: [Human Agent: group of people/person/role] OR [System Agent: machine/service/system]
-- **Current Pain Points**: [What problems do they have today with existing systems]
-- **Desired Outcomes**: [What success looks like]
-  - *Outcome 1* → **Experiment**: [Title/Link when available]
-  - *Outcome 2* → **Experiment**: [Title/Link when available]
-- **Agent Journey**: [Action] → [Action] → [Successful Outcome]
-- **Integration Requirements**: [For System Agents: APIs, data formats, protocols needed]
-
-*Note: This details the specific technical interfaces this agent needs (see Technical Interfaces section for system-wide view)*
-
-### [Secondary Agent Name] ([Y%] - Secondary)
-
-[Same structure as above]
-
-[Continue the pattern for all Agents, ordered by priority]
+*For detailed agent analysis with agent journeys and integration requirements, see: [Case Product Specification](link-to-coda)*

 ## Expected Agent Experience & Acceptance

-[Scenarios that define both the Target System behaviour and the acceptance criteria. Describe what agents will experience, NOT how the Target System works internally. Focus on acceptance testing, not repeating the desired outcomes already listed in Agent Analysis. Validation priorities are derived from Agent Priority Overview above – no separate priority statement needed here]
+[Define the scenarios the Engineered System must handle and their acceptance criteria. Focus on observable outcomes, not internal system operations.]
+
+*Note: Link acceptance criteria to implementing experiments during the experiment planning phase.*

 ### Agent Acceptance Scenarios

-**Scenario 1: [Primary Happy Path for Human Agent]**
-- Given [agent context / starting point]
+**Scenario 1: [Primary Scenario for Primary Agent]**
+- Given [detailed initial conditions]
 - When [agent performs action]
 - Then [agent experiences result]
 - And [additional agent benefit]

 **Acceptance Criteria:**
+[Each criterion should be demonstrable within 5-10 minutes by non-developers or through a developer demo. Validation methods: Observable (UI/logs/behavior), Measurable (counted/timed), Testable (test scripts), User-Validated (actual users)]

-[Each criterion should be demonstrable to non-developers within 5-10 minutes]
-[Prefer outcomes that non-developers can verify directly, but developer demos are acceptable]
-
-*Demonstration Examples: Screen Demo - Show working feature in browser/app; Metrics Dashboard - Display performance/usage numbers; Test Results - Show automated test passes/results; User Validation - Have actual user complete task successfully; Live Demo - Demonstrate feature working end-to-end*
-
-[Each criterion must be verifiable through one of these methods:]
-[Observable] - Can be seen in UI/logs/behavior
-[Measurable] - Can be counted/timed/quantified
-[Testable] - Can be validated through test scripts
-[User-Validated] - Can be confirmed by actual users/stakeholders
-
-- [ ] [Specific criterion] [Validation method: ]
-- [ ] [Performance requirement] [e.g. Dashboard showing metrics within targets]
-
-**Scenario 2: [Primary Happy Path for System Agent]**
+- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [How to verify - e.g., "Dashboard shows metric within target"]*
+- [ ] [Performance/quality requirement] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*

-- Given [system agent needs specific data / functionality]
-- When [system agent makes API call / integration request]
-- Then [target system provides required response / data]
-- And [system agent can successfully complete its function]
+**Scenario 2: [Secondary Scenario - Success path for Secondary Agent]**
+- Given [different initial conditions]
+- When [alternative agent action]
+- Then [expected alternative outcome]

 **Acceptance Criteria:**
+- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*

-[How to verify system agent integration works, e.g. API tests, data format checks]
-
-- [ ] [Specific criterion] [Validation method: Observable/Measurable/Testable/User-Validated]
-
-**Scenario 3: [Alternative Path]**
-
-Given [Different initial conditions]
-When [Alternative stakeholder action]
-Then [Expected alternative response]
+**Scenario 3: [Alternative Scenario - Different approach or edge case]**
+- Given [edge case conditions]
+- When [action that triggers alternative path]
+- Then [expected handling]

 **Acceptance Criteria:**
+- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*

-- [ ] [Specific criterion for this scenario] [Validation method: Observable/Measurable/Testable/User-Validated]
-
-**Scenario 4: [Error/Edge Case Handling]**
-
-Given [Error conditions]
-When [Action that triggers error]
-Then [Expected error handling behavior]
+**Scenario 4: [Error Scenario - Failure case and recovery]**
+- Given [error conditions]
+- When [action that triggers error]
+- Then [expected error handling and recovery]

 **Acceptance Criteria:**
+- [ ] [Error handling criterion] → **Experiment**: [Link #XXX when available or TBD]
+  - *Validation: [Verification method]*

-- [ ] [Specific measurable criterion for error handling] [Validation method: Observable/Measurable/Testable/User-Validated]
-
-## Target System Context & Boundaries
-
-### Target System Scope
-
-In Scope: [What the Target System will do and the boundaries included]
-Out of Scope: [What explicitly will not be addressed - link to other cases handling these]
+## Scope Summary

-### Integration Points & Dependencies
+### Engineered System Scope

-**Technical Interfaces**
-[APIs, protocols, data formats this case will provide/require]
+**In Scope:** [What this system will do - boundaries included]

-- [Interface 1]: [Description and purpose]
-- [Interface 2]: [Description and purpose]
+**Out of Scope:** [What explicitly will not be addressed - link to other cases handling these]

-*Examples: REST API endpoints (POST /api/contracts/deploy), WebSocket connections for real-time updates, JSON message format for configuration data, HTTPS communication with external payment service*
+*For detailed interfaces and integration points, see: [Case Architecture Specification](link-to-arch-doc)*

-**System Dependencies**
-- [Other systems this case requires to function]
-
-**Case Dependencies**
-- [Other cases that must complete first]
+## Critical Dependencies & Blockers

-**External Dependencies**
-- [Outside decisions, resources, approvals needed]
+**Blocking This Case:**
+- [Case/System #X]: [What must complete before we can proceed]

-**Critical Path**
-- [Blocking factors and their resolution timeline]
+**This Case Blocks:**
+- [Case/System #Y]: [What depends on this case's completion]

-*Examples: Case Dependencies - Authentication system (Case #45) must be complete; External Dependencies - Legal approval for terms of service; Technical Dependencies - PostgreSQL database cluster must be deployed*
+**Bottlenecks** (Resource constraints):
+- [Resource constraint] - Impact: [Description]

-### Quality Attributes
+**External Blockers** (Third-party dependencies):
+- [Third-party dependency] - Expected resolution: [Timeline]

-[High-level requirements overview. May duplicate metrics from Acceptance Criteria for stakeholder clarity]
+**Critical Path Items:**
+- [Dependency with resolution date]
+- [Risk requiring immediate attention]

-**Performance**: [Response time, throughput requirements]
-**Scalability**: [Growth expectations and constraints]
-**Reliability**: [Uptime, error rate expectations]
-**Security**: [Security requirements and compliance needs]
-**Usability**: [User experience requirements]
+*For complete dependency analysis and technical interfaces, see: [Case Product Specification](link-to-coda) and [Case Architecture Specification](link-to-arch-doc)*

-### Constraints
-
-**Technical Constraints**:
-- [Technical limitations and requirements]
-- [Platform requirements, compatibility needs, performance limits]
-
-**Business Constraints**:
-- [Business rules and regulatory requirements]
-- [Human resource: Total estimated resources across all planned experiments]
-- [Timeline constraints]
+## Decision Log

-## Risks Assessment
+[Enumeration of related ADRs - the decisions themselves live in the ADR documents]

-|Risk|Impact|Probability|Mitigation|Owner|Experiment|
-|---|---|---|---|---|---|
-|[Risk description]|[High/Med/Low]|[High/Med/Low]|[Mitigation approach]|[Responsible person]|[Link to experiment if applicable]|
+- [Date] - ADR #[XXXX] - [Case decomposition decision] - [Link to ADR]
+  Status: [Active/Superseded by ADR #[XXXX]]
+- [Date] - ADR #[XXXX] - [Brief description] - [Link to ADR]
+  Status: [Active/Superseded by ADR #[XXXX]]

-## Decision Log
+## References & Links

-[Record key architectural and design decisions]
+**Full Case Details:**
+- [Case Product Specification](link-to-coda) - Extended product analysis, detailed agent journeys, business context

-[Date] - [Decision] - [Rationale] - [Impact on agents]
-Status: [Active/Superseded]
+**Related Architecture:**
+- [Case Architecture Specification](link-to-arch-doc) - Technical architecture, interfaces, integration points

 ## Learning Outcomes

-[To be filled in during and after the case has been completed]
+[To be filled in during and after case completion]

 **What we learned:**
-
-Key insights gained:
-Assumptions validated/invalidated:
-Unexpected discoveries:
+- Key insights gained:
+- Assumptions validated/invalidated:
+- Unexpected discoveries:

 **What we would do differently:**
-
-Process improvements:
-Technical approach changes:
+- Process improvements:
+- Technical approach changes:

 ## Review & Acknowledgment

-[People/teams involved/affected by this case who should be aware]
+[People who should review and acknowledge understanding of this case]

 - [ ] [Person 1]
 - [ ] [Person 2]
 - [ ] [Person 3]

-*Note: People listed here should check their name after reading and understanding case. This aims to reduce communication and increase traceability of the review process.*
+*Note: Check your name after reading and understanding this case to confirm awareness and reduce communication overhead.*
+
+---

-[
 **Final Checklist Before Submitting:**
-- [ ] Does this describe Agent value, not technical implementation?
-- [ ] Are agents prioritized with clear percentages and Human / System distinction?
-- [ ] Is the Target System clearly identified and distinguished from consumer / dependency systems?
-- [ ] Are Integration Points clearly separated from Dependencies?
-- [ ] Are system boundaries clearly defined?
-- [ ] Is the language simple enough for non-native speakers?
-- [ ] Is the scope limited to 3-6 months of achievable work?
-- [ ] Do scenarios focus on agent behavior, not system performance?
-- [ ] Are experiment links updated where available?
-- [ ] Is the Review & Acknowledgment section completed?
-]
+- [ ] Does this describe agent value, not technical implementation?
+- [ ] Is problem analysis referenced (not duplicated) from Case Product Specification?
+- [ ] Is Agent Priority Overview high-level with justification?
+- [ ] Are acceptance criteria clear and verifiable?
+- [ ] Do scenarios use correct terminology (Primary/Secondary/Alternative/Error)?
+- [ ] Is scope limited to 3-6 months of achievable work?
+- [ ] Are only critical dependencies and blockers listed?
+- [ ] Are links to Case Product Specification and Architecture docs present?
+- [ ] Are experiment links marked as TBD where not yet planned?
+- [ ] Is Review & Acknowledgment section complete?