-- Metrics Cascade: Case Product Specification defines product success metrics → Case Brief translates to verifiable acceptance criteria → Experiments validate technical implementation
-- Link Don't Duplicate: Reference specifications, don't copy content]
+- Dependencies: For complex dependency chains, create visual diagrams instead of listing in text
+- ADRs: Technical decisions are documented in Architecture Decision Records (ADRs) within experiments, not in cases]

 ## Engineered System

 [Which specific system/subsystem are we building and what is its position in the larger system architecture?]

-*For detailed system context, complete problem analysis including current/desired agent experience, see: [Case Product Specification](link-to-coda)*
-
 ## Agent Priority Overview

 [High-level stakeholder priorities with percentages. Detailed analysis in Case Product Specification in Coda.]

 **Priority Distribution:** [e.g., "Primary: 60% Developers; Secondary: 30% System Integrators; Minimal: 10% End Users"]

 **Rationale:** *Optional* [1-2 sentence justification for these priorities]
-
-*For detailed agent analysis with agent journeys and integration requirements, see: [Case Product Specification](link-to-coda)*
-
 ## Expected Agent Experience & Acceptance

-[Define scenarios the Engineered System must handle and acceptance criteria. Focus on observable outcomes, not internal system operations.]
-
-*Note: Link acceptance criteria to implementing experiments during experiment planning phase.*
-
-### Agent Acceptance Scenarios
+[Brief paragraph: What agents will be able to do after this case is complete. Focus on observable outcomes and agent value.]

-**Scenario 1: [Primary Scenario for Primary Agent]**
-- Given [detailed initial conditions]
-- When [agent performs action]
-- Then [agent experiences result]
-- And [additional agent benefit]
+### Acceptance Criteria

-**Acceptance Criteria:**
-[Each criterion should be demonstrable within 5-10 minutes by non-developers or through developer demo. Validation methods: Observable (UI/logs/behavior), Measurable (counted/timed), Testable (test scripts), User-Validated (actual users)]
+[Group criteria by agent type. Keep criteria simple and observable - verifiable by stakeholders in 5-10 minutes. Detailed validation procedures belong in Acceptance Experiment, not here. No experiment links - checkboxes show validation status when marked in stakeholder meetings.]

-- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
-- *Validation: [How to verify - e.g., "Dashboard shows metric within target"]*
-- [ ] [Performance/quality requirement] → **Experiment**: [Link #XXX when available or TBD]
-- *Validation: [Verification method]*
+**For [Primary Agent Type]:**
+- [ ] [Observable outcome or capability]
+- [ ] [Measurable result or behavior]
+- [ ] [Demonstrable functionality]

-**Scenario 2: [Secondary Scenario - Success path for Secondary Agent]**
-- Given [different initial conditions]
-- When [alternative agent action]
-- Then [expected alternative outcome]
+**For [Secondary Agent Type]:**
+- [ ] [Observable outcome or capability]
+- [ ] [Measurable result or behavior]

-**Acceptance Criteria:**
-- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
-- *Validation: [Verification method]*
+**For [Tertiary Agent Type]:**
+- [ ] [Observable outcome or capability]
+- [ ] [Measurable result or behavior]

-**Scenario 3: [Alternative Scenario - Different approach or edge case]**
-- Given [edge case conditions]
-- When [action that triggers alternative path]
-- Then [expected handling]
-
-**Acceptance Criteria:**
-- [ ] [Specific criterion] → **Experiment**: [Link #XXX when available or TBD]
-- *Validation: [Verification method]*
-
-**Scenario 4: [Error Scenario - Failure case and recovery]**
-- Given [error conditions]
-- When [action that triggers error]
-- Then [expected error handling and recovery]
-
-**Acceptance Criteria:**
-- [ ] [Error handling criterion] → **Experiment**: [Link #XXX when available or TBD]
-- *Validation: [Verification method]*
+[Continue for all relevant agent types. Focus on WHAT agents experience, not HOW the system works internally.]

 ## Scope Summary

@@ -94,45 +63,19 @@ assignees: ''

 **Out of Scope:** [What explicitly will not be addressed - link to other cases handling these]

-*For detailed interfaces and integration points, see: [Case Architecture Specification](link-to-arch-doc)*
-
-## Critical Dependencies & Blockers
-
-**Blocking This Case:**
-- [Case/System #X]: [What must complete before we can proceed]
+## References & Links *Optional*

-**This Case Blocks:**
-- [Case/System #Y]: [What depends on this case's completion]
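To make the new structure concrete before the diff moves on to the experiment template, here is a rough sketch of how a filled-in acceptance-criteria block from the revised Case Brief template might read; the agent type and criteria below are invented for illustration only:

```markdown
### Acceptance Criteria

**For Integration Developers:**

- [ ] Can create a new case from the template and link it to the Case Product Specification in under 10 minutes
- [ ] The rendered brief shows a priority distribution that sums to 100%
- [ ] Each checkbox can be ticked off in a stakeholder review without running any code
```

Each item stays observable and verifiable within a short stakeholder meeting, as the added guidance lines above require.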
 - DRY: Reference Case Brief, Architecture docs - don't duplicate
+- Dependencies: Use GitHub issue status ('Blocked') and comments for blockers. Don't list dependencies in the experiment description - that's project management, not technical specification
+- ADRs Optional: Create Architecture Decision Records (ADRs) only when decisions affect other teams or have long-term consequences. Engineers have freedom for local implementation choices]

-## Experiment Type & Hypothesis
+## Experiment Hypothesis

-**Type:** [Implementation / Research / Analysis / Proof-of-Concept]
+**Hypothesis:** [Technical approach or assumption we're testing]

-**What we believe:** [Technical approach or assumption we're testing]
+**Rationale:** [Optional - brief rationale if not obvious]

-**Expected outcome:** [Measurable technical result we expect]
+## Out of Scope

-**How we'll verify:** [Brief verification approach - detailed in Success Criteria below, expand in Verification Approach if non-standard]
+[What's explicitly not included - prevents scope creep]

-## Implementation Scope
-
-[What we're building/testing in 1-2 sentences - keep brief and specific]
-
-**In Scope:**
-- [Specific technical work included]
-- [Component/feature being implemented]
-
-**Out of Scope:**
-- [What's explicitly not included - link to other experiments if applicable]
+- [Exclusion with brief reason]
+- [Link to other experiment handling this if applicable]

 ## Technical Approach *Optional*

@@ -43,7 +37,7 @@ assignees: ''
 [Key technical decisions or approach details]

 **Technology Stack:** [If relevant to hypothesis]
-- [Technology/tool/Library] - [Optional - brief reason. Detailed "why" in ADRs if architectural decision]
+- [Technology/Tool/Library] - [Optional - brief reason. Detailed "why" in ADRs if architectural decision]
-[Checkbox list - when all checked, experiment is ready to close]
+[Checkbox list of concrete deliverables - when all checked, experiment is complete. If needed create categories that match your work. The following provides examples of the categories and outcomes]

 **Code/Artifacts:**
 - [ ] [Specific code module/component committed to branch X]
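Similarly, a minimal sketch of the revised experiment-template header once filled in; the hypothesis, rationale, and exclusion below are hypothetical placeholders, not taken from a real experiment:

```markdown
## Experiment Hypothesis

**Hypothesis:** Streaming telemetry through a message queue keeps end-to-end latency under 200 ms at 1,000 events/s.

**Rationale:** The current batch pipeline saturates at roughly this volume.

## Out of Scope

- Dashboard visualization of the telemetry (covered by a separate experiment)
```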