
Commit 507451e

edit templates after review
1 parent 8fde185 commit 507451e


2 files changed: +69 additions, -157 deletions


.github/ISSUE_TEMPLATE/case.md

Lines changed: 35 additions & 92 deletions
@@ -8,83 +8,52 @@ assignees: ''
 
 [Case Brief writing rules:
 - Product Focus: Define agent value and experience, not technical implementation
-- Problem Analysis: Lives in Case Product Specification - Brief references Spec for context
 - Agent Priority: List agents by % importance with Human/System distinction (even if similar to other cases)
 - Basic English: Write for non-native speakers, avoid complex technical terms
 - Stakeholder Language: This brief is for business/product stakeholders
 - Minimal Content: Engineers need context to understand "what" and "why", not extensive product analysis
 - System Boundaries: Explicitly state what's included/excluded
 - Link to Details: Extended analysis lives in Coda, link from here
-- Scenario-Driven: Focus on agent behavior and acceptance, not system performance
-- Scope Limit: Target 3-6 month achievable milestones
-- Experiment Mapping: Link acceptance criteria to implementing experiments
+- Acceptance Focus: Define observable outcomes that stakeholders can verify, not technical validation procedures
+- Scope Limit: Target achievable work within project constraints (don't boil the ocean)
+- DRY: Reference specifications, don't copy content
 - Metrics Cascade: Case Product Specification defines product success metrics → Case Brief translates to verifiable acceptance criteria → Experiments validate technical implementation
-- Link Don't Duplicate: Reference specifications, don't copy content]
+- Dependencies: For complex dependency chains, create visual diagrams instead of listing in text
+- ADRs: Technical decisions are documented in Architecture Decision Records (ADRs) within experiments, not in cases]

 ## Engineered System

 [Which specific system/subsystem are we building and what is its position in the larger system architecture?]

-*For detailed system context, complete problem analysis including current/desired agent experience, see: [Case Product Specification](link-to-coda)*
-
 ## Agent Priority Overview

 [High-level stakeholder priorities with percentages. Detailed analysis in Case Product Specification in Coda.]

 **Priority Distribution:** [e.g., "Primary: 60% Developers; Secondary: 30% System Integrators; Minimal: 10% End Users"]

 **Rationale:** *Optional* [1-2 sentence justification for these priorities]
-
-*For detailed agent analysis with agent journeys and integration requirements, see: [Case Product Specification](link-to-coda)*
-
 ## Expected Agent Experience & Acceptance

-[Define scenarios the Engineered System must handle and acceptance criteria. Focus on observable outcomes, not internal system operations.]
-
-*Note: Link acceptance criteria to implementing experiments during experiment planning phase.*
-
-### Agent Acceptance Scenarios
+[Brief paragraph: What agents will be able to do after this case is complete. Focus on observable outcomes and agent value.]

-**Scenario 1: [Primary Scenario for Primary Agent]**
-- Given [detailed initial conditions]
-- When [agent performs action]
-- Then [agent experiences result]
-- And [additional agent benefit]
+### Acceptance Criteria

-**Acceptance Criteria:**
-[Each criterion should be demonstrable within 5-10 minutes by non-developers or through developer demo. Validation methods: Observable (UI/logs/behavior), Measurable (counted/timed), Testable (test scripts), User-Validated (actual users)]
+[Group criteria by agent type. Keep criteria simple and observable - verifiable by stakeholders in 5-10 minutes. Detailed validation procedures belong in Acceptance Experiment, not here. No experiment links - checkboxes show validation status when marked in stakeholder meetings.]

-- [ ] [Specific criterion]**Experiment**: [Link #XXX when available or TBD]
-  - *Validation: [How to verify - e.g., "Dashboard shows metric within target"]*
-- [ ] [Performance/quality requirement]**Experiment**: [Link #XXX when available or TBD]
-  - *Validation: [Verification method]*
+**For [Primary Agent Type]:**
+- [ ] [Observable outcome or capability]
+- [ ] [Measurable result or behavior]
+- [ ] [Demonstrable functionality]

-**Scenario 2: [Secondary Scenario - Success path for Secondary Agent]**
-- Given [different initial conditions]
-- When [alternative agent action]
-- Then [expected alternative outcome]
+**For [Secondary Agent Type]:**
+- [ ] [Observable outcome or capability]
+- [ ] [Measurable result or behavior]

-**Acceptance Criteria:**
-- [ ] [Specific criterion]**Experiment**: [Link #XXX when available or TBD]
-  - *Validation: [Verification method]*
+**For [Tertiary Agent Type]:**
+- [ ] [Observable outcome or capability]
+- [ ] [Measurable result or behavior]

-**Scenario 3: [Alternative Scenario - Different approach or edge case]**
-- Given [edge case conditions]
-- When [action that triggers alternative path]
-- Then [expected handling]
-
-**Acceptance Criteria:**
-- [ ] [Specific criterion]**Experiment**: [Link #XXX when available or TBD]
-  - *Validation: [Verification method]*
-
-**Scenario 4: [Error Scenario - Failure case and recovery]**
-- Given [error conditions]
-- When [action that triggers error]
-- Then [expected error handling and recovery]
-
-**Acceptance Criteria:**
-- [ ] [Error handling criterion]**Experiment**: [Link #XXX when available or TBD]
-  - *Validation: [Verification method]*
+[Continue for all relevant agent types. Focus on WHAT agents experience, not HOW the system works internally.]

 ## Scope Summary
@@ -94,45 +63,19 @@ assignees: ''
 
 **Out of Scope:** [What explicitly will not be addressed - link to other cases handling these]

-*For detailed interfaces and integration points, see: [Case Architecture Specification](link-to-arch-doc)*
-
-## Critical Dependencies & Blockers
-
-**Blocking This Case:**
-- [Case/System #X]: [What must complete before we can proceed]
+## References & Links *Optional*

-**This Case Blocks:**
-- [Case/System #Y]: [What depends on this case's completion]
-
-**Bottlenecks** (Resource constraints):
-- [Resource constraint] - Impact: [Description]
-
-**External Blockers** (Third-party dependencies):
-- [Third-party dependency] - Expected resolution: [Timeline]
-
-**Critical Path Items:**
-- [Dependency with resolution date]
-- [Risk requiring immediate attention]
-
-*For complete dependency analysis and technical interfaces, see: [Case Product Specification](link-to-coda) and [Case Architecture Specification](link-to-arch-doc)*
-
-## Decision Log
-
-[Enumeration of related ADRs - decisions themselves live in ADR documents]
-
-- [Date] - ADR #[XXXX] - [Case decomposition decision] - [Link to ADR]
-  Status: [Active/Superseded by ADR #[XXXX]]
-- [Date] - ADR #[XXXX] - [Brief description] - [Link to ADR]
-  Status: [Active/Superseded by ADR #[XXXX]]
-
-## References & Links
+[Include this section when you need to reference external context, related work, or technical background. Delete if not applicable.]

 **Full Case Details:**
 - [Case Product Specification](link-to-coda) - Extended product analysis, detailed agent journeys, business context

 **Related Architecture:**
 - [Case Architecture Specification](link-to-arch-doc) - Technical architecture, interfaces, integration points

+**Related Cases:**
+- [Case #XXX]: [Brief description of relationship]
+
 ## Learning Outcomes

 [To be filled during and after case completion]
@@ -150,22 +93,22 @@ assignees: ''
 
 [People who should review and acknowledge understanding of this experiment]

-- [ ] [Person 1]
-- [ ] [Person 2]
-- [ ] [Person 3]
+- [ ] @github-handle-1
+- [ ] @github-handle-2
+- [ ] @github-handle-3

 *Note: Check your name after reading and understanding this case to confirm awareness and reduce communication overhead.*

 ---

-**Final Checklist Before Submitting:**
+[Note: Delete this checklist section once the case is submitted]
+
+[Final Checklist Before Submitting:
 - [ ] Does this describe agent value, not technical implementation?
-- [ ] Is problem analysis referenced (not duplicated) from Case Product Specification?
 - [ ] Is Agent Priority Overview high-level with justification?
-- [ ] Are acceptance criteria clear and verifiable?
-- [ ] Do scenarios use correct terminology (Primary/Secondary/Alternative/Error)?
-- [ ] Is scope limited to 3-6 months of achievable work?
-- [ ] Are only critical dependencies and blockers listed?
+- [ ] Are acceptance criteria simple, grouped by agent?
+- [ ] Are acceptance criteria verifiable by stakeholders (observable outcomes)? Are we focused on delivering demonstrable outcomes?
+- [ ] Is scope limited to achievable work within project constraints?
+- [ ] Are we solving the immediate problem, not building the ultimate solution (don't boil the ocean)?
 - [ ] Are links to Case Product Specification and Architecture docs present?
-- [ ] Are experiment links marked as TBD where not yet planned?
-- [ ] Is Review & Acknowledgment section complete?
+- [ ] Is Review & Acknowledgment section complete with GitHub handles?]
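
To illustrate how the new grouped format is meant to be used, here is a hypothetical filled-in "Acceptance Criteria" block; the agent types and criteria are invented for illustration and are not part of this commit:

```markdown
### Acceptance Criteria

**For Developers:**
- [ ] A new service can be scaffolded from the template repository in under 10 minutes
- [ ] Build status for the scaffolded service is visible in CI without extra configuration

**For System Integrators:**
- [ ] The public API exposes a versioned endpoint documented in the API reference
```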

.github/ISSUE_TEMPLATE/experiment.md

Lines changed: 34 additions & 65 deletions
@@ -12,28 +12,22 @@ assignees: ''
 - Engineer Freedom: Choose verification approach that fits hypothesis
 - Run Fast: Code may be thrown away if hypothesis fails
 - Technical Focus: Success criteria are technical, not product/user outcomes. Product criteria live in Case Brief
-- Link, Don't Copy: Reference Case Brief, Architecture docs - don't duplicate]
+- DRY: Reference Case Brief, Architecture docs - don't duplicate
+- Dependencies: Use GitHub issue status ('Blocked') and comments for blockers. Don't list dependencies in the experiment description - that's project management, not technical specification
+- ADRs Optional: Create Architecture Decision Records (ADRs) only when decisions affect other teams or have long-term consequences. Engineers have freedom for local implementation choices]

-## Experiment Type & Hypothesis
+## Experiment Hypothesis

-**Type:** [Implementation / Research / Analysis / Proof-of-Concept]
+**Hypothesis:** [Technical approach or assumption we're testing]

-**What we believe:** [Technical approach or assumption we're testing]
+**Rationale:** [Optional - brief rationale if not obvious]

-**Expected outcome:** [Measurable technical result we expect]
+## Out of Scope

-**How we'll verify:** [Brief verification approach - detailed in Success Criteria below, expand in Verification Approach if non-standard]
+[What's explicitly not included - prevents scope creep]

-## Implementation Scope
-
-[What we're building/testing in 1-2 sentences - keep brief and specific]
-
-**In Scope:**
-- [Specific technical work included]
-- [Component/feature being implemented]
-
-**Out of Scope:**
-- [What's explicitly not included - link to other experiments if applicable]
+- [Exclusion with brief reason]
+- [Link to other experiment handling this if applicable]

 ## Technical Approach *Optional*
@@ -43,7 +37,7 @@ assignees: ''
 [Key technical decisions or approach details]

 **Technology Stack:** [If relevant to hypothesis]
-- [Technology/tool/Library] - [Optional - brief reason. Detailed "why" in ADRs if architectural decision]
+- [Technology/Tool/Library] - [Optional - brief reason. Detailed "why" in ADRs if architectural decision]

 ## Engineered System Context *Optional*
@@ -66,23 +60,9 @@ assignees: ''
 
 *For complete integration architecture, see: [Architecture Documentation](link)*

-**Critical Path:**
-
-*Impediments* (Active obstacles):
-- [Active obstacle preventing work] - Status: [Active/Resolved]
-
-*Bottlenecks* (Resource constraints):
-- [Resource constraint slowing progress]
-
-*External Blockers* (Third-party dependencies):
-- [Dependency causing delays] - Expected resolution: [Timeline]
-
-*Blocking this experiment:*
-- [Experiment/System #X must complete first]
-
 ## Outcomes

-[Checkbox list - when all checked, experiment is ready to close]
+[Checkbox list of concrete deliverables - when all checked, experiment is complete. If needed create categories that match your work. The following provides examples of the categories and outcomes]

 **Code/Artifacts:**
 - [ ] [Specific code module/component committed to branch X]
@@ -132,19 +112,9 @@ assignees: ''
 **Standard approach assumed:** Code review + linter + basic testing to verify hypothesis

 **Special verification for this experiment:**
-- [Document only non-standard verification needs, e.g.:
-  - Load testing with specific parameters
-  - Security review due to sensitive operations
-  - Integration testing with specific external system]
+- [Document only non-standard verification needs, e.g., load testing, security audit, manual testing protocol, etc.]

-## Resources & Timeline
-
-**Team:**
-- [Role/Person] - [Time commitment]
-
-**Timeline:** [Estimated duration for experiment]
-
-## Decision Log
+## Decision Log *Optional*

 [Enumeration of related ADRs - decisions themselves live in ADR documents]
@@ -153,19 +123,19 @@ assignees: ''
 - [Date] - ADR #[XXXX] - [Brief description] - [Link to ADR]
   Status: [Active/Superseded by ADR #[XXXX]]

-## References & Links
-
-**Case Brief:** [Link] - See acceptance criteria for product context
+## References & Links *Optional*

-**Full Case Details:**
-- [Case Product Specification](link-to-coda) - Extended product analysis, detailed agent journeys, business context
+[Include this section when you need to reference external context, related work, or technical background. Delete if not applicable.]

 **Architecture Docs:** [Link if exists] - Technical architecture and design decisions

-**Additional Resources:** [Optional]
-- [Tool documentation]
-- [Best practices guide]
-- [Research paper]
+**Technical Documentation:**
+- [Architecture docs, API specs, standards]
+
+**External Resources:**
+- [Research papers, library docs, similar implementations]
+
+**Additional Resources:**
 - [Meeting notes]
 - [Related experiments]
@@ -206,21 +176,20 @@ assignees: ''
 
 [People who should review and acknowledge understanding of this experiment]

-- [ ] [Person 1]
-- [ ] [Person 2]
-- [ ] [Person 3]
+- [ ] @github-handle-1
+- [ ] @github-handle-2
+- [ ] @github-handle-3

 *Note: Check your name after reading and understanding this case to confirm awareness and reduce communication overhead.*

 ---

-**Final Checklist Before Submitting:**
+[Note: Delete this checklist section once the experiment is submitted]
+
+[Final Checklist Before Submitting:
 - [ ] Is hypothesis clear and testable?
-- [ ] Are success criteria measurable and technical (not product-focused)?
-- [ ] Is scope limited and specific to this experiment?
-- [ ] Are links to Case Brief and Architecture docs present?
-- [ ] Are only critical dependencies listed?
-- [ ] Is verification approach appropriate (standard or documented special needs)?
-- [ ] Is this lightweight enough for experimental approach (not production feature development)?
-- [ ] Does Decision Log enumerate relevant ADRs?
-- [ ] Are Outcomes specific and actionable?
+- [ ] Are outcomes concrete and verifiable?
+- [ ] Are success criteria specific with measurable thresholds?
+- [ ] Is scope limited (what's excluded is clear)?
+- [ ] Is this focused on validating one specific technical approach?
+- [ ] Are we running fast to test the hypothesis, not building the perfect solution?]
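
As a sketch of how the slimmed-down experiment template might be filled in, the hypothetical fragment below follows the new "Experiment Hypothesis" and "Out of Scope" sections; the hypothesis and scope items are invented examples, not content from this commit:

```markdown
## Experiment Hypothesis

**Hypothesis:** Replacing queue polling with webhooks keeps end-to-end ingestion latency under 2 seconds at current load.

**Rationale:** The polling interval dominates the latency we observe today.

## Out of Scope

- Retry and back-off tuning - covered by a separate experiment
- Production rollout - this prototype may be thrown away
```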
