### Description
- I searched existing issues and this has not been proposed before
(Related: #601 proposes accumulating learnings for subagent handoff. This request is about a first-class “learning” area in core Superpowers so the system can improve over time across sessions and workflows—not only between subagents. If maintainers consider this the same problem, I’m happy to fold this into #601 or narrow the scope after discussion.)
### What problem does this solve?
Superpowers skills encode general workflows, but project- and team-specific knowledge still lives scattered in CLAUDE.md, ad-hoc notes, or nowhere at all. After a session I often have durable lessons (e.g. “our CI uses X”, “don’t run Y in this repo”, “reviewer prefers Z”) that don’t belong in a single skill file but should shape future runs.
Today there is no standard, documented place in Superpowers for “what we learned” that:
- is easy to find for both humans and the agent,
- can be updated incrementally without editing many skills,
- and carries forward so the next task starts smarter than the last.
Without that, the same mistakes and rediscoveries repeat; skills stay static while reality drifts.
### Proposed solution
Add a learning section to core Superpowers—concrete and boring on purpose:
- **Canonical location and naming**
  e.g. a `learnings/` (or `docs/learnings/`) area plus a short index (`LEARNINGS.md` or `learnings/README.md`) that explains: what goes here, how entries are formatted, and when to add an entry vs. when to upstream it into a real skill.
- **A small “how to use learnings” contract**
  Either a thin skill or a section in `using-superpowers` (or equivalent) that says: after meaningful outcomes (debugging, review, failed CI), append or update a learning entry (date, context, verified fact, optional link to issue/PR). Optionally: merge duplicates and deprecate stale entries.
- **Optional hooks (later)**
  If it fits the project’s direction: reference this section from high-traffic skills (e.g. verification, debugging) so “capture learning” is a normal close-out step, not a separate habit.
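For illustration, a single entry in such an index might look like the following. The fields shown (date, context, verified fact, source, status) are a hypothetical format sketched for this proposal, not an existing Superpowers convention:

```markdown
## 2025-01-15 (CI)
- Verified: integration tests require `FOO=1`; without it the suite silently skips.
- Source: (link to issue/PR)
- Status: active
```

A flat, dated, greppable format like this keeps entries cheap to append and easy to deprecate later.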
Success looks like: repeat visits to the same repo get cheaper because verified context accumulates in one place Superpowers already knows about.
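To show how lightweight the close-out step could be, here is a minimal sketch of appending one entry from the command line. The filename `LEARNINGS.md`, the field layout, and the example fact are all assumptions made for illustration, not part of any existing convention:

```shell
#!/bin/sh
# Hypothetical close-out step: append one dated, verified learning
# to a repo-scoped index file. LEARNINGS.md and the pipe-separated
# fields are assumptions from this proposal, not a settled format.
entry="- $(date +%Y-%m-%d) | context: CI | verified: tests require FOO=1 | ref: (issue link)"
printf '%s\n' "$entry" >> LEARNINGS.md
```

Because each entry is one line, deduplicating or retiring stale entries stays a simple text-editing task for either a human or the agent.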
### What alternatives did you consider?
- Only project docs (`CLAUDE.md`, `AGENTS.md`): works, but isn’t Superpowers-specific; new clones and contributors don’t get a consistent pattern across repos using Superpowers.
- Embedding everything in existing skills: spreads knowledge across files and makes skills noisy; hard to grep and to retire stale guidance.
- Relying on subagent carryover only (#601): helps within multi-subagent work but doesn’t by itself define a long-lived, human-auditable learning surface for the whole plugin/workspace.
- External tools (wikis, Notion): fine for humans, weak for “agent loads this by default” unless duplicated locally.
A dedicated learning section + light process keeps skills stable while still allowing continuous improvement from real usage.
### Is this appropriate for core Superpowers?
Yes, if the mechanism is generic (structure, conventions, optional skill text) and works for any stack or domain.
No, or better as a plugin, if it depends on a specific host (one IDE only), a proprietary service, or automatic cloud sync. The core proposal here is local, repo- or user-scoped files and clear conventions, with no vendor lock-in.
### Context
- Use case: long-running work on the same codebase where the same constraints and preferences should persist without re-explaining each session.
- Harness / client: (e.g. Cursor, Claude Code, etc.—fill in yours)
- Superpowers version / install path: (if known)