## Context
PAI runs on top of Claude Code. CC injects a `# auto memory` directive into every session's system prompt, which teaches the agent to save per-cwd memories under `~/.claude/projects/<cwd-slug>/memory/`. That directive is loud, fully specified in-context, and fires on every correction-worthy exchange.
PAI has its own memory system under `~/.pai/MEMORY/`. Prior work (#109, #118, #119) established `~/.pai/MEMORY/LEARNING/FEEDBACK/` as the canonical home for PAI-native feedback memories, wired a `loadFeedbackMemories()` reader into SessionStart via `LoadContext.hook.ts`, and fixed the `/learn apply` workflow to write there.
But as identified in #140, #109's fix addressed only the `/learn apply` path. The harness-level `# auto memory` directive still routes every session's opportunistic memory save into the CC per-cwd silo, independent of `/learn`. #97 proposed fixing this by adding an override directive telling the agent to disregard `# auto memory` — an approach that relies on the model consistently preferring PAI's rule over CC's louder, more fully specified in-prompt directive, which is the exact failure mode #97 itself describes.
## Strategy — augment, don't override
Rather than fight the CC harness, let it write to its silo as designed. Extend `/learn` to consume and curate those silos into PAI's canonical store.
Principle:

> CC silos are ephemeral staging. PAI is the durable, curated store. `/learn` is the curator.
This removes the need for the model to choose correctly between two competing instructions on every memory save. CC writes loudly and automatically; PAI consumes, synthesises, reviews, and applies on a human-driven cadence.
The approach also explicitly declines automated mirror-on-hook infrastructure. Prior discussion considered a SessionEnd `async: true` hook that would mirror-then-delete, but the `/learn` extension is architecturally lighter, inherits `/learn`'s existing human-review mechanism, and dissolves the contradiction-detection problem (no cheap automated solution exists; human review already handles it for reflections and ratings).
## Design outline
Extend the existing `/learn` workflow (Check → Review → Apply) to treat `~/.claude/projects/*/memory/` as an additional input source.
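A minimal sketch of what that scan could look like. The directory layout follows CC's documented per-cwd convention; `scanSilos` and `SiloEntry` are illustrative names, not existing PAI code:

```typescript
import { readdirSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// One discovered silo memory file: which cwd-slug it belongs to, and where it is.
interface SiloEntry {
  silo: string; // the per-cwd slug directory under ~/.claude/projects/
  file: string; // absolute path to the memory file
}

// Walk every ~/.claude/projects/<slug>/memory/ directory and collect
// candidate files. MEMORY.md is the silo's index, not an entry, so skip it.
function scanSilos(
  projectsRoot = join(homedir(), ".claude", "projects"),
): SiloEntry[] {
  if (!existsSync(projectsRoot)) return [];
  const entries: SiloEntry[] = [];
  for (const slug of readdirSync(projectsRoot)) {
    const memDir = join(projectsRoot, slug, "memory");
    if (!existsSync(memDir)) continue;
    for (const f of readdirSync(memDir)) {
      if (f.endsWith(".md") && f !== "MEMORY.md") {
        entries.push({ silo: slug, file: join(memDir, f) });
      }
    }
  }
  return entries;
}
```

Taking `projectsRoot` as a parameter keeps the scan testable against a temp directory instead of the real home.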
### Type-aware routing
CC's `# auto memory` directive defines four memory types. Each has a different shape and requires different handling:
| Type | Shape | Handling |
| --- | --- | --- |
| `feedback` | Behavioural corrections | Full `/learn` pipeline: Check synthesises with reflections + ratings, Review surfaces proposals for human adjudication, Apply writes canonical entry to `~/.pai/MEMORY/LEARNING/FEEDBACK/` |
| `user` | Profile / identity facts | Pass-through: dedup by frontmatter `name:` key, write canonical entry to `~/.pai/MEMORY/LEARNING/USER/` |
| `project` | Ongoing work context | Pass-through to `~/.pai/MEMORY/LEARNING/PROJECT/` |
| `reference` | External system pointers | Pass-through to `~/.pai/MEMORY/LEARNING/REFERENCE/` |
For the three pass-through types, no synthesis or review is warranted — they are directive content, not pattern-mineable signals. Lightweight dedup-by-name is sufficient; deeper consolidation happens only if the user explicitly requests it in `/learn review`.
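The routing above reduces to a small lookup. Destination names mirror the canonical homes in the table; `Route` and `routeMemory` are sketch-level names, not existing code:

```typescript
// Where a CC memory type lands in PAI, and whether it bypasses review.
interface Route {
  dest: string;         // subdirectory under ~/.pai/MEMORY/LEARNING/
  passThrough: boolean; // false => full Check/Review/Apply pipeline
}

// Map a frontmatter type to its route. Unrecognised types return null,
// which per the acceptance criteria means: leave the file untouched.
function routeMemory(type: string): Route | null {
  switch (type) {
    case "feedback":
      return { dest: "FEEDBACK", passThrough: false };
    case "user":
      return { dest: "USER", passThrough: true };
    case "project":
      return { dest: "PROJECT", passThrough: true };
    case "reference":
      return { dest: "REFERENCE", passThrough: true };
    default:
      return null;
  }
}
```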
### Silo processing + deletion
After `/learn apply` consolidates a silo entry into the PAI canonical store:
- Delete the source file from `~/.claude/projects/<cwd>/memory/<file>.md`.
- Update that silo's `MEMORY.md` index atomically to remove the reference.
- Record provenance in the PAI canonical file: `source_cwd:`, `ingested_at:`, optionally `original_path:`.
CC's own `# auto memory` directive explicitly sanctions deletion ("find and remove the relevant entry", "update or remove memories that turn out to be wrong or outdated"), so this does not fight the harness — it completes the workflow CC designed but left unfinished.
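Two helpers sketch the mechanics. The provenance field names come from this issue; the helper names and the write-then-rename pattern for the atomic index update are illustrative choices, not existing PAI code:

```typescript
import { writeFileSync, renameSync } from "node:fs";

// Stamp provenance frontmatter onto a canonical entry before writing it
// into ~/.pai/MEMORY/LEARNING/. original_path: is optional per the design.
function withProvenance(
  body: string,
  sourceCwd: string,
  originalPath?: string,
): string {
  const lines = [
    "---",
    `source_cwd: ${sourceCwd}`,
    `ingested_at: ${new Date().toISOString()}`,
  ];
  if (originalPath) lines.push(`original_path: ${originalPath}`);
  lines.push("---", "", body);
  return lines.join("\n");
}

// Atomic MEMORY.md update: write a sibling temp file and rename over the
// original, so an interrupted run never leaves a half-written index.
// (rename is atomic on POSIX within a single filesystem.)
function atomicWrite(path: string, contents: string): void {
  const tmp = `${path}.tmp-${process.pid}`;
  writeFileSync(tmp, contents);
  renameSync(tmp, path);
}
```

The rename trick is also what makes interrupted runs safe to re-run, which the idempotency criterion below depends on.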
### Loader integration
Extend the existing SessionStart loader (`hooks/lib/learning-readback.ts::loadFeedbackMemories`) to also load from the three new pass-through homes (USER, PROJECT, REFERENCE), injecting their contents at session start the same way feedback memories are loaded. Consider consolidating into a single `loadPaiMemories()` function that routes by type.
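A sketch of what that consolidated loader could look like. Only `loadFeedbackMemories` exists today; `loadPaiMemories` below, and its return shape, are hypothetical:

```typescript
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// The four canonical homes under ~/.pai/MEMORY/LEARNING/.
const HOMES = ["FEEDBACK", "USER", "PROJECT", "REFERENCE"] as const;

// Read every .md entry from each canonical home, keyed by home name.
// Missing directories simply yield an empty list rather than erroring,
// so the loader is safe before the first /learn apply has run.
function loadPaiMemories(
  root = join(homedir(), ".pai", "MEMORY", "LEARNING"),
): Record<string, string[]> {
  const out: Record<string, string[]> = {};
  for (const home of HOMES) {
    const dir = join(root, home);
    out[home] = existsSync(dir)
      ? readdirSync(dir)
          .filter((f) => f.endsWith(".md"))
          .map((f) => readFileSync(join(dir, f), "utf8"))
      : [];
  }
  return out;
}
```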
### Silo-accumulation mitigations
The main trade-off versus a hook-based mirror: if the user does not run `/learn`, silos accumulate indefinitely. Low-cost mitigations (separable; they can ship later):
- Statusline signal — surface "N un-ingested memories across M silos" in the banner so queue size is visible.
- SessionStart nudge — when silos exceed a threshold, inject a gentle "consider running `/learn`" into session context.
Both are cheap additions once the core ingestion works.
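The statusline signal is just string formatting over per-silo counts. `siloBanner` is a hypothetical helper; the wording matches the banner text proposed above:

```typescript
// Format the "N un-ingested memories across M silos" banner, or return
// null when nothing is pending so the statusline stays quiet.
function siloBanner(counts: Map<string, number>): string | null {
  const total = [...counts.values()].reduce((a, b) => a + b, 0);
  if (total === 0) return null;
  const mem = total === 1 ? "memory" : "memories";
  const silo = counts.size === 1 ? "silo" : "silos";
  return `${total} un-ingested ${mem} across ${counts.size} ${silo}`;
}
```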
## Acceptance criteria
- `/learn check` scans all `~/.claude/projects/*/memory/` directories and classifies entries by type.
- Feedback-type entries are included in synthesis inputs alongside reflections and ratings.
- User/project/reference entries are consolidated into their respective PAI-native homes with name-key dedup.
- `/learn apply` writes the canonical entries, deletes processed silo files, and updates silo `MEMORY.md` indices atomically.
- Provenance metadata (`source_cwd:`, `ingested_at:`) is preserved on every canonical entry.
- Idempotent — safe to re-run if a prior run was interrupted.
- SessionStart loader reads all four canonical homes and injects appropriate content.
- Third-party or unrecognised memory files (missing or malformed frontmatter) are left untouched, not deleted.
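The last criterion implies a gate before any file is eligible for ingestion or deletion. A sketch, assuming the memory type lives in a `type:` frontmatter key (that key name is an assumption, not confirmed by this issue):

```typescript
// Extract the memory type from a file's YAML-style frontmatter.
// Returns null for anything we should not touch: no frontmatter at all
// (likely a third-party file), no type: key, or an unrecognised type.
function parseMemoryType(raw: string): string | null {
  const m = raw.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return null;
  const typeLine = m[1]
    .split("\n")
    .find((l) => l.startsWith("type:"));
  if (!typeLine) return null;
  const type = typeLine.slice("type:".length).trim();
  const known = ["feedback", "user", "project", "reference"];
  return known.includes(type) ? type : null;
}
```

Because a `null` here short-circuits the pipeline, malformed files can never reach the deletion step.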
## Out of scope
- Implementation — this issue is design scope.
- Cross-session contradiction detection beyond what `/learn review` already surfaces to the human.
- Automatic periodic `/learn` invocation (candidate for a separate issue if statusline/nudge signals prove insufficient in practice).
- Migration of existing silo contents on first run — design should be idempotent so ordinary runs handle backlog naturally, but a one-shot migration pass may be wanted as a separate deliverable.
## Related
- #109 — the `/learn apply` fix that established `~/.pai/MEMORY/LEARNING/FEEDBACK/` as the feedback home.