
fix(readme): point agent-rules install path at the new in-package location#31

Merged
jaylfc merged 1 commit into master from fix/readme-agent-rules-path on Apr 19, 2026

Conversation

@jaylfc
Owner

@jaylfc jaylfc commented Apr 19, 2026

PR #23 moved `docs/agent-rules.md` → `taosmd/docs/agent-rules.md`, but the README still tells the install agent to read the old path — 404s on anyone following the install ritual right now. Update step 5 to use the `taosmd.agent_rules()` helper that PR #23 was designed for, plus fix the adjacent reference link. Pre-existing bug surfaced during a flat-repo audit.

Summary by CodeRabbit

  • Documentation
    • Simplified agent installation process with a programmatic method to fetch Memory rules via Python, replacing manual file copying.
    • Clarified rules location in the installed package and documented that rules are accessible through both file paths and a dedicated function call.

…ation

PR #23 moved `docs/agent-rules.md` → `taosmd/docs/agent-rules.md` and
exposed it via `importlib.resources` as `taosmd.agent_rules()`, but the
README still told the install agent to read the old path — which 404s
on anyone following the install ritual after #23 shipped.

Update step 5 of the install instructions to use the helper
(`python -c "import taosmd; print(taosmd.agent_rules())"`) and fix the
adjacent reference link to the new in-package path. Both editable and
wheel installs now work as advertised.
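For illustration, the `importlib.resources` pattern the commit describes can be sketched as follows. The `demo_pkg` package built here is a stand-in assembled on the fly purely for the demo; the actual `taosmd` implementation may differ in detail.

```python
# Sketch of the importlib.resources pattern: a package ships a data file
# under <pkg>/docs/ and exposes it via a helper, so both editable and
# wheel installs resolve the file relative to the installed package.
# "demo_pkg" is a hypothetical stand-in, not the real taosmd source.
import sys
import tempfile
from pathlib import Path

# Build a throwaway package on disk that mirrors the layout:
#   demo_pkg/__init__.py
#   demo_pkg/docs/agent-rules.md
tmp = Path(tempfile.mkdtemp())
pkg = tmp / "demo_pkg"
(pkg / "docs").mkdir(parents=True)
(pkg / "docs" / "agent-rules.md").write_text("# Memory rules\n", encoding="utf-8")
(pkg / "__init__.py").write_text(
    "from importlib import resources\n"
    "\n"
    "def agent_rules() -> str:\n"
    "    # Resolve the data file relative to this package, not the CWD,\n"
    "    # so the lookup works wherever the package is installed.\n"
    "    path = resources.files(__package__) / 'docs' / 'agent-rules.md'\n"
    "    return path.read_text(encoding='utf-8')\n",
    encoding="utf-8",
)

# Import the stand-in package and fetch the rules programmatically,
# mirroring: python -c "import taosmd; print(taosmd.agent_rules())"
sys.path.insert(0, str(tmp))
import demo_pkg

rules = demo_pkg.agent_rules()
print(rules)
```

The key design point is that `resources.files()` resolves against the package's own location (including zipped wheels), which is why the README can now give one command instead of a path that breaks when the file moves.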
@coderabbitai

coderabbitai bot commented Apr 19, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: e5d4b535-4850-4669-8146-a30141c0541d

📥 Commits

Reviewing files that changed from the base of the PR and between c360faa and 9b207bb.

📒 Files selected for processing (1)
  • README.md

📝 Walkthrough

Walkthrough

The README.md documentation is updated to change Step 5 of the "Let your agent install it" instructions from directing users to manually copy the "Memory — taosmd" rules block to fetching it programmatically via `python -c "import taosmd; print(taosmd.agent_rules())"`, with corresponding path reference updates.

Changes

Cohort / File(s) Summary
Documentation Update
README.md
Updated Step 5 instructions to fetch "Memory — taosmd" rules block programmatically using taosmd.agent_rules() instead of manual copying, and updated referenced rules file path from docs/agent-rules.md to taosmd/docs/agent-rules.md.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 A hop, a skip, no more copy-paste,
Just taosmd.agent_rules() to sedate,
The rules flow smooth as a rabbit's delight,
No manual fussing, just code shining bright! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately reflects the main change: updating README instructions to point to the new in-package location of agent-rules after it was moved in PR #23.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.


@kilo-code-bot

kilo-code-bot bot commented Apr 19, 2026

Code Review Summary

Status: No Issues Found | Recommendation: Merge

Files Reviewed (1 file)
  • README.md - Updated installation instructions to use taosmd.agent_rules() for easier access to the rules block

Reviewed by grok-code-fast-1:optimized:free · 161,512 tokens

@jaylfc jaylfc merged commit 116edab into master Apr 19, 2026
2 checks passed
@jaylfc jaylfc deleted the fix/readme-agent-rules-path branch April 19, 2026 18:02
jaylfc added a commit that referenced this pull request Apr 19, 2026
… sections

PR #31 clarified the headline (lines 7-9) that our 97.0% is end-to-end
Judge accuracy, not Recall@5. But three downstream references still
labelled it as Recall@5:

1. Benchmark Results table (line 158) — column header was "Recall@5"
   even though our 97% is Judge and the competitors' numbers are the
   different, looser Recall@5 metric. Split into "Score" + "Metric"
   columns so each row is honestly labelled; added a clarifying
   paragraph below the table pointing at both benchmark scripts.
2. Fusion Strategy Comparison (line 181) — column said "Recall@5"
   but all four strategies were measured with the same Judge harness.
   Renamed header to "Judge accuracy" and softened the "MemPalace-
   equivalent" label to "same algorithm as MemPalace" so it describes
   the retrieval approach, not the metric.
3. Key Features list (line 263) — "97.0% Recall@5" → "97.0% end-to-end
   Judge accuracy".

Remaining Recall@5 references are all intentional: competitors'
published numbers, the narrative paragraph explaining the metric
difference, and the code-block comment for `longmemeval_recall.py`
which is the Recall@5 reproduction script.
jaylfc added a commit that referenced this pull request Apr 19, 2026
… sections (#32)

