fix(readme): point agent-rules install path at the new in-package location#31
PR #23 moved `docs/agent-rules.md` → `taosmd/docs/agent-rules.md` and exposed it via `importlib.resources` as `taosmd.agent_rules()`, but the README still told the install agent to read the old path, which 404s for anyone following the install ritual after #23 shipped. Update step 5 of the install instructions to use the helper (`python -c "import taosmd; print(taosmd.agent_rules())"`) and fix the adjacent reference link to the new in-package path. Both editable and wheel installs now work as advertised.
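The helper's behavior can be sketched with `importlib.resources` (a hypothetical sketch, not the actual `taosmd` implementation; the generic `package_resource_path` function is invented here for illustration):

```python
from importlib import resources

def package_resource_path(package: str, *parts: str) -> str:
    """Resolve a data file shipped inside an installed package."""
    # resources.files() resolves through the package loader, so the same
    # call works for editable installs and built wheels alike, with no
    # hard-coded repo-relative path.
    node = resources.files(package)
    for part in parts:
        node = node / part
    return str(node)

# A taosmd.agent_rules() helper along these lines would return the
# in-package path that step 5 prints:
#   package_resource_path("taosmd", "docs", "agent-rules.md")
```

With this shape, `python -c "import taosmd; print(taosmd.agent_rules())"` prints a path that exists wherever the package is installed, rather than a repo path that broke after the move.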
Walkthrough: The README.md documentation is updated to change Step 5 of the "Let your agent install it" instructions from directing users to manually copy the "Memory — taosmd" rules block to fetching it programmatically via `taosmd.agent_rules()`.
Code Review Summary: No Issues Found. Recommendation: Merge. Files reviewed: 1. Reviewed by grok-code-fast-1 (161,512 tokens).
… sections (#32)

PR #31 clarified in the headline (lines 7-9) that our 97.0% is end-to-end Judge accuracy, not Recall@5. But three downstream references still labelled it as Recall@5:

1. Benchmark Results table (line 158): the column header was "Recall@5" even though our 97% is Judge and the competitors' numbers are the different, looser Recall@5 metric. Split into "Score" + "Metric" columns so each row is honestly labelled; added a clarifying paragraph below the table pointing at both benchmark scripts.
2. Fusion Strategy Comparison (line 181): the column said "Recall@5" but all four strategies were measured with the same Judge harness. Renamed the header to "Judge accuracy" and softened the "MemPalace-equivalent" label to "same algorithm as MemPalace" so it describes the retrieval approach, not the metric.
3. Key Features list (line 263): "97.0% Recall@5" → "97.0% end-to-end Judge accuracy".

The remaining Recall@5 references are all intentional: competitors' published numbers, the narrative paragraph explaining the metric difference, and the code-block comment for `longmemeval_recall.py`, which is the Recall@5 reproduction script.
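The split described in item 1 could look like this (a sketch only; the competitor row is a placeholder, not a real value from the README):

```markdown
| System      | Score | Metric                    |
|-------------|-------|---------------------------|
| taosmd      | 97.0% | End-to-end Judge accuracy |
| Competitor… | …     | Recall@5 (published)      |
```

Each row now names the metric its score was measured with, so the looser published Recall@5 numbers are no longer presented as directly comparable to the Judge figure.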
PR #23 moved `docs/agent-rules.md` → `taosmd/docs/agent-rules.md`, but the README still tells the install agent to read the old path, which 404s on anyone following the install ritual right now. Update step 5 to use the `taosmd.agent_rules()` helper that PR #23 was designed for, and fix the adjacent reference link. Pre-existing bug surfaced during a flat-repo audit.