Conversation

**CLA Assistant Lite bot:** Thank you for your contribution! Before we can merge this PR, you need to sign our Contributor License Agreement. To sign, please comment below:

> I have read the CLA Document and I hereby sign the CLA

You can retrigger this bot by commenting `recheck` in this Pull Request.
**📝 Walkthrough:** New documentation file providing integration guidance for connecting LangChain agents to MoltGrid. Includes prerequisites, environment configuration, tool definitions wrapping MoltGrid REST endpoints, Python code examples, and end-to-end usage workflows.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 3
🧹 Nitpick comments (1)
docs/integrations/langchain.md (1)
**54-56: Prefer `params=` over manual query-string interpolation.**

Line 55 manually concatenates `namespace` into the URL. Using `params` avoids encoding bugs for special characters and keeps examples safer.

Suggested doc fix:

```diff
 r = requests.get(
-    f"{BASE}/v1/memory/{key}?namespace={namespace}",
+    f"{BASE}/v1/memory/{key}",
+    params={"namespace": namespace},
     headers=HEADERS,
 )
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `docs/integrations/langchain.md` around lines 54-56: the `requests.get` example constructs the query string by interpolating `namespace` into the URL (`f"{BASE}/v1/memory/{key}?namespace={namespace}"`), which can break with special characters; change the call to pass `params={"namespace": namespace}` (keeping `BASE` and `key` for the path) and retain `HEADERS`, so the call becomes `requests.get(f"{BASE}/v1/memory/{key}", headers=HEADERS, params={"namespace": namespace})` and requests handles the encoding.
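To see concretely what `params=` buys you, here is a small self-contained sketch of the encoding difference. It uses only the standard library's `urllib.parse.urlencode` (which is also what `requests` uses under the hood for `params=`); the base URL, key, and namespace values are hypothetical.

```python
from urllib.parse import urlencode

BASE = "https://api.example.net"  # hypothetical base URL for illustration
key = "mykey"
namespace = "team/alpha beta"  # contains '/' and a space

# Manual interpolation leaves the raw characters in the query string:
manual = f"{BASE}/v1/memory/{key}?namespace={namespace}"

# Proper encoding percent-escapes them, which is what params= does for you:
encoded = f"{BASE}/v1/memory/{key}?{urlencode({'namespace': namespace})}"

print(manual)
print(encoded)
```

The unencoded URL would be misparsed by the server (the `/` and space corrupt the path and query), while the encoded form round-trips cleanly.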
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `docs/integrations/langchain.md`:

- Around lines 1-178: The doc only implements LangChain but the acceptance criteria required both LangChain and CrewAI guides; add a new CrewAI integration doc (e.g., `docs/integrations/crewai.md`) that mirrors the LangChain example: show agent registration and env vars, provide MoltGrid tool wrappers analogous to `moltgrid_memory_set`, `moltgrid_memory_get`, `moltgrid_vector_search`, and `moltgrid_send_message` using the CrewAI SDK/usage patterns, include a full end-to-end CrewAI flow (an example of invoking tools from a CrewAI agent and a simple pipeline similar to `research_with_memory`), and update the PR description/Issue #17 note to state the CrewAI guide was added. Ensure examples include authentication header details and the same error/response handling patterns as the LangChain examples.
- Around lines 172-174: Add a language tag to the fenced code block containing `X-API-Key: af_your_key_here` by changing the opening backticks to specify a language (e.g., `text`) so the block satisfies markdown lint rules; locate the fenced block in the LangChain integration example and update the opening fence to `` ```text ``.
- Around lines 65-77: The `moltgrid_vector_search` function builds its request with the wrong parameter name and reads the wrong response field; change the request JSON key from `"top_k"` to `"limit"` so the API receives the intended result count, and update the result formatting to use `r['text']` instead of `r['value']`; ensure the returned string still handles an empty results list (the existing check/`results` variable can remain) and update the formatted line in the join to reference `r['score']`, `r['key']`, and `r['text'][:100]` to avoid KeyError and honor the caller's limit.
---
ℹ️ Review info

⚙️ Run configuration
- Configuration used: defaults
- Review profile: CHILL
- Plan: Pro
- Run ID: 659b47a6-4ce8-4a19-b449-f55b900247f2

📒 Files selected for processing (1)
- docs/integrations/langchain.md
---

**docs/integrations/langchain.md** (as submitted):

# LangChain + MoltGrid

Add persistent memory, inter-agent messaging, and background job queuing to your LangChain agents. Wrap MoltGrid REST calls as LangChain Tools to give your agents durable state across runs.

## Prerequisites

- Python 3.9+ with `langchain`, `langchain-openai`, and `requests` packages
- A MoltGrid API key (`af_...`) — get one at [moltgrid.net](https://moltgrid.net)

## Step 1: Register a MoltGrid Agent

```bash
curl -X POST https://api.moltgrid.net/v1/register \
  -H "Content-Type: application/json" \
  -d '{"display_name": "my-langchain-agent"}'
```

Save the returned `api_key` as `MOLTGRID_API_KEY` in your environment.

## Step 2: Configure the Integration

```bash
export MOLTGRID_API_KEY=af_your_key_here
export OPENAI_API_KEY=sk_your_key_here
```

## Step 3: Create MoltGrid Tools for LangChain

```python
import os
import requests
from langchain_core.tools import tool

MOLTGRID_API_KEY = os.environ["MOLTGRID_API_KEY"]
BASE = "https://api.moltgrid.net"
HEADERS = {"X-API-Key": MOLTGRID_API_KEY, "Content-Type": "application/json"}


@tool
def moltgrid_memory_set(key: str, value: str, namespace: str = "default") -> str:
    """Store a value in MoltGrid persistent memory. Use this to save findings, state, or context."""
    r = requests.post(
        f"{BASE}/v1/memory",
        json={"key": key, "value": value, "namespace": namespace},
        headers=HEADERS,
    )
    r.raise_for_status()
    return f"Stored key '{key}' in namespace '{namespace}'"


@tool
def moltgrid_memory_get(key: str, namespace: str = "default") -> str:
    """Retrieve a value from MoltGrid persistent memory. Use this to recall prior findings or state."""
    r = requests.get(
        f"{BASE}/v1/memory/{key}?namespace={namespace}",
        headers=HEADERS,
    )
    if r.status_code == 404:
        return f"Key '{key}' not found in namespace '{namespace}'"
    r.raise_for_status()
    return r.json().get("value", "not found")


@tool
def moltgrid_vector_search(query: str, namespace: str = "default", top_k: int = 5) -> str:
    """Semantic search over MoltGrid vector memory. Use this to find relevant past context by meaning, not exact key."""
    r = requests.post(
        f"{BASE}/v1/vector/search",
        json={"query": query, "namespace": namespace, "top_k": top_k},
        headers=HEADERS,
    )
    r.raise_for_status()
    results = r.json().get("results", [])
    if not results:
        return "No relevant results found."
    return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)


@tool
def moltgrid_send_message(to_agent: str, message: str) -> str:
    """Send a message to another MoltGrid agent. Use this for inter-agent coordination."""
    r = requests.post(
        f"{BASE}/v1/relay/send",
        json={"to_agent": to_agent, "payload": {"message": message}, "channel": "direct"},
        headers=HEADERS,
    )
    r.raise_for_status()
    return f"Message sent to {to_agent}"
```

## Step 4: Use with a LangChain Agent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# Initialize LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Register tools
tools = [
    moltgrid_memory_set,
    moltgrid_memory_get,
    moltgrid_vector_search,
    moltgrid_send_message,
]

# Create prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a research assistant with persistent memory via MoltGrid.

Before starting any new task, check your memory for prior relevant work.
After completing a task, store your findings for future reference.
Use vector_search to find related past research by semantic similarity."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# Create agent
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = agent_executor.invoke({
    "input": "Research the latest developments in LLM agent memory systems and store your findings."
})
print(result["output"])
```

## Step 5: Use with LangChain Chains (Simpler Pattern)

If you don't need a full agent, use tools directly in a chain:

```python
from langchain_core.runnables import RunnableLambda

def research_with_memory(query: str) -> str:
    """Research pipeline with MoltGrid memory integration."""

    # 1. Check memory first
    prior = moltgrid_memory_get.invoke({"key": f"research:{query[:50]}"})
    if "not found" not in prior:
        return f"Found prior research: {prior}"

    # 2. Do new research (replace with your logic)
    result = f"Fresh research results for: {query}"

    # 3. Store for next time
    moltgrid_memory_set.invoke({"key": f"research:{query[:50]}", "value": result})

    return result

chain = RunnableLambda(research_with_memory)
output = chain.invoke("LLM agent memory architectures")
```

## What You Can Do

| Feature | Tool | Use Case |
|---------|------|----------|
| **Key-Value Memory** | `moltgrid_memory_set/get` | Save and recall state, findings, context |
| **Vector Search** | `moltgrid_vector_search` | Semantic search over stored knowledge |
| **Messaging** | `moltgrid_send_message` | Coordinate between multiple agents |
| **Task Queue** | Use REST API directly | Background job processing |
| **Heartbeat** | Use REST API directly | Signal agent liveness |

## Authentication Reference

All MoltGrid API calls use the `X-API-Key` header:

```
X-API-Key: af_your_key_here
```

Base URL: `https://api.moltgrid.net`

Register a new agent: `POST /v1/register` → returns `{ agent_id, api_key }`

---
Issue scope is incomplete: CrewAI integration guide is still missing.
This PR closes Issue #17, but the acceptance criteria call for both LangChain and CrewAI step-by-step guides with examples. This file only delivers LangChain.
If helpful, I can draft a matching docs/integrations/crewai.md section with MoltGrid tool wrappers and an end-to-end CrewAI flow.
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 172-172: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
Quoted code (lines 65-77):

```python
def moltgrid_vector_search(query: str, namespace: str = "default", top_k: int = 5) -> str:
    """Semantic search over MoltGrid vector memory. Use this to find relevant past context by meaning, not exact key."""
    r = requests.post(
        f"{BASE}/v1/vector/search",
        json={"query": query, "namespace": namespace, "top_k": top_k},
        headers=HEADERS,
    )
    r.raise_for_status()
    results = r.json().get("results", [])
    if not results:
        return "No relevant results found."
    return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)
```
**Vector search example does not match API contract (`top_k`/`value` mismatch).**

On line 69, the request body uses `top_k`, but the API model expects `limit`.
On line 76, the response formatting reads `r['value']`, while vector search results return `text`.
This can silently ignore caller intent (defaulting to 5 results) and may crash with KeyError.
Suggested doc fix:

```diff
 @tool
 def moltgrid_vector_search(query: str, namespace: str = "default", top_k: int = 5) -> str:
     """Semantic search over MoltGrid vector memory. Use this to find relevant past context by meaning, not exact key."""
     r = requests.post(
         f"{BASE}/v1/vector/search",
-        json={"query": query, "namespace": namespace, "top_k": top_k},
+        json={"query": query, "namespace": namespace, "limit": top_k},
         headers=HEADERS,
     )
     r.raise_for_status()
     results = r.json().get("results", [])
     if not results:
         return "No relevant results found."
-    return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)
+    return "\n".join(f"- [{item['score']:.2f}] {item['key']}: {item['text'][:100]}" for item in results)
```
+ return "\n".join(f"- [{item['score']:.2f}] {item['key']}: {item['text'][:100]}" for item in results)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/integrations/langchain.md` around lines 65 - 77, The
moltgrid_vector_search function builds its request with the wrong parameter name
and reads the wrong response field; change the request JSON key from "top_k" to
"limit" so the API receives the intended result count, and update the result
formatting to use r['text'] instead of r['value']; ensure the returned string
still handles an empty results list (the existing check/results variable can
remain) and update the formatted line in the join to reference r['score'],
r['key'], and r['text'][:100] to avoid KeyError and honor the caller's limit.
Quoted code (lines 172-174):

```
X-API-Key: af_your_key_here
```
Add a language tag to the fenced code block.
The block at Line 172 should specify a language (text is fine) to satisfy markdown lint rules.
Suggested doc fix:

````diff
-```
+```text
 X-API-Key: af_your_key_here
````
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 172-172: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
**D0NMEGA** left a comment:

Thanks for working on the LangChain guide! The structure and writing are solid. There are a few API contract issues that need fixing before merge:
Must Fix

**1. Vector search uses wrong field names.** The API expects `limit`, not `top_k`. And results return `text`, not `value`.

```python
# Current (broken):
json={"query": query, "namespace": namespace, "top_k": top_k}
return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)

# Fixed:
json={"query": query, "namespace": namespace, "limit": top_k}
return "\n".join(f"- [{m['score']:.2f}] {m['key']}: {m['text'][:100]}" for m in results)
```

Also note: the loop variable `r` shadows the outer `requests.Response` variable `r`. Rename the loop variable to `m` or `match`.
**2. Relay payload must be a string, not a JSON object.** This is the #1 agent friction point on the platform (confirmed by our own power-testing with 7 agents). The current code will 422 every time:

```python
# Current (broken - payload is an object):
json={"to_agent": to_agent, "payload": {"message": message}, "channel": "direct"}

# Fixed (payload must be a string):
json={"to_agent": to_agent, "payload": message, "channel": "direct"}
```

If structured data is needed, stringify it: `"payload": json.dumps({"message": message})`.
**3. Issue #17 asks for both LangChain AND CrewAI guides.** This PR only delivers LangChain. Either:

- Remove "Closes #17" from the PR description and just reference it ("Partial fix for #17"), or
- Add a `docs/integrations/crewai.md` in this same PR.

**4. CLA not signed.** Please sign by commenting the CLA statement.
Minor

**5. Missing language tag on code fence at line 172.** Change the bare triple backticks to specify `text`:

```text
X-API-Key: af_your_key_here
```

**6. API key prefix.** The guide says API keys start with `af_`, but registration actually returns keys starting with `mg_`. Double-check the current format against the live API; if it is `mg_`, update the examples.
What Looks Good
- Step-by-step structure is clear and follows a logical progression
- The "Simpler Pattern" chain example (Step 5) is a nice addition for non-agent use cases
- Tool docstrings are well-written for LLM consumption
- The feature table at the end is helpful
Once these fixes land, this will be a great addition to the docs.
Adds step-by-step integration guide for LangChain agents with MoltGrid:
Closes #17