
docs: add LangChain integration guide #19

Closed
sudabg wants to merge 1 commit into D0NMEGA:main from sudabg:docs/langchain-integration-guide

Conversation


@sudabg sudabg commented Mar 18, 2026

Adds a step-by-step integration guide for LangChain agents with MoltGrid:

  • MoltGrid tools (memory, vector search, messaging) as LangChain @tool functions
  • Agent executor pattern with tool-calling agent
  • Simple chain pattern for lightweight use cases
  • Authentication reference

Closes #17

Summary by CodeRabbit

  • Documentation
    • Added comprehensive LangChain integration guide with setup instructions, configuration steps, and prerequisites.
    • Documented four new tools enabling memory storage/retrieval, semantic search, and inter-agent messaging capabilities.
    • Included end-to-end Python code examples and usage scenarios for agent setup and execution.

@github-actions

Thank you for your contribution! Before we can merge this PR, you need to sign our Contributor License Agreement.

To sign, please comment below:

I have read the CLA Document and I hereby sign the CLA
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.


coderabbitai bot commented Mar 18, 2026

📝 Walkthrough

New documentation file providing integration guidance for connecting LangChain agents to MoltGrid. Includes prerequisites, environment configuration, tool definitions wrapping MoltGrid REST endpoints, Python code examples, and end-to-end usage workflows.

Changes

| Cohort / File(s) | Summary |
|---|---|
| LangChain Integration Documentation — `docs/integrations/langchain.md` | New integration guide introducing four custom tools (`moltgrid_memory_set`, `moltgrid_memory_get`, `moltgrid_vector_search`, `moltgrid_send_message`) for LangChain agents to interact with MoltGrid. Includes setup, authentication, agent registration, and practical usage examples. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Poem

🐰 A rabbit hops through docs so bright,
With LangChain tools now shining bright,
Memory and vectors dance in code,
Down MoltGrid's well-marked road,
Integration magic, pure delight! ✨

🚥 Pre-merge checks | ✅ 3 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ⚠️ Warning | The PR description covers key aspects but deviates significantly from the required template: it is missing the Summary, Changes list, Testing checklist, and verification items. | Restructure the description to follow the template: add a Summary section, format Changes as a bullet list, include a Testing checklist with test status, and add the contribution-guidelines checklist. |
| Linked Issues check | ❓ Inconclusive | The PR partially addresses issue #17 by providing a LangChain integration guide with code examples, but does not fulfill the CrewAI integration requirement. | Clarify whether CrewAI integration is deferred to a separate PR or whether this PR should include both LangChain and CrewAI guides as originally specified in issue #17. |

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title clearly and concisely summarizes the main change: adding a LangChain integration guide to the documentation. |
| Out of Scope Changes check | ✅ Passed | All changes are directly related to creating the LangChain integration documentation as outlined in issue #17; no extraneous modifications detected. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 3

🧹 Nitpick comments (1)
docs/integrations/langchain.md (1)

54-56: Prefer params= over manual query-string interpolation.

Line 55 manually concatenates namespace into the URL. Using params avoids encoding bugs for special characters and keeps examples safer.

Suggested doc fix
     r = requests.get(
-        f"{BASE}/v1/memory/{key}?namespace={namespace}",
+        f"{BASE}/v1/memory/{key}",
+        params={"namespace": namespace},
         headers=HEADERS,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/integrations/langchain.md` around lines 54 - 56, The requests.get
example constructs the query string by interpolating namespace into the URL
(f"{BASE}/v1/memory/{key}?namespace={namespace}"), which can break with special
characters; change the requests.get call to pass params={'namespace': namespace}
(keeping BASE and key for the path) and retain HEADERS so the call becomes
requests.get(f"{BASE}/v1/memory/{key}", headers=HEADERS, params={'namespace':
namespace}) to let requests handle encoding.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 659b47a6-4ce8-4a19-b449-f55b900247f2

📥 Commits

Reviewing files that changed from the base of the PR and between b03d36c and e006af3.

📒 Files selected for processing (1)
  • docs/integrations/langchain.md

Comment on lines +1 to +178
# LangChain + MoltGrid

Add persistent memory, inter-agent messaging, and background job queuing to your LangChain agents. Wrap MoltGrid REST calls as LangChain Tools to give your agents durable state across runs.

## Prerequisites

- Python 3.9+ with `langchain`, `langchain-openai`, and `requests` packages
- A MoltGrid API key (`af_...`) — get one at [moltgrid.net](https://moltgrid.net)

## Step 1: Register a MoltGrid Agent

```bash
curl -X POST https://api.moltgrid.net/v1/register \
-H "Content-Type: application/json" \
-d '{"display_name": "my-langchain-agent"}'
```

Save the returned `api_key` as `MOLTGRID_API_KEY` in your environment.

## Step 2: Configure the Integration

```bash
export MOLTGRID_API_KEY=af_your_key_here
export OPENAI_API_KEY=sk_your_key_here
```

## Step 3: Create MoltGrid Tools for LangChain

```python
import os
import requests
from langchain_core.tools import tool

MOLTGRID_API_KEY = os.environ["MOLTGRID_API_KEY"]
BASE = "https://api.moltgrid.net"
HEADERS = {"X-API-Key": MOLTGRID_API_KEY, "Content-Type": "application/json"}


@tool
def moltgrid_memory_set(key: str, value: str, namespace: str = "default") -> str:
    """Store a value in MoltGrid persistent memory. Use this to save findings, state, or context."""
    r = requests.post(
        f"{BASE}/v1/memory",
        json={"key": key, "value": value, "namespace": namespace},
        headers=HEADERS,
    )
    r.raise_for_status()
    return f"Stored key '{key}' in namespace '{namespace}'"


@tool
def moltgrid_memory_get(key: str, namespace: str = "default") -> str:
    """Retrieve a value from MoltGrid persistent memory. Use this to recall prior findings or state."""
    r = requests.get(
        f"{BASE}/v1/memory/{key}?namespace={namespace}",
        headers=HEADERS,
    )
    if r.status_code == 404:
        return f"Key '{key}' not found in namespace '{namespace}'"
    r.raise_for_status()
    return r.json().get("value", "not found")


@tool
def moltgrid_vector_search(query: str, namespace: str = "default", top_k: int = 5) -> str:
    """Semantic search over MoltGrid vector memory. Use this to find relevant past context by meaning, not exact key."""
    r = requests.post(
        f"{BASE}/v1/vector/search",
        json={"query": query, "namespace": namespace, "top_k": top_k},
        headers=HEADERS,
    )
    r.raise_for_status()
    results = r.json().get("results", [])
    if not results:
        return "No relevant results found."
    return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)


@tool
def moltgrid_send_message(to_agent: str, message: str) -> str:
    """Send a message to another MoltGrid agent. Use this for inter-agent coordination."""
    r = requests.post(
        f"{BASE}/v1/relay/send",
        json={"to_agent": to_agent, "payload": {"message": message}, "channel": "direct"},
        headers=HEADERS,
    )
    r.raise_for_status()
    return f"Message sent to {to_agent}"
```

## Step 4: Use with a LangChain Agent

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# Initialize LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Register tools
tools = [
    moltgrid_memory_set,
    moltgrid_memory_get,
    moltgrid_vector_search,
    moltgrid_send_message,
]

# Create prompt
prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a research assistant with persistent memory via MoltGrid.

Before starting any new task, check your memory for prior relevant work.
After completing a task, store your findings for future reference.
Use vector_search to find related past research by semantic similarity."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

# Create agent
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = agent_executor.invoke({
    "input": "Research the latest developments in LLM agent memory systems and store your findings."
})
print(result["output"])
```

## Step 5: Use with LangChain Chains (Simpler Pattern)

If you don't need a full agent, use tools directly in a chain:

```python
from langchain_core.runnables import RunnableLambda

def research_with_memory(query: str) -> str:
    """Research pipeline with MoltGrid memory integration."""

    # 1. Check memory first
    prior = moltgrid_memory_get.invoke({"key": f"research:{query[:50]}"})
    if "not found" not in prior:
        return f"Found prior research: {prior}"

    # 2. Do new research (replace with your logic)
    result = f"Fresh research results for: {query}"

    # 3. Store for next time
    moltgrid_memory_set.invoke({"key": f"research:{query[:50]}", "value": result})

    return result

chain = RunnableLambda(research_with_memory)
output = chain.invoke("LLM agent memory architectures")
```

## What You Can Do

| Feature | Tool | Use Case |
|---------|------|----------|
| **Key-Value Memory** | `moltgrid_memory_set/get` | Save and recall state, findings, context |
| **Vector Search** | `moltgrid_vector_search` | Semantic search over stored knowledge |
| **Messaging** | `moltgrid_send_message` | Coordinate between multiple agents |
| **Task Queue** | Use REST API directly | Background job processing |
| **Heartbeat** | Use REST API directly | Signal agent liveness |

## Authentication Reference

All MoltGrid API calls use the `X-API-Key` header:

```
X-API-Key: af_your_key_here
```

Base URL: `https://api.moltgrid.net`

Register a new agent: `POST /v1/register` → returns `{ agent_id, api_key }`

⚠️ Potential issue | 🟠 Major

Issue scope is incomplete: CrewAI integration guide is still missing.

This PR closes Issue #17, but the acceptance criteria call for both LangChain and CrewAI step-by-step guides with examples. This file only delivers LangChain.

If helpful, I can draft a matching docs/integrations/crewai.md section with MoltGrid tool wrappers and an end-to-end CrewAI flow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/integrations/langchain.md` around lines 1 - 178, The doc only implements
LangChain but the acceptance criteria required both LangChain and CrewAI guides;
add a new CrewAI integration doc (e.g., docs/integrations/crewai.md) that
mirrors the LangChain example: show agent registration and env vars, provide
MoltGrid tool wrappers analogous to moltgrid_memory_set, moltgrid_memory_get,
moltgrid_vector_search, and moltgrid_send_message using the CrewAI SDK/usage
patterns, include a full end-to-end CrewAI flow (example of invoking tools from
a CrewAI agent and a simple pipeline similar to research_with_memory), and
update the PR description/Issue `#17` note to state the CrewAI guide was added.
Ensure examples include authentication header details and the same
error/response handling patterns as the LangChain examples.

Comment on lines +65 to +77
def moltgrid_vector_search(query: str, namespace: str = "default", top_k: int = 5) -> str:
    """Semantic search over MoltGrid vector memory. Use this to find relevant past context by meaning, not exact key."""
    r = requests.post(
        f"{BASE}/v1/vector/search",
        json={"query": query, "namespace": namespace, "top_k": top_k},
        headers=HEADERS,
    )
    r.raise_for_status()
    results = r.json().get("results", [])
    if not results:
        return "No relevant results found."
    return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)


⚠️ Potential issue | 🔴 Critical

Vector search example does not match API contract (top_k/value mismatch).

On Line 69, request body uses top_k, but API model expects limit.
On Line 76, response formatting reads r['value'], while vector search results return text.

This can silently ignore caller intent (defaulting to 5 results) and may crash with KeyError.

Suggested doc fix
 @tool
 def moltgrid_vector_search(query: str, namespace: str = "default", top_k: int = 5) -> str:
     """Semantic search over MoltGrid vector memory. Use this to find relevant past context by meaning, not exact key."""
     r = requests.post(
         f"{BASE}/v1/vector/search",
-        json={"query": query, "namespace": namespace, "top_k": top_k},
+        json={"query": query, "namespace": namespace, "limit": top_k},
         headers=HEADERS,
     )
     r.raise_for_status()
     results = r.json().get("results", [])
     if not results:
         return "No relevant results found."
-    return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)
+    return "\n".join(f"- [{item['score']:.2f}] {item['key']}: {item['text'][:100]}" for item in results)
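As a quick sanity check of the corrected formatting line (using the `score`/`key`/`text` field names the review says the API returns, with made-up sample data):

```python
# Hypothetical sample result in the shape the review describes
results = [{"score": 0.91, "key": "research:agents",
            "text": "LLM agent memory systems rely on external stores."}]

formatted = "\n".join(
    f"- [{item['score']:.2f}] {item['key']}: {item['text'][:100]}" for item in results
)
print(formatted)  # - [0.91] research:agents: LLM agent memory systems rely on external stores.
```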
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/integrations/langchain.md` around lines 65 - 77, The
moltgrid_vector_search function builds its request with the wrong parameter name
and reads the wrong response field; change the request JSON key from "top_k" to
"limit" so the API receives the intended result count, and update the result
formatting to use r['text'] instead of r['value']; ensure the returned string
still handles an empty results list (the existing check/results variable can
remain) and update the formatted line in the join to reference r['score'],
r['key'], and r['text'][:100] to avoid KeyError and honor the caller's limit.

Comment on lines +172 to +174
```
X-API-Key: af_your_key_here
```

⚠️ Potential issue | 🟡 Minor

Add a language tag to the fenced code block.

The block at Line 172 should specify a language (text is fine) to satisfy markdown lint rules.

Suggested doc fix
-```
+```text
 X-API-Key: af_your_key_here

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/integrations/langchain.md` around lines 172 - 174, Add a language tag to
the fenced code block containing "X-API-Key: af_your_key_here" by changing the
opening backticks to specify a language (e.g., ```text) so the block satisfies
markdown lint rules; locate the fenced block in the langchain integration
example (the triple-backtick block that currently contains X-API-Key:
af_your_key_here) and update the opening fence to ```text.

Owner

@D0NMEGA D0NMEGA left a comment

Thanks for working on the LangChain guide! The structure and writing are solid. There are a few API contract issues that need fixing before merge:

Must Fix

1. Vector search uses wrong field names. The API expects limit, not top_k. And results return text, not value.

```python
# Current (broken):
json={"query": query, "namespace": namespace, "top_k": top_k}
return "\n".join(f"- [{r['score']:.2f}] {r['key']}: {r['value'][:100]}" for r in results)

# Fixed:
json={"query": query, "namespace": namespace, "limit": top_k}
return "\n".join(f"- [{m['score']:.2f}] {m['key']}: {m['text'][:100]}" for m in results)
```

Also note: the loop variable r shadows the outer requests.Response variable r. Rename the loop variable to m or match.
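For what it's worth, in Python 3 a generator expression gets its own scope, so the outer `r` is not actually clobbered; the rename is for readability. A minimal illustration:

```python
r = "outer response"  # stands in for the requests.Response object

# The generator expression's r is scoped to the expression in Python 3,
# so the outer r survives — but reusing the name is still confusing to read:
joined = "\n".join(r for r in ["x", "y"])

print(joined)  # prints "x" and "y" on separate lines
print(r)       # outer response
```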

2. Relay payload must be a string, not a JSON object. This is the #1 agent friction point on the platform (confirmed by our own power-testing with 7 agents). The current code will 422 every time:

# Current (broken - payload is object):
json={"to_agent": to_agent, "payload": {"message": message}, "channel": "direct"}

# Fixed (payload must be string):
json={"to_agent": to_agent, "payload": message, "channel": "direct"}

If structured data is needed, stringify it: "payload": json.dumps({"message": message}).
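A small sketch of the stringified-payload pattern (endpoint and field names taken from this thread, not verified against the live API):

```python
import json

message = "status update"

# Per the review, payload must be a string; stringify structured data:
body = {
    "to_agent": "agent-2",
    "payload": json.dumps({"message": message}),
    "channel": "direct",
}

assert isinstance(body["payload"], str)
# The receiving agent can recover the structure with json.loads:
print(json.loads(body["payload"])["message"])  # status update
```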

3. Issue #17 asks for both LangChain AND CrewAI guides. This PR only delivers LangChain. Either:

  • Remove "Closes #17" from the PR description and just reference it ("Partial fix for #17")
  • Or add a docs/integrations/crewai.md in this same PR

4. CLA not signed. Please sign by commenting the CLA statement.

Minor

5. Missing language tag on code fence at line 172. Change the bare triple backticks to specify text:

```text
X-API-Key: af_your_key_here
```

6. API key prefix. The guide says API keys start with af_ but registration actually returns keys starting with mg_. Double-check the current format from the live API. If it is mg_, update the examples.

What Looks Good

  • Step-by-step structure is clear and follows a logical progression
  • The "Simpler Pattern" chain example (Step 5) is a nice addition for non-agent use cases
  • Tool docstrings are well-written for LLM consumption
  • The feature table at the end is helpful

Once these fixes land, this will be a great addition to the docs.

@D0NMEGA D0NMEGA closed this Mar 30, 2026


Development

Successfully merging this pull request may close these issues.

Write integration guide for LangChain/CrewAI

2 participants