Changes from all commits
23 commits
2e83f64
chore: bump versions
mhordynski Sep 12, 2025
ee680a4
chore: fix docs build
mhordynski Sep 12, 2025
661e4f7
feat: support wrapping downstream agents as tools (#819)
akotyla Sep 16, 2025
61d6739
feat: class-based agents (#820)
dazy-ds Sep 17, 2025
dc1d068
fix: nightly builds
mhordynski Sep 22, 2025
3ca9cd4
fix: docs deployments
mhordynski Sep 22, 2025
3026929
fix: add docs login
mhordynski Sep 22, 2025
759f59e
feat: introduce post processors (#821)
mackurzawa Sep 22, 2025
769551f
feat: streaming from downstream agents (#825)
akotyla Sep 25, 2025
d45415f
feat: todo list for agent (#823)
jakubduda-dsai Sep 26, 2025
09f94e4
feat: introduce supervisor post processor (#830)
mackurzawa Sep 26, 2025
77ee2af
feat: todo list component (#827)
dazy-ds Oct 10, 2025
61ec71f
Automated UI build
ds-ragbits-robot Oct 10, 2025
2b9b8fb
docs: installation & source fixes (#844)
puzzle-solver Oct 13, 2025
9650a70
feat: conversation summary (#840)
dazy-ds Oct 13, 2025
7fa44c5
Automated UI build
ds-ragbits-robot Oct 13, 2025
892dcb8
feat: #826 - customizable colors (#841)
jakubduda-dsai Oct 14, 2025
8a42f22
Automated UI build
ds-ragbits-robot Oct 14, 2025
67ba97f
feat: agent parallel tool calling (#836)
puzzle-solver Oct 15, 2025
42b8da0
feat: support thinking in Agents (#837)
rk-izak Oct 15, 2025
6a9e726
feat: long-term semantic memory (#839)
ds-michal-rdzany Oct 15, 2025
c52cfa8
feat: add example evaluation pipelines targeting agents (#831)
rk-izak Oct 16, 2025
5040024
docs: add a new quickstart
ds-sebastianchwilczynski Oct 31, 2025
53 changes: 17 additions & 36 deletions .github/workflows/nightly-build.yml
@@ -19,43 +19,19 @@ jobs:
ref: develop
fetch-depth: 0

- name: Check if nightly build needed
id: check
run: |
# Get the latest commit hash on develop
COMMIT_HASH=$(git rev-parse --short HEAD)
echo "commit-hash=$COMMIT_HASH" >> "$GITHUB_OUTPUT"

# Check if we already built this commit as nightly
LAST_NIGHTLY_TAG=$(git tag -l "*dev*" --sort=-version:refname | head -1)
if [ -n "$LAST_NIGHTLY_TAG" ]; then
# Get the commit that the last nightly tag points to
LAST_NIGHTLY_COMMIT=$(git rev-list -n 1 $LAST_NIGHTLY_TAG)
CURRENT_COMMIT=$(git rev-parse HEAD)
if [ "$CURRENT_COMMIT" = "$LAST_NIGHTLY_COMMIT" ]; then
echo "should-build=false" >> "$GITHUB_OUTPUT"
echo "No new commits since last nightly build"
exit 0
fi
fi
- name: Install uv
uses: astral-sh/setup-uv@v2
with:
version: ${{ vars.UV_VERSION || '0.6.9' }}

# Generate nightly version
BASE_VERSION=$(python -c "
try:
import tomllib
except ImportError:
import tomli as tomllib
with open('packages/ragbits/pyproject.toml', 'rb') as f:
data = tomllib.load(f)
print(data['project']['version'])
")
# Use timestamp for unique nightly version (PEP 440 compliant)
TIMESTAMP=$(date +%Y%m%d%H%M)
NIGHTLY_VERSION="${BASE_VERSION}.dev${TIMESTAMP}"
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.10"

echo "should-build=true" >> "$GITHUB_OUTPUT"
echo "nightly-version=$NIGHTLY_VERSION" >> "$GITHUB_OUTPUT"
echo "Will build nightly version: $NIGHTLY_VERSION"
- name: Check if nightly build needed
id: check
run: uv run scripts/check_nightly_build.py

build-and-publish:
needs: check-for-changes
@@ -100,6 +76,7 @@ jobs:
git commit -m "chore: update package versions for nightly build ${{ env.NIGHTLY_VERSION }}"
git tag "${{ env.NIGHTLY_VERSION }}"
git push origin "${{ env.NIGHTLY_VERSION }}"
git push origin develop
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
NIGHTLY_VERSION: ${{ needs.check-for-changes.outputs.nightly-version }}
@@ -114,7 +91,11 @@ jobs:

- name: Deploy nightly documentation
shell: bash
run: uv run mike deploy --push nightly
run: |
git config user.name "ds-ragbits-robot"
git config user.email "[email protected]"
git fetch origin gh-pages
uv run mike deploy --push --alias-type copy nightly
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}

3 changes: 3 additions & 0 deletions .github/workflows/publish-docs.yaml
@@ -15,6 +15,9 @@ jobs:
contents: write
steps:
- uses: actions/checkout@v4
with:
ref: gh-pages
fetch-depth: 1

- name: Deploy docs
shell: bash
3 changes: 2 additions & 1 deletion .github/workflows/publish-pypi.yml
@@ -56,6 +56,7 @@ jobs:

- name: Deploy documentation
run: |
uv run mike deploy --push stable
git fetch origin gh-pages
uv run mike deploy --push --alias-type copy stable
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
6 changes: 6 additions & 0 deletions docs/api_reference/agents/index.md
@@ -9,3 +9,9 @@
::: ragbits.agents.AgentResultStreaming

::: ragbits.agents.a2a.server.create_agent_server

::: ragbits.agents.post_processors.base

::: ragbits.agents.post_processors.supervisor

::: ragbits.agents.AgentRunContext
Binary file added docs/assets/chat.png
95 changes: 93 additions & 2 deletions docs/how-to/agents/define_and_use_agents.md
@@ -32,7 +32,7 @@ Use a structured prompt to instruct the LLM. For details on writing prompts with
from pydantic import BaseModel
from ragbits.core.prompt import Prompt

--8<-- "examples/agents/tool_use.py:51:70"
--8<-- "examples/agents/tool_use.py:51:72"
```
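
The included snippet defines an input model and a prompt class along these lines (an illustrative sketch, not the exact contents of `tool_use.py`; the class and field names here are assumptions):

```python
from pydantic import BaseModel

from ragbits.core.prompt import Prompt


class WeatherInput(BaseModel):
    """Input model for the weather question (name assumed for illustration)."""

    question: str


class WeatherPrompt(Prompt[WeatherInput, str]):
    """Prompt instructing the LLM to answer weather questions using the available tools."""

    system_prompt = "You are a weather assistant. Use the available tools to answer."
    user_prompt = "{{ question }}"
```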

### Run the agent
@@ -49,6 +49,33 @@ The result is an [AgentResult][ragbits.agents.AgentResult], which includes the m

You can find the complete code example in the Ragbits repository [here](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/tool_use.py).

### Alternative approach: inheritance with `prompt_config`

In addition to explicitly attaching a `Prompt` instance, Ragbits also supports defining agents through a combination of inheritance and the `@Agent.prompt_config` decorator.

This approach lets you bind input (and optionally output) models directly to your agent class. The agent then derives its prompt structure automatically, without requiring a `prompt` argument in the constructor.

```python
from pydantic import BaseModel
from ragbits.agents import Agent

--8<-- "examples/agents/with_decorator.py:51:71"
```

The decorator can also accept an output type, allowing you to strongly type both the inputs and outputs of the agent. If you do not explicitly define a `user_prompt`, Ragbits will default to `{{ input }}`.
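
For illustration, the decorator-based definition might take roughly this shape (a hypothetical sketch: the decorator's exact signature is not shown on this page, so the argument names and class attributes are assumptions):

```python
from pydantic import BaseModel

from ragbits.agents import Agent


class SummaryInput(BaseModel):
    text: str


@Agent.prompt_config(input_type=SummaryInput)  # assumed keyword; may also accept an output type
class SummaryAgent(Agent):
    """Derives its prompt structure from the bound input model."""

    system_prompt = "Summarize the given text in one sentence."
    # With no user_prompt defined, Ragbits falls back to "{{ input }}".
```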

Once defined, the agent class can be used directly, just like any other subclass of `Agent`:

```python
import asyncio
from ragbits.agents import Agent
from ragbits.core.llms import LiteLLM

--8<-- "examples/agents/with_decorator.py:73:84"
```

You can find the complete code example in the Ragbits repository [here](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/with_decorator.py).

## Tool choice
To control which tool is used on the first call, use the `tool_choice` parameter. The following options are available:
- "auto": let the model decide if a tool call is needed
@@ -84,6 +111,70 @@ In this scenario, the agent recognizes that the follow-up question "What about T
AgentResult(content='The current temperature in Tokyo is 10°C.', ...)
```
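
A multi-turn exchange follows the same pattern as a single call. A minimal sketch, assuming `keep_history` is passed at construction and reusing the illustrative classes sketched above:

```python
import asyncio

from ragbits.agents import Agent
from ragbits.core.llms import LiteLLM


async def main() -> None:
    agent = Agent(llm=LiteLLM(model_name="gpt-4o-mini"), prompt=WeatherPrompt, keep_history=True)
    await agent.run(WeatherInput(question="What's the weather in Paris?"))
    result = await agent.run(WeatherInput(question="What about Tokyo?"))
    print(result.content)  # "Tokyo" is resolved from the conversation history


asyncio.run(main())
```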

### Long-term memory tool
While `keep_history` maintains context within a single session, the long-term memory tool enables agents to store and retrieve information across separate conversations. It uses a vector store for semantic search and organizes memories by key, enabling personalized context based on the provided user ID.
```python
from pydantic import BaseModel

from ragbits.agents import Agent
from ragbits.agents.tools.memory import LongTermMemory, create_memory_tools
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM
from ragbits.core.prompt import Prompt
from ragbits.core.vector_stores.in_memory import InMemoryVectorStore

class ConversationInput(BaseModel):
message: str


class ConversationPrompt(Prompt[ConversationInput, str]):
"""Prompt for conversation with memory capabilities."""

system_prompt = """
You are a helpful assistant with long-term memory. You can remember information
from previous conversations and use it to provide more personalized responses.

You have access to memory tools that allow you to:
- Store important facts from conversations
- Retrieve relevant memories based on queries

Store all information about the user that might be useful in future conversations.
Always start by retrieving memories (implicitly) to provide a more relevant and personalized experience.
"""

user_prompt = """
Message: {{ message }}
"""

async def main() -> None:
# Initialize components
llm = LiteLLM(model_name="gpt-4o-mini")
embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
long_term_memory = LongTermMemory(vector_store=vector_store)

memory_tools = create_memory_tools(long_term_memory, user_id="user_1")
agent = Agent(llm=llm, prompt=ConversationPrompt, tools=[*memory_tools])

# Provide context
await agent.run(ConversationInput(
message="I love hiking in the mountains. I'm planning a trip to Rome next month."
))

# New session
llm = LiteLLM(model_name="gpt-4o-mini")
memory_tools = create_memory_tools(long_term_memory, user_id="user_1")
agent = Agent(llm=llm, prompt=ConversationPrompt, tools=[*memory_tools])

response2 = await agent.run(ConversationInput(
message="What outdoor activities would you recommend for my trip?"
))
print(response2.content)

# Agent remembers Rome trip and hiking preference, suggests Castelli Romani trails, etc.
```
The `LongTermMemory` class also provides methods for managing the stored memories directly. You can find the complete code example in the Ragbits repository [here](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/memory_tool_example.py).

## Binding dependencies via AgentRunContext
You can bind your external dependencies before the first access and safely use them in tools. After the first attribute lookup, the dependencies container freezes to prevent mutation during a run.
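
As an illustration only (the `deps` attribute and keyword names below are assumptions, not the documented API), binding might look like:

```python
from ragbits.agents import AgentRunContext

context = AgentRunContext()
context.deps.database = my_database  # assumed attribute; bind before the first access
result = await agent.run(WeatherInput(question="..."), context=context)
# Once a tool reads context.deps for the first time, the container freezes:
# later assignments raise instead of mutating state mid-run.
```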

@@ -154,4 +245,4 @@ tool_params = {
}
}
web_search_tool = get_web_search_tool("gpt-4o", tool_params)
```
```
48 changes: 48 additions & 0 deletions docs/how-to/agents/stream_downstream_agents.md
@@ -0,0 +1,48 @@
# How-To: Stream downstream agents with Ragbits

Ragbits [Agent][ragbits.agents.Agent] can call other agents as tools, creating a chain of reasoning where downstream agents provide structured results to the parent agent.

Using the streaming API, you can observe every chunk of output as it is generated, including tool calls, tool results, and final text - perfect for real-time monitoring or chat interfaces.

## Define a simple tool

A tool is just a Python function returning a JSON-serializable result. Here’s an example tool returning the current time for a given location:

```python
import json

--8<-- "examples/agents/downstream_agents_streaming.py:33:51"
```
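
For orientation, such a tool might look like the following (an illustrative sketch, not the exact contents of `downstream_agents_streaming.py`; the function name and locations are assumptions):

```python
import json
from datetime import datetime
from zoneinfo import ZoneInfo


def get_current_time(location: str) -> str:
    """Return the current time for a known location as a JSON string."""
    timezones = {"Tokyo": "Asia/Tokyo", "London": "Europe/London", "New York": "America/New_York"}
    timezone = timezones.get(location)
    if timezone is None:
        return json.dumps({"error": f"Unknown location: {location}"})
    now = datetime.now(ZoneInfo(timezone))
    return json.dumps({"location": location, "time": now.strftime("%H:%M")})
```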

## Create a downstream agent

The downstream agent wraps the tool with a prompt, allowing the LLM to use it as a function.

```python
from pydantic import BaseModel
from ragbits.core.prompt import Prompt
from ragbits.agents import Agent
from ragbits.agents._main import AgentOptions
from ragbits.core.llms import LiteLLM

--8<-- "examples/agents/downstream_agents_streaming.py:54:82"
```
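
Put together, a downstream agent of this shape can be sketched as follows (class names and prompt text are assumptions for illustration; the construction pattern mirrors the other examples on this page):

```python
class TimeInput(BaseModel):
    question: str


class TimePrompt(Prompt[TimeInput, str]):
    system_prompt = "You answer questions about the current time. Use the provided tool."
    user_prompt = "{{ question }}"


time_agent = Agent(
    llm=LiteLLM(model_name="gpt-4o-mini"),
    prompt=TimePrompt,
    tools=[get_current_time],
)
```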

## Create a parent QA agent

The parent agent can call downstream agents as tools. This lets the LLM reason and decide when to invoke the downstream agent.

```python
--8<-- "examples/agents/downstream_agents_streaming.py:85:111"
```
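
A sketch of the parent agent; note that passing the downstream agent directly in `tools`, as below, is an assumption based on the description above, not a confirmed call shape:

```python
class QAInput(BaseModel):
    question: str


class QAPrompt(Prompt[QAInput, str]):
    system_prompt = "Answer the user's question. Delegate time lookups to the downstream time agent."
    user_prompt = "{{ question }}"


qa_agent = Agent(
    llm=LiteLLM(model_name="gpt-4o-mini"),
    prompt=QAPrompt,
    tools=[time_agent],  # assumption: a downstream agent is wrapped as a tool by listing it here
)
```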

## Streaming output from downstream agents

Use `run_streaming` with an [AgentRunContext][ragbits.agents.AgentRunContext] to see output as it happens. Each chunk contains either text, a tool call, or a tool result. You can print agent names when they change and handle downstream agent events.

```python
import asyncio
from ragbits.agents import DownstreamAgentResult

--8<-- "examples/agents/downstream_agents_streaming.py:114:133"
```
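
A minimal consumption loop might look like this (the `context` keyword and the string chunk type are assumptions; only `run_streaming` and `DownstreamAgentResult` are named in these docs):

```python
import asyncio

from ragbits.agents import AgentRunContext, DownstreamAgentResult


async def main() -> None:
    context = AgentRunContext()
    stream = qa_agent.run_streaming(
        QAInput(question="What time is it in Tokyo?"),
        context=context,  # assumed keyword name
    )
    async for chunk in stream:
        if isinstance(chunk, DownstreamAgentResult):
            # Event emitted by the downstream agent (tool call, tool result, or text).
            print(f"\n[downstream] {chunk}")
        elif isinstance(chunk, str):
            print(chunk, end="", flush=True)


asyncio.run(main())
```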