example using GROQ
maeste committed Sep 24, 2024
1 parent 8a785aa commit 39ce33b
Showing 5 changed files with 138 additions and 2 deletions.
6 changes: 4 additions & 2 deletions examples/README.md
@@ -24,8 +24,10 @@ The example consists of four agents:
3. **ActionAgent**: This agent writes what it receives (the suggested books) to a file (`books_suggested.txt`), overwriting the file each time. This agent's code is not part of the wise-agents framework; it is defined for this example only in `custom_agent.py`, located in the example directory.
4. **SequentialCoordinator**: Takes care of coordinating the handling of the user's request, delegating the work to the other agents in a predetermined order.



* [run_examples_podman](./run_examples_podman/README.md)

This guide walks you through running any of the examples in the examples directory in Podman containers.

* [memory_agentic_chatbot_groq](./memory_agentic_chatbot_groq/README.md)

This guide walks you through running a practical example of a multi-agent system using Wise Agents. In this example, two agents (a web interface agent and an intelligent agent) are started, allowing you to experiment with agent communication and interaction in a simulated environment. It is the same example described in [memory_agentic_chatbot](./memory_agentic_chatbot/), but instead of using a local LLM it uses [https://groq.com/](https://groq.com/) for model inference. Obtain an API key from [https://console.groq.com/keys](https://console.groq.com/keys) and set the GROQ_API_KEY environment variable before playing with this example.
109 changes: 109 additions & 0 deletions examples/memory_agentic_chatbot_groq/README.md
@@ -0,0 +1,109 @@

# Wise Agents Example: Memory Agentic Chatbot with the Groq API

This guide walks you through running a practical example of a multi-agent system using Wise Agents. In this example, two agents (a web interface agent and an intelligent agent) are started, allowing you to experiment with agent communication and interaction in a simulated environment. It is the same example described in [memory_agentic_chatbot](../memory_agentic_chatbot/), but instead of using a local LLM it uses [https://groq.com/](https://groq.com/) for model inference. Obtain an API key from [https://console.groq.com/keys](https://console.groq.com/keys) and set the GROQ_API_KEY environment variable before playing with this example.
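
Once the key is set, you can sanity-check it against the same OpenAI-compatible Groq endpoint and model that the example's intelligent agent is configured with (base URL and model name taken from `intelligent-agent.yaml`). The following standalone Python sketch uses the `openai` package directly and is not part of the example itself:

```python
import os

import openai

# Same endpoint and model configured in intelligent-agent.yaml
client = openai.OpenAI(base_url="https://api.groq.com/openai/v1",
                       api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama-3.1-70b-versatile",
    messages=[{"role": "user", "content": "Hello, my name is Ada"}],
)
print(response.choices[0].message.content)
```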


## Example Overview

The example consists of two main agents:

1. **Web Interface Agent**: Simulates a web-based client for interacting with other agents.
2. **Intelligent Agent**: Handles requests and provides intelligent responses based on memory and context.

These agents are defined in YAML configuration files located in the `examples/memory_agentic_chatbot_groq` directory.

## Running the Example

### Step 1: Clone the Repository

If you haven't already, clone the Wise Agents repository from GitHub:

```bash
git clone https://github.com/wise-agents/wise-agents.git
cd wise-agents
```

### Step 2: Configure and Start Redis

In this step, we will set up Redis for agent context and registry.

1. **Create a hidden directory `.wise-agents`** in the root of your project:

```bash
mkdir .wise-agents
```

2. **Copy the Redis configuration file** shown in the [`.wise-agents` directory](https://github.com/wise-agents/wise-agents/tree/main/.wise-agents) of the GitHub repo: create a file named `redis-config.yaml` inside `.wise-agents` with the following content:

```yaml
redis:
  host: localhost
  port: 6379
```
3. **Ensure Redis is installed and running**. You can start Redis as a Podman/Docker image by following the instructions in the [Redis README.MD](../../redis/README.MD). A quick connectivity check is sketched below.
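
If you want to verify the connection parameters from `redis-config.yaml` before moving on, a minimal sketch using the `redis` Python package (assuming it is installed) is:

```python
import redis

# Host and port from .wise-agents/redis-config.yaml
client = redis.Redis(host="localhost", port=6379)
print(client.ping())  # prints True when Redis is reachable
```
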
### Step 3: Start Artemis

To support asynchronous communication between the agents, you need to have Artemis up and running.

1. Start the Artemis Podman/Docker image following the instructions in the [Artemis README.MD](../../artemis/README.MD).
2. Set the environment variables for the Artemis secure login. If you kept the default configuration when starting Artemis in the previous step, they are:
```bash
export STOMP_USER=artemis
export STOMP_PASSWORD=artemis
```
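
To confirm the broker accepts these credentials before starting the agents, here is a small optional check; it assumes the `stomp.py` Python package is installed, which the example itself does not ask you to use directly:

```python
import os

import stomp

# The example's transports use STOMP on localhost:61616 (see the YAML files below)
conn = stomp.Connection(host_and_ports=[("localhost", 61616)])
conn.connect(os.environ["STOMP_USER"], os.environ["STOMP_PASSWORD"], wait=True)
print("Connected to Artemis over STOMP")
conn.disconnect()
```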


### Step 4: Start the Intelligent Agent

In one console, from the project's home directory, run the intelligent agent using the following command:

```bash
python src/wiseagents/cli/wise_agent_cli.py examples/memory_agentic_chatbot_groq/intelligent-agent.yaml
```

This will initialize the intelligent agent, which will be ready to respond to requests sent by the web interface agent.


### Step 5: Start the Web Interface Agent

In a second console, navigate to the project's home directory and run the web interface agent using the provided YAML configuration file:

```bash
python src/wiseagents/cli/wise_agent_cli.py examples/memory_agentic_chatbot_groq/web-interface.yaml
```

This will initialize the Assistant agent with its web interface. You should see logs indicating that the agent has started and is waiting for requests. The console will also show a web server listening at [http://127.0.0.1:7860](http://127.0.0.1:7860):

```plain-text
Running on local URL: http://127.0.0.1:7860
```
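
If you prefer a quick check before opening a browser, a couple of lines of standard-library Python will confirm the interface is listening (purely a convenience, not part of the example):

```python
import urllib.request

# Expect HTTP 200 from the page served by the web interface agent
with urllib.request.urlopen("http://127.0.0.1:7860") as response:
    print(response.status)
```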

### Step 6: Interaction

Once both agents are up and running, you can use the web interface agent as a chatbot; it will send your requests to the intelligent agent. You can follow the interaction between the two agents through the logs in both consoles.

### Step 7: Experiment

You can experiment with different agent configurations or modify the agent behaviors by editing the YAML files located in the `examples/memory_agentic_chatbot_groq` directory. These configuration files define the agents' properties, including memory, communication methods, and response patterns.

## Understanding the YAML Configuration

- **web-interface.yaml**: Defines the web interface agent, which serves as the client interface for interacting with other agents.
- **intelligent-agent.yaml**: Defines the intelligent agent, which processes the requests and generates responses based on the provided input. *Note:* This agent uses the GROQ_API_KEY environment variable, which you must set before launching it.

These YAML files include the specific `WiseAgent` classes and configuration needed to run the agents. Feel free to explore and modify these files to customize the agents' behavior.

## Additional Resources

For more information about the architecture and advanced configurations of wise-agents, refer to the [Wise Agents Architecture Document](wise_agents_architecture.md), which provides insights into how the system can be scaled and deployed in distributed environments.

## Conclusion

By following these steps, you have successfully run a simple memory-agentic chatbot using Wise Agents. You can now explore further by modifying agent behaviors, adding new agents, or experimenting with different message flows.

For any further assistance, feel free to refer to the official Wise Agents documentation or reach out to the repository maintainers.
13 changes: 13 additions & 0 deletions examples/memory_agentic_chatbot_groq/intelligent-agent.yaml
@@ -0,0 +1,13 @@
---
!wiseagents.agents.ChatWiseAgent
_description: This is another test agent
_llm: !wiseagents.llm.OpenaiAPIWiseAgentLLM
  _model_name: llama-3.1-70b-versatile
  _remote_address: https://api.groq.com/openai/v1
  _api_key: ${GROQ_API_KEY}
  _system_message: Answer my greeting saying Hello and my name
_name: WiseIntelligentAgent
_transport: !wiseagents.transports.StompWiseAgentTransport
  _host: localhost
  _port: 61616
  _agent_name: WiseIntelligentAgent
11 changes: 11 additions & 0 deletions examples/memory_agentic_chatbot_groq/web-interface.yaml
@@ -0,0 +1,11 @@
---
!wiseagents.agents.AssistantAgent
_description: This is a test agent
_name: AssistantAgent
_destination_agent_name: WiseIntelligentAgent
_transport: !wiseagents.transports.StompWiseAgentTransport
  _host: localhost
  _port: 61616
  _agent_name: AssistantAgent


1 change: 1 addition & 0 deletions src/wiseagents/llm/openai_API_wise_agent_LLM.py
@@ -49,6 +49,7 @@ def __getstate__(self) -> object:

    def connect(self):
        '''Connect to the remote machine.'''
        print(f"Connecting to WiseAgentLLM on remote machine at {self.remote_address} with API key {self.api_key}")
        self.client = openai.OpenAI(base_url=self.remote_address,
                                    api_key=self.api_key)

