diff --git a/.env b/.env
new file mode 100644
index 0000000..a65378c
--- /dev/null
+++ b/.env
@@ -0,0 +1,2 @@
+OPENAI_API_KEY="INSERTYOURAPIKEYHERE"
+GROQ_API_KEY="INSERTYOURAPIKEYHERE"
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
new file mode 100644
index 0000000..dd84ea7
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,38 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**To Reproduce**
+Steps to reproduce the behavior:
+1. Go to '...'
+2. Click on '....'
+3. Scroll down to '....'
+4. See error
+
+**Expected behavior**
+A clear and concise description of what you expected to happen.
+
+**Screenshots**
+If applicable, add screenshots to help explain your problem.
+
+**Desktop (please complete the following information):**
+ - OS: [e.g. iOS]
+ - Browser [e.g. chrome, safari]
+ - Version [e.g. 22]
+
+**Smartphone (please complete the following information):**
+ - Device: [e.g. iPhone6]
+ - OS: [e.g. iOS8.1]
+ - Browser [e.g. stock browser, safari]
+ - Version [e.g. 22]
+
+**Additional context**
+Add any other context about the problem here.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
new file mode 100644
index 0000000..bbcbbe7
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+**Is your feature request related to a problem? Please describe.**
+A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+**Describe the solution you'd like**
+A clear and concise description of what you want to happen.
+
+**Describe alternatives you've considered**
+A clear and concise description of any alternative solutions or features you've considered.
+
+**Additional context**
+Add any other context or screenshots about the feature request here.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 0000000..0f28be2
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,13 @@
+## Contributing
+
+We welcome contributions! Please read the guidelines in this document before submitting a pull request.
+
+### Note for Contributors
+
+This project is based on Groq's API, not the original Together API. If you are contributing, please ensure compatibility and optimizations are aligned with Groq's specifications and guidelines.
+
+## Contact
+
+For any questions or feedback, please open an issue in this repository.
+
+---
diff --git a/README.md b/README.md
index 9ad8e66..d6d1e4b 100644
--- a/README.md
+++ b/README.md
@@ -1,160 +1,162 @@
-# Mixture-of-Agents (MoA)
-[License](LICENSE)
-[Paper](https://arxiv.org/abs/2406.04692)
-[Discord](https://discord.com/invite/9Rk6sSeWEG)
-[Twitter](https://twitter.com/togethercompute)
-
+1. **Clone the Repository**:
+
+ ```sh
+ git clone https://github.com/LebToki/MoA.git
+ cd MoA
+ ```
-
-Overview · Quickstart · Advanced example · Interactive CLI Demo · Evaluation · Results · Credits
+2. **Install Dependencies**:
-## Overview
+
+ ```sh
+ pip install -r requirements.txt
+ ```
+
+2.1. **List of Dependencies**
-Mixture of Agents (MoA) is a novel approach that leverages the collective strengths of multiple LLMs to enhance performance, achieving state-of-the-art results. By employing a layered architecture where each layer comprises several LLM agents, **MoA significantly outperforms GPT-4 Omni’s 57.5% on AlpacaEval 2.0 with a score of 65.1%**, using only open-source models!
+
+- **openai**: OpenAI API client library.
+- **fire**: A Python library for creating command line interfaces.
+- **loguru**: A library to make logging in Python simpler and more readable.
+- **datasets**: Hugging Face's library for accessing and managing datasets.
+- **typer**: A library for building command line interface applications.
+- **rich**: A Python library for rich text and beautiful formatting in the terminal.
+- **Flask**: A micro web framework for Python.
+- **Flask-SQLAlchemy**: Adds SQLAlchemy support to Flask applications.
+- **Flask-Uploads**: A flexible library to handle file uploading in Flask.
+- **Werkzeug**: A comprehensive WSGI web application library.
+- **Flask-Migrate**: Handles SQLAlchemy database migrations for Flask applications using Alembic.
+- **PyMuPDF (fitz)**: A Python binding for MuPDF – a lightweight PDF and XPS viewer.
+- **python-docx**: A Python library for creating and updating Microsoft Word (.docx) files.
-## Quickstart: MoA in 50 LOC
-To get started with using MoA in your own apps, see `moa.py`. In this simple example, we'll use 2 layers and 4 LLMs. You'll need to:
+3. **Set Up Environment Variables**:
-1. Install the Together Python library: `pip install together`
-2. Get your [Together API Key](https://api.together.xyz/settings/api-keys) & export it: `export TOGETHER_API_KEY=`
-3. Run the python file: `python moa.py`
+ Create a `.env` file in the root directory and add your API keys:
-
+ ```
+ GROQ_API_KEY=your_groq_api_key
+ OPENAI_API_KEY=your_openai_api_key
+ DEBUG=0
+ ```
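For illustration, the `.env` entries above are plain `KEY=value` pairs. The sketch below parses that format; it is included for clarity only, since the app itself most likely loads the file with a library such as python-dotenv (an assumption, not something this repository states):

```python
# Minimal sketch of parsing the KEY=value .env format shown above.
# Illustrative only; the real app likely uses python-dotenv (assumption).
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

print(parse_env('GROQ_API_KEY="abc"\nDEBUG=0'))
# → {'GROQ_API_KEY': 'abc', 'DEBUG': '0'}
```

Quotes around values (as in the sample `.env`) are stripped, so `GROQ_API_KEY="abc"` and `GROQ_API_KEY=abc` read the same.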
-## Multi-layer MoA Example
+### Running the Application
-In the previous example, we went over how to implement MoA with 2 layers (4 LLMs answering and one LLM aggregating). However, one strength of MoA is being able to go through several layers to get an even better response. In this example, we'll go through how to run MoA with 3+ layers in `advanced-moa.py`.
-
-```python
-python advanced-moa.py
+ Run the Flask application with `python app.py`, then open your web browser and navigate to the following URL to access the web-based interface:
+```sh
+ http://127.0.0.1:5000/
+```
-
-## Interactive CLI Demo
+## Usage
-This interactive CLI demo showcases a simple multi-turn chatbot where the final response is aggregated from various reference models.
+### Interacting with the MoA Groq Chatbot
-To run the interactive demo, follow these 3 steps:
+- **Model Selection**: Choose the model you want to use from the dropdown menu. The default model is `llama3-70b-8192`, which balances performance and speed.
+
+- **Temperature Control**: Adjust the temperature setting to control the randomness and creativity of the chatbot's responses. The default value is `0.7`, providing a good balance between deterministic and varied outputs.
+
+- **Max Tokens**: Define the maximum number of tokens (roughly word fragments) the response may contain. The default is `2048`, which allows for comprehensive answers without overwhelming verbosity.
+
+- **Create Your Topics**: Easily create new conversation topics by entering your desired topic names in the text field provided. This helps organize your interactions and revisit previous conversations.
+
+- **Choose Your Topic**: Select a topic by clicking on it in the sidebar. The chat interface will load on the right, allowing you to continue your discussion seamlessly.
+
+- **Instruction Input**: Enter your prompt or instruction in the text area. This is where you ask questions or provide commands to the chatbot.
+
+- **Theme Toggle**: Enhance your user experience by switching between light and dark modes. Use the "Switch to Dark Mode" button to toggle themes based on your preference.
+
+- **Submit and View Responses**: After filling in the necessary fields, submit the form to receive a response from the MoA Groq Chatbot. The response will be displayed on the same page, within the chat interface.
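The controls above map onto standard chat-completion parameters. As a hedged sketch (the function name and payload shape are illustrative assumptions, not the project's actual code), the form fields could be assembled into a request like this:

```python
# Illustrative only: how the UI fields (model, temperature, max tokens,
# instruction) might map onto a chat-completion request payload.
# Defaults mirror the README text; the app's real code may differ.
def build_request(instruction: str,
                  model: str = "llama3-70b-8192",
                  temperature: float = 0.7,
                  max_tokens: int = 2048) -> dict:
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": instruction}],
    }

req = build_request("Summarize MoA in one sentence.")
print(req["model"], req["temperature"], req["max_tokens"])
# → llama3-70b-8192 0.7 2048
```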
-1. Export Your API Key: `export TOGETHER_API_KEY={your_key}`
-2. Install Requirements: `pip install -r requirements.txt`
-3. Run the script: `python bot.py`
+### Additional Features
-The CLI will prompt you to input instructions interactively:
+- **Create New Conversations**: Use the sidebar to create new conversation topics, helping you manage different discussions effectively.
+
+- **Reset All Conversations**: If needed, reset all conversations from the sidebar to start fresh.
-1. Start by entering your instruction at the ">>>" prompt.
-2. The system will process your input using the predefined reference models.
-3. It will generate a response based on the aggregated outputs from these models.
-4. You can continue the conversation by inputting more instructions, with the system maintaining the context of the multi-turn interaction.
+This intuitive interface makes it easy to engage with the MoA Groq Chatbot, providing a seamless and interactive user experience.
-### [Optional] Additional Configuration
-The demo will ask you to specify certain options but if you want to do additional configuration, you can specify these parameters:
+### Planned Features
-- `--aggregator`: The primary model used for final response generation.
-- `--reference_models`: List of models used as references.
-- `--temperature`: Controls the randomness of the response generation.
-- `--max_tokens`: Maximum number of tokens in the response.
-- `--rounds`: Number of rounds to process the input for refinement. (num rounds == num of MoA layers - 1)
-- `--num_proc`: Number of processes to run in parallel for faster execution.
-- `--multi_turn`: Boolean to toggle multi-turn interaction capability.
+- **Uploading Documents**: The MoA Groq Chatbot will support file uploads and interact with the content of the uploaded documents, adding functionality to handle file uploads, process the contents of these files, and integrate the results into the chatbot's conversation flow.
+- **Refine the styling**: With the latest Bootstrap and jQuery, we have room to enhance the UI/UX further.
+- **Chronological ordering**: Move recent chats to the top (DESC order) to cut down on scrolling.
+- **Further enhancement of the output**: Currently, not much has been implemented to control the output except basic styling. This is an area to be worked on based on various use cases.
-## Evaluation
-We provide scripts to quickly reproduce some of the results presented in our paper
-For convenience, we have included the code from [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval),
-[MT-Bench](https://github.com/lm-sys/FastChat), and [FLASK](https://github.com/kaistAI/FLASK), with necessary modifications.
-We extend our gratitude to these projects for creating the benchmarks.
+## File Structure
-### Preparation
+- **MoA/**
+ - `app.py` - Flask application
+ - `bot.py` - Main chatbot logic
+ - `utils.py` - Utility functions
+ - `requirements.txt` - Python dependencies
+ - **templates/**
+ - `index.html` - HTML template for the web interface
+ - `chat.html` - HTML template for the chat interface
+ - **static/**
+ - `style.css` - CSS styles for the web interface
+    - `script.js` - JavaScript for theme switching and UI enhancements
+ - `bot.png` - favicon
-```bash
-# install requirements
-pip install -r requirements.txt
-cd alpaca_eval
-pip install -e .
-cd FastChat
-pip install -e ".[model_worker,llm_judge]"
-cd ..
-# setup api keys
-export TOGETHER_API_KEY=
+
+This work by Tarek Tarabichi is licensed under CC BY 4.0
-
-