AmritaGPT is a chatbot designed to answer questions about Amrita Vishwa Vidyapeetham, covering topics such as clubs, placements, entrance exams, and more. The system supports text-to-text conversation as well as speech-to-text and text-to-speech functionality.
AmritaGPT collects data from various sources, including the Amrita website, Quora, and other relevant platforms. This data is used by a Retrieval-Augmented Generation (RAG) pipeline built on a Large Language Model (LLM) to generate responses. The project relies on the Llama 3 / Gemini models, FAISS, and LangChain for its functionality. The API gateway is powered by FastAPI, with ngrok used temporarily for exposure.
For speech-to-text conversion, Whisper is employed; the transcribed input is sent to the LLM, and gTTS then converts the generated text into speech.
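The voice path described above can be sketched as three small functions. This is an illustrative outline rather than the project's actual code; the function names and file paths are assumptions, and it presumes `pip install openai-whisper gTTS`:

```python
def speech_to_text(audio_path: str) -> str:
    """Transcribe an audio file with OpenAI Whisper."""
    import whisper  # imported lazily so the rest of the module loads without it
    model = whisper.load_model("base")  # "base" is an illustrative model size
    return model.transcribe(audio_path)["text"]

def text_to_speech(text: str, out_path: str = "reply.mp3") -> str:
    """Convert the LLM's reply to speech with gTTS and save it as an mp3."""
    from gtts import gTTS
    gTTS(text=text, lang="en").save(out_path)
    return out_path

def voice_turn(audio_path: str, ask_llm) -> str:
    """One voice exchange: audio in, spoken reply out.
    `ask_llm` is any text-in/text-out callable."""
    question = speech_to_text(audio_path)
    reply = ask_llm(question)
    return text_to_speech(reply)
```

Any text-in/text-out callable (for example, a wrapper around the chat API) can be passed as `ask_llm`.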
The front end is developed using React.js, while Flask handles backend operations. The web UI and system integration are currently under development and will be completed shortly.
This section explains how to set up, run, and interact with the chatbot API.
- Python: Install Python 3.9 or higher.
- Dependencies: Ensure the required Python packages are installed.
- Environment File: Create a `.env` file and add your Gemini and HuggingFace keys.
- Text Data: Ensure a text file named `general.txt` exists in the root directory containing the knowledge base.
- Models:
  - HuggingFace `sentence-transformers/all-MiniLM-L6-v2`
  - Google Generative AI Embeddings (`embedding-001`)
  - Meta Llama `Llama-3-8B-Instruct`
1. Clone the repository:

   ```shell
   git clone <repository_url>
   cd <repository_directory>
   ```

2. Install dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Configure the `.env` file:

   ```
   GOOGLE_API_KEY=<your_google_api_key>
   HF_API_TOKEN=<your_token>
   ```

4. Start the server:

   ```shell
   python api.py
   ```

5. The API will be available at `http://127.0.0.1:8000`.
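As a sanity check for the `.env` step, key loading can be mimicked with a stdlib-only sketch. This is a tiny stand-in for a dotenv loader, not the project's actual loading code:

```python
import os

def load_env(path=".env"):
    """Parse KEY=VALUE lines from a .env-style file into os.environ.
    Blank lines and '#' comments are skipped; existing variables win."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# After loading, confirm both keys the server expects are visible:
# load_env()
# for key in ("GOOGLE_API_KEY", "HF_API_TOKEN"):
#     print(key, "set" if os.getenv(key) else "MISSING")
```

Using `setdefault` mirrors the common dotenv behavior of not overriding variables already set in the shell.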
- Description: Get chatbot response.
- Request Body:

  ```json
  {
    "session_id": "<optional_session_id>",
    "input_text": "<user_question>",
    "use_google": false
  }
  ```

  - `session_id` (optional): Reuse a session ID for conversation continuity.
  - `input_text`: The user query.
  - `use_google`: Use Google Generative AI (`true`) or HuggingFace (`false`).

- Response:

  ```json
  {
    "session_id": "<session_id>",
    "response": "<bot_response>",
    "history": [
      {"user": "<input_text>"},
      {"bot": "<response_text>"}
    ]
  }
  ```
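A minimal client for this schema might look like the following sketch. The `/chat` path and the function names are assumptions to be matched against the actual routes defined in `api.py`:

```python
import json
import urllib.request

def build_payload(input_text, session_id=None, use_google=False):
    """Assemble the request body documented above; session_id is optional."""
    payload = {"input_text": input_text, "use_google": use_google}
    if session_id is not None:
        payload["session_id"] = session_id
    return payload

def ask_bot(input_text, session_id=None, use_google=False,
            base_url="http://127.0.0.1:8000"):
    """POST the payload to the (assumed) /chat endpoint and return the JSON reply."""
    data = json.dumps(build_payload(input_text, session_id, use_google)).encode()
    req = urllib.request.Request(f"{base_url}/chat", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)  # {"session_id": ..., "response": ..., "history": [...]}
```

Reusing the `session_id` from a previous response keeps the conversation context alive across calls.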
- Chat History: Maintains context from the last two exchanges.
- Embedding Models: Supports both HuggingFace and Google Generative AI embeddings.
- Custom Prompts: Tailored for educational use cases.
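The two-exchange context window can be illustrated with a short sketch; the names here are illustrative, not the project's actual implementation:

```python
MAX_EXCHANGES = 2  # keep context from the last two user/bot exchanges

def trim_history(history):
    """Keep only the last MAX_EXCHANGES exchanges.
    Each exchange contributes two entries: {"user": ...} then {"bot": ...}."""
    return history[-2 * MAX_EXCHANGES:]
```

Capping the history this way bounds prompt size while preserving enough context for follow-up questions.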
- Change Models:
  - Update the `model_name` for HuggingFace embeddings in:

    ```python
    huggingface_embeddings = HuggingFaceEmbeddings(model_name="<new_model_name>")
    ```
- Modify Prompt:
  - Adjust the prompt template in `get_conversational_chain()` to fit your use case.
- Add New Endpoints:
  - Use FastAPI's routing capabilities to add more endpoints as needed.
- Model Loading Errors:
  - Ensure all required models are correctly placed in the `models` directory.
- Environment Variables Not Found:
  - Check that `.env` is correctly configured and loaded.
- API Not Starting:
  - Ensure all dependencies are installed and that Python 3.9 or higher is in use.
For issues, please contact Team IETE.
This project is developed by IETE Amrita SF, under the initiative of the 2023-24 team, Amrita Vishwa Vidyapeetham, Coimbatore.
© 2024 IETE Amrita. All rights reserved.