This is a Retrieval-Augmented Generation (RAG) chatbot built with LangChain, FAISS, and Ollama for local LLM inference. It loads and processes .txt and .pdf documents from the docs/ folder, retrieves relevant content, and generates responses, all in roughly 60 lines of code.
✅ Retrieval-Augmented Generation (RAG) – Enhances responses using document retrieval.
✅ Supports .txt and .pdf files – Loads all documents from the docs/ folder.
✅ Uses FAISS for efficient vector search – Enables fast and scalable retrieval.
✅ Local AI-powered chatbot – Runs offline using sentence-transformers & Ollama.
✅ Gradio Web Interface – Provides a simple UI for user interaction.
Make sure you have Python 3.8+ installed, then run:
```
pip install -r requirements.txt
```

Place .txt and .pdf files inside the docs/ folder.
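The contents of requirements.txt are not reproduced here, but a plausible set of dependencies for the stack described above would look something like this (package names are assumptions inferred from the feature list, not copied from the repo):

```text
# Hypothetical requirements.txt; the repo's actual file may pin
# different packages or versions.
langchain
langchain-community
faiss-cpu
sentence-transformers
gradio
pypdf
```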
To run the Gradio UI:
```
python main.py
```

On startup, the script does the following (a code sketch follows this list):

- Reads all .txt and .pdf files from docs/.
- Splits documents into chunks.
- Converts text into vector embeddings using sentence-transformers.
- Stores embeddings in FAISS for retrieval.
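This ingestion pipeline maps closely onto standard LangChain components. Here is a minimal sketch assuming the langchain-community loaders and a small sentence-transformers model; the chunk sizes and model name are illustrative, not taken from main.py:

```python
# Illustrative ingestion pipeline; the actual main.py may use
# different loaders, chunk sizes, or embedding models.
from langchain_community.document_loaders import DirectoryLoader, TextLoader, PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load every .txt and .pdf file under docs/
txt_docs = DirectoryLoader("docs/", glob="**/*.txt", loader_cls=TextLoader).load()
pdf_docs = DirectoryLoader("docs/", glob="**/*.pdf", loader_cls=PyPDFLoader).load()

# Split into overlapping chunks so each embedding stays focused
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(txt_docs + pdf_docs)

# Embed with sentence-transformers and index the vectors in FAISS
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)
```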
At query time, the chatbot uses FAISS to retrieve the document chunks most relevant to the question, then feeds them into the Mistral LLM (via Ollama) to generate a response.
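The answer path can be sketched with LangChain's retrieval chain over the vector store built above. The model name "mistral" and the top-k value are assumptions based on the description, and this requires a local Ollama server with the model already pulled:

```python
# Sketch of the retrieval + generation step; chain style and model
# name are assumptions, not the exact main.py code.
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

llm = Ollama(model="mistral")  # assumes `ollama pull mistral` was run locally
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),  # top-4 chunks
)

print(qa.invoke({"query": "What do the docs say about X?"})["result"])
```

Wiring this into the web UI is then just a matter of wrapping the call in a function and passing it to `gr.ChatInterface(...).launch()`.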