RAG Chatbot with LangChain & Ollama

This is a Retrieval-Augmented Generation (RAG) chatbot built with LangChain, FAISS, and Ollama for local LLM inference. It loads and processes .txt and .pdf documents from the docs/ folder, retrieves relevant content, and generates responses, all in ~60 lines of code.

Features

Retrieval-Augmented Generation (RAG) – Enhances responses using document retrieval.
Supports .txt and .pdf files – Loads all documents from the docs/ folder.
Uses FAISS for efficient vector search – Enables fast and scalable retrieval.
Local AI-powered chatbot – Runs offline using sentence-transformers & Ollama.
Gradio Web Interface – Provides a simple UI for user interaction.


Installation

1. Install Dependencies

Make sure you have Python 3.8+ installed, then run:

pip install -r requirements.txt
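
The exact dependency list lives in requirements.txt; based on the stack described above, it plausibly looks something like this (package names are an assumption, not a copy of the actual file):

langchain
langchain-community
faiss-cpu
sentence-transformers
pypdf
gradio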

2. Add Your Documents

Place .txt and .pdf files inside the docs/ folder.

3. Start the Web Interface

To run the Gradio UI:

python main.py
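
Note that Ollama itself must be installed and running on your machine, with the Mistral model pulled before first use:

ollama pull mistral

Once the script starts, Gradio prints a local URL (typically http://127.0.0.1:7860) to open in your browser.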

How It Works

Document Loading

  • Reads all .txt and .pdf files from docs/.
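
A minimal sketch of what this step might look like using LangChain's community loaders (the exact loader classes used in main.py are an assumption):

from langchain_community.document_loaders import DirectoryLoader, TextLoader, PyPDFLoader

# Each file type needs its own loader class
txt_loader = DirectoryLoader("docs/", glob="**/*.txt", loader_cls=TextLoader)
pdf_loader = DirectoryLoader("docs/", glob="**/*.pdf", loader_cls=PyPDFLoader)
documents = txt_loader.load() + pdf_loader.load()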

Text Splitting & Embeddings

  • Splits documents into chunks.
  • Converts text into vector embeddings using sentence-transformers.
  • Stores embeddings in FAISS for retrieval.
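
A rough sketch of this pipeline (the chunk sizes and embedding model name are assumptions, not necessarily what main.py uses):

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Split documents into overlapping chunks so retrieval stays focused
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(documents)

# Embed each chunk with a sentence-transformers model and index in FAISS
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)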

Query Processing

  • Uses FAISS to retrieve the document chunks most relevant to the user's query.
  • Feeds them into the Mistral LLM (via Ollama) for response generation.
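
Wired together with LangChain's RetrievalQA chain and a Gradio chat UI, the final step could look roughly like this (a sketch under the assumptions above, not the repository's exact code):

import gradio as gr
from langchain.chains import RetrievalQA
from langchain_community.llms import Ollama

# Retrieve relevant chunks from FAISS and pass them to Mistral via Ollama
llm = Ollama(model="mistral")
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())

def chat(message, history):
    # RetrievalQA returns a dict; the generated answer is under "result"
    return qa_chain.invoke({"query": message})["result"]

gr.ChatInterface(chat).launch()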
