The goal of the InsightChain project is to create an intuitive and interactive chatbot application that enables users to engage in natural language conversations while extracting and processing information from uploaded PDF documents. By leveraging the ChatOllama model (Llama 3.2), InsightChain aims to provide accurate and contextually relevant responses based solely on the content of the PDFs, enhancing user experience and facilitating efficient information retrieval for everyday tasks.
- User-Friendly Interface: Intuitive design that lets users interact with the application through a web interface powered by Streamlit.
- PDF Upload and Processing: Users can upload multiple PDF documents, which the application processes to extract text content for querying.
- Conversational AI: Uses the ChatOllama model (Llama 3.2) to provide accurate responses based on the content extracted from the uploaded PDFs.
- Contextual Responses: Ensures that answers to user queries are derived directly from the uploaded documents, minimizing the risk of hallucination.
- Real-Time Interaction: Supports back-and-forth conversation, with questions about the uploaded PDFs or general topics.
- Chat History: Maintains a history of the conversation, allowing users to review previous interactions and responses.
- Text Chunking: Splits lengthy documents into smaller chunks to optimize processing and retrieval.
- Embedding: Leverages embeddings for efficient searching and retrieval of relevant information from the vector store (a rough pipeline sketch follows this list).
- Session State Management: Keeps track of the conversation state, uploaded documents, and chat history to provide a seamless user experience.
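The features above map onto a fairly standard retrieval pipeline. The sketch below illustrates it using the libraries named in this README (PyPDF2, LangChain, FAISS, langchain-ollama); the function names, chunk sizes, prompt, and the choice of `OllamaEmbeddings` are illustrative assumptions, not the actual implementation in `app_logic.py`, and it assumes `langchain-community` is available for the FAISS wrapper.

```python
# Illustrative sketch of the PDF -> chunks -> embeddings -> ChatOllama flow.
# Names, chunk sizes, and the embedding backend are assumptions, not app_logic.py.
from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_ollama import ChatOllama, OllamaEmbeddings


def build_vector_store(pdf_paths):
    """Extract text from the uploaded PDFs, chunk it, and index it in FAISS."""
    text = ""
    for path in pdf_paths:
        reader = PdfReader(path)
        for page in reader.pages:
            text += page.extract_text() or ""

    # Chunk long documents so retrieval returns focused passages.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_text(text)

    # Embed each chunk and store the vectors for similarity search.
    embeddings = OllamaEmbeddings(model="llama3.2")
    return FAISS.from_texts(chunks, embedding=embeddings)


def answer_from_pdfs(question, store):
    """Retrieve the most relevant chunks and ask Llama 3.2 to answer from them only."""
    docs = store.similarity_search(question, k=4)
    context = "\n\n".join(doc.page_content for doc in docs)
    llm = ChatOllama(model="llama3.2")
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm.invoke(prompt).content
```

In the Streamlit app itself, the vector store and chat history would be kept in `st.session_state` so they persist across reruns of the script.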
├── app_logic.py
├── app_ui.py
├── requirements.txt
├── LICENSE
└── README.md
Please click the link to watch the demo (the file is larger than 100 MB, so it cannot be uploaded directly to this repository): https://drive.google.com/file/d/1zZ1nDZqV7mFTZLLPFPOv3ITJ0qftAcCX/view?usp=sharing
To run this project, ensure you have the following dependencies installed:
streamlit
PyPDF2
langchain
langchain-ollama
faiss-cpu
transformers
torch
You can install the required packages using pip:
pip install streamlit PyPDF2 langchain langchain-ollama faiss-cpu transformers torch

Model Setup

Before running the application, you need to install the Ollama CLI and download the Llama 3.2 model:
- Install the Ollama CLI by following the instructions from the Ollama website.
- After installing the CLI, download the Llama model by running:
ollama pull llama3.2

Clone the repository:

git clone https://github.com/mrinmoycyber/InsightChain.git

Navigate to the project directory:

cd InsightChain

Run the Streamlit app:
streamlit run app_ui.py
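If the app launches but responses fail, a quick way to confirm that the local Ollama server and the pulled model are reachable from Python (independent of InsightChain's own code) is a minimal check like the one below; the prompt text is arbitrary.

```python
# Optional sanity check (not part of InsightChain): confirm that the local
# Ollama server and the pulled llama3.2 model are reachable from Python.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2")
reply = llm.invoke("Reply with the single word: ready")
print(reply.content)  # prints a short model response if the setup is working
```

You can also run `ollama list` in a terminal to confirm that the llama3.2 model was downloaded.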