A modern web application built with Reflex that combines Google Search with a local Ollama LLM to provide AI-powered search responses. The app creates an intelligent search agent that queries Google for recent information and generates comprehensive responses using the Llama 3.2 model.
- 🔍 Google Search Integration - Real-time web search using the Serper API
- 🤖 Local LLM Processing - Powered by Ollama and the Llama 3.2 model
- 🎨 Modern UI - Gradient design with responsive components
- ⚡ Real-time Updates - Async processing with loading states
- 🛡️ Error Handling - Comprehensive error management and user feedback
- 📱 Responsive Design - Works seamlessly across all devices
Before running this application, make sure you have the following installed:
- Python 3.10 or higher
- Ollama installed and running
- A Serper API key (get one from serper.dev)
- Clone the repository:

  ```bash
  git clone https://github.com/bassemalyyy/LangChain-Search-Agent.git
  cd LangChain-Search-Agent
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  .\venv\Scripts\activate   # Windows
  source venv/bin/activate  # macOS/Linux
  ```
- Install dependencies:

  ```bash
  pip install reflex langchain langchain-community langchain-ollama
  ```
- Install and set up Ollama. Download and install Ollama from ollama.ai, then pull the Llama 3.2 model and start the service:

  ```bash
  ollama pull llama3.2
  ollama serve
  ```
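Before launching the app, you can confirm the Ollama server is reachable. Ollama listens on port 11434 by default and answers a plain GET on its root URL; the helper below is an illustrative sketch, not part of the project code:

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url.

    A running Ollama instance replies with HTTP 200 to a GET on its root URL.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, make sure `ollama serve` is running before starting Reflex.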
- Configure the API key. Replace the Serper API key in the code with your own:

  ```python
  serper_search = GoogleSerperAPIWrapper(serper_api_key="serper_api_key")
  ```
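Hardcoding credentials makes them easy to leak into version control. A safer pattern (a sketch, assuming you export a `SERPER_API_KEY` environment variable in your shell) is to read the key from the environment and pass it to the wrapper:

```python
import os

def load_serper_key() -> str:
    """Fetch the Serper API key from the environment, failing loudly if missing."""
    key = os.environ.get("SERPER_API_KEY")
    if not key:
        raise RuntimeError("SERPER_API_KEY is not set; get a key at serper.dev")
    return key

# Then, in the app code:
# serper_search = GoogleSerperAPIWrapper(serper_api_key=load_serper_key())
```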
- Initialize Reflex:

  ```bash
  reflex init
  ```
- Run the development server:

  ```bash
  reflex run
  ```
- Open your browser and navigate to http://localhost:3000 to access the application.
```
langchain_reflex_agent/
├── assets/
├── langchain_reflex_agent/           # Application package
│   ├── __init__.py
│   └── langchain_reflex_agent.py     # Main Reflex entrypoint
├── rxconfig.py                       # Reflex build configuration
├── requirements.txt                  # Python dependencies
├── README.md                         # Project overview & setup instructions
└── .gitignore                        # Ignored files
```
You can change the Ollama model by modifying this line in `main.py`:

```python
ollama_llm = OllamaLLM(model="llama3.2")  # Change to any model you have installed
```
Available models can be listed with:

```bash
ollama list
```
- Enter your search query in the input field
- Click "Submit Query" or press Enter
- The application will:
- Search Google for relevant information
- Process the results using the local Llama 3.2 model
- Generate a comprehensive AI-powered response
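The three steps above amount to a search-then-summarize pipeline. A minimal sketch is shown below, with the search and LLM calls injected as plain callables so that the real `GoogleSerperAPIWrapper.run` and `OllamaLLM.invoke` methods could be plugged in; the function name and prompt wording are illustrative, not the project's actual code:

```python
from typing import Callable

def answer_query(query: str,
                 search: Callable[[str], str],
                 llm: Callable[[str], str]) -> str:
    """Search the web, then ask the LLM to answer using the results."""
    results = search(query)  # step 1: Google search (e.g. via Serper)
    prompt = (
        "Answer the question using the search results below.\n\n"
        f"Search results:\n{results}\n\n"
        f"Question: {query}"
    )
    return llm(prompt)       # steps 2-3: local Llama 3.2 generates the answer
```

Keeping the two external services behind simple callables also makes the pipeline easy to unit-test with stubs.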
- `reflex` - Web framework for Python
- `langchain` - LLM application framework
- `langchain-community` - Community integrations for LangChain
- `langchain-ollama` - Ollama integration for LangChain
- Reflex for the amazing Python web framework
- Ollama for local LLM capabilities
- LangChain for LLM orchestration
- Serper for Google Search API
If you encounter any issues or have questions, please:
- Check the troubleshooting section above
- Search existing GitHub Issues
- Create a new issue with detailed information about your problem
⭐ If you found this project helpful, please give it a star!