Local LLM UI is a Flask application that provides a simple web interface for interacting with local LLMs via ollama (I used the gguf model file from my Hugging Face repo doaonduty/llama-3.1-8b-instruct-gguf, which I converted from vanilla llama-3.1-8b-instruct). It has some basic UI features like enter-to-send, a spinning wheel, etc. I used this project to remind myself that it is the simple things in life that keep it going. If you are wondering why langchain is in the requirements: I plan to add a RAG module soon :).
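Before running the UI, a quick way to confirm that ollama and langchain_ollama can reach your local model is a snippet like the one below. This is only a sanity-check sketch; the model tag "llama3.1" is an assumption, so use whatever tag you pulled or created for the gguf file:

```python
# Sanity check: ask the local model a question through langchain_ollama.
# Assumes ollama is running locally and a model is available under the
# tag "llama3.1" (adjust to your own model tag).
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")
print(llm.invoke("Say hello in one sentence.").content)
```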
- Python 3.12+
- Flask 3.1.0
- langchain_ollama 0.2.0 (https://python.langchain.com/docs/integrations/providers/ollama/)
- ollama 0.3.3 (https://ollama.com/download)
- Clone the repository:
git clone https://github.com/doaonduty/LocalLLMUI.git
- Make sure you have a virtual environment (venv) set up to keep things clean and isolated.
- Install dependencies:
pip install -r requirements.txt
- Run the application:
python local_llm_ui.py
- Open a web browser and navigate to
http://localhost:5000
- Enter a question or prompt in the input field and click the "Send" button
- View the model's response in the conversation area
The application provides a single endpoint:
/chat: Handles POST requests with a JSON payload containing the user's input. Example:

curl -X POST -H "Content-Type: application/json" -d '{"message": "hello are you there?"}' http://127.0.0.1:5000/chat | jq
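For reference, here is a minimal sketch of how a /chat endpoint like this can be wired up with Flask and langchain_ollama. The model tag ("llama3.1") and the response JSON shape are assumptions for illustration, not necessarily what local_llm_ui.py does:

```python
# Minimal sketch of a /chat endpoint backed by langchain_ollama.
# Assumptions: ollama is running locally and serving a model tagged
# "llama3.1"; the real local_llm_ui.py may differ in its details.
from flask import Flask, request, jsonify
from langchain_ollama import ChatOllama

app = Flask(__name__)
llm = ChatOllama(model="llama3.1")  # swap in your own model tag

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.get_json().get("message", "")
    reply = llm.invoke(user_message)  # returns an AIMessage
    return jsonify({"response": reply.content})

if __name__ == "__main__":
    app.run(port=5000)
```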
Contributions are welcome! Please submit a pull request with your changes.