Description:
Right now, the project is designed for OpenAI endpoints only. I propose adding support for Ollama, which serves LLMs locally at http://localhost:11434. This would let users run models offline without an API key.
Proposed Changes:
- Detect when the Base URL points to Ollama (`http://localhost:11434`).
- Update the `/fetch-models` route to query Ollama's `/api/tags` endpoint (see the sketch after this list).
- Adjust `/save-settings` so it doesn't require an API key when using Ollama (a sketch follows under Next Steps).
- Ensure the model dropdown populates with installed Ollama models (e.g., `tinyllama:latest`, `llama3.2:3b`, ...).
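Here is a minimal sketch of how the detection and model listing could look. It assumes the app is a Flask service and uses the `requests` library; the `is_ollama_url` helper and the `base_url` query parameter are illustrative names, not existing code. The `/api/tags` response shape (`{"models": [{"name": ...}, ...]}`) is Ollama's documented format.

```python
# Hypothetical sketch, not the project's existing code.
from urllib.parse import urlparse

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)


def is_ollama_url(base_url: str) -> bool:
    """Heuristic: treat the default local Ollama address as an Ollama endpoint."""
    parsed = urlparse(base_url)
    return parsed.hostname in ("localhost", "127.0.0.1") and parsed.port == 11434


@app.route("/fetch-models")
def fetch_models():
    base_url = request.args.get("base_url", "")
    if is_ollama_url(base_url):
        # Ollama lists installed models at GET /api/tags.
        resp = requests.get(f"{base_url.rstrip('/')}/api/tags", timeout=5)
        resp.raise_for_status()
        names = [m["name"] for m in resp.json().get("models", [])]
        return jsonify(models=names)  # e.g. ["tinyllama:latest", "llama3.2:3b"]
    # ... existing OpenAI model-listing logic stays here ...
    return jsonify(models=[])
```

Matching only the default `localhost:11434` address is deliberately conservative; a remote or re-ported Ollama instance would need an explicit user-facing toggle instead.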
Next Steps:
Patch `app.py` to support Ollama endpoints, update the README with setup instructions, and test against locally installed models.
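As part of patching `app.py`, the key check in `/save-settings` might be relaxed along these lines, continuing the Flask sketch above (the `base_url`/`api_key` JSON fields are assumed names):

```python
@app.route("/save-settings", methods=["POST"])
def save_settings():
    data = request.get_json(force=True)
    base_url = data.get("base_url", "")
    api_key = data.get("api_key", "")
    # Local Ollama serves without authentication, so only require a key
    # for non-Ollama (e.g. OpenAI) endpoints. Reuses is_ollama_url above.
    if not is_ollama_url(base_url) and not api_key:
        return jsonify(error="API key is required for this endpoint"), 400
    # ... persist settings exactly as before ...
    return jsonify(ok=True)
```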
Note: I’m open to implementing this feature myself.