It would be great to add support for local/self-hosted LLM providers such as:

- Ollama
- LM Studio
- Lemonade
This would allow NeverWrite to work with fully local models in addition to cloud APIs.
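Ollama and LM Studio both expose OpenAI-compatible HTTP APIs on their documented default local ports, so one possible integration path is simply making the API base URL configurable. The sketch below is a hypothetical illustration of that idea, not NeverWrite code: the `LOCAL_PROVIDERS` table and `build_chat_request` helper are assumptions, while the default ports (`11434` for Ollama, `1234` for LM Studio) match those projects' documentation. Lemonade is omitted here because its default endpoint may differ.

```python
# Hypothetical sketch: route chat requests to a local, OpenAI-compatible
# server by swapping the base URL. Ports are the providers' documented
# defaults; the provider table and helper are illustrative assumptions.

LOCAL_PROVIDERS = {
    "ollama": "http://localhost:11434/v1",   # Ollama's OpenAI-compatible endpoint
    "lmstudio": "http://localhost:1234/v1",  # LM Studio's local server default
}

def build_chat_request(provider: str, model: str, prompt: str) -> tuple[str, dict]:
    """Return (url, payload) for an OpenAI-style /chat/completions call."""
    base = LOCAL_PROVIDERS[provider]
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{base}/chat/completions", payload

url, payload = build_chat_request("ollama", "llama3.2", "Summarize this note.")
print(url)  # http://localhost:11434/v1/chat/completions
```

Because the request shape is identical to the cloud OpenAI API, the existing request code could likely be reused with only the base URL (and a no-op API key) changed.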
## Why This Would Help

Many users are moving toward local inference for:

- Privacy and data ownership
- Offline usage
- Lower API costs
- Faster experimentation
- Self-hosted/enterprise environments
NeverWrite looks like an excellent addition to Obsidian, and local model support would make it even more flexible for users with privacy concerns.