# ollama-chat-mode

Yet another (trivial, minimal) Emacs client for chatting with an LLM through the Ollama API.

`ollama-chat-mode` is an Emacs package that lets users interact with a local Large Language Model (LLM) using the Ollama API. It provides a conversational interface within Emacs, in the spirit of classic chatbots such as ELIZA.

## Features
- Local LLM Interaction: Communicate with a local language model via the Ollama API.
- Customizable Settings: Configure endpoint, model name, keystrings, and more.
- Emacs Integration: Seamlessly integrates into Emacs as a major mode.
- Interactive Chat: Supports interactive chatting within Emacs buffers.
## Prerequisites

Ensure you have the following installed:

- GNU Emacs
- [Ollama](https://github.com/ollama/ollama), with its server running locally
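Before configuring the mode, it can help to confirm that the Ollama server is reachable from Emacs. The snippet below is a minimal sketch, assuming Ollama's default port 11434 and its `/api/tags` endpoint (which lists installed models); it is not part of the package itself:

```elisp
;; Quick reachability check for a local Ollama server.
;; Assumes the default port 11434 and the /api/tags endpoint.
(require 'url)

(url-retrieve "http://localhost:11434/api/tags"
              (lambda (status)
                (if (plist-get status :error)
                    (message "Ollama server is not reachable")
                  (message "Ollama server is up"))))
```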
## Installation

1. Clone this repository to your desired location.

2. Add the following lines to your `.emacs` or `init.el` file to load the package:

   ```elisp
   (add-to-list 'load-path "/path/to/ollama-chat-mode")
   (require 'ollama-chat-mode)
   ```

3. Customize settings in your Emacs configuration if needed:

   ```elisp
   (setq ollama-chat-mode:endpoint "http://localhost:11434/api/generate"
         ollama-chat-mode:model-name "phi4:latest")
   ```
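If you manage your configuration with `use-package` (bundled with Emacs since version 29), the same setup can be written as follows. This is a sketch that assumes `ollama-chat-mode:endpoint` and `ollama-chat-mode:model-name` are ordinary user options (defined with `defcustom`):

```elisp
;; use-package equivalent of steps 2 and 3 above; the load path and
;; variable values mirror the manual snippets.
(use-package ollama-chat-mode
  :load-path "/path/to/ollama-chat-mode"
  :custom
  (ollama-chat-mode:endpoint "http://localhost:11434/api/generate")
  (ollama-chat-mode:model-name "phi4:latest"))
```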
## Usage

- Start a chat session by running `M-x emacs-ollama-chat`.
- Interact with the LLM by typing your messages.
- Press `RET` twice to send your message and receive a response.
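If you open chat sessions frequently, you can give the entry point a global key. The binding below is only an illustration; `C-c o` is an arbitrary choice, not a key the package defines:

```elisp
;; Bind the chat entry point to a convenient key (C-c o is an arbitrary choice).
(global-set-key (kbd "C-c o") #'emacs-ollama-chat)
```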
## Customization

You can customize various aspects of `ollama-chat-mode` (an example configuration follows this list):

- Endpoint: set the Ollama HTTP service endpoint via `ollama-chat-mode:endpoint`.
- Model Name: choose the LLM model with `ollama-chat-mode:model-name`.
- Keystrings: customize the human and bot keystrings used for message formatting.
- Buffer Name: change the chat buffer name via `ollama-chat-mode:ollama-chat-buffer`.
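As a sketch, the options named above can be set together like this. The buffer name `"*ollama-chat*"` is an illustrative guess rather than a documented default, and the keystring variables are omitted because this README does not name them:

```elisp
;; Example configuration using only the variables named in this README.
(setq ollama-chat-mode:endpoint "http://localhost:11434/api/generate"
      ollama-chat-mode:model-name "phi4:latest"
      ;; Illustrative buffer name; the actual default may differ.
      ollama-chat-mode:ollama-chat-buffer "*ollama-chat*")
```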
## Support

For further information or issues, please check the GitHub repository or contact the maintainer via LinkedIn.
## License

`ollama-chat-mode` is licensed under the GNU General Public License v3.0 or later. For more details, see the [GNU General Public License](https://www.gnu.org/licenses/gpl-3.0.html).