
ChatterUI - A simple app for LLMs

ChatterUI is a native mobile frontend for LLMs.

Run LLMs on device or connect to various commercial or open source APIs. ChatterUI aims to provide a mobile-friendly interface with fine-grained control over chat structuring.

If you like the app, feel free to support me here:

Support me on ko-fi.com

Screenshots: Chat With Characters or Assistants · Use On-Device Models or APIs · Modify And Customize · Personalize Yourself

Features:

  • Run LLMs on-device in Local Mode
  • Connect to various APIs in Remote Mode
  • Chat with characters (supports the Character Card v2 specification)
  • Create and manage multiple chats per character
  • Customize Sampler fields and Instruct formatting
  • Integrate with your device's text-to-speech (TTS) engine

Usage

Download and install the latest APK from the releases page.

iOS is currently unavailable, as I lack iOS hardware for development.

Local Mode

ChatterUI uses llama.cpp under the hood to run GGUF models on device. A custom adapter, cui-llama.rn, is used to integrate it with React Native.
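As a rough illustration of how the adapter is used, the sketch below loads a GGUF model and streams a completion. It assumes cui-llama.rn keeps the initLlama/completion API of upstream llama.rn; the file path and sampler values are placeholders, not ChatterUI's actual defaults.

import { initLlama } from 'cui-llama.rn'

async function runLocalModel(): Promise<string> {
    // Load a GGUF model from device storage (path is a placeholder).
    const context = await initLlama({
        model: 'file:///path/to/model.gguf',
        n_ctx: 2048, // context window size
        n_gpu_layers: 0, // CPU-only inference
    })

    // Stream a completion; the callback receives tokens as they are generated.
    const { text } = await context.completion(
        {
            prompt: 'User: Hello!\nAssistant:',
            n_predict: 128,
            temperature: 0.7,
            stop: ['User:'],
        },
        (data) => console.log(data.token)
    )
    return text
}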

To use on-device inferencing, first enable Local Mode, then go to Models > Import Model / Use External Model and choose a GGUF model that fits in your device's memory. The import options work as follows:

  • Import Model: Copies the model file into ChatterUI, potentially speeding up startup time.
  • Use External Model: Uses a model from your device storage directly, removing the need to copy large files into ChatterUI but with a slight delay in load times.

After that, you can load the model and begin chatting!

Note: For devices with Snapdragon 8 Gen 1 and above or Exynos 2200+, it is recommended to use the Q4_0 quantization for optimized performance.

Remote Mode

Remote Mode allows you to connect to a few common APIs from both commercial and open source projects.

Open Source Backends:

  • koboldcpp
  • text-generation-webui
  • Ollama

Dedicated API:

  • OpenAI
  • Claude (with ability to use a proxy)
  • Cohere
  • OpenRouter
  • Mancer
  • AI Horde

Generic backends:

  • Generic Text Completions
  • Generic Chat Completions

These should be compatible with any Text Completion or Chat Completion backend, such as Groq or Infermatic.
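As a point of reference, "Chat Completion compliant" here means the backend accepts the OpenAI-style request shape, roughly as sketched below (the base URL, key, and model name are placeholders):

// Any OpenAI-compatible Chat Completions endpoint accepts this request shape.
async function chatCompletion(apiKey: string): Promise<string> {
    const res = await fetch('https://api.example.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({
            model: 'my-model',
            messages: [{ role: 'user', content: 'Hello!' }],
            temperature: 0.7,
        }),
    })
    const data = await res.json()
    return data.choices[0].message.content
}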

Custom APIs:

Is your API provider missing? ChatterUI allows you to define APIs using its template system.

Read more about it here!

Development

Android

To run a development build, follow these simple steps:

  • Install a Java 17 or 21 JDK of your choosing
  • Install the Android SDK via Android Studio
  • Clone the repo:
git clone https://github.com/Vali-98/ChatterUI.git
  • Install dependencies via npm and run via Expo:
npm install
npx expo run:android

Building an APK

Requires Node.js, a Java 17/21 JDK, and the Android SDK. Expo uses EAS to build apps, which requires a Linux environment.

  1. Clone the repo.
  2. Rename eas.json.example to eas.json.
  3. Set "ANDROID_SDK_ROOT" to the directory of your Android SDK (see the sketch after this list).
  4. Run the following:
npm install
eas build --platform android --local
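For reference, the relevant part of eas.json might look like the following. This is a sketch based on the standard EAS build-profile schema; the profile name and SDK path are placeholders, and the eas.json.example shipped in the repo is authoritative.

{
  "build": {
    "production": {
      "env": {
        "ANDROID_SDK_ROOT": "/home/user/Android/Sdk"
      },
      "android": {
        "buildType": "apk"
      }
    }
  }
}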

iOS

Currently untested as I do not own hardware for iOS development. Assistance here would be greatly appreciated!

Possible issues:

  • cui-llama.rn lacking Swift implementation for cui-specific functions
  • cui-fs having no Swift integration
  • Platform-specific shadows
  • Exporting files not using shareAsync

Acknowledgements

  • llama.cpp - the underlying engine to run LLMs
  • llama.rn - the original react-native llama.cpp adapter