
feat: add MiniMax as alternative LLM provider #74

Open

octo-patch wants to merge 1 commit into elder-plinius:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax as a direct LLM provider alongside OpenRouter
  • Users can switch between providers in Settings > LLM Provider
  • MiniMax models: M2.7, M2.7-highspeed, M2.5, M2.5-highspeed (1M / 204K context)
  • Full tool-calling support via execute_command; all 32 Flipper actions work
  • Temperature clamped to (0,1] and <think> reasoning tags stripped for MiniMax models (see the sketch below)
  • Settings UI: provider dropdown, conditional API key field, per-provider model list
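
A minimal sketch of the two MiniMax-specific transforms called out above: clamping temperature into (0,1] and stripping `<think>` reasoning tags. The helper names `sanitizeTemperature` and `stripThinking` are illustrative, not the PR's actual API:

```kotlin
// Hypothetical helpers illustrating the MiniMax-specific transforms;
// names and exact bounds are assumptions, not the PR's real code.

// MiniMax rejects temperature <= 0 or > 1, so clamp into (0, 1].
// 0.01 is an assumed lower bound that keeps the value strictly positive.
fun sanitizeTemperature(requested: Double): Double =
    requested.coerceIn(0.01, 1.0)

// MiniMax reasoning models may wrap chain-of-thought in <think>...</think>;
// strip it so only the final answer reaches the chat UI.
private val THINK_TAG = Regex("(?s)<think>.*?</think>")

fun stripThinking(content: String): String =
    content.replace(THINK_TAG, "").trim()
```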

Changes

| File | What |
| --- | --- |
| LlmProvider.kt | Provider enum (`OPEN_ROUTER`, `MINIMAX`) |
| MiniMaxClient.kt | OpenAI-compatible client with rate limiting, retry, tool calling |
| OpenRouterClient.kt | Expose shared constants for MiniMaxClient reuse |
| VesperAgent.kt | Route chat() to selected provider; keep OpenRouter for vision (see sketch below) |
| SettingsStore.kt | MiniMax API key + provider preference persistence |
| SettingsScreen.kt | Provider selector, conditional API key field, dynamic model list |
| SettingsViewModel.kt | Provider/MiniMax state management |
| README.md | MiniMax models table, setup instructions, architecture diagram |
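
For orientation, here is a self-contained sketch of how the enum and the routing in the table could fit together. The placeholder types and the `provider` lambda stand in for the PR's real `ChatMessage`, `OpenRouterClient`, `MiniMaxClient`, and `SettingsStore`; treat every name here as an assumption:

```kotlin
// Placeholder types standing in for the PR's real classes.
data class ChatMessage(val role: String, val content: String)
data class ChatResponse(val content: String)

interface LlmClient {
    suspend fun chat(messages: List<ChatMessage>): ChatResponse
}

enum class LlmProvider { OPEN_ROUTER, MINIMAX }

// Hypothetical routing inside VesperAgent: chat goes to the user-selected
// provider, while vision preprocessing stays on OpenRouter.
class VesperAgent(
    private val openRouter: LlmClient,        // OpenRouterClient in the PR
    private val miniMax: LlmClient,           // MiniMaxClient in the PR
    private val provider: () -> LlmProvider,  // read from SettingsStore
) {
    suspend fun chat(messages: List<ChatMessage>): ChatResponse =
        when (provider()) {
            LlmProvider.OPEN_ROUTER -> openRouter.chat(messages)
            LlmProvider.MINIMAX -> miniMax.chat(messages)
        }
}
```

Keeping vision on OpenRouter regardless of the selected chat provider means only `chat()` needs the `when` dispatch; the exhaustive `when` over the enum also guarantees a compile error if a third provider is added without a route.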

Test plan

  • 28 unit tests covering thinking-tag stripping, response parsing, the model catalog, the provider enum, tool-call arguments, and edge cases (a representative test is sketched after this list)
  • 3 integration tests (require MINIMAX_API_KEY env var, auto-skipped otherwise)
  • Manual: switch provider to MiniMax, enter API key, verify chat works
  • Manual: verify OpenRouter still works after switching back
  • Manual: verify model dropdown updates per provider
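
As an example of what the thinking-tag-stripping coverage might look like, a representative JUnit test; the `stripThinking` helper is the same hypothetical one from the Summary sketch, inlined here so the test is self-contained:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

class ThinkingTagTest {
    // Same hypothetical helper as in the Summary sketch.
    private val thinkTag = Regex("(?s)<think>.*?</think>")
    private fun stripThinking(content: String) = content.replace(thinkTag, "").trim()

    @Test
    fun `strips think block and keeps the answer`() {
        val raw = "<think>weighing options</think>Here is the plan."
        assertEquals("Here is the plan.", stripThinking(raw))
    }

    @Test
    fun `leaves plain content untouched`() {
        assertEquals("hello", stripThinking("hello"))
    }
}
```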

10 files changed, 1,458 additions, 62 deletions

Add MiniMax (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed) as a direct
LLM provider alongside OpenRouter. Users can now choose between
OpenRouter (200+ models via router) or MiniMax (direct API access)
in Settings > LLM Provider.

Changes:
- MiniMaxClient: OpenAI-compatible chat client with tool-calling,
  temperature clamping (0,1], and thinking-tag stripping
- LlmProvider enum for provider selection
- SettingsStore: MiniMax API key + provider preference persistence
- VesperAgent: routes chat() to selected provider, keeps OpenRouter
  for vision preprocessing and command parsing
- Settings UI: provider dropdown, conditional API key field,
  dynamic model list per provider
- 28 unit tests + 3 integration tests
- README: MiniMax models table, setup instructions, updated Quick Start
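
The SettingsStore change described above amounts to two new preferences. A minimal sketch using Jetpack DataStore, with key names and the default provider assumed for illustration:

```kotlin
import androidx.datastore.core.DataStore
import androidx.datastore.preferences.core.Preferences
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.core.stringPreferencesKey
import kotlinx.coroutines.flow.first

// Hypothetical preference keys; the PR's actual names may differ.
private val MINIMAX_API_KEY = stringPreferencesKey("minimax_api_key")
private val LLM_PROVIDER = stringPreferencesKey("llm_provider")

class SettingsStore(private val dataStore: DataStore<Preferences>) {
    suspend fun setProvider(provider: LlmProvider) {
        dataStore.edit { it[LLM_PROVIDER] = provider.name }
    }

    // Fall back to OpenRouter when no preference has been saved yet.
    suspend fun llmProvider(): LlmProvider =
        dataStore.data.first()[LLM_PROVIDER]
            ?.let(LlmProvider::valueOf)
            ?: LlmProvider.OPEN_ROUTER

    suspend fun setMiniMaxApiKey(key: String) {
        dataStore.edit { it[MINIMAX_API_KEY] = key }
    }
}
```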