feat: add MiniMax as inference provider with M2.7 as default#272

Open
octo-patch wants to merge 3 commits into Conway-Research:main from octo-patch:feat/add-minimax-provider

Conversation


octo-patch commented Mar 12, 2026

Summary

Adds MiniMax as a first-class inference provider across the automaton's inference stack, with MiniMax-M2.7 as the default model.

Changes

  • Provider Registry: MiniMax added to DEFAULT_PROVIDERS with an OpenAI-compatible https://api.minimax.io/v1 endpoint and four model entries (M2.7 reasoning, M2.7-highspeed fast, M2.5 reasoning, M2.5-highspeed fast)
  • Static Model Baseline: MiniMax-M2.7 and MiniMax-M2.7-highspeed added to STATIC_MODEL_BASELINE with tier/cost metadata, plus M2.5 variants as alternatives
  • Inference Router: MiniMax backend with temperature clamping (0.01–1.0) and max_tokens parameter style
  • Config & Setup: minimaxApiKey threaded through config, setup wizard, and CLI prompts
  • Fallback Order: MiniMax included in reasoning/fast/cheap tier fallback chains
  • Tests: Provider presence, model resolution, and fallback inclusion tests updated
  • README: MiniMax M2.7 listed alongside other frontier models
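The registry change above might look roughly like this. This is a hedged sketch: the type and field names (`ProviderEntry`, `baseUrl`, `apiKeyEnv`, `tier`, `enabled`) are assumptions for illustration, not the repository's actual schema; only the endpoint URL, env var, and model IDs come from the PR text.

```typescript
// Hypothetical sketch of the MiniMax entry in DEFAULT_PROVIDERS.
// Field and type names are illustrative assumptions, not the PR's real schema.
type ModelTier = "reasoning" | "fast";

interface ProviderModel {
  id: string;
  tier: ModelTier;
}

interface ProviderEntry {
  baseUrl: string;   // OpenAI-compatible endpoint
  apiKeyEnv: string; // environment variable holding the API key
  enabled: boolean;
  models: ProviderModel[];
}

const minimax: ProviderEntry = {
  baseUrl: "https://api.minimax.io/v1",
  apiKeyEnv: "MINIMAX_API_KEY",
  enabled: true,
  models: [
    { id: "MiniMax-M2.7", tier: "reasoning" },
    { id: "MiniMax-M2.7-highspeed", tier: "fast" },
    { id: "MiniMax-M2.5", tier: "reasoning" },
    { id: "MiniMax-M2.5-highspeed", tier: "fast" },
  ],
};
```

Registering the provider with its endpoint and tiered model list in one place lets the router and fallback chains discover the new models without touching call sites.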

Why

MiniMax-M2.7 is MiniMax's latest flagship model, with enhanced reasoning and coding capabilities. Its OpenAI-compatible API makes integration straightforward.

Testing

  • Unit tests updated and passing
  • Integration tested with MiniMax API

PR Bot and others added 3 commits March 12, 2026 19:26
Add MiniMax (api.minimax.io) as a first-class inference provider alongside
OpenAI, Anthropic, and others. Both MiniMax-M2.5 (reasoning tier) and
MiniMax-M2.5-highspeed (fast tier) are now available for model selection.

Changes:
- Add "minimax" to ModelProvider type and minimaxApiKey to config
- Register MiniMax models in provider-registry defaults (enabled)
- Add MiniMax backend routing in inference client with proper
  temperature clamping (MiniMax rejects temperature=0)
- Seed MiniMax models in STATIC_MODEL_BASELINE with correct
  parameterStyle ("max_tokens")
- Wire MINIMAX_API_KEY through setup wizard, configure menu,
  agent loop env export, and CLI entrypoint
- Add provider-registry tests for MiniMax provider

Co-Authored-By: Claude Opus 4.6 <[email protected]>
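The temperature clamping described in this commit could be sketched as follows. The function name is hypothetical; the 0.01–1.0 range and the fact that MiniMax rejects temperature=0 come from the PR description.

```typescript
// Hypothetical clamp: per the PR, MiniMax rejects temperature=0, so the
// router clamps requested temperatures into 0.01–1.0 before dispatching.
// The function name is illustrative, not the PR's actual identifier.
function clampMinimaxTemperature(requested: number): number {
  const MIN_TEMP = 0.01; // MiniMax rejects exactly 0
  const MAX_TEMP = 1.0;
  return Math.min(MAX_TEMP, Math.max(MIN_TEMP, requested));
}
```

Clamping at the backend boundary means callers can keep passing provider-agnostic values (including 0) without special-casing MiniMax.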

The STATIC_MODEL_BASELINE now includes MiniMax models, so the
valid provider list must include "minimax".

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Set MiniMax-M2.7 as default model
- Keep all previous models (M2.5, M2.5-highspeed) as alternatives
- Update related unit tests
octo-patch changed the title from "feat: add MiniMax as inference provider" to "feat: add MiniMax as inference provider with M2.7 as default" on Mar 18, 2026
octo-patch force-pushed the feat/add-minimax-provider branch from 627776e to 242e7c8 on March 18, 2026 05:54