feat: add MiniMax as first-class LLM provider (#721)
octo-patch wants to merge 1 commit into SuanmoSuanyangTechnology:main
Conversation
Add MiniMax AI as a new model provider, enabling users to leverage the MiniMax M2.7 and M2.7-highspeed models (204K context) via the OpenAI-compatible API.

Backend:
- Register MINIMAX in the ModelProvider enum (auto-populates the provider dropdown)
- Add MiniMax support in RedBearModelFactory with temperature clamping (MiniMax requires temperature in (0, 1]) and a default base_url
- Route MiniMax to ChatOpenAI/OpenAIEmbeddings via the OpenAI-compatible API
- Add MiniMaxClient in services/llm_client.py with think-tag stripping for M2.7 extended-thinking responses
- Register in LLMClientFactory for env-based provider selection
- Add MINIMAX_API_KEY to env.example

Frontend:
- Add the MiniMax SVG icon to model assets
- Register the icon in ModelManagement utils.ts
- Add i18n translations (en/zh) for both the model and modelNew namespaces

Tests:
- 22 unit tests covering the enum, factory, temperature clamping, think-tag stripping, client creation, and factory dispatch
- 3 integration tests with real MiniMax API calls (skipped without a key)
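The think-tag stripping mentioned above refers to removing the `<think>…</think>` reasoning blocks that M2.7 emits in extended-thinking mode. A minimal sketch of the idea (the regex and function name are illustrative, not the PR's actual code, and assume well-formed, non-nested tags):

```python
import re

# Illustrative only: the PR implements this inside MiniMaxClient.
_THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think_tags(content: str) -> str:
    """Drop <think>...</think> reasoning blocks from M2.7 responses."""
    return _THINK_TAG_RE.sub("", content).strip()

print(strip_think_tags("<think>chain of thought</think>The answer is 4."))
# The answer is 4.
```

The non-greedy `.*?` with `re.DOTALL` keeps each block's removal local, so answer text between multiple think blocks survives.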
Reviewer's Guide

Adds MiniMax as a first-class LLM provider wired through the existing OpenAI-compatible stack, including backend client/factory routing, model config plumbing, temperature clamping and think-tag stripping, plus frontend provider registration and i18n, all covered by targeted unit and integration tests.

Sequence diagram for MiniMax provider chat flow:

sequenceDiagram
actor User
participant ModelManagementUI
participant BackendAPI
participant RedBearModelFactory
participant LLMClientFactory
participant MiniMaxClient
participant MiniMaxAPI
User->>ModelManagementUI: Select provider minimax and enter API key
ModelManagementUI->>BackendAPI: Save model config (provider=minimax)
BackendAPI->>RedBearModelFactory: get_model_params(config provider=minimax)
RedBearModelFactory-->>BackendAPI: params (base_url, api_key, temperature)
User->>ModelManagementUI: Send chat message
ModelManagementUI->>BackendAPI: POST /chat (provider=minimax, prompt)
BackendAPI->>LLMClientFactory: create(provider=minimax, api_key, model, base_url)
LLMClientFactory->>MiniMaxClient: __init__(api_key, model, base_url)
MiniMaxClient-->>LLMClientFactory: client instance
LLMClientFactory-->>BackendAPI: MiniMaxClient
BackendAPI->>MiniMaxClient: chat(prompt, temperature, max_tokens)
MiniMaxClient->>MiniMaxClient: _clamp_temperature(temperature)
MiniMaxClient->>MiniMaxAPI: chat.completions.create(model, messages, temperature)
MiniMaxAPI-->>MiniMaxClient: response with content (may include think tags)
MiniMaxClient->>MiniMaxClient: strip <think>...</think> tags
MiniMaxClient-->>BackendAPI: cleaned content
BackendAPI-->>ModelManagementUI: chat response
ModelManagementUI-->>User: Display MiniMax answer
Class diagram for MiniMax LLM integration:

classDiagram
class BaseLLMClient {
<<abstract>>
+chat(prompt, **kwargs): str
}
class MiniMaxClient {
-api_key: str
-model: str
-base_url: str
-client: AsyncOpenAI
+MiniMaxClient(api_key, model, base_url)
+chat(prompt, **kwargs): str
+_clamp_temperature(temperature): float
}
class LLMClientFactory {
+create(provider, **kwargs): BaseLLMClient
}
class ModelProvider {
<<enum>>
OPENAI
AZURE
ANTHROPIC
MINIMAX
OLLAMA
XINFERENCE
GPUSTACK
DASHSCOPE
}
class RedBearModelConfig {
+model_name: str
+base_url: str
+api_key: str
+timeout: float
+max_retries: int
+extra_params: dict
+provider: ModelProvider
}
class RedBearModelFactory {
+get_model_params(config): dict
+get_provider_llm_class(config, type): type
+get_provider_embedding_class(provider): type
}
class ChatOpenAI {
}
class OpenAIEmbeddings {
}
BaseLLMClient <|-- MiniMaxClient
LLMClientFactory --> BaseLLMClient : create
LLMClientFactory --> MiniMaxClient : provider minimax
RedBearModelFactory --> RedBearModelConfig : uses
RedBearModelFactory --> ModelProvider : checks
RedBearModelFactory --> ChatOpenAI : returns_for_minimax
RedBearModelFactory --> OpenAIEmbeddings : returns_for_minimax
ModelProvider ..> MiniMaxClient : value minimax
Hey - I've found 3 issues, and left some high-level feedback:

- Temperature clamping for MiniMax is implemented both in `RedBearModelFactory.get_model_params` and in `MiniMaxClient._clamp_temperature`; consider centralizing this logic in a single helper to avoid drift if the allowed range changes.
- In `MiniMaxClient.chat`, `re` is imported inside the method on each call; moving this import to the module level will simplify the function and avoid repeated imports.
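The centralization suggested in the first point could look like this. This is a sketch only: the module name and the exact lower bound are assumptions, chosen because MiniMax accepts temperatures in (0, 1], so 0 itself must be avoided:

```python
# Hypothetical shared helper module, e.g. llm_common.py; both
# RedBearModelFactory and MiniMaxClient would import from here so the
# allowed range is defined in exactly one place.
MINIMAX_MIN_TEMP = 0.01  # assumed lower bound, since MiniMax excludes 0
MINIMAX_MAX_TEMP = 1.0

def clamp_minimax_temperature(temperature: float) -> float:
    """Single source of truth for MiniMax temperature clamping."""
    return max(MINIMAX_MIN_TEMP, min(temperature, MINIMAX_MAX_TEMP))

print(clamp_minimax_temperature(2.0))  # 1.0
```

With both call sites delegating here, a future change to the allowed range is a one-line edit rather than two implementations to keep in sync.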
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Temperature clamping for MiniMax is implemented both in `RedBearModelFactory.get_model_params` and in `MiniMaxClient._clamp_temperature`; consider centralizing this logic in a single helper to avoid drift if the allowed range changes.
- In `MiniMaxClient.chat`, `re` is imported inside the method on each call; moving this import to the module level will simplify the function and avoid repeated imports.
## Individual Comments
### Comment 1
<location path="api/tests/test_minimax_provider.py" line_range="2-7" />
<code_context>
+# -*- coding: UTF-8 -*-
+"""Unit tests for MiniMax LLM provider integration.
+
+Tests cover:
+- ModelProvider enum registration
+- MiniMaxClient temperature clamping, think-tag stripping
+- LLMClientFactory.create("minimax") dispatching
+
+Run: cd api && python -m pytest tests/test_minimax_provider.py -v
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for MiniMax-specific branches in RedBearModelFactory and provider routing helpers
Current tests exercise the enum, `MiniMaxClient`, and `LLMClientFactory`, but not the MiniMax-specific paths in `get_model_params` and the provider routing helpers. Please also add tests that:
- Use `RedBearModelConfig(provider=ModelProvider.MINIMAX, ...)` with various `temperature` values (negative, 0, in-range, >1.0) and assert clamping and the default `base_url`.
- Assert `get_provider_llm_class` returns `ChatOpenAI` for MiniMax for all relevant `ModelType`s.
- Assert `get_provider_embedding_class` returns `OpenAIEmbeddings` when `provider` is `MINIMAX`.
This ensures the MiniMax integration works correctly in the higher-level factory/routing layer, not only in `MiniMaxClient`.
Suggested implementation:
```python
import os
import sys
import json
import importlib
import importlib.util
from enum import Enum
from unittest.mock import patch, AsyncMock, MagicMock
import pytest
# NOTE: Import paths may need to be adjusted to match the actual project layout.
# They are written here to be explicit and discoverable.
from api.llm.model_config import RedBearModelConfig, ModelType
from api.llm.model_provider import ModelProvider
from api.llm.factory import (
get_model_params,
get_provider_llm_class,
get_provider_embedding_class,
)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
@pytest.mark.parametrize(
"temperature,expected",
[
(-1.0, 0.0),
(0.0, 0.0),
(0.3, 0.3),
(1.0, 1.0),
(2.0, 1.0),
],
)
def test_minimax_get_model_params_temperature_clamping_and_default_base_url(
temperature: float, expected: float
):
"""MiniMax-specific paths in RedBearModelFactory: clamp temperature and set default base_url.
Ensures that when using MiniMax as the provider:
- temperature is clamped into [0.0, 1.0]
- a default base_url is provided if none is specified on the config
"""
cfg = RedBearModelConfig(
provider=ModelProvider.MINIMAX,
model="minimax-text-model",
model_type=ModelType.LLM,
temperature=temperature,
base_url=None, # force factory to apply its MiniMax default
)
params = get_model_params(cfg)
# Temperature should be clamped at the factory level for MiniMax
assert "temperature" in params
assert params["temperature"] == pytest.approx(expected)
# Default MiniMax base_url should be applied when not set on the config
assert "base_url" in params
# We don't hard-code the URL here; we just assert a non-empty default is applied.
assert isinstance(params["base_url"], str)
assert params["base_url"] # non-empty
def test_minimax_get_model_params_respects_explicit_base_url():
"""When base_url is explicitly set on the config, it should be propagated unchanged."""
custom_base_url = "https://minimax.internal.example/v1"
cfg = RedBearModelConfig(
provider=ModelProvider.MINIMAX,
model="minimax-text-model",
model_type=ModelType.LLM,
temperature=0.5,
base_url=custom_base_url,
)
params = get_model_params(cfg)
assert params["base_url"] == custom_base_url
@pytest.mark.parametrize(
"model_type",
[
ModelType.LLM,
ModelType.CHAT, # if your enum differentiates, otherwise remove
],
)
def test_get_provider_llm_class_returns_chat_openai_for_minimax(model_type: ModelType):
"""MiniMax provider should route to ChatOpenAI for all relevant LLM ModelTypes."""
llm_cls = get_provider_llm_class(provider=ModelProvider.MINIMAX, model_type=model_type)
assert llm_cls is ChatOpenAI
def test_get_provider_embedding_class_returns_openai_embeddings_for_minimax():
"""MiniMax provider should reuse OpenAIEmbeddings for embeddings."""
embedding_cls = get_provider_embedding_class(provider=ModelProvider.MINIMAX)
assert embedding_cls is OpenAIEmbeddings
```
The above changes assume the following import paths and names:
1. `RedBearModelConfig` and `ModelType` live in `api.llm.model_config`.
2. `ModelProvider` lives in `api.llm.model_provider`.
3. `get_model_params`, `get_provider_llm_class`, and `get_provider_embedding_class` live in `api.llm.factory`.
4. `ChatOpenAI` and `OpenAIEmbeddings` come from `langchain_openai`.
If your project structure differs, adjust the import paths accordingly while keeping the tests themselves the same.
Also, if your `ModelType` enum does not define `CHAT` separately from `LLM`, remove that value from the parametrization so that the test only exercises the valid `ModelType`s for your codebase.
</issue_to_address>
### Comment 2
<location path="api/tests/test_minimax_provider.py" line_range="134" />
<code_context>
+# 2. MiniMaxClient
+# ===========================================================================
+
+class TestMiniMaxClient:
+
+ def test_missing_api_key_raises(self):
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a unit test for the ImportError path when the `openai` package is missing
Current `MiniMaxClient` tests cover API key selection and defaults but not the case where the `openai` dependency is missing. Since the constructor raises an `ImportError` with a specific help message when `from openai import AsyncOpenAI` fails, please add a test that patches out `openai` (e.g. via `sys.modules`), constructs `MiniMaxClient`, and asserts the expected `ImportError` and message.
Suggested implementation:
```python
def test_env_api_key_used(self):
mod = _import_llm_client()
with patch.dict(os.environ, {"MINIMAX_API_KEY": "sk-from-env"}):
client = mod.MiniMaxClient()
def test_import_error_when_openai_missing(self):
mod = _import_llm_client()
original_import = builtins.__import__
def fake_import(name, *args, **kwargs):
if name == "openai":
raise ImportError("No module named 'openai'")
return original_import(name, *args, **kwargs)
with patch.object(builtins, "__import__", side_effect=fake_import):
with pytest.raises(ImportError, match="openai"):
mod.MiniMaxClient(api_key="dummy-key")
```
To make this compile and run, ensure at the top of `api/tests/test_minimax_provider.py` you have:
1. `import builtins` (for patching `__import__`).
2. The existing imports for `pytest` and `patch` (`from unittest.mock import patch`) already appear to be present; if not, add them as well.
The `match="openai"` assertion assumes the `MiniMaxClient`'s ImportError message includes the word "openai", which aligns with the intent of the constructor's help text.
</issue_to_address>
### Comment 3
<location path="api/tests/test_minimax_provider.py" line_range="184-193" />
<code_context>
+class TestMiniMaxClientIntegration:
+ """Integration tests for MiniMaxClient in services layer."""
+
+ @pytest.mark.asyncio
+ async def test_minimax_client_chat(self):
+ """Test MiniMaxClient.chat() with real API."""
</code_context>
<issue_to_address>
**suggestion (testing):** Extend chat temperature tests to cover >1.0 temperatures and `max_tokens` propagation
To more completely exercise the chat path (beyond the helper), please also:
- Add a test that calls `await client.chat("test", temperature=2.0)` and asserts the underlying `create` call receives `temperature=1.0`.
- In an existing chat test, pass a non-default `max_tokens` (e.g. `max_tokens=123`) and assert the `create` call receives the same value.
This will validate that both clamped temperature and `max_tokens` are correctly wired through to the OpenAI-compatible client at runtime.
</issue_to_address>
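Comment 3's suggestion can be sketched with an `AsyncMock`. A stand-in client is used below because the real `MiniMaxClient` internals (the `self.client` attribute and keyword names) are assumptions here; in the actual test you would construct `MiniMaxClient` and patch its OpenAI client the same way:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

class StubMiniMaxClient:
    """Minimal stand-in mirroring the assumed MiniMaxClient.chat() shape."""
    def __init__(self):
        self.client = MagicMock()  # stands in for the AsyncOpenAI client
        self.model = "stub-model"

    @staticmethod
    def _clamp_temperature(t: float) -> float:
        return max(0.01, min(t, 1.0))

    async def chat(self, prompt: str, temperature: float = 0.7,
                   max_tokens: int = 1024) -> str:
        resp = await self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=self._clamp_temperature(temperature),
            max_tokens=max_tokens,
        )
        return resp.choices[0].message.content

async def main():
    client = StubMiniMaxClient()
    fake = MagicMock()
    fake.choices[0].message.content = "ok"
    client.client.chat.completions.create = AsyncMock(return_value=fake)

    assert await client.chat("test", temperature=2.0, max_tokens=123) == "ok"
    kwargs = client.client.chat.completions.create.call_args.kwargs
    assert kwargs["temperature"] == 1.0  # clamped before reaching the API
    assert kwargs["max_tokens"] == 123   # propagated unchanged

asyncio.run(main())
```

Inspecting `call_args.kwargs` on the mocked `create` verifies the runtime wiring without a network call, complementing the real-API integration tests.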
| """Unit tests for MiniMax LLM provider integration. | ||
|
|
||
| Tests cover: | ||
| - ModelProvider enum registration | ||
| - MiniMaxClient temperature clamping, think-tag stripping | ||
| - LLMClientFactory.create("minimax") dispatching |
There was a problem hiding this comment.
suggestion (testing): 为 RedBearModelFactory 中 MiniMax 相关分支以及 provider 路由 helper 增加测试用例
当前测试覆盖了枚举、MiniMaxClient 和 LLMClientFactory,但没有覆盖 get_model_params 以及 provider 路由 helper 中 MiniMax 相关的分支逻辑。请补充以下测试:
- 使用
RedBearModelConfig(provider=ModelProvider.MINIMAX, ...)配合多种temperature值(负数、0、区间内、>1.0),并断言温度被正确截断以及默认的base_url。 - 断言对 MiniMax 而言,
get_provider_llm_class在所有相关的ModelType上都返回ChatOpenAI。 - 断言当
provider为MINIMAX时,get_provider_embedding_class返回OpenAIEmbeddings。
这样可以保证 MiniMax 的集成在更高层的工厂/路由层面同样正常工作,而不仅仅是在 MiniMaxClient 中。
建议的实现如下:
import os
import sys
import json
import importlib
import importlib.util
from enum import Enum
from unittest.mock import patch, AsyncMock, MagicMock
import pytest
# NOTE: Import paths may need to be adjusted to match the actual project layout.
# They are written here to be explicit and discoverable.
from api.llm.model_config import RedBearModelConfig, ModelType
from api.llm.model_provider import ModelProvider
from api.llm.factory import (
get_model_params,
get_provider_llm_class,
get_provider_embedding_class,
)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
@pytest.mark.parametrize(
"temperature,expected",
[
(-1.0, 0.0),
(0.0, 0.0),
(0.3, 0.3),
(1.0, 1.0),
(2.0, 1.0),
],
)
def test_minimax_get_model_params_temperature_clamping_and_default_base_url(
temperature: float, expected: float
):
"""MiniMax-specific paths in RedBearModelFactory: clamp temperature and set default base_url.
Ensures that when using MiniMax as the provider:
- temperature is clamped into [0.0, 1.0]
- a default base_url is provided if none is specified on the config
"""
cfg = RedBearModelConfig(
provider=ModelProvider.MINIMAX,
model="minimax-text-model",
model_type=ModelType.LLM,
temperature=temperature,
base_url=None, # force factory to apply its MiniMax default
)
params = get_model_params(cfg)
# Temperature should be clamped at the factory level for MiniMax
assert "temperature" in params
assert params["temperature"] == pytest.approx(expected)
# Default MiniMax base_url should be applied when not set on the config
assert "base_url" in params
# We don't hard-code the URL here; we just assert a non-empty default is applied.
assert isinstance(params["base_url"], str)
assert params["base_url"] # non-empty
def test_minimax_get_model_params_respects_explicit_base_url():
"""When base_url is explicitly set on the config, it should be propagated unchanged."""
custom_base_url = "https://minimax.internal.example/v1"
cfg = RedBearModelConfig(
provider=ModelProvider.MINIMAX,
model="minimax-text-model",
model_type=ModelType.LLM,
temperature=0.5,
base_url=custom_base_url,
)
params = get_model_params(cfg)
assert params["base_url"] == custom_base_url
@pytest.mark.parametrize(
"model_type",
[
ModelType.LLM,
ModelType.CHAT, # if your enum differentiates, otherwise remove
],
)
def test_get_provider_llm_class_returns_chat_openai_for_minimax(model_type: ModelType):
"""MiniMax provider should route to ChatOpenAI for all relevant LLM ModelTypes."""
llm_cls = get_provider_llm_class(provider=ModelProvider.MINIMAX, model_type=model_type)
assert llm_cls is ChatOpenAI
def test_get_provider_embedding_class_returns_openai_embeddings_for_minimax():
"""MiniMax provider should reuse OpenAIEmbeddings for embeddings."""
embedding_cls = get_provider_embedding_class(provider=ModelProvider.MINIMAX)
assert embedding_cls is OpenAIEmbeddings上述改动假定了以下导入路径和名称:
RedBearModelConfig和ModelType位于api.llm.model_config。ModelProvider位于api.llm.model_provider。get_model_params、get_provider_llm_class和get_provider_embedding_class位于api.llm.factory。ChatOpenAI和OpenAIEmbeddings来自langchain_openai。
如果你的项目结构不同,请在保持测试逻辑不变的前提下,调整对应的导入路径。
同时,如果你的 ModelType 枚举中没有将 CHAT 与 LLM 分开定义,请从参数化中删除该值,让测试只覆盖你代码库中实际存在的合法 ModelType。
Original comment in English
suggestion (testing): Add tests for MiniMax-specific branches in RedBearModelFactory and provider routing helpers
Current tests exercise the enum, MiniMaxClient, and LLMClientFactory, but not the MiniMax-specific paths in get_model_params and the provider routing helpers. Please also add tests that:
- Use
RedBearModelConfig(provider=ModelProvider.MINIMAX, ...)with varioustemperaturevalues (negative, 0, in-range, >1.0) and assert clamping and the defaultbase_url. - Assert
get_provider_llm_classreturnsChatOpenAIfor MiniMax for all relevantModelTypes. - Assert
get_provider_embedding_classreturnsOpenAIEmbeddingswhenproviderisMINIMAX.
This ensures the MiniMax integration works correctly in the higher-level factory/routing layer, not only in MiniMaxClient.
Suggested implementation:
import os
import sys
import json
import importlib
import importlib.util
from enum import Enum
from unittest.mock import patch, AsyncMock, MagicMock
import pytest
# NOTE: Import paths may need to be adjusted to match the actual project layout.
# They are written here to be explicit and discoverable.
from api.llm.model_config import RedBearModelConfig, ModelType
from api.llm.model_provider import ModelProvider
from api.llm.factory import (
get_model_params,
get_provider_llm_class,
get_provider_embedding_class,
)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
@pytest.mark.parametrize(
"temperature,expected",
[
(-1.0, 0.0),
(0.0, 0.0),
(0.3, 0.3),
(1.0, 1.0),
(2.0, 1.0),
],
)
def test_minimax_get_model_params_temperature_clamping_and_default_base_url(
temperature: float, expected: float
):
"""MiniMax-specific paths in RedBearModelFactory: clamp temperature and set default base_url.
Ensures that when using MiniMax as the provider:
- temperature is clamped into [0.0, 1.0]
- a default base_url is provided if none is specified on the config
"""
cfg = RedBearModelConfig(
provider=ModelProvider.MINIMAX,
model="minimax-text-model",
model_type=ModelType.LLM,
temperature=temperature,
base_url=None, # force factory to apply its MiniMax default
)
params = get_model_params(cfg)
# Temperature should be clamped at the factory level for MiniMax
assert "temperature" in params
assert params["temperature"] == pytest.approx(expected)
# Default MiniMax base_url should be applied when not set on the config
assert "base_url" in params
# We don't hard-code the URL here; we just assert a non-empty default is applied.
assert isinstance(params["base_url"], str)
assert params["base_url"] # non-empty
def test_minimax_get_model_params_respects_explicit_base_url():
"""When base_url is explicitly set on the config, it should be propagated unchanged."""
custom_base_url = "https://minimax.internal.example/v1"
cfg = RedBearModelConfig(
provider=ModelProvider.MINIMAX,
model="minimax-text-model",
model_type=ModelType.LLM,
temperature=0.5,
base_url=custom_base_url,
)
params = get_model_params(cfg)
assert params["base_url"] == custom_base_url
@pytest.mark.parametrize(
"model_type",
[
ModelType.LLM,
ModelType.CHAT, # if your enum differentiates, otherwise remove
],
)
def test_get_provider_llm_class_returns_chat_openai_for_minimax(model_type: ModelType):
"""MiniMax provider should route to ChatOpenAI for all relevant LLM ModelTypes."""
llm_cls = get_provider_llm_class(provider=ModelProvider.MINIMAX, model_type=model_type)
assert llm_cls is ChatOpenAI
def test_get_provider_embedding_class_returns_openai_embeddings_for_minimax():
"""MiniMax provider should reuse OpenAIEmbeddings for embeddings."""
embedding_cls = get_provider_embedding_class(provider=ModelProvider.MINIMAX)
assert embedding_cls is OpenAIEmbeddingsThe above changes assume the following import paths and names:
RedBearModelConfigandModelTypelive inapi.llm.model_config.ModelProviderlives inapi.llm.model_provider.get_model_params,get_provider_llm_class, andget_provider_embedding_classlive inapi.llm.factory.ChatOpenAIandOpenAIEmbeddingscome fromlangchain_openai.
If your project structure differs, adjust the import paths accordingly while keeping the tests themselves the same.
Also, if your ModelType enum does not define CHAT separately from LLM, remove that value from the parametrization so that the test only exercises the valid ModelTypes for your codebase.
| # 2. MiniMaxClient | ||
| # =========================================================================== | ||
|
|
||
| class TestMiniMaxClient: |
There was a problem hiding this comment.
suggestion (testing): Consider adding a unit test for the ImportError path when the openai package is missing
Current MiniMaxClient tests cover API key selection and defaults but not the case where the openai dependency is missing. Since the constructor raises an ImportError with a specific help message when from openai import AsyncOpenAI fails, please add a test that patches out openai (e.g. via sys.modules), constructs MiniMaxClient, and asserts the expected ImportError and message.
Suggested implementation:
    def test_env_api_key_used(self):
        mod = _import_llm_client()
        with patch.dict(os.environ, {"MINIMAX_API_KEY": "sk-from-env"}):
            client = mod.MiniMaxClient()

    def test_import_error_when_openai_missing(self):
        mod = _import_llm_client()
        original_import = builtins.__import__

        def fake_import(name, *args, **kwargs):
            if name == "openai":
                raise ImportError("No module named 'openai'")
            return original_import(name, *args, **kwargs)

        with patch.object(builtins, "__import__", side_effect=fake_import):
            with pytest.raises(ImportError, match="openai"):
                mod.MiniMaxClient(api_key="dummy-key")

To make this compile and run, ensure at the top of api/tests/test_minimax_provider.py you have:

- `import builtins` (for patching `__import__`).
- The existing imports for `pytest` and `patch` (`from unittest.mock import patch`) already appear to be present; if not, add them as well.

The `match="openai"` assertion assumes the MiniMaxClient's ImportError message includes the word "openai", which aligns with the intent of the constructor's help text.
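As an alternative to patching `builtins.__import__`, the `sys.modules` route mentioned in the suggestion also works: setting a module's entry to `None` makes any later import of it raise ImportError. A minimal self-contained demonstration, independent of whether `openai` is actually installed:

```python
import sys
from unittest.mock import patch

def import_async_openai():
    # The import under test; in the real suite this happens inside
    # MiniMaxClient.__init__.
    from openai import AsyncOpenAI  # noqa: F401

# Setting sys.modules["openai"] to None halts any subsequent import of it.
with patch.dict(sys.modules, {"openai": None}):
    try:
        import_async_openai()
        raised = False
    except ImportError:
        raised = True

assert raised
```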
    @pytest.mark.asyncio
    async def test_chat_strips_think_tags(self):
        mod = _import_llm_client()
        with patch.dict(os.environ, {"MINIMAX_API_KEY": "sk-test"}):
            client = mod.MiniMaxClient()
        mock_resp = MagicMock()
        mock_resp.choices = [MagicMock()]
        mock_resp.choices[0].message.content = "<think>reasoning</think>\nHello!"
        client.client = AsyncMock()
        client.client.chat.completions.create = AsyncMock(return_value=mock_resp)
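The stripping behaviour this test asserts can be sketched with a simple regex; the exact pattern in llm_client.py may differ, so treat the names and regex below as an assumed implementation:

```python
import re

# Remove <think>...</think> blocks (plus trailing whitespace) that MiniMax
# M2.7 emits in extended-thinking responses, keeping only the final answer.
THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think_tags(text: str) -> str:
    return THINK_TAG_RE.sub("", text)

assert strip_think_tags("<think>reasoning</think>\nHello!") == "Hello!"
assert strip_think_tags("no tags here") == "no tags here"
```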
suggestion (testing): Extend chat temperature tests to cover >1.0 temperatures and max_tokens propagation
To more completely exercise the chat path (beyond the helper), please also:
- Add a test that calls `await client.chat("test", temperature=2.0)` and asserts the underlying `create` call receives `temperature=1.0`.
- In an existing chat test, pass a non-default `max_tokens` (e.g. `max_tokens=123`) and assert the `create` call receives the same value.
This will validate that both clamped temperature and max_tokens are correctly wired through to the OpenAI-compatible client at runtime.
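Under the assumption that MiniMaxClient clamps temperature into (0, 1] and forwards `max_tokens` unchanged, the suggested test could look roughly like this. It is shown against a stand-in client (`FakeMiniMaxClient`, a hypothetical name) so the sketch is runnable on its own:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# Stand-in mirroring the assumed MiniMaxClient.chat behaviour: clamp the
# temperature into MiniMax's (0, 1] range, then forward all parameters to
# the OpenAI-compatible client.
class FakeMiniMaxClient:
    def __init__(self):
        self.client = AsyncMock()
        self.client.chat.completions.create = AsyncMock(return_value=MagicMock())

    async def chat(self, prompt, temperature=0.7, max_tokens=1024):
        temperature = min(max(temperature, 0.01), 1.0)
        return await self.client.chat.completions.create(
            model="MiniMax-M2.7",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            max_tokens=max_tokens,
        )

async def run_checks():
    client = FakeMiniMaxClient()
    await client.chat("test", temperature=2.0, max_tokens=123)
    # Inspect what actually reached the underlying create() call.
    return client.client.chat.completions.create.call_args.kwargs

kwargs = asyncio.run(run_checks())
assert kwargs["temperature"] == 1.0
assert kwargs["max_tokens"] == 123
```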
Hey @octo-patch, thanks for the contribution! The MiniMax integration looks solid overall — nice test coverage and clean structure. A few things before we can merge:
Also, Sourcery flagged a few valid points worth addressing:
Once those are sorted, happy to re-review and get this merged.
Summary
Add MiniMax AI as a first-class LLM provider, enabling users to leverage MiniMax M2.7 and M2.7-highspeed models (204K context window) via the OpenAI-compatible API.
Backend Changes
- Register `MINIMAX` in the `ModelProvider` enum — auto-populates the provider dropdown via the `/models/provider` endpoint
- Add MiniMax support in `RedBearModelFactory` with temperature clamping (MiniMax requires temperature in `(0, 1]`) and a default `base_url` (https://api.minimax.io/v1)
- Route MiniMax to `ChatOpenAI`/`OpenAIEmbeddings` via the OpenAI-compatible API
- Register in `LLMClientFactory` for env-based provider selection (`LLM_PROVIDER=minimax`)
- Add `MINIMAX_API_KEY` configuration

Frontend Changes
- Add the MiniMax SVG icon and register it in the `ModelManagement/utils.ts` ICONS map
- Add i18n translations (en/zh) for both the `model` and `modelNew` namespaces
Models Supported

- MiniMax-M2.7
- MiniMax-M2.7-highspeed

Configuration
# env.example
MINIMAX_API_KEY=your-api-key

Users can also configure MiniMax via the Model Management UI by selecting the MiniMax provider and entering their API key.
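The key-resolution order implied above (an explicit argument first, then `MINIMAX_API_KEY` from the environment) can be sketched as follows; the helper name and error message are hypothetical, for illustration only:

```python
import os

def resolve_minimax_api_key(api_key=None):
    # Explicit argument wins; otherwise fall back to the environment variable.
    key = api_key or os.environ.get("MINIMAX_API_KEY")
    if not key:
        raise ValueError("Set MINIMAX_API_KEY or pass api_key explicitly")
    return key

os.environ["MINIMAX_API_KEY"] = "your-api-key"
assert resolve_minimax_api_key() == "your-api-key"
assert resolve_minimax_api_key("sk-explicit") == "sk-explicit"
```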
Test Plan
- 22 unit tests covering the enum, factory params, temperature clamping, think-tag stripping, client creation, and factory dispatch
- 3 integration tests with real MiniMax API calls (skipped without MINIMAX_API_KEY)

Files Changed (10 files, 527 additions)
- api/app/models/models_model.py (add MINIMAX to ModelProvider enum)
- api/app/core/models/base.py
- api/app/services/llm_client.py (MiniMaxClient + factory registration)
- api/env.example (MINIMAX_API_KEY)
- web/src/assets/images/model/minimax.svg
- web/src/views/ModelManagement/utils.ts
- web/src/i18n/en.ts
- web/src/i18n/zh.ts
- api/tests/test_minimax_provider.py
- api/tests/test_minimax_integration.py

Summary by Sourcery
Add MiniMax as a first-class LLM provider using the OpenAI-compatible API across backend services and the model management UI.
New Features:
Enhancements:
Tests: