
feat: add MiniMax as first-class LLM provider #721

Open

octo-patch wants to merge 1 commit into SuanmoSuanyangTechnology:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 29, 2026

Summary

Add MiniMax AI as a first-class LLM provider, enabling users to leverage MiniMax M2.7 and M2.7-highspeed models (204K context window) via the OpenAI-compatible API.

Backend Changes

  • ModelProvider enum: Register MINIMAX — auto-populates the provider dropdown via /models/provider endpoint
  • RedBearModelFactory: Add MiniMax parameter generation with temperature clamping ((0, 1]) and default base_url (https://api.minimax.io/v1)
  • Provider routing: Route MiniMax to ChatOpenAI / OpenAIEmbeddings via OpenAI-compatible API
  • MiniMaxClient: New service-layer client with think-tag stripping for M2.7 extended thinking responses
  • LLMClientFactory: Register MiniMax for env-based provider selection (LLM_PROVIDER=minimax)
  • env.example: Add MINIMAX_API_KEY configuration
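The two MiniMax-specific behaviors above, temperature clamping into (0, 1] and think-tag stripping, can be sketched roughly as follows. The function names, the regex, and the fallback value for the open lower bound are illustrative assumptions, not the PR's actual `MiniMaxClient` internals:

```python
import re

# Hypothetical sketch; the real MiniMaxClient may differ.
THINK_TAG_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def clamp_temperature(temperature: float) -> float:
    """Clamp into MiniMax's supported (0, 1] range.

    The lower bound is open (0 itself is not allowed), so values at or
    below 0 are bumped to a small positive value (0.01 is an assumption).
    """
    if temperature > 1.0:
        return 1.0
    if temperature <= 0.0:
        return 0.01
    return temperature

def strip_think_tags(content: str) -> str:
    """Remove <think>...</think> blocks from M2.7 extended thinking output."""
    return THINK_TAG_RE.sub("", content).strip()

print(clamp_temperature(2.0))  # 1.0
print(strip_think_tags("<think>reasoning steps</think>The answer is 4."))  # The answer is 4.
```

Compiling the regex at module level also sidesteps the per-call `import re` the review flags later.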

Frontend Changes

  • Add MiniMax SVG icon to model assets
  • Register icon in ModelManagement/utils.ts ICONS map
  • Add i18n translations (en/zh) for both model and modelNew namespaces

Models Supported

| Model | Context | Use Case |
| --- | --- | --- |
| MiniMax-M2.7 | 204K tokens | General-purpose, highest quality |
| MiniMax-M2.7-highspeed | 204K tokens | Faster inference, cost-effective |

Configuration

```shell
# env.example
MINIMAX_API_KEY=your-api-key
```

Users can also configure MiniMax via the Model Management UI by selecting the MiniMax provider and entering their API key.
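A minimal sketch of how env-based provider selection (`LLM_PROVIDER=minimax`) might dispatch to a MiniMax client. The registry, class, and constructor signature here are assumptions for illustration, not the project's actual `LLMClientFactory`:

```python
import os
from typing import Optional

# Placeholder client; the real one wraps AsyncOpenAI.
class MiniMaxClient:
    def __init__(self, api_key: str, base_url: str = "https://api.minimax.io/v1"):
        self.api_key = api_key
        self.base_url = base_url

# Provider name -> client class (only minimax shown here).
_REGISTRY = {"minimax": MiniMaxClient}

def create_client(provider: Optional[str] = None):
    """Resolve the provider from the argument or LLM_PROVIDER env var."""
    name = (provider or os.environ.get("LLM_PROVIDER", "openai")).lower()
    try:
        cls = _REGISTRY[name]
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {name}")
    return cls(api_key=os.environ.get("MINIMAX_API_KEY", ""))

os.environ["LLM_PROVIDER"] = "minimax"
client = create_client()
print(type(client).__name__)  # MiniMaxClient
```

A dict-based registry keeps adding the next provider to a one-line change, which matches how the PR describes registering MiniMax in the factory.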

Test Plan

  • 22 unit tests: enum registration, factory params, temperature clamping, think-tag stripping, client creation, factory dispatch
  • 3 integration tests with real MiniMax API calls (auto-skipped without MINIMAX_API_KEY)
  • All 25 tests pass

Files Changed (10 files, 527 additions)

| File | Changes |
| --- | --- |
| api/app/models/models_model.py | Add MINIMAX to ModelProvider enum |
| api/app/core/models/base.py | Factory params + provider routing for MiniMax |
| api/app/services/llm_client.py | MiniMaxClient + factory registration |
| api/env.example | Add MINIMAX_API_KEY |
| web/src/assets/images/model/minimax.svg | MiniMax provider icon |
| web/src/views/ModelManagement/utils.ts | Register icon in ICONS map |
| web/src/i18n/en.ts | English translations |
| web/src/i18n/zh.ts | Chinese translations |
| api/tests/test_minimax_provider.py | 22 unit tests |
| api/tests/test_minimax_integration.py | 3 integration tests |

Summary by Sourcery

Add MiniMax as a first-class LLM provider using the OpenAI-compatible API across backend services and the model management UI.

New Features:

  • Introduce a MiniMax LLM client that connects to MiniMax models via the OpenAI-compatible API with support for extended thinking responses.
  • Enable selection and configuration of MiniMax as a model provider in the model management UI, including localized provider labels.

Enhancements:

  • Wire MiniMax into the generic LLM client factory and model parameter routing, including provider enum registration and embedding support.
  • Clamp MiniMax temperatures to the supported (0.0, 1.0] range and normalize API configuration such as default base URL and timeouts.

Tests:

  • Add comprehensive unit tests for MiniMax provider registration, client behavior, factory dispatch, and temperature handling.
  • Add integration tests exercising real MiniMax API calls, auto-skipped when no API key is configured.

Add MiniMax AI as a new model provider, enabling users to leverage
MiniMax M2.7 and M2.7-highspeed models (204K context) via the
OpenAI-compatible API.

Backend:
- Register MINIMAX in ModelProvider enum (auto-populates provider dropdown)
- Add MiniMax support in RedBearModelFactory with temperature clamping
  (MiniMax requires temp in (0, 1]) and default base_url
- Route MiniMax to ChatOpenAI/OpenAIEmbeddings via OpenAI-compat API
- Add MiniMaxClient in services/llm_client.py with think-tag stripping
  for M2.7 extended thinking responses
- Register in LLMClientFactory for env-based provider selection
- Add MINIMAX_API_KEY to env.example

Frontend:
- Add MiniMax SVG icon to model assets
- Register icon in ModelManagement utils.ts
- Add i18n translations (en/zh) for both model and modelNew namespaces

Tests:
- 22 unit tests covering enum, factory, temperature clamping, think-tag
  stripping, client creation, and factory dispatch
- 3 integration tests with real MiniMax API calls (skipped without key)

sourcery-ai bot commented Mar 29, 2026


Reviewer's Guide

Adds MiniMax as a first-class LLM provider wired through the existing OpenAI-compatible stack, including backend client/factory routing, model config plumbing, temperature clamping and think-tag stripping, plus frontend provider registration and i18n, all covered by targeted unit and integration tests.

Sequence diagram for MiniMax provider chat flow

sequenceDiagram
    actor User
    participant ModelManagementUI
    participant BackendAPI
    participant RedBearModelFactory
    participant LLMClientFactory
    participant MiniMaxClient
    participant MiniMaxAPI

    User->>ModelManagementUI: Select provider minimax and enter API key
    ModelManagementUI->>BackendAPI: Save model config (provider=minimax)
    BackendAPI->>RedBearModelFactory: get_model_params(config provider=minimax)
    RedBearModelFactory-->>BackendAPI: params (base_url, api_key, temperature)

    User->>ModelManagementUI: Send chat message
    ModelManagementUI->>BackendAPI: POST /chat (provider=minimax, prompt)
    BackendAPI->>LLMClientFactory: create(provider=minimax, api_key, model, base_url)
    LLMClientFactory->>MiniMaxClient: __init__(api_key, model, base_url)
    MiniMaxClient-->>LLMClientFactory: client instance
    LLMClientFactory-->>BackendAPI: MiniMaxClient

    BackendAPI->>MiniMaxClient: chat(prompt, temperature, max_tokens)
    MiniMaxClient->>MiniMaxClient: _clamp_temperature(temperature)
    MiniMaxClient->>MiniMaxAPI: chat.completions.create(model, messages, temperature)
    MiniMaxAPI-->>MiniMaxClient: response with content (may include think tags)
    MiniMaxClient->>MiniMaxClient: strip <think>...</think> tags
    MiniMaxClient-->>BackendAPI: cleaned content
    BackendAPI-->>ModelManagementUI: chat response
    ModelManagementUI-->>User: Display MiniMax answer

Class diagram for MiniMax LLM integration

classDiagram
    class BaseLLMClient {
        <<abstract>>
        +chat(prompt, **kwargs): str
    }

    class MiniMaxClient {
        -api_key: str
        -model: str
        -base_url: str
        -client: AsyncOpenAI
        +MiniMaxClient(api_key, model, base_url)
        +chat(prompt, **kwargs): str
        +_clamp_temperature(temperature): float
    }

    class LLMClientFactory {
        +create(provider, **kwargs): BaseLLMClient
    }

    class ModelProvider {
        <<enum>>
        OPENAI
        AZURE
        ANTHROPIC
        MINIMAX
        OLLAMA
        XINFERENCE
        GPUSTACK
        DASHSCOPE
    }

    class RedBearModelConfig {
        +model_name: str
        +base_url: str
        +api_key: str
        +timeout: float
        +max_retries: int
        +extra_params: dict
        +provider: ModelProvider
    }

    class RedBearModelFactory {
        +get_model_params(config): dict
        +get_provider_llm_class(config, type): type
        +get_provider_embedding_class(provider): type
    }

    class ChatOpenAI {

    }

    class OpenAIEmbeddings {

    }

    BaseLLMClient <|-- MiniMaxClient
    LLMClientFactory --> BaseLLMClient : create
    LLMClientFactory --> MiniMaxClient : provider minimax
    RedBearModelFactory --> RedBearModelConfig : uses
    RedBearModelFactory --> ModelProvider : checks
    RedBearModelFactory --> ChatOpenAI : returns_for_minimax
    RedBearModelFactory --> OpenAIEmbeddings : returns_for_minimax
    ModelProvider ..> MiniMaxClient : value minimax

File-Level Changes

Change Details Files
Introduce MiniMax as a new ModelProvider and route it through existing OpenAI-compatible chat/embedding infrastructure with provider-specific parameter handling.
  • Add MINIMAX to ModelProvider enum so it appears as a selectable backend provider
  • Extend RedBearModelFactory get_model_params to handle MiniMax-specific defaults (base_url, httpx.Timeout) and temperature clamping to (0, 1] for extra_params
  • Map MINIMAX to ChatOpenAI for chat models and to OpenAIEmbeddings for embeddings in provider resolution logic
api/app/models/models_model.py
api/app/core/models/base.py
Add a dedicated MiniMaxClient using the OpenAI AsyncOpenAI client, with API-key resolution, temperature clamping, and MiniMax-specific response post-processing, and register it in LLMClientFactory.
  • Implement MiniMaxClient that wraps AsyncOpenAI.chat.completions.create with default model/base_url, env-based API key lookup, and error logging
  • Clamp requested temperature to MiniMax’s supported (0.0, 1.0] range in MiniMaxClient, mirroring the factory-side clamping
  • Strip <think>...</think> reasoning tags from MiniMax responses before returning content
  • Extend LLMClientFactory.create to recognize provider "minimax" and return MiniMaxClient; document minimax in the factory docstring and ensure env-based selection via LLM_PROVIDER
api/app/services/llm_client.py
Expose MiniMax configuration and branding on the frontend so it can be selected and managed via the Model Management UI.
  • Add MiniMax SVG icon asset and register it in the ICONS map used by the Model Management view
  • Add English and Chinese translation keys for the minimax provider in both model and modelNew namespaces
web/src/assets/images/model/minimax.svg
web/src/views/ModelManagement/utils.ts
web/src/i18n/en.ts
web/src/i18n/zh.ts
Provide configuration and automated test coverage for the MiniMax provider, including unit-level behavior and optional live integration tests.
  • Add MINIMAX_API_KEY to api/env.example for configuration
  • Create unit tests to validate ModelProvider enum registration, MiniMaxClient behavior (API key resolution, temperature clamping, think-tag stripping, error propagation), and LLMClientFactory dispatch for minimax
  • Add integration tests that exercise MiniMaxClient and LLMClientFactory against the real MiniMax API, conditionally skipped when MINIMAX_API_KEY is absent
api/env.example
api/tests/test_minimax_provider.py
api/tests/test_minimax_integration.py

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@sourcery-ai sourcery-ai bot left a comment

Hey - I've found 3 issues, and left some high level feedback:

  • Temperature clamping for MiniMax is implemented both in RedBearModelFactory.get_model_params and in MiniMaxClient._clamp_temperature; consider centralizing this logic in a single helper to avoid drift if the allowed range changes.
  • In MiniMaxClient.chat, re is imported inside the method on each call; moving this import to the module level will simplify the function and avoid repeated imports.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Temperature clamping for MiniMax is implemented both in `RedBearModelFactory.get_model_params` and in `MiniMaxClient._clamp_temperature`; consider centralizing this logic in a single helper to avoid drift if the allowed range changes.
- In `MiniMaxClient.chat`, `re` is imported inside the method on each call; moving this import to the module level will simplify the function and avoid repeated imports.

## Individual Comments

### Comment 1
<location path="api/tests/test_minimax_provider.py" line_range="2-7" />
<code_context>
+# -*- coding: UTF-8 -*-
+"""Unit tests for MiniMax LLM provider integration.
+
+Tests cover:
+- ModelProvider enum registration
+- MiniMaxClient temperature clamping, think-tag stripping
+- LLMClientFactory.create("minimax") dispatching
+
+Run: cd api && python -m pytest tests/test_minimax_provider.py -v
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for MiniMax-specific branches in RedBearModelFactory and provider routing helpers

Current tests exercise the enum, `MiniMaxClient`, and `LLMClientFactory`, but not the MiniMax-specific paths in `get_model_params` and the provider routing helpers. Please also add tests that:

- Use `RedBearModelConfig(provider=ModelProvider.MINIMAX, ...)` with various `temperature` values (negative, 0, in-range, >1.0) and assert clamping and the default `base_url`.
- Assert `get_provider_llm_class` returns `ChatOpenAI` for MiniMax for all relevant `ModelType`s.
- Assert `get_provider_embedding_class` returns `OpenAIEmbeddings` when `provider` is `MINIMAX`.

This ensures the MiniMax integration works correctly in the higher-level factory/routing layer, not only in `MiniMaxClient`.

Suggested implementation:

```python
import os
import sys
import json
import importlib
import importlib.util
from enum import Enum
from unittest.mock import patch, AsyncMock, MagicMock

import pytest

# NOTE: Import paths may need to be adjusted to match the actual project layout.
# They are written here to be explicit and discoverable.
from api.llm.model_config import RedBearModelConfig, ModelType
from api.llm.model_provider import ModelProvider
from api.llm.factory import (
    get_model_params,
    get_provider_llm_class,
    get_provider_embedding_class,
)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings


@pytest.mark.parametrize(
    "temperature,expected",
    [
        (-1.0, 0.0),
        (0.0, 0.0),
        (0.3, 0.3),
        (1.0, 1.0),
        (2.0, 1.0),
    ],
)
def test_minimax_get_model_params_temperature_clamping_and_default_base_url(
    temperature: float, expected: float
):
    """MiniMax-specific paths in RedBearModelFactory: clamp temperature and set default base_url.

    Ensures that when using MiniMax as the provider:
    - temperature is clamped into [0.0, 1.0]
    - a default base_url is provided if none is specified on the config
    """
    cfg = RedBearModelConfig(
        provider=ModelProvider.MINIMAX,
        model="minimax-text-model",
        model_type=ModelType.LLM,
        temperature=temperature,
        base_url=None,  # force factory to apply its MiniMax default
    )

    params = get_model_params(cfg)

    # Temperature should be clamped at the factory level for MiniMax
    assert "temperature" in params
    assert params["temperature"] == pytest.approx(expected)

    # Default MiniMax base_url should be applied when not set on the config
    assert "base_url" in params
    # We don't hard-code the URL here; we just assert a non-empty default is applied.
    assert isinstance(params["base_url"], str)
    assert params["base_url"]  # non-empty


def test_minimax_get_model_params_respects_explicit_base_url():
    """When base_url is explicitly set on the config, it should be propagated unchanged."""
    custom_base_url = "https://minimax.internal.example/v1"
    cfg = RedBearModelConfig(
        provider=ModelProvider.MINIMAX,
        model="minimax-text-model",
        model_type=ModelType.LLM,
        temperature=0.5,
        base_url=custom_base_url,
    )

    params = get_model_params(cfg)

    assert params["base_url"] == custom_base_url


@pytest.mark.parametrize(
    "model_type",
    [
        ModelType.LLM,
        ModelType.CHAT,  # if your enum differentiates, otherwise remove
    ],
)
def test_get_provider_llm_class_returns_chat_openai_for_minimax(model_type: ModelType):
    """MiniMax provider should route to ChatOpenAI for all relevant LLM ModelTypes."""
    llm_cls = get_provider_llm_class(provider=ModelProvider.MINIMAX, model_type=model_type)

    assert llm_cls is ChatOpenAI


def test_get_provider_embedding_class_returns_openai_embeddings_for_minimax():
    """MiniMax provider should reuse OpenAIEmbeddings for embeddings."""
    embedding_cls = get_provider_embedding_class(provider=ModelProvider.MINIMAX)

    assert embedding_cls is OpenAIEmbeddings

```

The above changes assume the following import paths and names:

1. `RedBearModelConfig` and `ModelType` live in `api.llm.model_config`.
2. `ModelProvider` lives in `api.llm.model_provider`.
3. `get_model_params`, `get_provider_llm_class`, and `get_provider_embedding_class` live in `api.llm.factory`.
4. `ChatOpenAI` and `OpenAIEmbeddings` come from `langchain_openai`.

If your project structure differs, adjust the import paths accordingly while keeping the tests themselves the same.

Also, if your `ModelType` enum does not define `CHAT` separately from `LLM`, remove that value from the parametrization so that the test only exercises the valid `ModelType`s for your codebase.
</issue_to_address>

### Comment 2
<location path="api/tests/test_minimax_provider.py" line_range="134" />
<code_context>
+# 2. MiniMaxClient
+# ===========================================================================
+
+class TestMiniMaxClient:
+
+    def test_missing_api_key_raises(self):
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a unit test for the ImportError path when the `openai` package is missing

Current `MiniMaxClient` tests cover API key selection and defaults but not the case where the `openai` dependency is missing. Since the constructor raises an `ImportError` with a specific help message when `from openai import AsyncOpenAI` fails, please add a test that patches out `openai` (e.g. via `sys.modules`), constructs `MiniMaxClient`, and asserts the expected `ImportError` and message.

Suggested implementation:

```python
    def test_env_api_key_used(self):
        mod = _import_llm_client()
        with patch.dict(os.environ, {"MINIMAX_API_KEY": "sk-from-env"}):
            client = mod.MiniMaxClient()

    def test_import_error_when_openai_missing(self):
        mod = _import_llm_client()

        original_import = builtins.__import__

        def fake_import(name, *args, **kwargs):
            if name == "openai":
                raise ImportError("No module named 'openai'")
            return original_import(name, *args, **kwargs)

        with patch.object(builtins, "__import__", side_effect=fake_import):
            with pytest.raises(ImportError, match="openai"):
                mod.MiniMaxClient(api_key="dummy-key")

```

To make the above compile and run, ensure the top of `api/tests/test_minimax_provider.py` has:

1. `import builtins` (used to patch `__import__`).
2. The existing `pytest` and `patch` imports (`from unittest.mock import patch`) appear to be in the file already; add them if not.

The `match="openai"` assertion assumes the ImportError message raised by `MiniMaxClient` contains the word "openai", which is consistent with the intent of the constructor's help text.
</issue_to_address>

### Comment 3
<location path="api/tests/test_minimax_provider.py" line_range="184-193" />
<code_context>
+class TestMiniMaxClientIntegration:
+    """Integration tests for MiniMaxClient in services layer."""
+
+    @pytest.mark.asyncio
+    async def test_minimax_client_chat(self):
+        """Test MiniMaxClient.chat() with real API."""
</code_context>
<issue_to_address>
**suggestion (testing):** Extend the chat temperature tests to cover temperatures >1.0 and verify `max_tokens` forwarding

To cover the chat path more fully (not just the helper), please also:

- Add a test that calls `await client.chat("test", temperature=2.0)` and asserts the underlying `create` call received `temperature=1.0`.
- In one of the existing chat tests, pass a non-default `max_tokens` (e.g. `max_tokens=123`) and assert the `create` call received the same value.

This verifies that both the clamped temperature and `max_tokens` are correctly forwarded at runtime to the underlying OpenAI-compatible client.
</issue_to_address>

Sourcery is free for open source - if you find our reviews helpful, please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
            client = mod.MiniMaxClient()

    def test_import_error_when_openai_missing(self):
        mod = _import_llm_client()

        original_import = builtins.__import__

        def fake_import(name, *args, **kwargs):
            if name == "openai":
                raise ImportError("No module named 'openai'")
            return original_import(name, *args, **kwargs)

        with patch.object(builtins, "__import__", side_effect=fake_import):
            with pytest.raises(ImportError, match="openai"):
                mod.MiniMaxClient(api_key="dummy-key")

```

To make this compile and run, ensure at the top of `api/tests/test_minimax_provider.py` you have:

1. `import builtins` (for patching `__import__`).
2. The existing imports for `pytest` and `patch` (`from unittest.mock import patch`) already appear to be present; if not, add them as well.

The `match="openai"` assertion assumes the `MiniMaxClient`'s ImportError message includes the word "openai", which aligns with the intent of the constructor's help text.
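The comment mentions `sys.modules` as an alternative to patching `builtins.__import__`. A self-contained sketch of that approach follows; `make_client` is a hypothetical stand-in that mimics the constructor's lazy `from openai import AsyncOpenAI`, since the real `MiniMaxClient` isn't available here:

```python
import sys
from unittest.mock import patch


def make_client():
    """Stand-in for MiniMaxClient.__init__'s lazy import (hypothetical)."""
    try:
        from openai import AsyncOpenAI  # noqa: F401
    except ImportError as exc:
        raise ImportError(
            "MiniMaxClient requires the 'openai' package; "
            "install it with `pip install openai`."
        ) from exc


# Setting the sys.modules entry to None makes any import of that module fail,
# even if the package is installed, so the error path is exercised reliably.
with patch.dict(sys.modules, {"openai": None}):
    try:
        make_client()
    except ImportError as exc:
        message = str(exc)
    else:
        raise AssertionError("expected ImportError when openai is unavailable")

assert "openai" in message
print(message)
```

Unlike the `__import__` patch, `patch.dict(sys.modules, ...)` restores the module table automatically on exit, so it cannot leak into other tests.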
</issue_to_address>

### Comment 3
<location path="api/tests/test_minimax_provider.py" line_range="184-193" />
<code_context>
+class TestMiniMaxClientIntegration:
+    """Integration tests for MiniMaxClient in services layer."""
+
+    @pytest.mark.asyncio
+    async def test_minimax_client_chat(self):
+        """Test MiniMaxClient.chat() with real API."""
</code_context>
<issue_to_address>
**suggestion (testing):** Extend chat temperature tests to cover >1.0 temperatures and `max_tokens` propagation

To more completely exercise the chat path (beyond the helper), please also:

- Add a test that calls `await client.chat("test", temperature=2.0)` and asserts the underlying `create` call receives `temperature=1.0`.
- In an existing chat test, pass a non-default `max_tokens` (e.g. `max_tokens=123`) and assert the `create` call receives the same value.

This will validate that both clamped temperature and `max_tokens` are correctly wired through to the OpenAI-compatible client at runtime.
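A sketch of such a test; because the real `MiniMaxClient` isn't importable here, a minimal stand-in (`_StubMiniMaxClient`, with assumed [0.0, 1.0] clamping and an OpenAI-compatible client on `self.client`, mirroring the existing chat tests) makes the call-kwargs assertions concrete:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock


class _StubMiniMaxClient:
    """Hypothetical stand-in mirroring MiniMaxClient's chat() wiring."""

    def __init__(self):
        self.client = AsyncMock()  # the OpenAI-compatible client attribute

    async def chat(self, prompt, temperature=0.7, max_tokens=1024):
        clamped = max(0.0, min(1.0, temperature))  # clamp before delegating
        resp = await self.client.chat.completions.create(
            model="MiniMax-M2.7",
            messages=[{"role": "user", "content": prompt}],
            temperature=clamped,
            max_tokens=max_tokens,
        )
        return resp.choices[0].message.content


async def _run():
    client = _StubMiniMaxClient()
    mock_resp = MagicMock()
    mock_resp.choices = [MagicMock()]
    mock_resp.choices[0].message.content = "Hello!"
    client.client.chat.completions.create = AsyncMock(return_value=mock_resp)

    await client.chat("test", temperature=2.0, max_tokens=123)

    # call_args gives (positional args, keyword args) of the last call
    _, kwargs = client.client.chat.completions.create.call_args
    return kwargs


kwargs = asyncio.run(_run())
assert kwargs["temperature"] == 1.0  # clamped from 2.0
assert kwargs["max_tokens"] == 123   # propagated unchanged
print(kwargs["temperature"], kwargs["max_tokens"])
```

In the actual test file the same assertions would run against the real `MiniMaxClient` with `client.client` replaced by an `AsyncMock`, as the existing `test_chat_strips_think_tags` already does.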
</issue_to_address>




@keeees
Collaborator

keeees commented Apr 8, 2026

Hey @octo-patch, thanks for the contribution! The MiniMax integration looks solid overall — nice test coverage and clean structure.

A few things before we can merge:

  • Please retarget this PR to the `develop` branch instead of `main`; we merge feature work into `develop` first.
  • Once retargeted, please rebase and resolve any conflicts against `develop`.
  • The MiniMax SVG icon needs to be replaced: we can't accept AI-generated artwork due to licensing concerns.

Also, Sourcery flagged a few valid points worth addressing:

  • The temperature clamping logic is duplicated in `RedBearModelFactory.get_model_params` and `MiniMaxClient._clamp_temperature`. Extract it into a shared helper to keep the two in sync.
  • Move the `import re` in `MiniMaxClient.chat` to module level.
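
A minimal sketch of what that extraction could look like (module location and function names are illustrative, not the PR's actual code; the bounds follow the [0.0, 1.0] clamping that Sourcery's suggested tests assume):

```python
import re  # module-level, per the review note


_THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def clamp_temperature(value: float, low: float = 0.0, high: float = 1.0) -> float:
    """Clamp a sampling temperature into [low, high].

    Both RedBearModelFactory.get_model_params and MiniMaxClient would call
    this one helper instead of duplicating the bounds.
    """
    return max(low, min(high, value))


def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> blocks from M2.7 extended-thinking output."""
    return _THINK_TAG_RE.sub("", text)


print(clamp_temperature(2.0))                                # 1.0
print(strip_think_tags("<think>reasoning</think>\nHello!"))  # Hello!
```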

Once those are sorted, happy to re-review and get this merged.
