Hi NanoResearch team,
First of all, thank you for your excellent work on this project!
I encountered an issue while using MiniMax-M2.7 as the backend model. In the experiment stage, many generated code files contain unexpected `<think></think>` tokens at the very beginning, which causes the code to fail during execution.
Problem Description
- Generated code files start with a literal `<think></think>` token
- This leads to syntax errors and prevents the code from running properly
Configuration
Here is my current configuration:
```json
{
  "research": {
    "base_url": "https://api.minimaxi.com/v1",
    "api_key": "xxx",
    "openalex_api_key": "xxx",
    "s2_api_key": "xxx",
    "template_format": "neurips2025",
    "execution_profile": "local_quick",
    "writing_mode": "hybrid",
    "max_retries": 2,
    "auto_create_env": true,
    "auto_download_resources": true,
    "ideation": { "model": "MiniMax-M2.7", "temperature": 0.5, "max_tokens": 16384, "timeout": 600.0 },
    "planning": { "model": "MiniMax-M2.7", "temperature": 0.2, "max_tokens": 16384, "timeout": 600.0 },
    "experiment": { "model": "MiniMax-M2.7", "temperature": 0.1, "max_tokens": 16384, "timeout": 600.0 },
    "code_gen": { "model": "MiniMax-M2.7", "temperature": 0.1, "max_tokens": 16384, "timeout": 600.0 },
    "writing": { "model": "MiniMax-M2.7", "temperature": 0.4, "max_tokens": 16384, "timeout": 600.0 },
    "figure_prompt": { "model": "MiniMax-M2.7", "temperature": 0.1, "max_tokens": 16384, "timeout": 300.0 },
    "figure_code": { "model": "MiniMax-M2.7", "temperature": 0.1, "max_tokens": 16384, "timeout": 300.0 },
    "evidence_extraction": { "model": "MiniMax-M2.7", "temperature": 0.1, "max_tokens": 16384, "timeout": 300.0 },
    "revision": { "model": "MiniMax-M2.7", "temperature": 0.3, "max_tokens": 16384, "timeout": 600.0 },
    "review": { "model": "MiniMax-M2.7", "temperature": 0.3, "max_tokens": 16384, "timeout": 300.0 },
    "figure_gen": {
      "model": "gemini-3.1-flash-image-preview",
      "image_backend": "gemini",
      "temperature": null,
      "timeout": 300.0
    }
  }
}
```
Question
Do you have any suggestions on how to prevent these `<think></think>` tokens from appearing in the generated code?
Any guidance would be greatly appreciated!
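For now, I'm considering a small post-processing step that strips the reasoning tags from the model output before the file is written. This is just a minimal sketch of the idea (the `strip_think_tokens` helper is hypothetical, not part of NanoResearch), in case it's useful context:

```python
import re

# Matches a <think>...</think> block plus any trailing whitespace.
# re.DOTALL lets '.' span newlines inside the block.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think_tokens(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from model output."""
    cleaned = THINK_BLOCK.sub("", text)
    # Also drop any unmatched stray tags, just in case.
    cleaned = cleaned.replace("<think>", "").replace("</think>", "")
    return cleaned.lstrip("\n")

# Example: a generated file that starts with an empty think block.
print(strip_think_tokens("<think></think>\nprint('hello')"))
```

A dedicated option in the framework (or guidance on whether the MiniMax API exposes a way to suppress these tokens) would of course be cleaner than client-side stripping.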