diff --git a/docs/features/experimental/concurrent-file-edits.md b/docs/features/experimental/concurrent-file-edits.md
index 22521562..bfeec688 100644
--- a/docs/features/experimental/concurrent-file-edits.md
+++ b/docs/features/experimental/concurrent-file-edits.md
@@ -91,7 +91,7 @@ This feature leverages the [`apply_diff`](/advanced-usage/available-tools/apply-
 ## Best Practices
 
 ### When to Enable
-- Using capable AI models (Claude 3.5 Sonnet, GPT-4, etc.)
+- Using capable AI models (e.g., Claude Sonnet 4, Claude 3.7 Sonnet, GPT-4.1/GPT-4o, or the GPT-5 family)
 - Comfortable reviewing multiple changes at once
 
 ### When to Keep Disabled
diff --git a/docs/providers/claude-code.md b/docs/providers/claude-code.md
index fdf4b8cb..371d04f7 100644
--- a/docs/providers/claude-code.md
+++ b/docs/providers/claude-code.md
@@ -99,11 +99,15 @@ export CLAUDE_CODE_MAX_OUTPUT_TOKENS=32768 # Set to 32k tokens
 The Claude Code provider supports these Claude models:
 
 - **Claude Opus 4.1** (Most capable)
-- **Claude Opus 4**
+- **Claude Opus 4**
 - **Claude Sonnet 4** (Latest, recommended)
 - **Claude 3.7 Sonnet**
-- **Claude 3.5 Sonnet**
-- **Claude 3.5 Haiku** (Fast responses)
+
+:::note Legacy models
+These older models may still be available via the Claude CLI depending on your subscription, but they are no longer recommended for new setups:
+- Claude 3.5 Sonnet
+- Claude 3.5 Haiku
+:::
 
 The specific models available depend on your Claude CLI subscription and plan.
 
diff --git a/docs/providers/litellm.md b/docs/providers/litellm.md
index 6f775f31..75505eef 100644
--- a/docs/providers/litellm.md
+++ b/docs/providers/litellm.md
@@ -133,7 +133,7 @@ When you configure the LiteLLM provider, Roo Code interacts with your LiteLLM se
 * `supportsImages`: Determined from `model_info.supports_vision` provided by LiteLLM.
 * `supportsPromptCache`: Determined from `model_info.supports_prompt_caching` provided by LiteLLM.
 * `inputPrice` / `outputPrice`: Calculated from `model_info.input_cost_per_token` and `model_info.output_cost_per_token` from LiteLLM.
-* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.5-sonnet`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
+* `supportsComputerUse`: This flag is set to `true` if the underlying model identifier (from `litellm_params.model`, e.g., `openrouter/anthropic/claude-3.7-sonnet-20250219`) matches one of the Anthropic models predefined in Roo Code as suitable for "computer use" (see `COMPUTER_USE_MODELS` in technical details).
 
 Roo Code uses default values for some of these properties if they are not explicitly provided by your LiteLLM server's `/model/info` endpoint for a given model. The defaults are:
 * `maxTokens`: 8192
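
The `supportsComputerUse` bullet above describes a simple membership test: normalize the routed model identifier from `litellm_params.model`, then check it against a predefined set. A minimal TypeScript sketch of that kind of check follows; the set contents, helper name, and normalization rule are illustrative assumptions, not Roo Code's actual `COMPUTER_USE_MODELS` implementation.

```typescript
// Illustrative sketch only: the real COMPUTER_USE_MODELS set lives in Roo Code's
// source; these entries and the matching rule are assumptions for demonstration.
const COMPUTER_USE_MODELS = new Set([
  "anthropic/claude-3.7-sonnet-20250219",
  "anthropic/claude-3.5-sonnet",
]);

// LiteLLM identifiers are often routed, e.g.
// "openrouter/anthropic/claude-3.7-sonnet-20250219".
// Strip any router prefix, then test membership.
function supportsComputerUse(litellmModel: string): boolean {
  const parts = litellmModel.split("/");
  // Keep the trailing "vendor/model" segments when a router prefix is present.
  const normalized = parts.slice(-2).join("/");
  return COMPUTER_USE_MODELS.has(normalized);
}

console.log(supportsComputerUse("openrouter/anthropic/claude-3.7-sonnet-20250219")); // true
console.log(supportsComputerUse("openai/gpt-4o")); // false
```

Matching on the trailing `vendor/model` segments is one plausible way to make router prefixes such as `openrouter/` irrelevant to the capability check.
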
diff --git a/docs/providers/openai-compatible.md b/docs/providers/openai-compatible.md
index 48359620..59b217df 100644
--- a/docs/providers/openai-compatible.md
+++ b/docs/providers/openai-compatible.md
@@ -59,7 +59,6 @@ While this provider type allows connecting to various endpoints, if you are conn
 * `o1`
 * `o1-preview`
 * `o1-mini`
-* `gpt-4.5-preview`
 * `gpt-4o`
 * `gpt-4o-mini`
 
diff --git a/docs/providers/openai.md b/docs/providers/openai.md
index 521ff697..f30786f9 100644
--- a/docs/providers/openai.md
+++ b/docs/providers/openai.md
@@ -84,7 +84,6 @@ Original reasoning models:
 ### GPT-4o Family
 
 Optimized GPT-4 models:
-* `gpt-4.5-preview`
 * `gpt-4o` - Optimized GPT-4
 * `gpt-4o-mini` - Smaller optimized variant
 
diff --git a/docs/providers/vscode-lm.md b/docs/providers/vscode-lm.md
index 1fa6e570..c2cbd2f3 100644
--- a/docs/providers/vscode-lm.md
+++ b/docs/providers/vscode-lm.md
@@ -37,7 +37,7 @@ Roo Code includes *experimental* support for the [VS Code Language Model API](ht
 1. **Open Roo Code Settings:** Click the gear icon () in the Roo Code panel.
 2. **Select Provider:** Choose "VS Code LM API" from the "API Provider" dropdown.
 3. **Select Model:** The "Language Model" dropdown will (eventually) list available models. The format is `vendor/family`. For example, if you have Copilot, you might see options like:
-    * `copilot - claude-3.5-sonnet`
+    * `copilot - claude-3.7-sonnet`
     * `copilot - o3-mini`
     * `copilot - o1-ga`
     * `copilot - gemini-2.0-flash`
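
The `vendor/family` pairs shown in that dropdown correspond to the selectors used by the public VS Code Language Model API. As a brief sketch of how an extension would resolve one of the listed Copilot models via `vscode.lm.selectChatModels` (the chosen family string is only an example; availability depends on the user's Copilot plan):

```typescript
import * as vscode from "vscode";

// Sketch: ask VS Code for a Copilot-provided model by vendor/family.
// "claude-3.7-sonnet" mirrors the dropdown entry above.
async function pickModel(): Promise<vscode.LanguageModelChat | undefined> {
  const [model] = await vscode.lm.selectChatModels({
    vendor: "copilot",
    family: "claude-3.7-sonnet",
  });
  return model; // undefined if no matching model is currently available
}
```

An empty result simply means no matching model is available to the extension yet, which is consistent with the dropdown only "eventually" listing models.
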