
Add Anthropic's Claude Opus 4.7 model support #2354

Open
PeterDaveHello wants to merge 1 commit into The-PR-Agent:main from PeterDaveHelloKitchen:add-claude-opus-4-7-support

Conversation

@PeterDaveHello
Contributor

@qodo-free-for-open-source-projects
Contributor

Review Summary by Qodo

Add Claude Opus 4.7 model support across providers

✨ Enhancement


Walkthroughs

Description
• Add Claude Opus 4.7 model support across multiple providers
• Set context length to 1,000,000 tokens for new model
• Support Anthropic, Vertex AI, and Bedrock provider variants
• Add comprehensive unit tests for all model identifier formats
Diagram
flowchart LR
  A["Claude Opus 4.7<br/>Model Addition"] --> B["Anthropic Provider"]
  A --> C["Vertex AI Provider"]
  A --> D["Bedrock Provider"]
  B --> E["1M Token Context"]
  C --> E
  D --> E
  A --> F["Unit Tests"]
  F --> G["6 Model Variants"]


File Changes

1. pr_agent/algo/__init__.py ✨ Enhancement +6/-0

Register Claude Opus 4.7 model identifiers with context lengths

• Added vertex_ai/claude-opus-4-7 with 1,000,000 token context length
• Added anthropic/claude-opus-4-7 with 1,000,000 token context length
• Added claude-opus-4-7 with 1,000,000 token context length
• Added bedrock/anthropic.claude-opus-4-7 with 1,000,000 token context length
• Added bedrock/global.anthropic.claude-opus-4-7 and bedrock/us.anthropic.claude-opus-4-7 variants with 1,000,000 token context length
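The registrations above can be sketched as a plain dictionary mapping model identifiers to context lengths. The identifiers and the 1,000,000-token value come from this PR's description; the real `MAX_TOKENS` table in `pr_agent/algo/__init__.py` contains many other models that are omitted here:

```python
# Sketch of the six entries this PR registers (other models omitted).
MAX_TOKENS = {
    "claude-opus-4-7": 1000000,
    "anthropic/claude-opus-4-7": 1000000,
    "vertex_ai/claude-opus-4-7": 1000000,
    "bedrock/anthropic.claude-opus-4-7": 1000000,
    "bedrock/global.anthropic.claude-opus-4-7": 1000000,
    "bedrock/us.anthropic.claude-opus-4-7": 1000000,
}
```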

pr_agent/algo/__init__.py


2. tests/unittest/test_get_max_tokens.py 🧪 Tests +23/-0

Add unit tests for Claude Opus 4.7 max tokens

• Added parametrized test test_claude_opus_4_7_model_max_tokens covering 6 model identifier formats
• Tests verify all Claude Opus 4.7 variants return 1,000,000 max tokens
• Covers Anthropic, Vertex AI, and Bedrock provider formats
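A parametrized test of this shape might look like the sketch below. The `get_max_tokens` function here is a hypothetical stand-in that only mimics the exact dictionary lookup performed by the real function in `pr_agent/algo/utils.py`:

```python
import pytest

# Hypothetical stand-ins for pr_agent's model table and lookup; the real
# get_max_tokens() in pr_agent/algo/utils.py resolves against MAX_TOKENS.
MAX_TOKENS = {
    "claude-opus-4-7": 1000000,
    "anthropic/claude-opus-4-7": 1000000,
    "vertex_ai/claude-opus-4-7": 1000000,
    "bedrock/anthropic.claude-opus-4-7": 1000000,
    "bedrock/global.anthropic.claude-opus-4-7": 1000000,
    "bedrock/us.anthropic.claude-opus-4-7": 1000000,
}

def get_max_tokens(model: str) -> int:
    # Exact-match lookup, as described in the review below.
    return MAX_TOKENS[model]

@pytest.mark.parametrize("model", sorted(MAX_TOKENS))
def test_claude_opus_4_7_model_max_tokens(model):
    assert get_max_tokens(model) == 1000000
```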

tests/unittest/test_get_max_tokens.py




@qodo-free-for-open-source-projects
Contributor

qodo-free-for-open-source-projects Bot commented Apr 28, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (1)



Action required

1. Bedrock Opus ID mismatch 🐞 Bug ≡ Correctness
Description
The new Bedrock Claude Opus 4.7 MAX_TOKENS keys omit the -v1:0 suffix, while existing Bedrock
Claude Opus entries and the repo’s Bedrock configuration docs use -v1:0. Since get_max_tokens()
only does exact dictionary lookup, users configuring bedrock/...-v1:0 will hit an exception and
PR-Agent will fail for that model string.
Code

pr_agent/algo/__init__.py[156]

+    'bedrock/anthropic.claude-opus-4-7': 1000000,
Evidence
MAX_TOKENS contains Bedrock Claude Opus entries with -v1:0 for prior Opus versions, but the
newly-added Opus 4.7 Bedrock entries do not. get_max_tokens() requires an exact match in
MAX_TOKENS and raises if the key is missing, and the documentation examples for Bedrock models
include -v1:0, making a mismatch likely in real configuration.

pr_agent/algo/__init__.py[151-174]
pr_agent/algo/utils.py[991-1012]
docs/docs/usage-guide/changing_a_model.md[231-244]
tests/unittest/test_get_max_tokens.py[149-181]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Bedrock Claude Opus 4.7 was added to `MAX_TOKENS` without the `-v1:0` suffix, but existing Bedrock Claude Opus IDs in `MAX_TOKENS` (and the repo’s Bedrock docs/examples) use `-v1:0`. Because `get_max_tokens()` requires an exact match, using the `...-v1:0` model string will raise and break execution.

### Issue Context
- Prior Bedrock Opus entries are suffixed with `-v1:0`.
- The new Opus 4.7 Bedrock entries are not.
- Tests currently validate only the unsuffixed Opus 4.7 Bedrock strings, so they won’t catch the `-v1:0` variant failing.

### Fix Focus Areas
- pr_agent/algo/__init__.py[151-174]
- tests/unittest/test_get_max_tokens.py[149-181]

### What to change
1. Add `MAX_TOKENS` entries for the expected Bedrock Opus 4.7 IDs that include `-v1:0` (e.g., `bedrock/anthropic.claude-opus-4-7-v1:0`, `bedrock/global.anthropic.claude-opus-4-7-v1:0`, `bedrock/us.anthropic.claude-opus-4-7-v1:0`).
2. Update/extend the unit test to include these `-v1:0` variants (and keep the unsuffixed ones too if you intend to support both spellings).
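A minimal sketch of the failure mode and the proposed fix; the `get_max_tokens` stand-in below only mimics the exact-match behavior described above, and the error type is illustrative:

```python
# Before the fix: only the unsuffixed Bedrock ID is registered.
MAX_TOKENS = {"bedrock/anthropic.claude-opus-4-7": 1000000}

def get_max_tokens(model: str) -> int:
    # Stand-in for the exact-match lookup: an unknown key raises.
    if model not in MAX_TOKENS:
        raise ValueError(f"Unknown model: {model}")
    return MAX_TOKENS[model]

# The documented Bedrock ID carries the -v1:0 suffix, so it misses:
try:
    get_max_tokens("bedrock/anthropic.claude-opus-4-7-v1:0")
except ValueError:
    pass  # exact lookup fails on the suffixed ID

# Proposed fix: register the suffixed variants alongside the bare ones.
MAX_TOKENS.update({
    "bedrock/anthropic.claude-opus-4-7-v1:0": 1000000,
    "bedrock/global.anthropic.claude-opus-4-7-v1:0": 1000000,
    "bedrock/us.anthropic.claude-opus-4-7-v1:0": 1000000,
})
```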




Remediation recommended

2. Single quotes in fake_settings 📘 Rule violation ⚙ Maintainability
Description
The newly added test builds fake_settings using single-quoted strings, diverging from the
project’s stated preference for double quotes and the surrounding test style. This introduces
inconsistent formatting and may trigger lint/style churn in future edits.
Code

tests/unittest/test_get_max_tokens.py[R161-166]

+        fake_settings = type('', (), {
+            'config': type('', (), {
+                'custom_model_max_tokens': 0,
+                'max_model_tokens': 0
+            })()
+        })()
Evidence
PR Compliance ID 8 and 21 require adhering to repository formatting conventions (including
preferring double quotes) and keeping code compliant with lint/format tooling. The added lines use
single quotes for strings/keys in the new test setup, unlike nearby sections using double quotes.

AGENTS.md
tests/unittest/test_get_max_tokens.py[161-166]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The new test uses single quotes for string literals and dict keys, which conflicts with the project’s preference for double quotes and increases formatting inconsistency within the file.

## Issue Context
This was introduced in the newly added `test_claude_opus_4_7_model_max_tokens` setup for `fake_settings`.

## Fix Focus Areas
- tests/unittest/test_get_max_tokens.py[161-166]
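The same stub rewritten with double quotes, per the repository preference; the structure and behavior are unchanged:

```python
# fake_settings stub from the test, reformatted with double quotes.
# type(name, bases, dict) builds an anonymous class whose attributes
# mimic the settings object that get_max_tokens() reads.
fake_settings = type("", (), {
    "config": type("", (), {
        "custom_model_max_tokens": 0,
        "max_model_tokens": 0
    })()
})()
```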





Comment thread pr_agent/algo/__init__.py
'bedrock/anthropic.claude-opus-4-1-20250805-v1:0': 200000,
'bedrock/anthropic.claude-opus-4-6-20260120-v1:0': 200000,
'bedrock/anthropic.claude-opus-4-6-v1:0': 200000,
'bedrock/anthropic.claude-opus-4-7': 1000000,

Action required

1. Bedrock opus id mismatch 🐞 Bug ≡ Correctness

This inline comment repeats the Bedrock Opus ID mismatch finding and agent prompt from the review above.

