
fix: enable OutputSchema processor for non-Gemini models #582

Open
GuyGoldenberg wants to merge 1 commit into google:main from GuyGoldenberg:fix/output-schema-non-gemini-models

Conversation


@GuyGoldenberg GuyGoldenberg commented Feb 19, 2026

Fixes OutputSchema not working for non-Gemini models (e.g., Bedrock/Claude).

The set_model_response tool workaround was only activating for Gemini API models, leaving non-Gemini models without structured output support when tools are present.

Test plan: Added unit tests, all pass.

@gemini-code-assist
Contributor

Summary of Changes

Hello @GuyGoldenberg, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an issue where non-Gemini large language models, such as Bedrock and Claude, were not correctly engaging the OutputSchema processor when used with tools. Previously, the system's logic for determining if a model required this processor was too restrictive, only activating for specific Gemini API versions and bypassing the necessary set_model_response tool workaround for other models. The updated logic ensures that all non-Gemini models now properly utilize the OutputSchema processor, thereby enabling consistent structured output handling and tool integration across a wider array of LLM providers.

Highlights

  • OutputSchema Processor for Non-Gemini Models: The OutputSchema processor is now enabled for non-Gemini models (e.g., Bedrock/Claude) that do not natively support ResponseSchema with tools, ensuring proper structured output handling.
  • Updated NeedsOutputSchemaProcessor Logic: The NeedsOutputSchemaProcessor function's logic was modified to return true for all non-Gemini models, which ensures the set_model_response tool workaround is correctly applied.
  • Enhanced Test Coverage: New unit tests were added for non-Gemini models in variant_test.go, and an integration test named InjectsToolForNonGeminiModels was introduced in outputschema_processor_test.go to validate the changes.
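The updated decision described in the highlights can be sketched in Go. Note that `modelInfo`, `needsOutputSchemaProcessor`, the provider strings, and the 2.5 version threshold below are illustrative assumptions for this sketch, not the repository's actual types (the real function lives in internal/llminternal/googlellm/variant.go):

```go
package main

import (
	"fmt"
	"strings"
)

// modelInfo is a hypothetical stand-in for whatever model metadata the
// real NeedsOutputSchemaProcessor inspects.
type modelInfo struct {
	provider string  // e.g. "gemini", "bedrock"
	version  float64 // only meaningful for Gemini models in this sketch
}

// needsOutputSchemaProcessor returns true when the set_model_response
// tool workaround is required: always for non-Gemini models (they lack
// native ResponseSchema support alongside tools), and for Gemini API
// models at or below 2.5, matching the behavior described in this PR.
func needsOutputSchemaProcessor(m modelInfo) bool {
	if !strings.EqualFold(m.provider, "gemini") {
		return true // non-Gemini: always apply the workaround
	}
	return m.version <= 2.5
}

func main() {
	fmt.Println(needsOutputSchemaProcessor(modelInfo{"bedrock", 0}))  // non-Gemini
	fmt.Println(needsOutputSchemaProcessor(modelInfo{"gemini", 2.5})) // older Gemini
	fmt.Println(needsOutputSchemaProcessor(modelInfo{"gemini", 3.0})) // newer Gemini
}
```

Before this fix, the sketch's first branch did not exist, so a Bedrock or Claude model fell through and never triggered the workaround.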


Changelog
  • internal/llminternal/googlellm/variant.go
    • Modified the NeedsOutputSchemaProcessor function to include non-Gemini models in its determination logic.
    • Updated comments to accurately reflect the new behavior for both non-Gemini and Gemini models.
  • internal/llminternal/googlellm/variant_test.go
    • Added context and iter imports.
    • Removed a specific test case for 'NonGemini_Vertex' from TestNeedsOutputSchemaProcessor.
    • Introduced a new test function, TestNeedsOutputSchemaProcessorNonGemini, to specifically test the processor's behavior with various non-Gemini models.
    • Defined a mockNonGoogleLLM struct and its associated methods to simulate non-Google LLMs for testing purposes.
  • internal/llminternal/outputschema_processor_test.go
    • Added an iter import.
    • Defined a mockNonGeminiLLM struct and its methods to represent non-Gemini models in tests.
    • Added a new integration test case, InjectsToolForNonGeminiModels, to verify that the set_model_response tool is correctly injected for non-Gemini models when OutputSchema is used.
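The mock-model approach listed in the changelog can be illustrated with a minimal Go sketch. The `llm` interface, `mockNonGeminiLLM` shape, and `needsProcessor` helper here are hypothetical; they only mirror the idea of the test doubles described above, not the repository's actual interfaces:

```go
package main

import "fmt"

// llm is a hypothetical interface capturing the one property the
// check cares about in this sketch: whether the model is a Gemini model.
type llm interface {
	Name() string
	IsGemini() bool
}

// mockNonGeminiLLM simulates a non-Google model (e.g. Bedrock/Claude)
// for test purposes, in the spirit of the test doubles in this PR.
type mockNonGeminiLLM struct{ name string }

func (m mockNonGeminiLLM) Name() string   { return m.name }
func (m mockNonGeminiLLM) IsGemini() bool { return false }

// needsProcessor mirrors the fixed behavior: any non-Gemini model
// gets the set_model_response tool workaround.
func needsProcessor(m llm) bool { return !m.IsGemini() }

func main() {
	models := []llm{
		mockNonGeminiLLM{"anthropic.claude-3"},
		mockNonGeminiLLM{"bedrock-titan"},
	}
	for _, m := range models {
		fmt.Printf("%s needs processor: %v\n", m.Name(), needsProcessor(m))
	}
}
```

A table-driven test over such mocks is the usual Go idiom for the kind of coverage the changelog describes.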
Activity
  • Unit tests were added for non-Gemini models in variant_test.go.
  • An integration test InjectsToolForNonGeminiModels was added in outputschema_processor_test.go.
  • All existing tests and the full test suite passed.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly enables the OutputSchema processor for non-Gemini models by updating the NeedsOutputSchemaProcessor logic. The change is well-supported by new unit tests for non-Gemini models and a new integration test to ensure the set_model_response tool is injected as expected. The code is clear and the changes effectively address the issue described. I have one suggestion to improve maintainability by reducing test code duplication.

GuyGoldenberg force-pushed the fix/output-schema-non-gemini-models branch from bce7fd9 to 46a6fb0 on February 19, 2026 at 16:18
Non-Gemini models (e.g., Bedrock/Claude) don't support native
ResponseSchema with tools. Previously, NeedsOutputSchemaProcessor
only returned true for Gemini API models <= 2.5, causing the
set_model_response tool workaround to never activate for non-Gemini
models.

This fix makes NeedsOutputSchemaProcessor return true for any
non-Gemini model, ensuring the set_model_response tool workaround
is applied when OutputSchema is used with tools.
GuyGoldenberg force-pushed the fix/output-schema-non-gemini-models branch from 46a6fb0 to 310a1f4 on February 19, 2026 at 16:20