feat: Add EvaluationPlugin for agent invocation evaluation and retry#166
Open
afarntrog wants to merge 4 commits into strands-agents:main from
Conversation
Introduce an EvaluationPlugin that hooks into agent invocations to evaluate outputs against expected results and automatically retries with improved system prompts on failure.
- Add EvaluationPlugin class that wraps agent `__call__` to intercept invocations, run evaluators, and retry with LLM-suggested prompt improvements when evaluations fail
- Add an improvement suggestion prompt template for generating better system prompts based on evaluation feedback
- Add a comprehensive test suite covering plugin initialization, wrapping, evaluation execution, retry logic, and edge cases
- Update the ruff config to ignore line-length in plugin prompt templates
- Add comprehensive docstrings to all methods in EvaluationPlugin
- Change `_suggest_improvements` to return `ImprovementSuggestion` instead of `str`
- Add debug logging with reasoning when applying the improved system prompt
- Replace `Union[X, Y]` with modern `X | Y` syntax
- Use a dict default in `kwargs.get()` instead of
- Export the new module from the package's public API
- Bump the minimum strands-agents dependency from 1.0.0 to 1.28.0 to support functionality required by the plugins module
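As a quick orientation for reviewers, the sketch below shows how such a plugin might be constructed and attached. The `EvaluationPlugin` name, the `strands_evals.plugins` module, and `max_retries` come from this PR; the constructor arguments and the `plugins=[...]` registration shown here are hypothetical, since the exact wiring is defined by the Strands plugin interface and not confirmed in this excerpt.

```python
# Hypothetical usage sketch; the constructor arguments and the plugins=[...]
# registration are assumptions about the wiring, not confirmed by this PR.
from strands import Agent

from strands_evals.plugins import EvaluationPlugin

plugin = EvaluationPlugin(
    evaluators=[...],  # evaluators comparing agent output against expected results
    max_retries=2,     # how many times to retry with an improved system prompt
)

# Once attached, every agent(prompt) call is intercepted: the output is evaluated,
# and on failure the system prompt is improved and the call is retried.
agent = Agent(system_prompt="You are a helpful assistant.", plugins=[plugin])
result = agent("Summarize the incident report and list action items.")
```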
afarntrog commented on Mar 18, 2026
Comment on lines +74 to +83
```python
def wrapped_call(self_agent: Any, prompt: Any = None, **kwargs: Any) -> Any:
    return plugin._invoke_with_evaluation(self_agent, original_call, prompt, **kwargs)

wrapped_class = type(
    agent.__class__.__name__,
    (agent.__class__,),
    {"__call__": wrapped_call},
)
agent.__class__ = wrapped_class
```
Contributor
Author
We need to do this monkey patching because hooks alone will not give us the retry pattern we need:
What EvaluationPlugin needs to do:
- Intercept an agent invocation
- Let it run
- Evaluate the result
- If evaluation fails, modify the system prompt and run the agent again
- Repeat up to `max_retries` times
What hooks give you:
- `BeforeInvocationEvent` fires before the agent runs. You can modify messages. No result yet.
- `AfterInvocationEvent` fires after the agent runs. You get the result. But you're inside `_run_loop`'s `finally` block, and `stream_async` still holds the invocation lock. If you call `agent()` from here → deadlock.
Hooks give you "before" and "after", not "around". There's no way to say "evaluate this result and, if it's bad, run the whole thing again" from within a hook. The retry requires calling `agent()`, which requires the lock, which is held by the invocation that's firing your hook. The `__class__` swap helps because:
This replaces `__call__`: every call to `agent(prompt)` now goes through `_invoke_with_evaluation`, which runs evaluators and handles retries.
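To make the "around" behavior concrete, here is a minimal sketch of the evaluate-and-retry loop the wrapped `__call__` delegates to. `_invoke_with_evaluation`, `_suggest_improvements`, `max_retries`, and the `ImprovementSuggestion` shape (reasoning + system prompt) are named in this PR; the evaluator interface and the rest of the control flow are illustrative assumptions, not the actual implementation.

```python
import logging
from dataclasses import dataclass
from typing import Any, Callable

logger = logging.getLogger(__name__)


@dataclass
class ImprovementSuggestion:
    # Field names mirror the "reasoning + system prompt" description in the PR.
    reasoning: str
    system_prompt: str


class EvaluationPluginSketch:
    """Illustrative sketch of the retry loop; not the PR's actual implementation."""

    def __init__(self, evaluators: list[Any], max_retries: int = 3) -> None:
        self.evaluators = evaluators
        self.max_retries = max_retries

    def _invoke_with_evaluation(
        self, agent: Any, original_call: Callable[..., Any], prompt: Any, **kwargs: Any
    ) -> Any:
        # First pass: run the agent normally via the original, unwrapped __call__.
        result = original_call(agent, prompt, **kwargs)
        for _ in range(self.max_retries):
            # Assumed evaluator API: collect the evaluators whose checks failed.
            failures = [e for e in self.evaluators if not e.evaluate(prompt, result)]
            if not failures:
                break
            suggestion = self._suggest_improvements(prompt, result, failures)
            logger.debug("Applying improved system prompt: %s", suggestion.reasoning)
            agent.system_prompt = suggestion.system_prompt
            # Retry through original_call, not agent(...), so we don't recurse into the wrapper.
            result = original_call(agent, prompt, **kwargs)
        return result

    def _suggest_improvements(self, prompt: Any, result: Any, failures: list[Any]) -> ImprovementSuggestion:
        # The real plugin delegates to an improvement LLM using the prompt template added in this PR.
        raise NotImplementedError
```

Because the wrapper is the outermost call rather than code running inside a hook, each retry re-invokes the agent normally instead of trying to re-enter a lock the current invocation still holds.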
Contributor
Author
Although there is some overlap with steering, the primary differences between the two are:
Description
Adds a new `EvaluationPlugin` that integrates with the Strands agents plugin system to evaluate agent output after each invocation and automatically retry with improved system prompts on failure.

Changes

New: `strands_evals.plugins` module
- `EvaluationPlugin` — a Strands `Plugin` that wraps agent invocations to evaluate output and, on failure, retry (up to `max_retries`) with an `ImprovementSuggestion` (reasoning + system prompt) obtained from the improvement LLM
- `improvement_suggestion.py` — prompt template for the improvement suggestion agent that analyzes evaluation failures and proposes system prompt modifications
- `invocation_state`

Other changes
- Bump the `strands-agents` minimum version from `>=1.0.0` to `>=1.28.0` (required for the plugin interface)
- Ruff config ignores line-length in `plugins/prompt_templates/`

Testing
Let me know if you want any tweaks to this.
Related Issues
Documentation PR
Type of Change
New feature
Testing
How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli
- `hatch run prepare`

Checklist
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.