Conversation

@zbirenbaum (Contributor):
This addresses an issue that came up while investigating a bug report impacting a wandb/weave user who uses our Agents SDK `TracingProcessor`.

Fixes a race condition where:

  1. The streaming response to the query to the agent finishes before the input guardrails finish executing
  2. The parent span for both the streaming response and the input guardrails is completed
  3. The entire trace gets completed
  4. The input guardrails complete and try to trigger on_span_end in the tracing processor for a span that has already completed in a trace that has already completed

It may be the case that this only occurs when there is no next step, since tripwires are awaited there, but I'm not sure.
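A minimal, self-contained sketch of the race and the fix (the `ToySpan` and guardrail here are toy stand-ins, not SDK types; only the idea of awaiting the guardrail task in the `finally` block before finishing the span comes from this PR):

```python
import asyncio

events: list[str] = []

class ToySpan:
    """Stand-in for the SDK's span object (hypothetical)."""
    def finish(self) -> None:
        events.append("span_finished")

async def slow_input_guardrail() -> None:
    # Still running when the streamed response has already completed.
    await asyncio.sleep(0.05)
    events.append("guardrail_done")

async def run_streamed(fix_applied: bool) -> None:
    span = ToySpan()
    guardrail_task = asyncio.create_task(slow_input_guardrail())
    try:
        events.append("stream_complete")  # streaming finishes immediately
    finally:
        if fix_applied:
            # The fix: wait for the input guardrails before closing the
            # span/trace, so on_span_end never fires for a finished span.
            await guardrail_task
        span.finish()
    # Give a still-running guardrail a chance to fire late (the bug).
    await asyncio.sleep(0.1)
```

Without the fix, `span_finished` is recorded before `guardrail_done`, which is exactly the ordering that broke the tracing processor.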

Screenshots illustrating the issue:

Before Fix: [screenshot]

After Fix: [screenshot]


@chatgpt-codex-connector (bot) left a comment:

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 1141 to 1155
```python
if streamed_result._input_guardrails_task:
    await streamed_result._input_guardrails_task
if current_span:
    current_span.finish(reset_current=True)
```


P1: Awaiting guardrail task can skip span/trace cleanup on errors

The new await streamed_result._input_guardrails_task happens before current_span.finish() and streamed_result.trace.finish(), but it is not wrapped in any error handling. If the guardrail task raises or is cancelled (for example when a streamed run is cancelled immediately or a guardrail itself errors), the await propagates the exception out of the finally block and the subsequent span/trace cleanup never executes. That leaves spans open and the trace unfinished—the exact state this change intends to prevent. Consider awaiting the guardrail task inside a try/except or contextlib.suppress so that cleanup always runs even when the task fails or is cancelled.
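One way to follow that suggestion (a sketch with toy `Recorder` objects, not the merged code) is to wrap the await in `contextlib.suppress` so the cleanup always runs; note that `asyncio.CancelledError` is a `BaseException` on Python 3.8+, so it must be listed explicitly alongside `Exception`:

```python
import asyncio
import contextlib

class Recorder:
    """Toy stand-in for a span/trace with a finish() method."""
    def __init__(self) -> None:
        self.finished = False
    def finish(self) -> None:
        self.finished = True

async def failing_guardrail() -> None:
    raise RuntimeError("guardrail tripwire")

async def finish_safely(guardrail_task, span, trace) -> None:
    # Swallow guardrail failures/cancellation so the span/trace
    # cleanup below always executes, even when the task errors.
    if guardrail_task is not None:
        with contextlib.suppress(Exception, asyncio.CancelledError):
            await guardrail_task
    if span is not None:
        span.finish()
    if trace is not None:
        trace.finish()

async def demo() -> tuple:
    span, trace = Recorder(), Recorder()
    task = asyncio.create_task(failing_guardrail())
    await finish_safely(task, span, trace)
    return span.finished, trace.finished
```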


@seratch added the bug, feature:core, and feature:tracing labels on Oct 17, 2025.

```python
            streamed_result.is_complete = True
        finally:
            if streamed_result._input_guardrails_task:
```
@zbirenbaum (Contributor, Author):
I think a better place to put this might be where the output guardrails are awaited (await both the input and output guardrails together, e.g. with `asyncio.gather`, if the input ones haven't finished by the time the output ones run), but this preserves the existing functionality. I'd welcome input from someone with more context.

Member:

Currently, input guardrails run in parallel, and there’s no requirement for them to finish before executing subsequent agent code or its output guardrails. We might later add an option to customize this behavior so that subsequent logic waits until the input guardrails complete, but that’s not the case yet.

For now, simply awaiting the input guardrails in the finally clause should be fine.
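That parallel behavior can be pictured with a toy sketch (stand-in names, not SDK code): the guardrail is started as a task and only awaited at the end, so subsequent agent work never blocks on it.

```python
import asyncio

results: list[str] = []

async def input_guardrail() -> None:
    await asyncio.sleep(0.02)
    results.append("guardrail")

async def agent_step() -> None:
    results.append("agent_step")  # runs without waiting for the guardrail

async def run() -> None:
    task = asyncio.create_task(input_guardrail())  # fire and continue
    await agent_step()                             # subsequent logic proceeds
    await task                                     # only awaited at the end
```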

@seratch (Member):

Lint and the other checks are failing on your second commit. Can you fix that too?



```python
            streamed_result.input_guardrail_results = await streamed_result._input_guardrails_task
        except Exception:
            # Exceptions will be checked in the stream_events loop
            output_guardrail_results = []
```
Member:

I don't think this output-guardrail-related local variable is necessary here. Instead, can you add debug-level logging to help see what's going on during local development?

@zbirenbaum (Contributor, Author), Oct 17, 2025:

Updated it to be just an await with a debug log on exception. I think the tripwire exception might be suppressed, so I added a raise.

Regarding the comment above:
That makes sense to me; we definitely don't want to wait for the guardrail to complete before starting to get a response. It should complete at some point before the trace ends, and it should be able to raise an error before the span and trace are completed if it is tripped.

Member:

> Updated it to be just an await with a debug log on exception. I think the tripwire exception might be suppressed by this however.

Yeah, after leaving the above comment, I was wondering how things would go in the scenario where the tripwire fires there. The suggested change here should be better than the current behavior in terms of tracing consistency, but I see the need to take care of the tripwire pattern too.

Would it be possible to ask you to do some manual tests to see how it actually works with this change? The tripwire should show up in that case in any tracing provider. If you have more time to spend on this, adding unit tests verifying the pattern would be greatly appreciated.

@zbirenbaum (Contributor, Author):

I'm definitely open to taking some time to ensure it works as intended. I'm a little limited on time today, so I may need to follow up tomorrow or Monday, if that's alright.

I already have a repro file I can use to test the before and after behavior, so it shouldn't be hard. I'll take a look at the unit tests and see if I can add a few for the relevant scenarios.

Member:

Thank you so much!

@zbirenbaum (Contributor, Author), Oct 20, 2025:

I managed to confirm that the branch in its current state addresses the inconsistency in trace completion, and that the trace structure and code behavior are consistent in the following scenarios:

  1. Input guardrail completes first (prior to changes - happy path)
  2. Input guardrail completes first (after changes - happy path)
  3. Input guardrail completes after main request (after changes - scenario PR intends to fix)

I also added some unit tests for various delay scenarios and exception behavior:

  • On main: test_parent_span_and_trace_finish_after_slow_input_guardrail is the only test that fails
  • On this branch: test_parent_span_and_trace_finish_after_slow_input_guardrail passes

The `raise` statement in the exception handler turned out not to be necessary to propagate the tripwire error, and it could short-circuit completion, so I took it out and left the debug statement. I think this should be good to go.
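Based on the discussion above, the final shape is roughly the following (a hedged reconstruction, not the merged diff; the helper name and logger wiring are stand-ins, while `_input_guardrails_task` and the await-with-debug-log pattern come from this thread):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def await_input_guardrails(guardrails_task) -> None:
    """Await the input-guardrails task; log (not raise) on failure.

    Guardrail/tripwire exceptions surface through the normal
    stream_events error path, so raising here would short-circuit
    span/trace completion.
    """
    if guardrails_task is None:
        return
    try:
        await guardrails_task
    except Exception as exc:
        logger.debug("Input guardrails task failed: %s", exc)

async def demo() -> str:
    async def tripped() -> None:
        raise RuntimeError("tripwire")
    # Even when the guardrail raises, execution continues so the
    # span/trace cleanup that follows can run.
    await await_input_guardrails(asyncio.create_task(tripped()))
    return "cleanup still reached"
```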

@zbirenbaum requested a review from @seratch on October 20, 2025.
@seratch (Member) left a comment:

Overall, this looks great to me. I left one comment. Also, can you resolve the lint error? You can run `make lint`, `make mypy`, and `make tests` before pushing commits.

@@ -0,0 +1,228 @@

```python
import asyncio
```
Member:

We still support python 3.9, so this CI build fails: https://github.com/openai/openai-agents-python/actions/runs/18673545373/job/53240062121?pr=1921

Can you add this line at the top of this file?

```python
from __future__ import annotations
```
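For context (my illustration, not from the thread): PEP 604 unions like `str | None` in annotations only evaluate on Python 3.10+, but with postponed evaluation (PEP 563) every annotation is stored as a string and never evaluated at definition time, so the same file also runs on 3.9:

```python
from __future__ import annotations  # must be the first statement in the file

def first_event(events: list[str]) -> str | None:
    # Without the future import, evaluating `str | None` at function
    # definition time raises TypeError on Python 3.9.
    return events[0] if events else None
```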

@zbirenbaum (Contributor, Author):

Added!

@seratch (Member) left a comment:

Thank you!

@seratch merged commit 59a8b0f into openai:main on Oct 21, 2025.
8 checks passed